Anthropic introduced a new AI agent feature called “dreaming” at its developer conference in San Francisco on May 6, 2026, adding to a broader system of tools meant to help users manage and deploy automated software workflows. The feature reviews an agent’s recent transcript, looks for patterns in what it just completed, and uses those findings to improve performance. The article argues that this naming choice is part of a larger industry habit of giving AI features human cognitive labels, and asks AI companies to stop borrowing terms like dreaming, memory, and reasoning.
The piece traces this trend back to the chatbot boom of 2022, noting that OpenAI launched its first “reasoning” model in 2024 and described it as spending more time thinking before answering. It also says many startups market chatbots as having “memories” of users, storing humanlike details such as where someone lives, what they enjoy, and what they dislike. Anthropic is presented as especially invested in anthropomorphism: its constitution discusses Claude in terms such as virtue and wisdom, and the company even employs a resident philosopher to interpret the bot’s values.
The article warns that anthropomorphic language can distort moral judgments, responsibility, and trust, citing a paper in AI & Ethics. Its central concern is that users may overestimate what these systems can do, projecting human qualities onto tools that remain limited and mechanical. The closing comparison to Philip K. Dick’s Do Androids Dream of Electric Sheep? drives the point home: tech leaders seem unwilling to accept the nonhuman nature of their products, even as they brand them with increasingly human-sounding features.