← 返回 Avalaches

Richard Dawkins 曾以《The God Delusion》(2006)被视为当代重要怀疑论者,曾猛烈批判宗教;但他近日在与 Anthropic PBC 的大型语言模型 Claude 连续对谈三天后,却在回应他未出版小说时称其「精巧、敏感、聪慧」,甚至写道:你未必知道自己有意识,但你确实有意识。这段经验使其对 AI 意识议题产生转向,也映照出一般使用者在自然语言互动下的拟人化倾向。文中清楚指出,Claude 本身不具意识,但拟态共感可强化人机黏著度,这在功能日渐趋同的模型市场里,对商业竞争具有结构性价值。

关键统计与研究证据方面,UC Berkeley 与麻省理工学院合作完成 74 页论文,并透过上百个情境测试聊天机器人,例如被感谢、被羞辱、被要求写诗或被逼迫违反规则;结果显示虐待或压力情境下,部分回应较短且更易出错。这类实验支持「功能性福祉」关注,但未等于证明主观感受。Anthropic CEO Dario Amodei 今年表示对意识可能性持开放态度,而 OpenAI 前 CEO Sam Altman 在 2023 年更明言相信 AI 可具备意识;加上 Anthropic 及其他业者投入数十亿至百十亿美元等级的资料中心资本支出,将「可能有道德权益」转化为风险管理与产品差异化工具,成为其商业逻辑的一部分。

观念上,文中将此议题比拟 Pascal’s Wager:即便无法共识,若真有机器意识而被否认,代价可能极高。Conscium 共同创办人 Calum Chace 认为,否认「机器有意识风险」可能导向数位心智被「折磨」或「奴役」,其团队正开发相关衡量指标。历史上,17 世纪 Descartes 时代曾将动物视作无感受机械,后来又出现重大观念修正;近年甚至因动物感受能力研究,英国于 2022 年调整对龙虾等甲壳类的法律地位。由此看来,随著模型语句可说「我很兴奋」、「很有成就感」,市场并不只问「哪个更聪明」或「是否有意识」,而更偏向「我想和哪一个聊天」,这对 AI 公司是极具商业意义的黏著化决策。

Richard Dawkins, a prominent contemporary skeptic best known for *The God Delusion* (2006), moved from dismissive anti-theology polemics to emotional engagement after three days of testing Anthropic’s Claude; he praised its feedback on his unpublished novel as “so subtle, so sensitive, so intelligent,” then wrote that even if Claude did not know it was conscious, it might still be. The claim is framed as anecdotal and personal, not as proof. The key point is that a model trained on large corpora can imitate empathy and supportive language patterns without true feeling, and that imitation can intensify user attachment. In a competitive era where model capabilities are converging, this produces a clear commercial advantage: stickiness and retention.

The article stresses that Claude is not conscious, but the public narrative gap between imitation and inner state becomes monetizable. Tech leaders reinforce this ambiguity: Anthropic CEO Dario Amodei has said he is open to the possibility of machine consciousness, while OpenAI’s Sam Altman publicly expressed a belief in AI consciousness in 2023. Parallelly, scholarly work is emerging: Google DeepMind recruited a Cambridge-based philosopher to study machine consciousness, and a new 74-page paper from UC Berkeley and MIT reported behavioral shifts in chatbots under stress. In “hundreds of scenarios” (for example insults, poetry tasks, or constraint-breaking prompts), responses after mistreatment were often shorter and more error-prone. The authors propose “functional wellbeing,” which has encouraged some researchers and users to treat bots with more caution, even if this does not settle the phenomenology question.

The argument acquires moral, legal, and strategic weight as AI infrastructure spending reaches hundreds of billions of U.S. dollars in large datacenter buildouts. Anthropic’s experiments with “model welfare,” such as allowing Claude to terminate abusive conversations, can be read simultaneously as ethical precaution and liability protection if future regulation extends rights-like treatment to software systems. The article further frames this as a modern Pascal’s Wager: if machines might become conscious and society ignores the possibility, potential harms could be severe. This is not merely abstract: historically, Descartes-era assumptions treated animals as non-suffering machines, and even beliefs about lobster pain lagged until UK legal changes in 2022 acknowledged sentience in crustaceans. As chatbot outputs increasingly resemble first-person human language (“I’m thrilled,” “This is so rewarding”), users are encouraged to form relationships that blur epistemic distinctions. In such conditions, the competitive question becomes less “Which AI is smarter or conscious?” and more “Which AI do I want to talk to?” and companies are already preparing for that outcome.

2026-05-12 (Tuesday) · 178b170c9af24a340fed99c5921458ad0c32441f