
On February 24, 2026, Michael Pollan published an essay on the ethics of artificial intelligence, revisiting the Blake Lemoine episode as the public trigger of the machine-consciousness debate and arguing for preserving human values amid rapid technological change.


Published on Feb 24, 2026, Michael Pollan’s essay argues that AI may become more capable but will not become a person, framing the debate as a challenge to human self-definition rather than only a technical milestone. He revisits the Blake Lemoine episode as the public trigger for concern about machine consciousness and situates it within a longer shift: as animal cognition research has advanced in recent years, humans have already revised claims of uniqueness that rested on a few thousand years of exceptionalist thought. The central context is existential: if machines crossed from performance to subjective experience, the result would be a Copernican-scale demotion of human centrality.

The focal evidence is the 2023 Butlin report, an 88-page preprint by 19 computer scientists and philosophers stating that current AI is not conscious but that there are no obvious barriers to building conscious AI. Pollan challenges this by attacking its core assumption, computational functionalism, which treats consciousness as substrate-independent computation and therefore allows brain and computer interchangeability by design. He argues this collapses under neurobiological detail: brains are chemically modulated, continuously rewired by experience, and highly interconnected, with a single neuron linked to as many as 10,000 others, while researchers remain decades from even a crude full connectivity map; he also notes claims that one cortical neuron can match functions attributed to an entire deep neural network. He further critiques the report’s validation method, which uses indicators derived from about half a dozen contested consciousness theories, warning that simulation of theory-like architecture is not proof of consciousness.

The implication is that current confidence about near-term conscious AI is methodologically circular and ethically premature: if you assume consciousness is computation, your tests will tend to confirm computational systems as candidates. Pollan emphasizes missing variables such as embodiment, biology, and affect, as well as the unresolved question of who the conscious subject is in frameworks like global workspace theory and integrated information theory (IIT), arguing that these omissions weaken inference quality. He also stresses risk asymmetry: false negatives could permit moral harm if conscious suffering were real, but false positives could distort policy, governance, and research priorities; this creates a high-stakes uncertainty regime rather than a clear trend toward imminent machine personhood. The article closes by highlighting the technocratic impulse to tune emotional parameters like a “joy dial,” presenting that stance as evidence that much of the field still underestimates what consciousness entails.

2026-02-24 (Tuesday) · 5c3af91fcf596460291f937fd30e4022fb8520b5