The case in the Ronald V Dellums courthouse in Oakland has put Elon Musk, Sam Altman and the question of frontier AI self-regulation under scrutiny. Musk has accused Altman of reneging on OpenAI’s founding agreement as a charity and of unjustly enriching himself, while Altman’s lawyers say Musk is an unreliable witness because of memory lapses caused by his use of the drug “rhino ket”. Even with multibillion-dollar businesses at stake, the testimony has been dominated by personal insults and grudges, and the case has highlighted how two of the world’s best-funded frontier AI labs are run.
In Trump’s second term, “permissionless innovation” has been the administration’s mantra as the US races to outpace China, and more regulatory restrictions are imposed on nail salons than on frontier AI companies. Anthropic’s controlled release of Claude Mythos, a frontier model capable of finding thousands of cyber security vulnerabilities, has made the national security risks impossible to ignore. Dean Ball, who helped draft Trump’s 2025 AI Action Plan, says catastrophic risks such as mass cyber attacks or bioweapons implicate the state, since market actors have no incentive to prevent them; he also warns that too much state power over frontier AI would be “the ultimate dystopia to avoid”.
Christoph Winter and Charlie Bullock at the Institute for Law and AI argue that societies need “radical optionality” and propose well-funded expert institutions, secure information-sharing channels, whistleblower protections, lab security standards and legal authorities to monitor emerging risks. The UK’s AI Security Institute, which emerged from the Bletchley Park summit hosted by Rishi Sunak in 2023, has provided testing and evaluation of Mythos. Anthropic shared the technology with more than 40 partner organisations and launched Project Glasswing to help identify and plug security gaps, and Dario Amodei is explicitly calling for stricter regulation of frontier models.