
America is moving away from a laissez-faire AI policy. After Anthropic announced Claude Mythos on April 7, 2026, rapid frontier-model progress came to be seen as both an economic asset and a national-security threat. The article argues that a small group of founders and engineers—Dario, Demis, Elon, Mark, and Sam—now control models with global strategic impact, while the Trump administration had previously believed private-sector competition alone would keep the United States ahead of China in the AI race. That confidence is now eroding because model capabilities keep rising quickly and are increasingly tied to defense, critical infrastructure, and industrial-scale abuse.

Anthropic's Dario Amodei did not release Mythos broadly, instead limiting priority access to about 50 firms in computing, software, and finance so they can use it for defensive security hardening. Treasury Secretary Scott Bessent even convened major banks for urgent talks, and the Pentagon had already intervened after Anthropic refused to allow the model in fully autonomous weapons or mass surveillance. Political pressure is building at the same time: U.S. voters are more skeptical of AI than people in most other countries, with about 7 in 10 saying AI will harm job opportunities, up sharply from a year earlier. Local opposition to data centers is growing, and OpenAI offices linked to Sam Altman have recently faced two physical attacks, signaling that AI is becoming a sensitive issue ahead of the 2028 election.

The dilemma is structural. If Washington does nothing, AI-driven abuse could threaten critical systems and public safety; if it overregulates, it may forfeit AI leadership to China. The window is tightening: discussions during the Biden administration two years ago focused mainly on potential risk, while now each model release is materially more powerful than the last. A plausible pathway under discussion is controlled early access: certified trusted users, starting with cybersecurity professionals, would receive frontier models for testing before broad commercialization. This could lower near-term risk, strengthen the bargaining position of leading labs and the government, and limit leakage, but it also creates a tiered market, entrenches incumbent AI giants, and excludes many outside firms, while leaving unresolved questions about open-source regulation, cross-border spillovers, and equitable diffusion.

2026-04-18 (Saturday) · edb9e3117734cf66cc5cc3d9a849705da5b42344