An unusual development is underway in the global AI race: Microsoft, Alphabet, and Elon Musk’s xAI have agreed to give U.S. officials early access to new models for national-security testing, while the White House has shifted from a light-touch posture toward stronger oversight frameworks. The article argues that although the U.S. is central to AI deployment at scale, its regulatory architecture is structurally conflicted because the regulators remain too close to the industry they regulate.

The key risk is regulatory capture. The administration has floated a working-group model that includes major tech executives, meaning the firms under review may help draft the very rules that govern them. The article also highlights the risk of ideological influence, citing last summer’s executive order, Preventing Woke AI in the Federal Government, which suggests safety review could be used to shape political narratives, including responses to election-related questions. Institutional weakness compounds this: CAISI, the U.S. body conducting the model vetting, was created in 2024 with only a $10 million budget and sits within an agency that has seen hundreds of layoffs, along with infrastructure problems such as mold, power outages, and unreliable internet.

By contrast, the UK’s AISI appears far better resourced. It has a roughly $65 million annual budget, backed by a five-year £240 million (about $325 million) UK government commitment made in 2025. Founded in 2023 at Bletchley Park, it grew to about 200 staff within three years, roughly 20% of them drawn from AI research and safety organizations. It is also better located: about a 30-minute tube ride from Google DeepMind and other frontier labs, versus CAISI’s 6-hour flight to Silicon Valley. About a dozen staff reportedly took pay cuts to join from OpenAI, Anthropic, and DeepMind, though a similar number later returned to industry. AISI was also the first, and so far only, government body granted access to Anthropic’s Mythos; its report concluded that the model poses clear cyber risk to firms with weak IT security. The article suggests AISI could gradually build legal authority through bilateral and multilateral declarations, on the model of the IAEA.
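
As a quick sanity check on those figures, the numbers are internally consistent: the five-year commitment spread evenly matches the stated annual budget. The sketch below is not from the article; the implied exchange rate is an assumption derived from the article’s own £240M ≈ $325M conversion.

```python
# Sanity check of the AISI budget figures cited above.
# Assumption: the exchange rate is implied by the article's own
# GBP 240M ~= USD 325M conversion; these are not official figures.
gbp_total = 240_000_000   # five-year UK government commitment, in GBP
usd_total = 325_000_000   # the article's USD equivalent
years = 5

implied_rate = usd_total / gbp_total   # ~1.35 USD per GBP
annual_usd = usd_total / years         # ~$65M per year

print(f"Implied exchange rate: {implied_rate:.2f} USD/GBP")
print(f"Annual budget: ${annual_usd / 1e6:.0f}M")  # matches the ~$65M figure
```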
