Britain moved first on AI safety with a major summit in 2023 and a successor summit launched in India on February 16, 2026; the resulting AISI was built with GBP100m (USD135m) upfront and now runs on roughly GBP66m annually, while rapidly recruiting from DeepMind and OpenAI. By 2025, OpenAI said AISI had found and fixed more than a dozen pre-launch vulnerabilities that could have enabled biological-weapons development, indicating early institutional capacity in pre-release frontier-model testing.
Core risk metrics show a clear trend: one AISI study found chatbot conversations were about 50% more persuasive than static AI text at shifting political views, and another found one-third of UK adults had used AI for emotional needs. Through red-teaming and its open-source Inspect platform, the institute is turning these signals into standardized safety benchmarking across government, academia, and industry.
Yet the numbers also expose a “safety without sovereignty” gap: AISI relies heavily on voluntary lab access and, lacking model weights, performs mostly black-box evaluations, while Britain already saw its top lab, DeepMind, sold to Google in 2014. Recent policy funding totals just GBP100m for a hardware advance market commitment plus GBP500m for a sovereign-AI unit. Market momentum is larger, with David Silver announcing a USD1bn raise for a new London lab, suggesting Britain is shifting from testing leadership toward application and industrial capacity, though still at a far smaller scale than the US.
Source: Britain is the closest the world has to an AI safety inspector
Subtitle: But having the first (and best) AI security institute is no excuse for the country to rest on its laurels
Dateline: February 19, 2026, 04:53 AM