On February 25, 2026, Anthropic PBC announced a major change to its Responsible Scaling Policy, reversing a 2023 commitment to pause or delay potentially dangerous AI development under defined risk conditions. The company now says it may continue development if it believes it does not hold a significant lead over competitors, framing the shift as a response to a policy climate focused on competitiveness and economic growth rather than enforceable federal safety governance. This marks a clear move away from Anthropic’s earlier positioning as a safety-first alternative in frontier AI.
The change is tied to intensifying commercial and strategic pressure across 2020 to 2026: Dario Amodei left OpenAI in 2020 over safety and commercialization concerns, while OpenAI later shifted from its nonprofit origins toward a for-profit structure and in 2024 removed the word "safely" from its AGI mission language. Capital-markets pressure is explicit: Anthropic is reported at a $380 billion valuation while OpenAI's fundraising exceeds $850 billion, implying a gap of roughly 2.24:1 in OpenAI's favor, and both companies reportedly target IPO paths as early as 2026. Simultaneously, Anthropic faces a Pentagon dispute in which US officials threatened procurement and legal escalation, including possible use of the Defense Production Act, if contract terms were not accepted by a Friday deadline following Tuesday talks.
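The 2.24:1 figure above follows directly from the two reported dollar amounts; a minimal back-of-envelope check (using only the figures cited in the text, with the $850 billion treated as a lower bound):

```python
# Reported figures from the text, in billions of USD.
anthropic_valuation = 380   # Anthropic's reported valuation
openai_fundraising = 850    # OpenAI's reported fundraising (lower bound)

# Ratio in OpenAI's favor, rounded to two decimal places.
ratio = openai_fundraising / anthropic_valuation
print(f"{ratio:.2f}:1")  # → 2.24:1
```

Note the comparison sets fundraising scale against valuation, so it is a rough pressure indicator rather than a like-for-like valuation ratio.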
The broader evidence suggests a sector-wide reprioritization from precautionary commitments toward speed, scale, defense integration, and investor capture, rather than an isolated Anthropic event. Bloomberg reporting also places xAI/SpaceX and OpenAI-linked partners in a Pentagon autonomous-drone competition, indicating that military-adjacent deployment is becoming a central demand channel for frontier AI despite earlier public caution against weaponization from leading figures. A key caveat is that company statements still describe the policy change as risk-management iteration; the measurable trend lines, however, point to weaker ex ante safety constraints, tighter geopolitical coupling, and federal-level governance lagging the pace of capability and financing expansion.