Bloomberg Opinion describes a rapid breakdown in relations between Anthropic PBC and the US Defense Department after a July 2025 contract worth up to $200 million to provide a classified version of Claude for national-security use. By February 26, 2026, less than eight months later, the partnership was reportedly at risk, White House allies had publicly criticized Anthropic as “woke,” and defense officials were considering a “supply chain risk” designation, framing the dispute as an early test case for AI governance in military procurement.
The immediate conflict centers on usage limits: Anthropic says it supports defense work but sets red lines against mass surveillance of Americans and fully autonomous weapons deployment, while the Pentagon argues military users must be able to apply contracted AI tools for all lawful purposes. Reporting indicates the original agreement may have lacked specific operational detail, and Defense Secretary Pete Hegseth reportedly gave CEO Dario Amodei until the end of that week to decide whether Anthropic would align with Pentagon terms; officials also claimed competing firms including OpenAI, Alphabet, and X.AI had accepted the government’s approach in principle.
The editorial argues both sides have legitimate concerns but criticizes threats to blacklist Anthropic as disproportionate, warning that such action could damage US AI supply chains and chill private-sector defense participation. It emphasizes that existing constraints already include statutes, court oversight, executive orders, Pentagon ethical AI principles, and a Responsible AI Strategy, yet contends these controls may not be sufficient as model capability advances. The proposed policy direction is stronger congressional oversight, clearer legal definitions, added reporting requirements, and faster rulemaking, so that elected lawmakers, rather than contractors, set the long-term boundaries for military AI use.