WIRED reports that although OpenAI's 2023 usage policy explicitly banned military use, its technology still reached the US government through Microsoft's Azure OpenAI Service and was allegedly tested by the Pentagon. At the time, Microsoft had contracted with the Department of Defense for decades and was OpenAI's largest investor; OpenAI employees also reportedly saw Pentagon officials in the company's San Francisco office. Microsoft says the service became available to the US government in 2023 but was not approved for top secret workloads until 2025.

The policy shifted in January 2024, when OpenAI removed its blanket ban on military use; some employees reportedly learned of the change first from press coverage rather than internal notice. By December 2024, OpenAI had announced a partnership with Anduril for unclassified national security missions. By contrast, Anthropic's related Pentagon contract, worth roughly $200 million, had collapsed, while its Palantir arrangement covered classified military work. OpenAI also declined to join Palantir's FedStart program in fall 2024, reportedly deeming it too risky.

The dispute centers on governance boundaries and visibility, not merely on the partnerships themselves. Dozens of employees discussed concerns in a public Slack channel, arguing that models may be too unreliable even to handle credit card information, let alone to assist battlefield missions; legal experts also warned that the agreement's original wording may have left room for technically legal surveillance, such as buying Americans' data from third parties and analyzing it with AI. By March 2026, just over two years after the blanket ban ended, OpenAI appeared to be moving decisively toward defense partnerships, and Sam Altman reportedly said he was also interested in selling models to NATO.

2026-03-08 (Sunday) · 471424d668f5ba2f44a32adfd04dfc93d1253bea