

On March 2, 2026, after a sudden Pentagon deal, OpenAI co-founder Sam Altman quickly claimed to have secured major concessions, but the article argues he has crossed an ethical and practical line that Anthropic was unwilling to cross. The core dispute is whether OpenAI's reading of the legality and ethics of U.S. mass surveillance and military AI use is acceptable; Anthropic takes a stricter position. OpenAI says it avoids direct responsibility by keeping its systems out of edge deployments such as drones; Anthropic counters that lethal decisions can still be made remotely, including through cloud workflows, so the distinction is ethically insufficient.

On autonomous weapons, Anthropic's leadership is reported to have been willing to build such systems, but internal tests showed its models were not yet reliable enough for the task. On surveillance, Anthropic says it was caught off guard when the Pentagon made a late request to use AI to analyze bulk data on Americans. The dispute has now escalated into direct sanctions risk: OpenAI appears to have won at least a US$200 million U.S. government contract, while Anthropic faces a "supply chain risk" designation, with Defense Secretary Pete Hegseth warning that no contractor, supplier, or partner working with the U.S. military may do any commercial business with it. Anthropic is contesting the move and preparing legal action.

The rivalry, born in part when Dario and Daniela Amodei and five other OpenAI employees left over ethics concerns, is becoming a defining divide of the early AI boom. Frictions have widened over state-level AI regulation in the U.S., chatbot advertising, and public clashes such as India's AI summit, where Sam Altman and Dario Amodei both reportedly skipped Narendra Modi's hand-holding photo line. As pressure mounts, Anthropic's Claude remains available under existing contracts to more than 100,000 users on top-secret networks, and reports say it supported U.S. action against Iran. Replacing it with OpenAI is therefore not a simple model swap: the architectural mismatch is significant (Nvidia GPU data-center systems versus Amazon's custom AI chips), so excluding Anthropic from classified applications raises questions of capability, migration cost, and national strategic risk.

2026-03-03 (Tuesday) · ee5869598413dd3bd154182ac3415b8415de8da1