On Monday, March 9, 2026 (4:38 PM), more than 30 employees of OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief in support of Anthropic in its legal dispute against the U.S. government (Anthropic v. the Department of Defense and related agencies). The filing came just hours after Anthropic had sued the Department of Defense and other federal agencies, challenging its designation as a "supply-chain risk." It explicitly supports Anthropic's motion for a temporary restraining order (TRO) that would allow the company to continue working with military partners while the case proceeds. The brief's central claim is that penalizing a major U.S. AI company in this way could damage U.S. industrial and scientific competitiveness well beyond the AI sector.

Signatories included Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak. They stated that they were signing in a personal capacity, not on behalf of their employers. The filing argues that after Pentagon contract talks broke down, the "blacklist" designation severely limits Anthropic's work with military contractors and injects harmful unpredictability into U.S. innovation and competition. It also notes that if the Department of Defense no longer wished to remain bound by existing terms, it could simply terminate the contract.

In another key section, the brief argues that Anthropic's requested guardrails, such as prohibitions on mass domestic surveillance and on autonomous lethal weapons development, are legitimate and necessary. It contends that, absent public law on the matter, contractual and technical constraints imposed by AI developers are a vital safeguard against catastrophic misuse. Other AI leaders have also publicly questioned the designation; OpenAI CEO Sam Altman said that enforcing the supply-chain risk (SCR) designation would be very bad for both the industry and the country, and called for it to be reversed. As Anthropic's relationship with the Pentagon deteriorated, OpenAI quickly signed its own contract with the U.S. military, a move some criticized as opportunistic.

2026-03-11 (Wednesday) · 2ca6da9a8b00ba41011d092da79c0d54582ddc2a