Amid debate over the military’s use of AI, Anthropic sued after the Pentagon labeled it a “supply chain risk.” Its complaint goes beyond autonomous killing: it warns that AI could push surveillance of Americans toward a “panopticon.” The Pentagon claimed authority to apply the technology to “any lawful use” and argued that mass surveillance of Americans is already illegal under the Fourth Amendment; Anthropic countered that even where some surveillance remains legal today, that may be only because the law has not yet caught up with rapidly expanding AI capabilities.
A study published last month, led by Simon Lermen (MATS Research) and Daniel Paleka (ETH Zurich) with an Anthropic researcher as co-author, quantifies how AI collapses the “practical obscurity” that once protected pseudonymous users by scaling OSINT-style breadcrumb collection. In an experiment on 338 Hacker News profiles, a large language model was tasked with identifying each account holder and finding their LinkedIn page; it succeeded in 226 cases, “replicating in minutes” what might take a dedicated human investigator “hours.” The authors argue that this practical invisibility no longer holds, and that online privacy threat models must be reconsidered.
The paper also highlights the new economics of identification: the Hacker News run cost about $2,000 in total, roughly $1–$4 per profile, implying that with sufficient funding, scraping and matching can be parallelized to vast scale. This sits in a legal gray zone: does combining multiple legally gathered facts itself constitute surveillance? The column cites non-theoretical U.S. episodes, including Homeland Security demands for the identities of anti-ICE account holders and a January 2026 claim by a masked immigration officer to have “a nice little database,” to argue that institutions may keep stretching legal and ethical boundaries. Hence Anthropic’s refusal to rely on the Pentagon drawing the line, and the criticism of OpenAI’s acceptance of its assurances as hasty.
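The scale argument is simple arithmetic on the article’s reported figures. A short back-of-the-envelope calculation, using only the $1–$4 per-profile range from the study (the larger target sizes are hypothetical):

```python
# Back-of-the-envelope cost scaling from the article's reported figures:
# ~$2,000 total for 338 profiles, i.e. roughly $1-$4 per profile.
# Target sizes beyond 338 are hypothetical, for illustration only.

COST_LOW, COST_HIGH = 1.0, 4.0  # dollars per profile, from the study

def campaign_cost(num_profiles: int) -> tuple[float, float]:
    """Estimated dollar range to attempt identification at a given scale."""
    return (num_profiles * COST_LOW, num_profiles * COST_HIGH)

for n in (338, 10_000, 1_000_000):
    low, high = campaign_cost(n)
    print(f"{n:>9,} profiles: ${low:,.0f} - ${high:,.0f}")
```

At these rates, a million-profile campaign lands in the single-digit millions of dollars, well within reach of a funded institution, which is the gray-zone concern the paper raises.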