Within hours of a masked federal agent fatally shooting 37-year-old Renee Nicole Good in Minneapolis, social media users circulated AI-altered images falsely claiming to reveal the agent’s identity. Authorities later identified the officer as an Immigration and Customs Enforcement agent, but original videos from the scene showed no unmasked faces. The widely shared “unmasking” images were screenshots modified with AI to fabricate facial features.
The spread was rapid and sizable. WIRED found AI-manipulated images across X, Facebook, Threads, Instagram, Bluesky, and TikTok. One post demanding the agent's name drew more than 1.2 million views, while another inciting post on Threads received nearly 3,500 likes. Experts warn that when faces are partially obscured, AI "enhancement" hallucinates details and cannot produce reliable biometric identification, yielding images that look clear but are detached from reality.
Misidentification has produced tangible harm. Some users named real people without evidence and linked to their social accounts; at least two of the circulated names showed no apparent connection to ICE. The organization of one wrongly accused individual publicly stated it was monitoring a coordinated online disinformation campaign. This is not unprecedented: in a prior shooting, an AI-generated image based on grainy footage spread widely before proving starkly different from the actual suspect, underscoring how quickly AI forgeries can mislead during breaking news.