Deepfakes and chatbots have become realistic enough to erode online trust. In 2023, UC Berkeley digital-forensics professor Hany Farid found a video call with Barack Obama uncanny enough that for 10 minutes he suspected a deepfake and wanted to ask for a “hand-in-front-of-face” test. His own recent demonstrations, however, show that such liveness tricks are rapidly failing: a video call is no longer reliable proof of identity.
From Alan Turing’s 1950 “imitation game” (later known as the Turing test) to CAPTCHAs, humans have long used hurdles to separate people from machines; large language models are now clearing them. Research shows AI agents can solve complex CAPTCHAs when prompted to, and in one study of 126 participants, subjects judged OpenAI’s GPT-4.5 to be the human 73% of the time.
Concrete losses and improvised countermeasures are spreading. Voice-clone fraud has impersonated US Secretary of State Marco Rubio, and a Hong Kong employee wired $25 million after a deepfake CFO call. In hiring, one-click mass applications mean some roles draw thousands of submissions; in one pool of more than 900 candidates, fewer than 3% followed hidden instructions. Families and banks promote shared secret code words against voice cloning (Starling says 82% of those who saw its campaign were likely to adopt one), while firms add biometric checks and return to in-person interviews. The FBI says North Korean IT workers have landed jobs at more than 100 US companies, funneling hundreds of millions of dollars a year back home. All of this is fueling both real-time deepfake-detection tools and identity schemes such as the iris-scanning “Orb”, proof-of-personhood systems that carry real costs and raise concerns about concentrating power.
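The "secret code word" countermeasure above amounts to a tiny out-of-band authentication protocol: agree on a passphrase in person, store only a salted hash, and check a caller's answer with a constant-time comparison. The sketch below is a hypothetical illustration of that idea (the function names and parameters are my own, not from any product described here), not a description of any bank's or family's actual scheme:

```python
import hashlib
import hmac

# Hypothetical sketch: the passphrase is agreed out of band and never
# stored in the clear; only a salted, stretched hash is kept.

def hash_code_word(code_word: str, salt: bytes) -> bytes:
    """Derive a salted hash so the passphrase itself is never stored."""
    return hashlib.pbkdf2_hmac("sha256", code_word.encode(), salt, 100_000)

def verify_caller(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    """Constant-time check of a caller's answer against the stored hash."""
    return hmac.compare_digest(stored_hash, hash_code_word(attempt, salt))

salt = b"example-salt"  # illustrative only; in practice use os.urandom(16)
stored = hash_code_word("blue heron", salt)

print(verify_caller(stored, salt, "blue heron"))  # matching passphrase
print(verify_caller(stored, salt, "gray heron"))  # wrong passphrase
```

`hmac.compare_digest` matters here because a naive `==` on hashes can leak timing information; the design mirrors ordinary password storage, which is why a voice clone that has never heard the passphrase cannot answer the challenge.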