The article warns that advances in artificial intelligence are pushing disinformation into a new, hard-to-detect phase of unprecedented scale. In 2016, Russia's Internet Research Agency deployed hundreds of human operators from 55 Savushkina Street in St. Petersburg to manually manipulate social media; despite heavy investment, its impact was limited. A decade later, a paper in Science argues that modern AI agents would allow a single operator to command swarms of thousands of accounts, generating content nearly indistinguishable from human writing and adapting in real time without constant human oversight.
Authored by 22 experts across computer science, cybersecurity, psychology, and policy, the paper contends such AI swarms could manipulate beliefs and behaviors at a population level, threatening democracy. Unlike classic bots, these agents maintain persistent identities and memory, coordinate toward shared goals while preserving individual variation to evade detection, and respond dynamically to platform signals and human interactions. With feedback, they could run millions of micro A/B tests, propagating winning narratives at machine speed.
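The micro A/B testing the authors describe is, in essence, a multi-armed bandit run over message variants: show each variant to a slice of the audience, measure engagement, and shift traffic toward whatever performs best. A minimal sketch of that feedback loop, using an epsilon-greedy strategy (the variant names, engagement rates, and epsilon value here are illustrative assumptions, not details from the paper):

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the variant with the best
    observed engagement rate, occasionally explore a random one."""
    if random.random() < epsilon or not any(t for t, _ in stats.values()):
        return random.choice(list(stats))
    # Exploit: highest observed engagement rate (wins / trials).
    return max(stats, key=lambda v: stats[v][1] / max(stats[v][0], 1))

def update(stats, variant, engaged):
    """Record one impression and whether it drew engagement."""
    trials, wins = stats[variant]
    stats[variant] = (trials + 1, wins + (1 if engaged else 0))

# Hypothetical message variants with hidden "true" engagement rates,
# used only to simulate audience response in this demo.
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}
stats = {v: (0, 0) for v in true_rates}

random.seed(0)
for _ in range(10_000):
    v = pick_variant(stats)
    update(stats, v, random.random() < true_rates[v])

# The most-shown variant is the one the loop converged on.
most_shown = max(stats, key=lambda v: stats[v][0])
print(most_shown, stats[most_shown])
```

Run at the scale the paper envisions, many such loops operating in parallel across accounts and audience segments would amount to the "millions of micro A/B tests" the authors warn about, with no human needed in the selection loop.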
Researchers say current systems for detecting coordinated inauthentic behavior are ill-equipped to identify these swarms, making it unclear whether they’re already in use. The paper predicts limited impact on the November 2026 US midterms but a high likelihood of deployment to disrupt the 2028 presidential election. To mitigate the risk, the authors propose an “AI Influence Observatory” led by academia and NGOs to standardize evidence and accelerate collective response; however, engagement-driven platform incentives and weak political will may hinder action.