
The piece frames 2026’s opening governance problem through an open-source agent tool, Moltbot (formerly Clawdbot), showing how quickly agentic-AI risk scales. After the project surpassed 80,000 GitHub stars, founder Peter Steinberger renamed it on 2026-01-27 (UTC+8; original: Jan 27) because its name resembled Anthropic’s Claude; during the roughly 10-second window between releasing the old account handle and registering the new one, crypto scammers seized the original account, traded on the trust of tens of thousands of followers to promote a fake token, pulled in “millions of dollars,” and then let it crash. In parallel, researchers found misconfigured deployments exposing control-capable AI interfaces to the public internet, putting API keys and private chat logs at risk and illustrating how a brief identity and access-control lapse can cascade into broad compromise.
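The exposure pattern described above is usually a configuration failure, not an exotic exploit: a control interface bound to all network interfaces, no authentication, and secrets sitting in the same config file. The sketch below is a minimal, hypothetical config audit; the field names (`bind`, `auth_token`, `api_key`) are illustrative assumptions, not the schema of Moltbot or any real tool.

```python
# Hypothetical audit of an agent-gateway config dict. Field names are
# illustrative assumptions, not any real tool's configuration schema.

def audit_gateway_config(cfg: dict) -> list[str]:
    """Return a list of findings for a (hypothetical) agent-gateway config."""
    findings = []
    bind = cfg.get("bind", "127.0.0.1")
    # Binding to all interfaces exposes the control API beyond localhost.
    if bind in ("0.0.0.0", "::"):
        findings.append("control interface bound to all interfaces")
    # Without an auth token, anyone who can reach the port can drive the agent.
    if not cfg.get("auth_token"):
        findings.append("no auth token configured")
    # A plaintext key leaks the moment the config file or endpoint is exposed.
    if "api_key" in cfg:
        findings.append("API key stored in plaintext config")
    return findings
```

The point of the sketch is that each finding is cheap to detect before deployment; the incidents the article describes arise when none of these checks run at all.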

It then tracks risk spilling from code into devices and platform power. Gemini3’s personal intelligence engine and Apple Intelligence rely on “context packing” to fuse comprehensive user data (photos, years-old search history) and can execute actions by bypassing app interfaces, shifting the dispute from privacy and security to who controls the user entry point and, by extension, “digital sovereignty.” At a Tencent internal meeting on 2026-01-26 (UTC+8; original: Jan 26), Ma Huateng reportedly criticized ByteDance’s Doubao phone assistant for plug-in-style screen recording and cloud uploading, calling it “extremely unsafe” and “irresponsible”; soon after, platforms such as WeChat, Taobao, and Alipay restricted Doubao’s automated operations via risk-control mechanisms, indicating platforms reclaiming agent-operation capabilities in the name of safety and redrawing ecosystem boundaries through technology and rules.
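The governance question in “context packing” is which personal data sources are allowed into the model’s context at all. The sketch below is a minimal illustration of that gate, under assumptions of my own: the source names and the consent set are hypothetical, and this is not a description of Gemini3’s or Apple Intelligence’s actual pipeline.

```python
# Illustrative "context packing" gate: only consented sources are fused
# into the model context. Source names and the consent mechanism are
# assumptions for illustration, not any vendor's real pipeline.

def pack_context(sources: dict[str, list[str]], consented: set[str],
                 budget: int = 5) -> list[str]:
    """Merge items from consented sources only, up to a small item budget."""
    packed = []
    for name, items in sources.items():
        if name not in consented:      # skip sources the user has not approved
            continue
        for item in items:
            if len(packed) >= budget:  # a real packer would rank, not truncate
                return packed
            packed.append(f"{name}: {item}")
    return packed
```

The design choice the article worries about is the default: if `consented` implicitly contains every source, years-old search history flows into the context silently, and the gate exists only on paper.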

As AI moves into physical space, adoption metrics rise alongside asymmetric surveillance risk. Global smart-glasses shipments surged 210% in 2024 (~3.10×), and a cited forecast expects China’s 2025 shipments to exceed 2.75 million units on 107% year-on-year growth (~2.07×), making China the world’s largest market. The article argues that “invisible recording” weakens the social contract of visible LED indicators, which are easy to obscure without disabling capture, and that the risk escalates when glasses function as mobile biometric sensors continuously collecting facial geometry, voiceprints, and gaze trajectories, where misuse can scale exponentially. Inside firms, a governance vacuum widens because non-human identities (agents and scripts) may outnumber humans by tens of times; once over-privileged and left without continuous auditing and real-time constraint, a single key leak or act of overreach can trigger a domino-style collapse. The EU Artificial Intelligence Act, effective 2025-02 (UTC+8; original: Feb 2025), sets a floor but cannot substitute for enterprise controls; the article concludes that the strongest AI must be the most governable by rules, not merely the smartest.
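The enterprise gap above is concrete: non-human identities accumulate scopes and drop out of audit rotations. A minimal sketch of the kind of check the article implies follows; the thresholds, field names, and the `Identity` record are my own illustrative assumptions, not taken from the EU AI Act or any specific governance framework.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative audit over non-human identities (agents, scripts).
# Thresholds and fields are assumptions, not any real framework's rules.

@dataclass
class Identity:
    name: str
    human: bool
    scopes: set = field(default_factory=set)
    last_audited: date = date(2025, 1, 1)

def flag_risky(identities, today, max_scopes=3, max_age_days=90):
    """Flag non-human identities that are over-privileged or audit-stale."""
    risky = []
    for ident in identities:
        if ident.human:
            continue  # the gap the article describes is machine identities
        over_privileged = len(ident.scopes) > max_scopes
        stale = (today - ident.last_audited).days > max_age_days
        if over_privileged or stale:
            risky.append(ident.name)
    return risky
```

Continuous auditing means running a check like this on every grant and on a schedule, so a single leaked key controls a narrow, recently reviewed scope rather than the first domino in a chain.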

2026-01-30 (Friday) · a9294eb1a68cacd8e5e6103c28262f0b418b3373