Parmy Olson argues that OpenAI’s hiring of OpenClaw creator Peter Steinberger gives Sam Altman momentum in the race to build autonomous AI agents, but also imports major enterprise security risk. OpenClaw, announced on 2026-02-23 at 1:00 PM GMT+8, is an open-source agent system controllable through WhatsApp, Telegram, or Slack that can execute real actions on a user’s machine rather than merely generate text. The piece frames this as part of a broader industry shift across OpenAI, Anthropic, and Google: more capable agents raise both productivity potential and attack surface.
The article cites rapid adoption signals and market spillovers: OpenClaw became one of GitHub’s fastest-growing projects, Raspberry Pi shares reportedly nearly doubled in a week on agent-related speculation, and developers demonstrated multi-agent “swarm” workflows that can replace tasks once handled by specialized software. It also details why security alarms escalated: OpenClaw can access files, email, calendars, and apps; Steinberger said he often shipped code he had not fully read; and major institutions reacted sharply. Gartner reportedly labeled the OpenClaw risk “unacceptable” and advised blocking related traffic, Cisco researchers called personal agents like it an “absolute nightmare,” and a Meta executive reportedly warned staff to keep it off corporate laptops.
Olson’s core implication is that commercialization depends less on raw capability than on secure defaults for non-expert users. OpenAI’s decision to keep OpenClaw an independent foundation may reduce direct liability while preserving brand momentum, but enterprise deployment still requires hardening autonomous access and messaging authority. The piece contrasts OpenClaw with Anthropic’s more constrained, sandboxed model and cites NanoClaw developer Gavriel Cohen’s container-isolation approach, while noting that usability failures remain likely, such as accidental exposure of agent control through the wrong chat group. Even with a $5 billion fintech deployment inquiry, the conclusion is that disruption of software and professional roles may come fast, but making agents both secure and idiot-proof for novices is the harder, slower bottleneck.
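The “wrong chat group” failure mode above comes down to authorization of messaging control. A minimal sketch of one obvious mitigation, gating agent commands on an explicit allowlist of chat IDs, might look like the following; this is a hypothetical illustration, not OpenClaw’s actual code, and the chat IDs and function names are invented for the example:

```python
# Hypothetical sketch: deny-by-default authorization for an agent that is
# driven over a messaging app. Only messages arriving from explicitly
# allowlisted chat IDs are treated as commands; everything else is ignored.

AUTHORIZED_CHATS = {"chat-owner-123"}  # assumed chat ID, illustration only


def handle_message(chat_id: str, text: str) -> str:
    """Route an incoming chat message to the agent, or refuse it."""
    if chat_id not in AUTHORIZED_CHATS:
        # A message forwarded to, or sent from, an unrecognized group chat
        # never reaches the agent's action layer.
        return "ignored: unauthorized chat"
    # In a real system this would dispatch to the agent; here we just echo.
    return f"executing: {text}"
```

A deny-by-default list like this is the kind of secure default the article argues non-expert users need, since it makes accidentally adding the agent to the wrong group a no-op rather than a control-exposure incident.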