Amazon employees have reportedly used the in-house AI tool MeshClaw to automate nonessential work and inflate activity metrics. MeshClaw allows users to create AI agents that can act inside workplace software, and it was broadly rolled out across the company in recent weeks. According to staff, some employees intentionally triggered extra model calls to raise token consumption after Amazon set a target for more than 80% of developers to use AI weekly and began publishing internal weekly AI-use leaderboards.
The policy was meant to accelerate adoption, but it appears to have created a competitive distortion. Although Amazon officially says token statistics are not used in performance reviews, many employees believe leaders still monitor usage and reward frequent use. Amazon has restricted leaderboard access to each employee and their manager, and managers have reportedly been reminded not to judge performance by token counts, yet the pressure persists. Amazon expects roughly USD 200 billion in capital expenditure this year, mostly for AI and data-center infrastructure, and Meta has seen a similar pattern of behavior known as “tokenmaxxing.”
MeshClaw is used for tasks such as code deployment, email triage, and interacting with apps such as Slack. Amazon says the tool helps thousands of Amazonians automate repetitive work each day and reflects its approach to deploying AI safely and responsibly. Internal records show that more than 30 employees worked on the in-house tool; one recent memo described the bot as “consolidating what it learned overnight,” monitoring deployments while its user sits in meetings, and handling email before they wake up. Staff remain concerned that giving an agent the authority to act on a user’s behalf can lead to errors or unintended actions, and one employee called the default security posture “terrifying.”