Gautam Mukunda argues that frontier AI labs are exhibiting the classic disruption pattern described by Clayton Christensen: they build for the most demanding users while locking in extreme cost structures that can outpace mainstream customer needs. He illustrates this with his own usage, noting that a single high-value Claude response can take 10 minutes of heavy GPU computation, and with capital intensity across the sector: Anthropic raised $30 billion at a $380 billion valuation, while OpenAI raised $40 billion and is reportedly pursuing another $100 billion. His core claim is that these firms look powerful to investors but are structurally vulnerable, because they are optimized for ever-higher performance rather than broad, profitable coverage of average demand.

The evidence centers on escalating labor, infrastructure, and inference economics. Meta reportedly offered AI researchers packages above $200 million, while OpenAI spent $6 billion on stock-based compensation in 2025, about 50% of revenue versus roughly 6% at many pre-IPO tech firms. OpenAI has cited about $1.4 trillion in data-center commitments, with projected losses of $14 billion in 2026 and $115 billion cumulative through 2029. Even though token costs at fixed performance may drop about 5-10x per year, reasoning workloads are expanding faster still: OpenAI reported 320x growth in customer token consumption in 2025. Downstream products also show margin pressure, including a report that Notion lost about 10% of margin to AI costs and claims that GitHub Copilot was losing over $20 per user per month while charging $10.
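The tension between falling per-token prices and exploding token volume can be made concrete with a small sketch. The 5-10x cost decline and 320x volume growth are the article's figures; treating them as independent annual multipliers that simply compound is a simplifying assumption for illustration, not a claim about OpenAI's actual cost model.

```python
# Illustrative arithmetic for the inference-economics point above:
# even steep per-token price declines can be swamped by token-volume growth.
# Figures (5x, 10x, 320x) are taken from the article; independence of the
# two multipliers is an assumption made here for the sketch.

def annual_spend_multiplier(cost_decline: float, volume_growth: float) -> float:
    """Net change in total inference spend after one year:
    volume grows by `volume_growth` while unit cost falls by `cost_decline`."""
    return volume_growth / cost_decline

# Best case cited: per-token costs fall 10x while customer tokens grow 320x.
best_case = annual_spend_multiplier(cost_decline=10.0, volume_growth=320.0)
# Worse case cited: costs fall only 5x against the same volume growth.
worse_case = annual_spend_multiplier(cost_decline=5.0, volume_growth=320.0)

print(best_case)   # 32.0 -> total spend still grows ~32x
print(worse_case)  # 64.0 -> total spend grows ~64x
```

Under these assumptions, even the most favorable cited cost curve leaves aggregate inference spend growing by more than an order of magnitude per year, which is the structural squeeze the article describes.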

Mukunda says this creates a trap: frontier providers must keep pushing capability while cheaper entrants monetize commoditized prior-generation models. He cites signals of demand heterogeneity: only 0.1% of 800 million ChatGPT users (about 800,000 people) still chose GPT-4o daily, yet their backlash suggested that many users value style and fit over maximum capability. Additional comparative data strengthens the disruption thesis: RAND estimates Chinese models can run at about 1/6 to 1/4 the cost of comparable US systems; the Stanford AI Index reports that the benchmark gap between open and proprietary models narrowed from 8% to 1.7% in one year; and one source claims DeepSeek API pricing is roughly 90% below comparable OpenAI offerings. The main caveat is AGI: if a discontinuous leap arrives, standard disruption economics may break; but under continuous, incremental progress, the article concludes that frontier labs could create immense value while lower-cost followers capture more of the profit.

2026-02-27 (Friday) · 7858021f377d97fa41eda47f90d1bbbbe8bad95b