The US leads in proprietary frontier models but has fallen behind China in open-weight systems that can be downloaded, adapted, and run locally. Chinese models such as Kimi, Z.ai, Alibaba’s Qwen, and DeepSeek have rapidly gained traction among developers and now clearly outperform existing US open-weight releases. DeepSeek-R1, launched in January 2025, demonstrated frontier-level capability at a fraction of US training costs and triggered a steady stream of powerful open Chinese models. Meanwhile, US companies, chasing “superintelligence,” have retreated from openness; Meta has even signaled it may stop open-sourcing its best models, leaving the US open ecosystem largely stagnant since Llama’s 2023 debut.
Experts warn that relying on foreign open models poses both supply-chain and innovation risks: if Chinese models were restricted or withdrawn, US companies and research institutions could abruptly lose essential tooling. Open models remain critical for sensitive-data workloads, rapid research experimentation, and the diffusion of innovation across the ecosystem. Data transparency is another weakness: most US and Chinese “open” models still withhold their training data. Stanford’s Percy Liang is developing Marin, a model trained entirely on publicly disclosed data, backed by Google, Open Athena, and Schmidt Sciences. Andrew Trask argues for an ARPANET-like federal effort enabling researchers to access nonpublic training data under strict privacy safeguards. With an estimated 180 zettabytes of data worldwide, and current frontier models trained on only hundreds of terabytes, the headroom for progress is vast.
Reestablishing US leadership in open models would require only modest investment. The ATOM Project estimates roughly $100 million per year to build and maintain an open-weight frontier model, a figure comparable to the compensation packages now offered to individual top researchers. China’s momentum in openness, data leverage, and cost efficiency has intensified calls within US academia and industry for government involvement to keep the gap from widening and to avoid a future in which the open-model ecosystem is dominated by a single country.