Published December 27, 2025 at 6:00 AM (by Will Knight), the piece's throughline is that 2026 "will belong to Qwen": in summer 2025 the author visited Rokid in Hangzhou, China, where prototype smart glasses at the company's headquarters used Alibaba's open-weight model Qwen (Tongyi Qianwen) to translate an engineer's Mandarin into English in real time and transcribe it onto the lens display. Although OpenAI's GPT-5, Google's Gemini 3, and Anthropic's Claude often score higher across many benchmarks, and the true pioneer of the open-weight wave was Meta's Llama, released in 2023, Qwen has spread rapidly by being "strong enough and easy to adapt."
The key quantitative trend centers on usage and downloads rather than individual scores: the article says that in July 2025, downloads of open Chinese models on HuggingFace surpassed those of American models, and OpenRouter likewise reports that Qwen rose quickly over 2025 to become the world's second-most-popular open model. The author stresses that Qwen can be downloaded, fine-tuned, and self-hosted by companies, and that "very small versions" can run on devices such as phones to cope with lost connectivity; DeepSeek's release of a frontier model using far less compute underscores the same low-compute path.
The past 12 months saw several notable stalls for US open models: Meta's Llama 4, released in April 2025, disappointed developers, and OpenAI's GPT-5, released in August 2025, was criticized by some users as "colder" and prone to simple errors; OpenAI shipped the weaker open model gpt-oss that same month, but its buzz was still eclipsed by Qwen and other Chinese models. On the academic side, "hundreds" of NeurIPS papers used Qwen, and a Qwen team paper on arXiv (2505.06708) was named one of the year's best; on the industry side, BYD wired it into a new in-car assistant, US companies Airbnb, Perplexity, and Nvidia are using it, and Meta is reportedly drawing on Qwen to develop new models. The author therefore argues that a key metric for a model should include how much gets built with it, and by that standard Qwen and Chinese open models hold the advantage in 2026.
Published Dec 27, 2025 at 6:00 AM, the piece argues that while GPT-5 had a big year, 2026 will be “all about Qwen.” The author visits Rokid in Hangzhou in summer 2025 and tries prototype smart glasses that translate an engineer's Mandarin into English and transcribe it onto a tiny lens display, powered by Alibaba’s open-weight model Qwen (Tongyi Qianwen). Qwen is not portrayed as the top benchmark scorer against GPT-5, Gemini 3, or Claude, and the open-weight trend itself was kicked off by Meta’s Llama in 2023; Qwen's edge is that it is good enough and easy to adapt.
The key statistical signal is adoption: the article says that in July 2025, downloads of open Chinese models on HuggingFace surpassed US ones, and OpenRouter reports Qwen rose quickly during 2025 to become the world’s second-most-popular open model. Qwen’s appeal is operational: it can be downloaded, modified, hosted in-house, and even run in very small on-device versions when connectivity drops. DeepSeek is cited as reinforcing the trend by releasing a cutting-edge model with much less compute than US rivals.
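The article itself contains no code; purely as an illustration of the download-and-self-host workflow it describes, here is a minimal sketch using the Hugging Face transformers library and the small, publicly released Qwen/Qwen2.5-0.5B-Instruct checkpoint (the checkpoint choice and the translation prompt are assumptions, not details from the piece; any other open-weight Qwen checkpoint would work the same way).

```python
# Minimal sketch: pull an open-weight Qwen checkpoint and run it locally.
# Assumes `pip install transformers torch`; the model ID is an example only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small enough for laptop- or edge-class hardware
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# The kind of task the Rokid demo illustrates: Mandarin -> English translation.
messages = [
    {"role": "system", "content": "Translate the user's Chinese into English."},
    {"role": "user", "content": "这款原型眼镜可以实时把中文翻译成英文。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
# Strip the prompt tokens and print only the newly generated translation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are local, the same script runs with no network connection after the initial download, which is the operational property the article credits for Qwen's spread.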
The past 12 months include notable US stumbles: Llama 4 (April 2025) disappointed on popular benchmarks, and GPT-5 (August 2025) underwhelmed some users, with complaints about a cold tone and simple errors; OpenAI also released a weaker open model, gpt-oss, that month. In contrast, “hundreds” of NeurIPS papers used Qwen, and a Qwen training paper (arXiv:2505.06708) was named among the conference’s best. Qwen is shown spreading into products at BYD and into US firms like Airbnb, Perplexity, and Nvidia, with even Meta reportedly using it—supporting the claim that broad usage, not marginal benchmark wins, predicts 2026’s leaders.