
Nvidia’s February 18, 2026, agreement with Meta marks a shift from selling mostly discrete GPUs to supplying broader AI infrastructure that combines GPUs, CPUs, and interconnect technologies. The deal is described as multi-year and worth billions of dollars, with Meta buying Nvidia chips for hyperscale data centers focused on both training and inference. This reflects a market transition in which leading AI buyers increasingly need end-to-end compute stacks, not just the highest-end accelerators.

The announcement extends an already large Nvidia–Meta relationship: Meta had previously estimated purchases of 350,000 H100 chips by the end of 2024 and access to a total of 1.3 million GPUs by the end of 2025. Nvidia now says Meta plans large-scale deployment of stand-alone Grace CPUs plus millions of Blackwell and Rubin GPUs, signaling that CPU demand is rising alongside GPU growth. Supporting evidence includes analyst reports that some OpenAI-linked Microsoft facilities require tens of thousands of CPUs to orchestrate petabyte-scale GPU workloads, while Nvidia has also pushed on inference economics, including a $20 billion licensing-and-talent deal with Groq and Jensen Huang’s earlier estimate of a business mix of 40 percent inference to 60 percent training.

The implications are two-sided: Nvidia is broadening its addressable market, but hyperscalers and AI labs are diversifying suppliers because workloads are changing and GPU access remains constrained. Meta said it would raise 2026 AI infrastructure spending to $115–135 billion from $72.2 billion in 2025, a major year-over-year jump, while still relying heavily on a GPU-centric architecture in which CPUs can become bottlenecks if underprovisioned. Competitive pressure is visible in parallel commitments such as OpenAI’s deal with Nvidia worth up to $100 billion, an AMD arrangement for up to 6 gigawatts with potential 10 percent equity, and a Cerebras partnership adding 750 MW for $10 billion, all pointing to a trend toward multi-vendor, mixed-architecture compute portfolios.
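For a sense of scale, the guided spending range can be converted into an implied year-over-year growth rate. The dollar figures below come from the article; the calculation itself is just a quick sanity check, not part of Meta's guidance:

```python
# Implied 2025 -> 2026 growth in Meta's AI infrastructure spend,
# using the figures cited in the article (in billions of dollars).
spend_2025 = 72.2                    # reported 2025 spend, $B
low_2026, high_2026 = 115.0, 135.0   # guided 2026 range, $B

growth_low = (low_2026 / spend_2025 - 1) * 100
growth_high = (high_2026 / spend_2025 - 1) * 100

print(f"Implied YoY growth: {growth_low:.0f}%-{growth_high:.0f}%")
# Prints: Implied YoY growth: 59%-87%
```

In other words, even the low end of the guided range implies roughly 60 percent growth over 2025.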

2026-02-19 (Thursday) · bfdc9dd0ebbb98407c461ecbb887e3cbda6a0fe0