On 2026-03-11 Meta announced four new chips in its MTIA (Meta Training and Inference Accelerators) line to run generative AI features and content ranking/recommendation systems. MTIA 300 is already in production, while the other three chips (MTIA 400, 450, and 500) are expected to ship between early and late 2027, creating a clear timeline from "in production now" to "rolling releases about a year out."
The chips are co-developed by Meta and Broadcom on the open-source RISC-V architecture and fabricated by TSMC. YJ Song argues that AI models evolve faster than traditional chip development cycles, so Meta is pursuing an iterative roadmap: each generation builds on the last via modular chiplets, incorporating the latest workload insights and hardware technologies, with the explicit goal of shortening the interval between generations.
By role, MTIA 300 is primarily for training the ranking and recommendation algorithms that serve "hundreds of millions" of daily users. MTIA 400 targets inference; it is described as competitive with leading commercial products, has been tested, and is expected in data centers soon. MTIA 450 is slated for early 2027 with twice the high-bandwidth memory of MTIA 400, and MTIA 500 is due later in 2027 with still more memory than MTIA 450 plus "low-precision data" innovations. Even with this in-house effort, which Meta acknowledges is expensive and complex, the company still expects to buy most of its AI hardware externally, reflected in multibillion-dollar Nvidia and AMD deals and an agreement to rent Google-made chips.
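To see why both extra high-bandwidth memory and low-precision data formats matter for inference chips like MTIA 450 and 500, a back-of-envelope sketch helps: the memory needed just to hold a model's weights scales with parameter count times bytes per parameter, so halving the precision (e.g., 16-bit to 8-bit floats) halves the footprint. The function and the 70B-parameter figure below are illustrative assumptions, not details from Meta's announcement.

```python
# Back-of-envelope: weight-memory footprint at different numeric precisions.
# All figures are generic/illustrative, not Meta's.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GB (decimal)."""
    return num_params * bytes_per_param / 1e9

params = 70e9  # a hypothetical 70B-parameter model

fp16_gb = model_memory_gb(params, 2)  # 16-bit floats: 2 bytes/param
fp8_gb = model_memory_gb(params, 1)   # 8-bit floats: 1 byte/param

print(f"FP16 weights: {fp16_gb:.0f} GB")  # 140 GB
print(f"FP8 weights:  {fp8_gb:.0f} GB")   # 70 GB
```

The same arithmetic explains the appeal of doubling HBM capacity: either change, twice the memory or half the precision, roughly doubles the model size (or batch size) a single accelerator can serve.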