Elon Musk is pitching orbital AI data centres as the core rationale for a $1.25tn plan to merge SpaceX with the lossmaking AI start-up xAI, with an IPO for the combined company targeted for this year. He argues that fleets of solar-powered satellites, cooled by the vacuum of space and linked back to Earth via networks such as Starlink, could become the cheapest way to generate AI compute within three years, claiming it will take “36 months, probably closer to 30 months” for space to become the most economically compelling location for AI infrastructure. Industry executives and investors say that timeline is aggressive, but the concept is becoming more plausible if launch costs keep falling while demand for AI compute keeps rising, and discussion is shifting from its science-fiction roots (a 1941 Isaac Asimov story) to near-term prototyping.
Musk is not alone: Jeff Bezos has predicted “giant gigawatt” data centres in space, while Google and Planet plan a “learning mission” for “Project Suncatcher” by early 2027 that would send two prototype satellites carrying tensor processing unit (TPU) AI chips into low Earth orbit. Start-ups are targeting faster milestones: Starcloud and Aetherflux aim to launch GPUs into space within 12 months, and Starcloud’s co-founder asserts the technology will be proven within two to three years, “probably this year,” alongside the longer-horizon claim that in 10 years “all compute will be built in space.” Proponents argue orbital deployment could scale at “industrial manufacturing pace” rather than “real estate pace,” potentially bypassing terrestrial data-centre bottlenecks such as planning and power-supply delays by tapping effectively abundant solar energy and cold operating conditions.
The key constraints are economic and engineering. A Google research paper from November estimates that space-based data centres become “roughly comparable” to terrestrial facilities on a per kilowatt-year basis only if launch prices drop below $200 per kg, versus at least $1,000/kg today, and it projects that threshold will not be reached until the mid-2030s. Additional unknowns include whether GPU-class chips can be adequately shielded from radiation and cooled in vacuum at scale, and whether AI’s trajectory will continue to require ever more compute, since the business case depends on sustained growth in demand. Reusable rockets such as SpaceX’s Starship and Blue Origin’s New Glenn are cited as potential cost reducers, and SpaceX has sought US FCC permission to launch up to 1mn satellites for AI. But sceptics point to practical payload realities (server racks are heavy) and to architecture decisions still unresolved even among supporters, implying that even if the economics pencil out, the technology may not be fully ready.
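The launch-cost threshold can be made concrete with a rough amortisation sketch. Only the $1,000/kg and $200/kg launch prices come from the article; the mass-per-kilowatt and on-orbit lifetime figures below are illustrative assumptions, not numbers from the Google paper:

```python
# Back-of-envelope: amortised launch cost of putting compute in orbit,
# expressed in $ per kilowatt-year so it can be compared with terrestrial
# data-centre costs. kg_per_kw and lifetime_years are assumed values.

def launch_cost_per_kw_year(price_per_kg: float,
                            kg_per_kw: float = 10.0,      # assumed satellite mass per kW of compute
                            lifetime_years: float = 5.0   # assumed on-orbit service life
                            ) -> float:
    """Launch cost amortised over the satellite's life, in $ per kW-year."""
    return price_per_kg * kg_per_kw / lifetime_years

today = launch_cost_per_kw_year(1_000)  # at today's ~$1,000/kg floor
target = launch_cost_per_kw_year(200)   # at the paper's $200/kg threshold
print(f"today:  ${today:,.0f} per kW-year")   # $2,000 per kW-year
print(f"target: ${target:,.0f} per kW-year")  # $400 per kW-year
```

Under these assumptions the move from $1,000/kg to $200/kg cuts the launch component of cost fivefold, which is why reusable rockets such as Starship and New Glenn matter so much to the business case; heavier payloads per kilowatt or shorter satellite lifetimes would push the break-even threshold even lower.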