On April 30, 2026, during cross-examination in federal court by OpenAI attorney William Savitt, Elon Musk defined distillation as using one AI model to train another. Asked whether xAI had done this with OpenAI models, he said that, generally, all AI companies do; pressed with “Was that a yes?”, he answered “Partly.” He later added that using other AIs to validate one’s own AI is standard practice, reinforcing the reading that he never clearly denied OpenAI models may have been used to train xAI’s systems.
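Musk's definition can be made concrete with a minimal sketch of knowledge distillation as the term is ordinarily used in machine learning: a student model is trained to match a teacher model's softened output distribution. This is an illustrative, stdlib-only sketch of the standard technique; it makes no claim about any company's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution.
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened outputs.

    Minimizing this loss trains the student to imitate the teacher,
    which is the sense of "distillation" given in the testimony:
    one model's outputs supervising another model's training.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Identical logits give zero loss; any mismatch gives a positive loss.
teacher = [2.0, 1.0, 0.1]
student = [1.0, 1.0, 1.0]
assert abs(distillation_loss(teacher, teacher)) < 1e-12
assert distillation_loss(teacher, student) > 0.0
```

In a real training loop the teacher's logits come from querying the teacher model on each input, and the student's weights are updated by gradient descent on this loss (often mixed with a standard cross-entropy term on ground-truth labels).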

Neither OpenAI nor xAI responded immediately. In a February 2026 memo to a House committee, OpenAI said it had taken steps to protect and harden its models against distillation, framing China's potential replication and repackaging of U.S. innovation as a major risk for autocratic AI advances. In April 2026, White House Office of Science and Technology Policy director Michael Kratsios said the office would share information with U.S. AI firms about foreign distillation, and reiterated on X that the government is committed to a free and fair AI ecosystem.

The context suggests U.S. AI competition is shifting from selective cross-testing toward tighter restrictions. Although cross-model validation for progress and safety remains common, access is narrowing: in August 2025, Anthropic cut off OpenAI's access to its Claude coding models, and later also restricted xAI. Meanwhile, during multi-day cross-examination, Savitt used emails and texts dating back to 2017 to press Musk on whether he had constrained funding, poached key researchers, and sought control of OpenAI, showing that model reuse, competitive conduct, and governance risk are converging into the central legal dispute.

2026-05-03 (Sunday) · da1894aeeadde668014a045b0bfc6caf406f271f