Current AI operates mainly at the "observation" level; it lacks "intervention" and "simulation," leaving a structural gap across the three pillars of cognition. Existing models, built on transformer architectures, learn correlations from massive datasets to predict and generate, but they do not capture causal relationships. Even so-called "world models" still fit reality object by object, mistaking correlation for causation, a confusion that amplifies risk in healthcare, energy, and automated systems, where consequences can escalate from degraded performance to lethal failure.

Over roughly the past two decades, causal inference theory has matured: distinguishing correlation from causation, formalizing interventions, and supporting counterfactual reasoning, laying the groundwork for "causal world models." Unlike today's pattern-recognition systems, such models can reason about decisions involving events that have never occurred, such as climate adaptation or the design of complex biological systems. These problems involve multi-variable interactions (soil, genetics, water, climate) and demand explicit causal drivers rather than extrapolation from historical data.
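The distinction between observing a correlation and formalizing an intervention can be made concrete with a toy structural causal model. The model below is purely illustrative (not from the text): a hidden confounder Z drives both X and Y, so X and Y correlate strongly even though X has no causal effect on Y, and a do-style intervention on X exposes that.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural causal model: a confounder Z drives both X and Y.
# Z -> X, Z -> Y, and X has NO causal effect on Y.
def sample(do_x=None):
    z = rng.normal(size=n)
    # do_x simulates the intervention do(X = do_x): X is forced to a value,
    # severing its dependence on Z.
    x = z + rng.normal(scale=0.1, size=n) if do_x is None else np.full(n, do_x)
    y = 2 * z + rng.normal(scale=0.1, size=n)
    return x, y

# Observation: X and Y are strongly correlated.
x_obs, y_obs = sample()
print(np.corrcoef(x_obs, y_obs)[0, 1])   # close to 1

# Intervention do(X = 3): forcing X leaves Y's distribution unchanged,
# revealing that the observed correlation was not causal.
_, y_do = sample(do_x=3.0)
print(abs(y_do.mean() - y_obs.mean()))   # close to 0
```

A purely correlational learner trained on samples from `sample()` would predict that raising X raises Y; only a model that represents the intervention correctly answers the "what if we act" question.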

Efficiency and resource consumption mark another key divergence. Current approaches rely on brute-force search over trillions of candidate relationships, consuming vast amounts of data, energy, and capital; causal models are structurally parsimonious and could, in principle, deliver orders-of-magnitude efficiency gains. Globally, from São Paulo to Nairobi to Mumbai, the cost of delaying such models shows up as failed harvests and rising emissions. The overall trend suggests that without a shift from correlation-driven learning to causal reasoning, the current AI trajectory faces a high probability of diminishing returns or outright systemic failure.
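The "trillions of relationships" scale can be sketched with a back-of-envelope count. The numbers below are illustrative assumptions, not figures from the text: a correlational learner must in principle weigh every pairwise relationship among n variables, while a sparse causal graph keeps only the edges that actually drive the system (here assumed to average k parents per node).

```python
# Back-of-envelope sketch of the parsimony claim (illustrative numbers).
def pairwise_relations(n):
    # Every unordered pair of variables is a candidate correlation.
    return n * (n - 1) // 2

def sparse_causal_edges(n, k=3):
    # A sparse causal graph with ~k direct causes (parents) per variable.
    return n * k

n = 1_000_000  # hypothetical variable count for a large-scale model
print(f"{pairwise_relations(n):,}")    # ~5e11 candidate correlations
print(f"{sparse_causal_edges(n):,}")   # ~3e6 causal edges
```

Under these assumptions the causal structure is about five orders of magnitude smaller than the correlational search space, which is the intuition behind the claimed efficiency gains.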

2026-03-18 (Wednesday) · ef4d7b2ee57564ae593c38e39624e06228e4a94a