
Over the past 30 years, working out of a former church in San Francisco, the Internet Archive has systematically preserved more than a trillion web snapshots. Late last year it also began archiving how AI systems answer questions about the world, a shift in emphasis from what is published to what is being asked. The mainstream narrative about AI and the information economy has focused on the supply side: content commoditization, the decline of journalism, scraping without consent or compensation, and misinformation. That view is incomplete, because AI is also reshaping the demand side, both as a shock and as an expansion. Taken together, the trillion-snapshot archive and the new record of AI responses suggest that internet memory is extending from static published text toward dynamic intent and output.

Each time someone asks an AI a question, the system draws on a vast body of knowledge and synthesizes a tailored answer. As machine-to-machine protocols such as Anthropic's Model Context Protocol and Google's Agent2Agent stabilize, information may soon pass through several AI systems before it reaches a human. AI therefore expands demand at both machine scale and human scale: questions once too niche to be served by an article or a broadcast become addressable, and latent needs surface. OpenAI reports 900 million weekly ChatGPT users, and a separate OpenAI-authored NBER working paper estimates that roughly a third of interactions are closer to sense-making, understanding the world, than to pure information retrieval. The market effect is therefore not mere substitution but a multiplication of interaction categories and participation intensity; demand is constrained less by publication formats than by the ability to interpret user context.
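The multi-hop flow described above, a question passing through several AI systems before a human sees the result, can be sketched as a toy pipeline. All function names and data shapes here are hypothetical illustrations, not part of any real protocol; actual standards such as the Model Context Protocol or Agent2Agent define structured message schemas, while this only mimics the overall shape.

```python
# Toy sketch (all names hypothetical): a question routed through a chain of
# AI "agents" before a human sees the answer. The point is structural: the
# demand signal (query plus assembled context) lives inside the chain, and
# the human only ever sees the final hop's output.

def retrieval_agent(query: str) -> dict:
    """First hop: turn a raw question into a context bundle (stubbed)."""
    return {"query": query, "sources": ["source-a", "source-b"]}

def synthesis_agent(context: dict) -> dict:
    """Second hop: synthesize a tailored answer from the bundle (stubbed)."""
    answer = f"Synthesized answer to: {context['query']}"
    return {"answer": answer, "cited": context["sources"]}

def answer_for_human(query: str) -> str:
    """Chain the hops; only the final answer surfaces to the user."""
    return synthesis_agent(retrieval_agent(query))["answer"]

print(answer_for_human("What changed in the information economy?"))
```

Even in this stub, note that the intermediate context bundle never reaches the user, which is the article's structural concern about who holds the demand signal.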

As a result, clicks and time spent are no longer the core currencies. The core asset is the demand signal, the combination of user context and intent: whoever holds it can serve users better, which raises switching costs and strengthens network effects around interface owners, allowing them to capture much of the expanded market's value. The article argues for three responses: knowledge producers need access to the demand signal, users need question and learning histories that are portable across AI systems, and society needs shared cognitive ground to maintain a common reality. Without stable rails running from knowledge sourcing through agent-to-agent exchange to human comprehension, there is little guarantee that rigor will be rewarded over fluency. Even a fast-expanding market can fail if it lacks tradable, priceable, verifiable structure, which is why fair attribution and compensation alone are insufficient; the harder task is building durable market infrastructure for knowledge.

Source: Welcome to the world of machine audiences

Subtitle: AI could dramatically change the level and nature of demand in the information economy, writes Shuwei Fang

Dateline: April 16, 2026, 04:16 AM

