
The document argues that the AI era requires a new industrial policy and divides its agenda into two themes: an open economy and a resilient society. The first theme focuses on spreading economic gains more broadly: giving workers a voice in AI decisions, supporting entrepreneurs with micro-grants or revenue-based financing, treating AI as infrastructure for economic participation and expanding access to it, updating the tax system in response to a likely rise in corporate profits and capital gains alongside a falling share of labor income and payroll taxes, and exploring a universal wealth fund. It also proposes accelerating grid build-out, designing an efficiency dividend to raise welfare and care subsidies, and running 32-hour/four-day workweek pilots with no pay cuts. Safety-net design would use employment and wage-impact indicators to trigger automatic expansion, then wind down as risk recedes.

The second theme concerns post-deployment resilience: building risk monitoring, threat modeling, red-teaming, and rapid-mitigation capacity, and using procurement, standards, insurance, and advance-purchase mechanisms to foster a market for safety; establishing a verifiable, traceable AI trust chain and audit regime, assigning frontier-risk audits to institutions such as CAISI, and applying heightened review only to a small number of high-risk models. It also covers rules for government AI use, transparent specifications, incident and near-miss reporting, and cross-border information-sharing networks, with the goal of speeding coordinated response in a crisis. The document frames its policy proposals as a starting point, to be expanded through a feedback mailbox, newly established research fellowships (up to $100,000, plus up to $1 million in API credits), and a workshop convening in Washington, DC in May.

OpenAI’s April 2026 report, *Industrial Policy for the Intelligence Age: Ideas to Keep People First*, frames AI’s arc as moving from supporting minute-level tasks to hour-level tasks and then toward systems that can handle work that currently takes months, arguing that the shift toward superintelligence is already underway. It sees major upside—higher productivity, cheaper essential goods, new forms of work and entrepreneurship—while warning that the transition also increases risks: job and industry disruption, malicious misuse, misalignment and loss of control, weakened democratic institutions, and concentrated power and wealth. It defines three policy aims: broadly sharing prosperity, mitigating risks, and ensuring broad AI access and human agency.

The report argues for a new industrial policy and organizes it into two sections: building an open economy and building a resilient society. In the open economy track, it proposes worker voice in deployment decisions, support for AI-first entrepreneurs, universal baseline access to foundational models, updated taxation as economic activity shifts toward corporate profits and capital gains and away from labor income, and a public wealth fund so growth gains are distributed to citizens. It also proposes faster grid expansion, converting AI efficiency gains into welfare and care supports, and 32-hour/four-day workweek pilots with no pay cuts. Adaptive safety systems should scale and shrink automatically based on real-time labor-disruption metrics, and benefits should be portable across jobs and career states.
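The report does not specify how the adaptive safety mechanism would be implemented. As a minimal sketch, the scale-up/scale-down idea can be modeled as a threshold trigger with a hysteresis band so benefits expand quickly but only contract once the metric clearly recedes; the class name, field names, and threshold values below are illustrative assumptions, not drawn from the report.

```python
# Hypothetical sketch of a metric-triggered adaptive safety net.
# All thresholds and names are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class SafetyNetPolicy:
    expand_at: float = 0.05    # displacement rate that scales benefits up
    contract_at: float = 0.03  # lower release threshold (hysteresis avoids flapping)
    expanded: bool = False

    def update(self, displacement_rate: float) -> str:
        """Return the benefit tier implied by the latest labor-disruption metric."""
        if displacement_rate >= self.expand_at:
            self.expanded = True
        elif displacement_rate <= self.contract_at:
            self.expanded = False
        return "expanded" if self.expanded else "baseline"


policy = SafetyNetPolicy()
print(policy.update(0.06))  # crosses trigger -> "expanded"
print(policy.update(0.04))  # between thresholds -> stays "expanded"
print(policy.update(0.02))  # below release threshold -> "baseline"
```

The gap between the two thresholds matters: a single cut-off would toggle benefits on and off as a noisy metric oscillates around it, while the hysteresis band keeps support in place until disruption has clearly subsided.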

On resilience, the report calls for post-deployment risk systems: threat modeling, red teaming, rapid mitigation for cyber and biological risks, and safety procurement ecosystems that keep protections evolving as threats evolve. It emphasizes trust and accountability infrastructure, stronger auditing through institutions like CAISI, narrowly targeted controls on a small set of the most dangerous frontier models, legal standards for government AI use, public specification and evaluation processes, near-miss reporting, and international risk information coordination. OpenAI presents this as a starting framework rather than final doctrine, adding feedback channels, fellowship and research grants of up to $100,000 plus up to $1 million in API credits, and a workshop launching in May in Washington, DC.

2026-04-16 (Thursday) · 577239e8320e60df53a70d3d2203ca64c87b593c