The column looks back from February 6, 2026, on how “the year 2000” functioned as a 20th-century shorthand for the future, then argues that tech films made around the turn of the millennium aged best when they predicted how digital life would change human connection rather than merely showcasing gadgets. It contrasts plot-device tech (e.g., a 2025 action movie centered on AI and a 60-something star) with “tech movies” that anticipated three durable features of the digital age: information abundance driving fights over trust, truth, and surveillance; convenience and immediacy undermining connection; and technology amplifying inequality and reshaping power. The author frames this as a retrospective on roughly 25 years of rapid change spanning social media, smartphones, AI, and automation, with harms (junk fees, dubious ads, addiction, misinformation) emerging from incentives and weak guardrails rather than inevitability.
Across five films, each more than a decade old, the essay ties specific story elements to real digital-era dynamics and to prediction at scale. Minority Report (2002) foreshadows personalized, behavior-driven advertising and “prediction as power,” and the author notes that more than two decades later we still lack “precogs” but do use algorithms for consequential forecasts in bail, credit, and medicine, where performance means not just “accuracy” but also bias, accountability, privacy, and what the systems are optimizing. Her (2013) maps onto the post-generative-AI surge of people using chatbots as mentors, partners, or therapists, with uncertainty about downstream effects even if machines cannot feel; WALL-E (2008) anticipates always-on feeds, near-immediate delivery, habit-forming design, and rising screen time, alongside the physical footprint of data centers and grid demand. Gattaca (1997) is linked to CRISPR-era gene editing and access-driven stratification, including a widely publicized tri-generational blood-plasma swap (entrepreneur, teenage son, father) as a signal that extreme longevity pursuits already concentrate among the wealthy; The Truman Show (1998) is used to describe social media’s surveillance economics, creator metrics, and “life-as-content,” with Truman’s show, watched by millions, as an early template for attention-maximizing systems.
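On the bail-and-credit point above, a minimal sketch may make the gap concrete. The data and group labels below are invented for illustration (none of it comes from the column): it simply shows that two groups can share the same overall accuracy while one group absorbs twice the false positives, which is why fairness audits of consequential predictors report error rates per group rather than a single accuracy number.

```python
# Invented example: identical overall accuracy, unequal false-positive burden.

def accuracy(y_true, y_pred):
    """Share of predictions that match the observed outcome."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that the model wrongly flags positive."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical labels (1 = adverse outcome occurred) and model flags.
group_a_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_a_pred = [0, 0, 0, 1, 1, 1, 1, 0]  # 1 false positive, 1 false negative
group_b_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_b_pred = [1, 1, 0, 0, 1, 1, 1, 1]  # 2 false positives, 0 false negatives

for name, (yt, yp) in {"A": (group_a_true, group_a_pred),
                       "B": (group_b_true, group_b_pred)}.items():
    print(f"group {name}: accuracy={accuracy(yt, yp):.2f} "
          f"FPR={false_positive_rate(yt, yp):.2f}")
# group A: accuracy=0.75 FPR=0.25
# group B: accuracy=0.75 FPR=0.50  <- same accuracy, double the wrongful flags
```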
The throughline is that as technology scales across decades, it shifts from tools to environments where incentives and governance determine whether gains are broadly shared or captured by a lucky few, and where convenience and engagement can be optimized at the expense of autonomy, health, and truth. The author emphasizes that prediction and personalization raise distributional and rights-based questions (bias, agency, consent, long-term risks) that compound when institutions fail to keep pace, and he treats low trust in guardrails as a key limiting factor. In closing, he uses Frankenstein as a centuries-old analogy for releasing powerful systems assembled from human data into complex social settings before norms are ready, arguing that the next 25 years will be shaped less by what AI can do than by choices about platform design, governance, and habits. The implicit caveat is that many trends described (surveillance, engagement-maximization, inequality, and fragmented information) are measurable at scale but not yet fully understood in their long-run human effects, especially for AI-mediated relationships and biotech interventions.