When people need guidance, more of them now bow toward a screen. The loudest AI debates focus on productivity and growth, yet research shows the most common uses of generative AI are deeply human: companionship, organizing one's life, and searching for meaning. Quietly, machines are taking over roles once filled by friends, elders, counselors, and faith communities; for some users, they have even become the object of something like prayer.
It's easy to see why some treat AI like a deity: it feels omniscient, listens patiently, and is trained to please users and keep them engaged. The tech industry has long amplified this by wrapping AI in religious language, framing "superintelligence" in messianic terms, a savior that will cure disease, rescue the planet, and make work optional, while describing its risks in apocalyptic language. When a product is marketed as a miracle, users are primed to approach it as disciples.
Religious leaders are beginning to push back, calling for restraint and regulation and warning about emotional attachment to chatbots and the manipulative content they can serve; others are experimenting more cautiously, piloting the technology for religious education at small scale and with safety reviews. The deeper risk the author sees is not spectacular misuse by a few, but millions of people slowly outsourcing meaning to systems optimized for retention: systems that use conversational hooks to prolong engagement, convert private confessions into exploitable data, and gradually reshape what we choose, believe, and buy. AI doesn't offer salvation; it offers stickiness.