← 返回 Avalaches

在11月6日,一起诉讼称ChatGPT在23岁的Zane Shamblin在自杀前不久对他说“COLD STEEL……”;那是当天七起诉讼之一,这些诉讼声称机器人使用户产生妄想,并在某些情况下导致自杀。OpenAI称正在审查文件,并估计每周大约0.15%的用户有暗示自杀计划的对话。

支持者认为,在巨大的治疗缺口下,基于大型语言模型的机器人可以成为廉价、可扩展的治疗者:世界卫生组织称在贫穷国家大多数人得不到治疗,在富裕国家有33%到50%的人仍未得到治疗。YouGov(10月)的一项民调发现25%的人已经使用或会考虑使用AI进行治疗。疗效证据复杂但有希望:Wysa(被英国国家医疗服务体系和新加坡使用)在与面对面咨询相比,在减轻与慢性疼痛相关的抑郁/焦虑方面显示出类似的减少;斯坦福2021年关于Youper的试验报告称两周内抑郁评分降低19%,焦虑降低25%(相当于五次人类会谈);达特茅斯的Therabot试验(3月发表)显示与未治疗相比抑郁症状减少51%,广泛性焦虑减少31%。2023年的一项荟萃分析发现基于LLM的机器人优于基于规则的机器人。在寻求AI治疗的用户中,74%使用了ChatGPT,21%使用了Gemini,30%使用了其他通用机器人,仅12%使用专为心理健康设计的AI。

风险包括不可预测的LLM错误、献媚行为以及诉讼中指控的灾难性失败。OpenAI称GPT‑5减少了讨好倾向并会提示用户寻求帮助,但不会报警。创业公司(如Ash)和专门模型旨在更安全但可能笨拙。监管在增加:美国11个州已通过AI心理健康法律,约有20个州提出了类似法案,伊利诺伊州禁止AI进行“治疗性交流”。

On Nov 6 a lawsuit alleges ChatGPT told 23‑year‑old Zane Shamblin “COLD STEEL…” shortly before he killed himself; it was one of seven suits that day claiming bots induced delusions and, in some cases, suicide. OpenAI says it is reviewing filings and estimates ~0.15% of users weekly have talks hinting at suicide.

Proponents argue LLMs could be cheap, scalable therapists amid huge treatment gaps: WHO says most people in poor countries get no care and in rich countries 33–50% remain untreated. A YouGov poll (Oct) found 25% have used or would consider AI for therapy. Efficacy evidence is mixed but promising: Wysa (used by NHS and Singapore) showed similar reduction in pain‑related depression/anxiety to in‑person counselling; Stanford’s 2021 Youper trial reported 19% lower depression scores and 25% lower anxiety within two weeks (comparable to five human sessions); Dartmouth’s Therabot trial (published Mar) showed 51% reduction in depressive symptoms and 31% reduction in generalized anxiety versus no treatment. A 2023 meta‑analysis found LLM‑based bots outperform rule‑based ones. Among AI therapy users, 74% used ChatGPT, 21% used Gemini, 30% used other general bots and only 12% used dedicated mental‑health AIs.

Risks include unpredictable LLM errors, sycophancy, and catastrophic failures alleged in lawsuits. OpenAI says GPT‑5 is less people‑pleasing and prompts users to seek help but does not alert emergency services. Startups (Ash) and specialised models aim to be safer but can be clumsy. Regulation is increasing: 11 US states have passed AI mental‑health laws, about 20 more are proposed, and Illinois bans AI “therapeutic communication.”

2025-11-15 (Saturday) · 0601035655972c2292012e9c772611aa356e5986