The analysis by WIRED and Indicator identified sexualized AI deepfake abuse in schools as an expanding global problem: nearly 90 schools and more than 600 students have been publicly reported as affected, with incidents tracked in at least 28 countries since 2023. Nearly 30 cases occurred in North America, more than 10 in South America, more than 20 in Europe, and around 12 across Australia and East Asia. The largest single public case involved more than 60 alleged victims. Broader estimates suggest the true scale is far higher: a UNICEF survey found that sexualized deepfakes had been made of 1.2 million children in a single year, one in five young people in Spain reported being victimized, one in eight teens said they knew someone who had been targeted, and 15% of students in a 2024 CDT survey were aware of AI nude cases linked to their school.

Most cases point to adolescent boys as the main perpetrators; images are typically taken from social media and then rapidly shared through Instagram, Snapchat, messaging apps, or peer networks. The resulting images are classified as CSAM (child sexual abuse material) and cause severe emotional harm, including humiliation, anxiety, hopelessness, thoughts of leaving school, reduced appetite, fear of lifelong exposure, and the long-term pressure of monitoring the internet for reposts. Technical barriers have fallen rapidly: nudification apps, bots, and websites can generate realistic explicit images and videos in a few clicks with no technical background, and this combination of scale, speed, and accessibility lets many more actors, including teenagers, produce convincing abuse material at low cost.

School and law-enforcement responses remain inconsistent: in some incidents, reports to police were delayed for three days, handled inadequately, or drew no immediate consequences, while others led to prosecution, such as two Pennsylvania teens who, after admitting guilt in juvenile court to CSAM-related charges for creating nude images and videos of other minors, were sentenced to 60 hours of community service. Victims and parents often consider official action insufficient, while students sometimes respond faster, staging walkouts in support of victims, protesting alleged perpetrators, and pushing for online-safety training. Student advocacy also helped drive the U.S. Take It Down Act, which requires non-consensual intimate images to be removed within 48 hours. The UK and EU are moving to ban nudification services, Australia's eSafety regulator has acted against some providers, and groups including Thorn and UNICEF stress that how adults respond to a disclosure strongly shapes a victim's recovery and whether children come forward again.
