On April 13, 2026, more than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant-rights groups—including the ACLU, EPIC, and Fight for the Future—called on Meta to halt the planned face-recognition feature for its Ray-Ban and Oakley smart glasses, internally code-named Name Tag. The New York Times reported internal documents indicating that Meta considered launching amid a “dynamic political environment,” on the reasoning that public scrutiny would be diluted. According to those reports, Name Tag would run through the glasses’ built-in AI assistant and let the wearer identify people in view in real time. Meta is said to have weighed two versions: a narrower one limited to people already connected to the wearer on Meta platforms, and a broader one covering anyone with a public account on Meta services such as Instagram.
The coalition argues that bystanders in public spaces cannot meaningfully consent to being identified, and it therefore asks Meta to remove the feature before launch. It also seeks disclosure of any known use of Meta wearables in stalking, harassment, or domestic-violence incidents, and of any past or ongoing discussions with federal law-enforcement agencies such as ICE and Customs and Border Protection about using Meta wearables or their data. It further urges consultation with civil society and independent privacy experts before any consumer deployment of biometric identification. EPIC separately asked the FTC and state enforcers to block the rollout, warning that layering real-time recognition onto existing covert-recording risks, such as the small recording-indicator light that can be concealed, could allow people to be identified at protests, places of worship, support groups, and clinics, further eroding public anonymity.
Meta’s track record shows sustained and rising legal and regulatory pressure. In 2021, after years of litigation, it shut down Facebook’s face-recognition system used for automatic photo tagging and deleted more than one billion face templates, presenting the move as a retreat from broad identification while noting that regulators had not set clear rules. It subsequently paid roughly $2 billion in total to settle biometric-privacy cases in Illinois and Texas, and in 2019 it paid $5 billion to the FTC in a separate privacy case that included face-recognition claims. In March 2026, a Los Angeles jury found that Meta and YouTube had negligently designed their platforms and awarded $6 million in damages, and in April 2026 the Massachusetts Supreme Judicial Court ruled that Section 230 does not bar a consumer-protection lawsuit over youth addiction. The overall trajectory points to tightening scrutiny of, and mounting legal exposure around, Meta’s product-safety and privacy design decisions.