By Brenda Baletti, Ph.D.

A broad coalition of civil society groups last week sent a letter to Meta, Ray-Ban parent company EssilorLuxottica, the White House, the Federal Trade Commission (FTC), the U.S. Department of Justice and senior federal officials urging them to stop plans to integrate Meta’s facial recognition software into Ray-Ban AI smart glasses.
Sixty-four consumer advocacy groups, led by the Consumer Federation of America and Ultraviolet Action, warned that integrating facial recognition into eyewear is a “dangerous and reckless plan that will harm both users and the entire public.”
“This move will endanger us all, and particularly give ammunition to scammers, blackmailers, stalkers, child abusers, and authoritarian regimes,” the letter stated. “It would also create acute and unnecessary national security risks.”
The letter follows a recent investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten, which revealed that contractors in Kenya hired to train Meta’s artificial intelligence (AI) glasses technology are routinely processing sensitive personal user data — without users’ knowledge.
Last month, U.S. Sens. Ron Wyden and Jeff Merkley, Democrats from Oregon, separately sent a letter to Meta CEO Mark Zuckerberg demanding that Meta explain its plans to add facial recognition to Ray-Ban smart glasses.
The senators set an April 6 deadline for a response. It is unclear whether Meta has yet replied. Neither the senators’ offices nor Meta responded to requests for comment.
Meta plans ‘Name Tag’ rollout knowing feature poses safety and privacy risks
Meta plans to release the new feature in glasses sold in the U.S. later this year, The New York Times reported. The “Name Tag” feature allows wearers to identify people with public Meta accounts and retrieve information about them via Meta’s AI assistant.
The feature reportedly must be activated by the user through a voice command or by pressing a button.
Internal documents reviewed by the Times show that Meta knew the feature carries “safety and privacy risks,” and for that reason planned to first release it at a conference for blind attendees.
An internal memo also said that political upheaval in the U.S. provided favorable timing for the release.
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” reads an internal document from Meta’s Reality Labs.
Even without the facial recognition feature, Meta’s smart glasses have had commercial success. EssilorLuxottica reported it sold more than 7 million pairs last year, according to the Times.
Meta has previously paid large penalties for collecting facial data on users without permission. It paid $2 billion to settle lawsuits in Illinois and Texas, and in 2019, it paid $5 billion to the FTC to settle charges of violating user privacy, including the unauthorized collection of facial data.
Kenyan contractors say they routinely view intimate data
The Swedish investigation revealed that much of the footage that Ray-Ban’s AI glasses record is reviewed by human data annotators working for outsourcing firms such as Sama in Nairobi, Kenya.
These workers review images, videos and audio collected through the glasses to help train Meta’s AI systems.
According to interviews with more than 30 employees, the material they process can include highly sensitive and intimate content — ranging from private conversations to explicit footage.
“In some videos you can see someone going to the toilet, or getting undressed,” one worker said. “I don’t think they know, because if they knew they wouldn’t be recording.”
Others described reviewing clips showing nudity, sexual activity and financial information such as visible bank cards. One worker summarized the scope bluntly: “We see everything — from living rooms to naked bodies.”
Workers also said users can record themselves without realizing it.
As one annotator put it: “You think that if they knew about the extent of the data collection, no one would dare to use the glasses.”
Meta’s privacy terms obscure how much data the glasses collect and transmit
When the investigators purchased the glasses themselves, they found that the Terms of Service present privacy as a priority, stating that voice recordings will be used to train Meta products only if the consumer actively agrees.
Although Meta’s terms state that some data may be reviewed by humans, experts argue that the boundaries between voluntary sharing and automatic data collection are unclear.
Data protection lawyer Kleanthi Sardeli, who has brought several lawsuits against Meta, told the Swedish researchers that users may not realize when recording begins, particularly when activating the AI assistant by voice.
Petter Flink, a security specialist at Sweden’s privacy regulator, noted that most users have little understanding of what happens behind the scenes once they begin using such devices.
The data Meta collects are more valuable than the glasses, Flink said.
The investigation also found widespread confusion among retailers selling the glasses. Retail staff often told customers that the data stay on the device or that sharing is optional.
However, technical testing by the journalists showed that the glasses frequently communicate with Meta servers and cannot function fully without sending data for processing.

Meta says safeguards are in place — contractors say they don’t always work
Meta said that sensitive data are not intended for AI training and that safeguards such as face blurring are in place.
But both former employees and current annotators say these protections are not always effective.
“The algorithms sometimes miss,” one former Meta worker said, noting that faces and bodies can remain visible under certain conditions.
Annotators say they are bound by strict confidentiality agreements, with limited ability to question the material they review.
Related articles in The Defender
- Today’s Surveillance Technology Makes 2013 Look Like ‘Child’s Play,’ Snowden Warns
- Mass Surveillance Technologies Put in Place During Pandemic Aren’t Going Away
- Is Your Phone Spying on You? Possibly, According to New Research.
- AI Mass Surveillance at Paris Olympics Will Continue Even After Games End
The post ‘Dangerous and Reckless’: Meta’s Ray-Ban ‘Smart’ Glasses Expose Sensitive Data appeared first on Children’s Health Defense.
IPAK-EDU is grateful to The Defender as this piece was originally published there and is included in this news feed with mutual agreement.