Meta Explained Its Smart Glasses AI Privacy Policies to Me. I'm Still Worried

I wear Meta's Ray-Bans off and on when I travel to snap photos, take phone calls and listen to music. The technology is fascinating, fun and convenient.

I also knew that Meta's privacy policies might be a concern, but now I'm more worried about them than ever before.

My concerns ramped up after a number of friends and colleagues shared a report about Meta's third-party contractors in Kenya, who were able to view sensitive information recorded on Meta glasses -- including photos of banking records, nudity and sexual encounters. That report has since resulted in a class action lawsuit.

What boundaries had Meta set up to protect people's privacy? I pored over Meta's terms of service online and in the Meta AI app, but that was no help.

I wanted some answers. So I contacted Meta's comms team to get clarity.

But even after getting the official answer from Meta about where the lines are drawn, I'm still frustrated and uncertain. While many people are rightly worried about someone secretly recording them with smart glasses, there's also another wrinkle: When are these glasses potentially sharing what you've been recording with others?

Here's the short answer: Do Meta's glasses have third-party contractors potentially looking over your data? Yes, sometimes -- if you're using AI services. If you're not using those AI services, then according to Meta, you should be OK. But even then, I don't know where that "AI services" wall gets clearly drawn. And that's one of my biggest concerns.

Meta has a long history of problems with both privacy and trust, stretching back over the last decade to the Cambridge Analytica scandal. Those issues haven't come up with Meta's VR headsets, which don't have many data-collecting AI services, but the company's smart glasses do. And those services will keep growing and becoming more capable over the next few years. Meta's popular Ray-Ban glasses -- more than 7 million pairs were sold last year -- are the frontrunners in a whole wave of camera-enabled AI glasses and wearables coming from a number of companies, with Google entering the mix later this year.

If you're interested in Meta's glasses, which, as a technical achievement, are the best-quality camera and audio-enabled smart glasses at the moment, you need to keep these concerns in mind. And as smart glasses pivot to always-on AI-enabled devices, we're only going to run into more questions about how comfortable you might feel leaning on their services -- and what all the cloud-based AI tech companies need to do to make these policies clearer.

Below, I'm going to share Meta's responses at length so you can understand my reasoning -- and also make your own assessment about the risks.

Meta's glasses pair with a Meta AI phone app. Be aware that your AI-based requests could be seen by third-party contractors.

Scott Stein/CNET

Using AI services with Meta Ray-Bans

If you're using AI -- for instance, to analyze something you see or to get a translation -- then third-party contractors might be looking at what you're recording.

This is what the company told me: "Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device."

But then there's this: "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

The assumption you can make from this is that any time you're using Meta's AI services, Meta may very well be using third-party contractors to review the information.

While Meta promises that the information is properly filtered to remove sensitive data or details, that worrisome news report said contractors in Kenya were annotating footage taken from glasses that had sensitive images that were clearly visible.

That has me especially concerned about what happens when people use Meta AI for assistive purposes: namely, as a way to "see" when you can't with your own eyes. Would looking at personal documents and reading them back be a risky thing to do? Since Meta hasn't properly introduced any sort of encrypted, private AI features on its glasses, it could be.

Meta does say this about privacy protections: "We have strict policies and guardrails in place that intentionally limit what information contractors see." 

But again, I don't actually know what those strict policies or guardrails are.

"We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed," Meta added. 

This doesn't help clarify any of the specifics. I'm going on trust here, which isn't ideal at all.

I have to assume that anything done via cloud AI services, like the ones Meta is using, could be seen to some degree by third-party contractors. And you should too.

Meta's Ray-Ban smart glasses can take photos and videos, which, according to Meta, are seen by third parties only if you're using AI-based services.

Joanna Desmond-Stein/CNET

Taking photos and videos with Meta's Ray-Bans

Meta's glasses don't use AI all the time, and neither do I. In fact, I'm mostly using Meta's glasses to record photos and video, listen to music, and make phone calls. I don't use the AI much, in part because Meta's AI has very little interaction with or control over my other personal data or even my iPhone. 

For non-AI photo and video recording, things should be safe... I think.

I asked members of the comms team whether photo or video recordings that I made with the glasses, and that weren't involved in AI-based invocations, could be subject to third-party contractor viewing. They said this: "To be clear, the photos and videos that users take with their AI glasses that are simply stored on their phone's camera roll are not used by Meta to develop and improve AI. If you just record a video or take a photo using the glasses' camera button, that media stays on your phone. Unless you choose to share media you've captured with Meta or others, that media stays on your device."

That sounded promising. But with Meta's glasses settings, storage becomes a little cloudy… literally. In the Meta AI app Glasses Privacy settings, a Cloud Media toggle claims to "allow your photos and videos to be sent to Meta's cloud for processing and temporary storage."

Would cloud media mean my personal photos and videos were open to possible third-party contractor annotation? According to Meta, no. The company says commands that use AI to send photos, along with the Autocapture modes enabled by toggling on Cloud Media, are safe too.

In the company's words: "Certain features, like sharing from your glasses using your voice ('Hey Meta, send a photo'), seamless auto-importing of media, or Autocapture, where the camera automatically takes photos or videos when you start the feature (useful for moments where you may want to capture content without manually triggering the camera via the button or voice), may require sending your photos and videos to Meta's cloud for processing and temporary storage. If you enroll in cloud media services, the photos and videos sent from the frames or auto-imported to your phone are not subject to human annotation. Enabling cloud media services is opt-in and not on by default."

Meta doesn't clearly define what exactly "Cloud Media" is, other than a temporary storage spot for your photos and videos so they can be processed with voice commands. And what worries me is how a wall gets drawn around "private" versus "AI-connected" media. It makes me want to toggle Cloud Media off, which would mean the photos and videos are stored just in my phone's photo library.

Meta's expected to have even more AI glasses later this year. So are other companies.

Scott Stein/CNET

What's to be done about AI glasses now?

I still like the camera and audio features of smart glasses and am intrigued by the AI features coming. But I'm also very concerned by the uncertainty about where the line is drawn between what gets annotated by a third party, potentially, and what stays private. Meta's using those third parties to help train AI, or to possibly moderate content. It's a reminder of how cloud-based and out of our control so many AI services are.

I get even more worried thinking about reports of Meta wanting to add facial recognition and more to its smart glasses.

Meanwhile, more AI glasses are coming, and wearable camera-equipped AI devices, too. Google is up next. And all of these companies need to make it much clearer how they're using the data from these devices, how they're protecting our privacy, and how we users can manage it -- if at all. It's not easy at all to understand how Meta's glasses handle AI data, or where it's being sent. I'm hoping this story helps you better understand where the lines might be.

Even so, I have to admit I feel a lot less likely to use Meta's glasses for anything personal or data-sensitive. Vacation glasses? A tool for quick social footage I'm broadcasting anyway for work? Experiments with AI? I think so.

But if Meta's aiming to be a deeply assistive tool for us via AI wearables, and doesn't want everyone calling them "pervert glasses," which people already are, it needs to do better, fast.
