However wide the gamut of AI chatbots I’ve tried, they all have one thing in common: they’re confident liars. The Ray-Ban Meta smart glasses feature built-in AI image recognition, so I thought I would ask the AI some questions about my regular, ultra-nerdy hobbies. Instead, the glasses resisted engaging with any books, figures, prints, or toys I showed them. The only time I’ve experienced a similar level of disconnect was in typically awkward conversations with my father.
During last week’s Meta Connect conference, CEO Mark Zuckerberg introduced a wave of updates to the Ray-Ban glasses. On Wednesday, Zuck declared the glasses were all getting access to reminders (those with the latest update should be able to ask, “Where did I park my car?”), QR code scanning, and more features like the ability to easily reply to friends on WhatsApp or Messenger. More updates are coming down the road that should add real-time translation and real-time video scanning, a feature that should allow the AI to comment on what you’re seeing live.
But I wouldn’t trust that AI to describe my room’s decor accurately, let alone supermarket items. Meta granted me a pair of the new Headliner transition-lens glasses, and I can say with certainty they look much better than my current pair of old, yellowed shades. The pictures they take are so-so in quality compared to my iPhone’s, but I don’t have too many complaints about the built-in audio. The speakers don’t match up to a pair of headphones or high-quality earbuds, but they easily beat most laptop speakers for going through my playlists on Apple Music. I’d consider them a solid option for personal audio when lounging on the beach.
Messages and music integrations are all well and good, but I wanted to see how well this AI wearable works where other devices have failed spectacularly. I took the Meta Ray-Bans around my apartment and asked the AI questions about my stacks of tabletop RPGs, the prints on my wall, my statues of comic book characters, and my hoard of Warhammer 40K fiction. It was like talking to a paternal brick wall: somebody with no interest in fantasy or science fiction who only pretends to engage. Unlike my dad, who can still sometimes try, Meta’s glasses fib so poorly they can’t cover up how little they care.
Does Meta’s AI Not Have Any Nerdy Information in Its Training Data?
I pointed the glasses at my metal print of a scene from the 2019 RPG Disco Elysium. The AI’s best guess was “Borderlands.” For some reason, it thought the faithful detective Kim Kitsuragi was Claptrap, and that Harry Du Bois, AKA “Tequila Sunset,” was “one of the vault hunters.” I asked it to identify what my gaming setup included. It looked at the PlayStation 5 on my shelf and told me, with total certainty, that it was a PlayStation 4.
I tried it with memorabilia both more and less esoteric. It looked at my action figure of The Will from Brian K. Vaughan and Fiona Staples’ Saga comics, and Meta told me the figure was Dr. Strange. My statue of Marv from the Sin City comics was, according to Meta, the Hulk. Like my parents, the glasses seem to think anything nerdy is probably a character from the Marvel movies. The glasses looked at the prints hanging on my wall, two artistic depictions of Samus Aran in and out of her armor from the Metroid series, and Meta told me they looked like Iron Man.
Even when the glasses got things right, the AI struggled to be specific or accurate. It confidently read the titles of several indie RPG rulebooks, namely Deathmatch Island and Lacuna. Still, in the most dad way possible, it suggested these roleplaying games had something to do with Warhammer miniature wargaming. Yes, Dad, I play Warhammer 40K. No, Dad, these books have nothing to do with it.
But hey, the device knew who Luigi was. Nintendo’s reach obviously extends beyond the bounds of my little nerd bubble. Still, you’d think an AI could tell a Pokémon apart from a Legend of Zelda Korok.
Meta’s Ray-Bans Fall Short on Privacy, Even if Reminders Sound Useful
Meta’s glasses are light on details but heavy on conjecture. Yes, it’s funny to watch the glasses routinely fail to understand nerd trivia, but they aren’t useful for other basic tasks, either. They will look at a bottle of pomegranate molasses in my cupboard and tell me it’s soy sauce. Remember when Google’s first Bard demo lied about the Webb Telescope? The AI model Meta uses for the Ray-Bans will lie to your face, to your own eyes.
The answers it does get correct are often short and largely unhelpful. It can give a basic rundown of the fiction written by an author like Dan Abnett (it at least knows who he is). You can ask the AI more about his body of work, but when I asked how many books he has written for Games Workshop’s Black Library, it told me, “Over 25, but the exact number is unknown.” That number is very much quantifiable. Follow the AI’s link to Wikipedia, and you’ll find the number is closer to 50 if you count them all yourself.
We have yet to experience Meta’s Llama 3.2 multimodal models. Meta’s AI says it still runs on Llama 3.1 70B, but even that LLM may not be suited to mundane queries. The glasses don’t have access to location data (which is probably for the best), so the wearable AI couldn’t tell me where the nearest boba tea place was near Union Square. There are two within a three-block radius.
I had no luck accessing QR code scanning or the new reminders feature despite being on the latest update. Reminders seem like a much better use for the glasses, but know that if you take a photo of your license plate and ask the glasses to analyze it, Meta sees that, too. The Zuckerberg-led social giant told TechCrunch this week that if you ask the AI to analyze a photo you take with your glasses, Meta keeps it to train its AI.
The AI models are purposefully limited in other ways for privacy’s sake, though not your own. Meta’s AI won’t describe any face or person it sees. You can still take pictures of anybody you want with a surreptitious press of the capture button, but the AI will refuse to identify anybody or comment on their appearance.
Despite Meta’s efforts, the Ray-Bans still have heavy privacy implications. A group of university students hacked the Ray-Ban glasses to add facial recognition. The modified glasses will even draw on more information from the internet, including names, telephone numbers, emails, and even more sensitive data. The group posted a video to Twitter last week showing just how well their glasses worked.
This isn’t what the Ray-Ban Metas were designed for. A Meta spokesperson pointed out to 404 Media that the facial recognition software would technically work on any camera, not just the ones on the Ray-Ban glasses. But at the same time, Meta went out of its way to make its smart glasses’ cameras as discreet as possible. Meta positions its Ray-Bans for the influencer crowd wanting to drop their pictures on Instagram. The AI, as it currently stands, doesn’t offer much more for that audience beyond a quippy clip for Reels.
These name-brand designer glasses aren’t made for the kind of audience that will go to New York Comic Con and ask their glasses which character a con-goer is cosplaying. In their current state, I wouldn’t use the AI functions for anything more than a party trick.