Let's debunk confused techno-panic over "doxing glasses"
Extracurricular activities by two Harvard students do not show a special risk for AI glasses. Here's what's really going on.
Can smart glasses enable people to know your personal information? The answer is a clear and resounding: “not really, but sort of, I guess.”
Two Harvard engineering students, AnhPhu Nguyen and Caine Ardayfio, recently published a paper and a video demonstrating a project they’d been working on, called I-XRAY. It uses Ray-Ban Meta glasses, AI, and publicly available websites to create the illusion that “AI glasses,” according to the misleading title of their paper, can “reveal anyone’s personal details.”
Specifically, they used unmodified Ray-Ban Meta glasses to live-stream video to Instagram. They wrote software to watch the stream, looking for faces. When a face was detected, their software uploaded a screen grab of the face picture to PimEyes or FaceCheck.id (services that can recognize faces, then show the user other instances of that same person’s face on other websites). Some of those other websites contain the person’s name and other details, which their software used to search FastPeopleSearch. This website curates publicly available information about people. The captured personal info was then relayed back to a phone.
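The students have not published their pipeline code, and services like PimEyes and FastPeopleSearch have no official public APIs, so any reconstruction is speculative. Still, the chain of steps they describe is simple enough to sketch. In the sketch below, every helper function is a hypothetical stub standing in for the real service call; only the orchestration reflects what the paper describes:

```python
# Hypothetical sketch of the I-XRAY pipeline described above.
# Every helper is a stand-in: the real project watched an Instagram
# live stream and drove commercial sites that have no public APIs.

def grab_frame_with_face(stream_url: str) -> bytes:
    """Stand-in for watching the live stream and detecting a face."""
    return b"jpeg-bytes-of-a-face"  # pretend frame containing a face

def reverse_face_search(face_image: bytes) -> list[str]:
    """Stand-in for a face-search service such as PimEyes or
    FaceCheck.id, which returns other pages where the face appears."""
    return ["https://example.com/alumni/jane-doe"]

def extract_name(page_urls: list[str]) -> str:
    """Stand-in for scraping a name from the matched pages."""
    return "Jane Doe"

def people_search(name: str) -> dict:
    """Stand-in for a public-records aggregator such as
    FastPeopleSearch, keyed on the recovered name."""
    return {"name": name, "address": "123 Example St", "age": 34}

def i_xray(stream_url: str) -> dict:
    """Chain the steps: frame -> face match -> name -> public records."""
    frame = grab_frame_with_face(stream_url)
    pages = reverse_face_search(frame)
    name = extract_name(pages)
    return people_search(name)

# The result is what gets relayed back to the operator's phone.
print(i_xray("https://instagram.com/some-live-stream"))
```

Note that nothing in this chain depends on where the first frame comes from: swap `grab_frame_with_face` for any camera source and the rest works unchanged, which is the article's point.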
The setup enabled these students to walk up to strangers, address them by name, and use other biographical details to convince those people they had met before. (I’m not sure why they lied about a previous meeting instead of just telling them what they were actually doing.)
The experiment inspired commentators to say that the whole stunt reveals “the creepy side of smart glasses,” and is “dystopic.” One of the perpetrators of this stunt, Nguyen, suggests it points to “a world where our data is exposed at a glance.”
But none of that is true.
The setup “feels” like this: “Smart glasses can invade the privacy of anyone you’re looking at.”
What’s true is this: “Smart glasses have a camera. *ANY* camera can be used to invade the privacy of anyone you’re looking at for reasons that have nothing to do with smart glasses.”
In other words, the only role the glasses played in the entire operation was supplying a camera.
Sure, the camera enabled them to get a photo of the person without their knowledge. So would a telephoto lens, or even a selfie with the target in the background. Smart glasses are unnecessary for this trick to work.
Without any modifications to their I-XRAY setup, they could instead just live-stream from a smartphone. In fact, it would work better, because the image would be higher resolution.
I-XRAY is a trick. An old trick.
It’s called “hot reading”
The I-XRAY guys are doing something called “hot reading,” a scam pioneered by late 19th- and early 20th-century mediums, psychics, and faith-healer types.
Essentially, you gather personal information about a person through means they wouldn’t expect, then reveal that learned information in a way that makes them think you have special knowledge.
The standard way this worked in the past is that a co-conspirator of the scammer pretended to be an ordinary audience member at, say, some carnival or circus “psychic” performance. While waiting in line for the event, the co-conspirator would strike up conversations with people, learning about them. Then, using notes on a card or, later, radio transmission, that personal information would be relayed to the person on stage, who then confronted the target person with personal details, convincing everyone they had psychic powers.
“Hot reading” proved a great way to dazzle farmers at the county fair.
It’s the same well-intentioned scam “Professor Marvel” pulled on Dorothy in the 1939 movie The Wizard of Oz. In the movie, he starts out with a “warm reading,” in which he generalizes in a way that makes Dorothy think he’s reading her mind. On the Professor’s third guess: “You’re, you’re running away.”
Then, they go into his caravan and he performs a “hot reading”: she closes her eyes and he peeks into her basket, where he finds a picture of Dorothy with her arm around Auntie Em. Professor Marvel astounds Dorothy by describing the woman he saw in the picture. Dorothy says that’s “Auntie Em,” whereupon Professor Marvel says with confidence: “Her name is Emily.” And so on.
Professor Marvel took a peek at Dorothy’s readily available personal information. And that’s what the I-XRAY guys did as well.
Instead of looking at a photograph, they captured one with the glasses. By streaming video to Instagram (a normal feature of that product), they then used software to snag a screen capture with a face in it.
With photo in hand, they ran the photo through face recognition, found that person’s photo elsewhere in places with an associated name, then used the name to search FastPeopleSearch. They used AI to do some of the automation for this process that could also be done quickly and easily by a person.
I told you about this, remember?
I’ve been writing about this for years. For example, in March of 2017, I provided step-by-step instructions on how, using a camera to take a stranger’s picture, you could have that person’s home address in less than three minutes. (I told you how to do it because I believe the good guys should know what the bad guys already know about how scams work so they can act in their own defense.)
My approach was similar to I-XRAY’s. I used a now-defunct Russian site called FindFace and got the public data from Family Tree Now.
When PimEyes later became available, I updated my estimate in April of last year here in this newsletter. I could now go from photo to home address in less than a minute, thanks to PimEyes’ superior performance.
This is exactly what the I-XRAY guys did, with two differences. They used Ray-Ban Meta glasses for the camera. They also used AI to automate the use of recognition and informational online sites.
As you can see, this brand of privacy invasion has nothing to do with smart glasses or, for that matter, AI.
They needlessly used two new products to do something that has been possible for years, then blamed those two new products, implying that privacy violation is particularly associated with AI glasses and AI. It’s not.
How to think about face-based privacy
If you want to think clearly about the reality of face-recognition identification, here are the component parts:
Face recognition exists. It’s often very good. If someone has access to a photograph of your face, your face can be recognized. Sites exist where you can upload a picture, and they will show you other websites where that same person is also present in photographs (the same pictures or different ones). These services are available to anyone in the world with an internet connection.
Public databases of information exist, where if anyone has one data point about you — your name, address, phone number or email address — they can get the other data points, plus your age, relatives, work history and so on. These databases are available to anyone in the world with an internet connection.
Photographs of your face are probably connectable on public websites with other information, most commonly your name.
Therefore, if someone takes or downloads a picture of your face, they can probably see where else your face is posted. If your name exists on that page, then they can use your name to find out a lot more about you.
Smart glasses have nothing to do with any of this. In other words, if you oppose the existence of all this, opposing smart glasses does nothing.
If you want to opt out of all this, you need to look elsewhere:
Most public information databases enable you to opt out.
Most face-recognition sites, including PimEyes, enable you to opt out.
You can also find the places, such as your own social media profiles, where your name and face are associated with each other, and either delete the page or replace your face with an avatar or other image. Or replace your real name with a fake name.
The kludge put together by Nguyen and Ardayfio is interesting and dramatic and makes for a great alarmist video.
But it misleads the public about where the fault lies. It’s not with the glasses. It’s with the face recognition sites and the public personal data sites. Glasses have nothing to do with any of this.
Shameless Self-Promotion
Google, it’s time to kill CAPTCHAS
When AI can "prove it's human" — and CAPTCHAs exist mainly to distribute malware and steal users' time — Google should step up and get rid of CAPTCHAs.
Plus:
Everything you always wanted to know about the AI PC trend
What happens when everybody winds up wearing ‘AI body cams’?
What North Korea’s infiltration into American IT says about hiring
My Location: Marrakesh, Morocco
(Why Mike is always traveling.)
I worked at Google when Glass came out. Some trendy folks wore them hooked to their regular glasses. I demurred.
However, the ONE use case that might have made me buy them is a case like this: I walk up to someone, they say hi, and I wonder, "Who TF is that?" I'd only want to know what I already knew about this person, not random details from the internet.