Two Harvard students have demonstrated what privacy advocates have long feared: the ability to walk up to a complete stranger, glance at them through a pair of ordinary-looking smart glasses, and within seconds pull up their name, home address, phone number, and even a partial Social Security number — all without the person’s knowledge or consent.
The project, called I-XRAY, pairs Meta’s Ray-Ban smart glasses with the controversial facial recognition platform PimEyes, along with large language models, to create a pipeline that turns a casual glance into a full personal dossier. AnhPhu Nguyen and Caine Ardayfio built the system as a proof of concept and demonstrated it in a video that has since gone viral, reigniting an urgent debate about the intersection of wearable technology, artificial intelligence, and the erosion of personal anonymity.
From Science Fiction to Sidewalk Reality: How I-XRAY Works
The technical architecture behind I-XRAY is disturbingly straightforward. Meta’s Ray-Ban smart glasses, which retail for around $299, are equipped with a camera capable of livestreaming video. The students configured the glasses to stream footage to Instagram Live, where a separate computer program monitors the feed in real time. When a face appears in the frame, the system captures a still image and runs it through PimEyes, a publicly available reverse facial recognition search engine that scans billions of images scraped from the internet.
PimEyes returns matching photographs along with the URLs where those images were found — often social media profiles, news articles, or organizational directories. From there, a large language model cross-references the information to extract the individual’s name, and then queries public data brokers and people-search databases to compile additional personal details including phone numbers, home addresses, and in some cases, partial Social Security numbers. The entire process takes only seconds, as reported by Futurism.
The Students Behind the Experiment Say They Want to Sound an Alarm
Nguyen and Ardayfio have been explicit that they do not intend to release the I-XRAY code publicly. In a detailed Google document outlining their project, they wrote that the demonstration was designed to raise awareness about what is already technically possible using commercially available tools. “We are not releasing the code or providing a guide on how to replicate the tool,” the students stated, emphasizing their goal of provoking a public conversation about the privacy implications of current technology.
Their demonstration video showed them approaching strangers on campus and in public spaces, identifying individuals in real time, and even using the retrieved information to initiate conversations that referenced personal details the subjects had never shared. The effect was jarring — a live demonstration of how easily the veneer of public anonymity can be stripped away by anyone with a pair of consumer smart glasses and some programming knowledge.
Meta’s Complicated History With Facial Recognition
The demonstration puts Meta in an uncomfortable spotlight. The company has a fraught history with facial recognition technology. Facebook once operated one of the world’s largest facial recognition systems, automatically tagging users in uploaded photographs. In 2021, the company announced it would shut down that system and delete the facial recognition templates of more than one billion users, citing “growing societal concerns” about the technology. Meta’s then-VP of artificial intelligence, Jerome Pesenti, said at the time that the decision reflected a need to weigh the positive use cases against those concerns.
Yet Meta’s Ray-Ban smart glasses, developed in partnership with EssilorLuxottica, have always carried an implicit tension. The glasses are designed to look indistinguishable from regular Ray-Bans, with only a tiny LED indicator light signaling when the camera is active — a light that is easy to miss or, as critics have noted, could potentially be obscured. When the glasses launched, Meta insisted that the built-in camera was intended for casual photo and video capture, not surveillance. But as the I-XRAY project demonstrates, the hardware’s capabilities extend far beyond what Meta may have envisioned — or at least, far beyond what it has publicly endorsed, as Futurism detailed in its reporting.
PimEyes and the Booming Market for Facial Recognition Search
Central to the I-XRAY pipeline is PimEyes, a Polish-founded facial recognition search engine that allows anyone to upload a photograph and find matching faces across the internet. The platform has drawn intense scrutiny from privacy researchers and journalists. Unlike Clearview AI, which restricts its services to law enforcement agencies, PimEyes is available to any paying customer, making it a powerful tool for stalkers, abusers, and bad actors alongside its legitimate uses in identity verification and intellectual property protection.
PimEyes has previously been the subject of investigations by The New York Times and other outlets, which documented cases of the tool being used to identify and harass individuals, including sex workers and protesters. The company has made some efforts to limit misuse — for instance, it claims to allow searches only of one’s own face — but enforcement of that policy has been widely questioned. The ease with which the Harvard students integrated PimEyes into their system underscores how porous those safeguards remain.
A Legal and Regulatory Vacuum
The United States lacks a comprehensive federal privacy law governing facial recognition technology. A patchwork of state and local regulations exists — Illinois’s Biometric Information Privacy Act (BIPA) is among the most stringent, requiring informed consent before biometric data is collected — but most jurisdictions have no specific restrictions on the use of facial recognition by private individuals. Several cities, including San Francisco and Boston, have banned government use of facial recognition, but those bans do not extend to private citizens using commercially available tools.
In the European Union, the recently enacted AI Act classifies real-time biometric identification in public spaces as a “high-risk” application and imposes significant restrictions, particularly for law enforcement. However, the regulation’s applicability to individual consumers using off-the-shelf products in informal settings remains an area of legal ambiguity. The I-XRAY demonstration highlights a gap that regulators on both sides of the Atlantic have yet to meaningfully address: what happens when surveillance-grade capabilities are democratized and placed in the hands of ordinary consumers.
The Broader Implications for Wearable Technology
The I-XRAY project arrives at a moment when the wearable technology sector is accelerating rapidly. Meta has sold millions of its Ray-Ban smart glasses and is reportedly developing more advanced versions with integrated displays. Apple’s Vision Pro, Snap’s Spectacles, and a growing roster of AI-powered wearables from startups are all pushing cameras and microphones closer to the body and deeper into everyday life. Each of these devices, in theory, could serve as a platform for the kind of real-time identification system Nguyen and Ardayfio built.
The implications extend beyond individual privacy. Consider the chilling effect on public life if anyone in a coffee shop, at a protest, or on a subway could be instantly identified by a stranger wearing smart glasses. The asymmetry of information — where the observer knows everything and the observed knows nothing — fundamentally alters the power dynamics of public interaction. Scholars have long warned about the concept of a “surveillance society,” but the I-XRAY demonstration suggests that the surveillance may not come primarily from governments or corporations. It may come from the person sitting across from you.
What Can Individuals Do to Protect Themselves?
The Harvard students, to their credit, included practical recommendations alongside their demonstration. They encouraged individuals to search for their own faces on PimEyes and similar platforms and to submit opt-out requests where available. They also recommended removing personal information from data broker sites — services like DeleteMe and Kanary automate this process for a fee. However, as the students themselves acknowledged, these measures are imperfect. Data brokers continuously re-aggregate information, and new photographs appear online constantly.
Some technologists have proposed more radical countermeasures, including adversarial fashion — clothing and accessories designed to confuse facial recognition algorithms — and legislative pushes for a federal biometric privacy law. But for now, the gap between what is technically possible and what is legally regulated remains vast. The I-XRAY project did not require any classified technology, any government database, or any illegal access. Every component — the glasses, the facial recognition engine, the data brokers — is commercially available and legally accessible in most of the United States.
The Question Meta and the Tech Industry Must Now Answer
Meta has not publicly commented on the I-XRAY project specifically. The company’s terms of service for the Ray-Ban smart glasses prohibit using the device to “violate others’ rights, including privacy rights,” but enforcement of such policies is inherently reactive. The question facing Meta — and every company building camera-equipped wearables — is whether the design of these products adequately accounts for foreseeable misuse, or whether the industry is building infrastructure for a surveillance apparatus while disclaiming responsibility for how it is used.
The Harvard demonstration has made one thing unmistakably clear: the technical barriers to real-time, public facial recognition have effectively collapsed. The tools exist, they are affordable, and they are improving rapidly. What remains to be determined is whether society will establish meaningful guardrails before the technology becomes so ubiquitous that anonymity in public becomes a relic of a previous era. For now, the answer to that question is far from reassuring.
Meta’s Smart Glasses Just Became a Real-Time Facial Recognition Machine — and Privacy May Never Recover first appeared on Web and IT News.