Analysis

Paying for Iris Scans: AI-Fueled Surveillance Harms

February 6, 2025 | Justin Sherman, EPIC Scholar in Residence

Brazil’s National Data Protection Authority (ANPD) is banning Tools for Humanity, a company co-founded by OpenAI’s Sam Altman, from doing something disturbing: paying people for their iris scans. “Monetary consideration offered by the company may interfere with the free expression of the will of individuals,” the ANPD’s decision reads, putting the offer in conflict with the consent requirements of Brazil’s national data protection law.

While a visually striking example—indeed, an American tech billionaire paying people in a Global Majority country, in cryptocurrency, for their iris scans does ring of a dystopian Hollywood flick—it speaks to a bigger problem. Hype about AI research and development is driving companies to buy people’s data where they otherwise weren’t buying it and, conversely, to sell people’s data (including customers’ and users’ data) where they otherwise wouldn’t sell it. These practices are already harmful. And their acceleration by the AI craze is poised to rapidly increase data-sharing, data-selling, and data-monetization threats to privacy in the coming years. This enables the repeat exploitation of people’s data and causes especially pronounced harm to vulnerable people and communities.

Tools for Humanity describes itself as a “technology company building for humans in the age of AI,” founded in 2019 by Alex Blania and Sam Altman and headquartered in San Francisco, California, and Munich, Germany. The company says it is composed of more than “400 developers, scientists, engineers, designers, creatives, economists, and other various optimists,” building at least two products. The first is the Orb, a metallic, ball-shaped camera that the company says processes biometric data locally for identity verification. The second is the World App, a mobile app that uses biometric data to verify access to websites, mobile apps, and the cryptocurrencies Worldcoin and Ethereum.

What these self-described “optimists” fail to mention on the website homepage, however, is that the company used to be called Worldcoin—the very company the French privacy regulator called legally “questionable” in July 2023 and that the Kenyan government ordered in August 2023 to cease collecting citizens’ iris scans, due to a “lack of clarity on the security and storage” of the iris scans and the “uncertainty” of the attached cryptocurrency. But regulators’ concerns didn’t stop there. In an echo of Brazil’s recent decision, Kenyan authorities also took issue with another detail: from Nairobi, Kenya to Bengaluru, India to Hong Kong, Worldcoin was paying people to stand in front of the Orb camera in shopping malls and submit their iris scans to the firm’s blockchain.

In its decision against Tools for Humanity, the Brazilian privacy authority made a critical point about this practice. Paying people for data, as the quote above lays out, doesn’t just threaten all people’s ability to truly consent to data collection and use. It also influences people, the ANPD said, “especially in cases where potential vulnerability and insufficiency make the weight of the payment offered even greater.” That is, anyone offered money for their data could be more willing to go against their underlying wishes about whether that data is collected or used, but the pressure is particularly acute for people who are struggling financially.

Brazil’s ANPD is calling out something that many companies and pay-people-for-their-data advocates are often oblivious to or, perhaps, simply ignore. The real-world dynamics of race, class, gender, poverty, and other intersecting factors are often missed by those who argue that companies should be able to pay people for their data—overlooking how, in practice, the people most in need of money could be pushed into contributing to further losses of their own privacy. It’s an essential point about a theory that, in practice, is usually predatory. In this paradigm, the rich are the ones who can afford to say “no” to more data collection, while people living in poverty are not afforded the freedom to do the same—and become even more surveilled. (Perversely, companies and governments can then blame people for selling away their own data.)

For its part, Tools for Humanity seems to miss this point on its own website, which proclaims that it is a global hardware and software company “built to ensure a more just and accessible economic system.” The people who sold away their biometric data and then never received the payment Worldcoin promised did not miss this point. Nor did the people who sold away their data and did not outwardly complain, but who did so because they needed the funds: “I’ve heard there’s money to be made here,” a Nairobi bus terminal worker told Rest of World in 2023, speaking of an Orb booth in the terminal. “The money will help me. Right now life isn’t easy.”

But this case speaks to a much broader problem. The boom of excitement about AI research and development—boosting stock prices, fueling lobbying to relax regulations and accelerate climate-killing energy consumption, prompting pundits to regurgitate misguided talk of an AI “arms race”—is driving more companies to monetize personal data. Not just to monetize through advertising, or through building their own AI models, or through running more analytics, but to monetize through selling data. Companies that historically haven’t been in the market for people’s data have now grown a significant appetite for it, meaning many are willing to pay third parties for access to datasets they would not have otherwise purchased. In some cases, as with Worldcoin, they’ll pay consumers directly, too, so they can build AI systems with the data. On the flip side—and I have observed this already in the data brokerage market—companies that historically haven’t sold any of their data to third parties are now moving to do so specifically because of demand for AI training data. Put simply, the craze about AI (well-founded in some narrow areas, such as climate modeling and healthcare advancement; wildly overhyped in many others) is driving more companies to buy and sell more of people’s data.

This sale of data is rarely transparent. Consumers, by and large, do not actually read privacy policies and could not understand the policies even if they did. Legalistic disclosures, at the bottom of an unread and unreadable document, loaded with “coulds” and “mays” and other unclear language, are wholly inadequate to provide useful information to consumers. This lack of transparency and poor disclosure does not constitute getting people’s actual consent. Companies that pay people directly for their data, like Worldcoin/Tools for Humanity, fall under this umbrella, undermining the principle of consumer consent and exploiting the class, poverty, and other dynamics outlined above. Companies that pay indirectly for people’s data, such as by purchasing it from a data collector, do the same. For example, a growing number of health data companies pay health institutions for “deidentified” patient data—“deidentified” under weak federal standards, anyway—to train AI systems. These purchases are likewise not transparent to the consumer and are even further removed from the people whose data is being monetized for the purposes of building and refining AI systems.

Further, this sale of data enables repeat exploitation of people’s data and exacerbates privacy harms in an era of frantic AI development. One harm is the original data collector selling a person’s data without their knowledge or consent—say, the health institution that does not actually make clear to people, in plain language, that their medical scans or genetic information could be “deidentified” under a weak, outdated regulatory standard and sold to a for-profit AI company. Another harm is enabling a potentially unlimited range of data purchasers to access and use people’s data, given that many data sellers fail to adequately vet not just buyers’ identities (e.g., via know-your-customer rules) but also buyers’ post-sale data uses (e.g., via detective controls and technical purpose-limitation restrictions). Third parties who buy the data for “AI training” on paper could resell the data, could skip building AI altogether and instead use the personal data to target individuals, or could build AI systems that are highly dangerous, such as ones that perpetuate predatory insurance pricing.

And another harm, perhaps especially relevant here, is that stronger incentives for companies to buy and sell people’s data can enable AI models that themselves invade people’s privacy. For example, say a stock image compiler that never sold its data decides to start selling it (or “licensing” it, or whatever the company’s preferred parlance) to a facial recognition vendor. Data sold about some people, data that likely does not even cover most of the population, could power the refinement of a facial recognition system that can in turn be used to surveil, identify, and track an entire population’s worth of people. In this scenario, it does not matter per se that the data sold covers only a limited subset of the population if its sale and use fuels an AI system that compromises the privacy of the entire population.

Consumers should be wary of companies promising them money (or cryptocurrency) in exchange for their data. They should be skeptical of any company claiming that data it buys directly from them—or data it buys about them from someone else—is “anonymized,” especially when it is becoming easier and easier to reidentify datasets and when biometric data such as iris scans is not seriously “anonymizable” in the first place. And people should know that once data is bought, there are few restrictions in practice on how it can subsequently be used, including as companies increasingly look to plug people’s data into a range of AI applications. Even if their own data isn’t sold in every case, the sale of other people’s data can erode their privacy, too.
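To illustrate why claims of “anonymization” deserve that skepticism, below is a minimal, hypothetical sketch in Python (all records, names, and field names are invented for illustration, not drawn from any real dataset or company) of how a “deidentified” dataset can be linked back to named individuals using a handful of quasi-identifiers, such as ZIP code, birth date, and sex.

```python
# Hypothetical sketch of a linkage attack: records stripped of names can
# still be matched to a public dataset through shared quasi-identifiers.
# All data below is invented for illustration.

# "Deidentified" records a data buyer might purchase (no names attached).
deidentified_health_records = [
    {"zip": "02139", "birth_date": "1984-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "73301", "birth_date": "1990-01-15", "sex": "M", "diagnosis": "diabetes"},
]

# A public or commercially available dataset (e.g., a voter roll or a
# marketing list) that does include names.
public_records = [
    {"name": "Jane Doe", "zip": "02139", "birth_date": "1984-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "73301", "birth_date": "1990-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(deidentified, public, keys=QUASI_IDENTIFIERS):
    """Join the two datasets on the quasi-identifiers they share."""
    # Index the named records by their quasi-identifier values.
    index = {tuple(rec[k] for k in keys): rec["name"] for rec in public}
    matches = []
    for rec in deidentified:
        name = index.get(tuple(rec[k] for k in keys))
        if name is not None:
            # The "anonymous" record now carries a likely name.
            matches.append({"name": name, **rec})
    return matches

for match in reidentify(deidentified_health_records, public_records):
    print(match)
```

This linkage pattern is well documented in privacy research: a few such fields, combined, are often enough to single out a large share of the population, which is part of why data sold as “deidentified” for AI training is rarely as anonymous as the label suggests.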

Policymakers should consider whether companies urging consumers to sell their data for AI training are in fact getting full, affirmative, express consent and whether any practices to the contrary are breaking the law (such as prohibitions on unfair and deceptive acts and practices). But most importantly, they should take away the higher-level concern about AI hype and overhype, data brokerage, and the near- and long-term future of privacy: the current AI market environment is only further incentivizing companies to monetize people’s data.
