Analysis

Bankrupt Genetic Data: Minimizing and Privacy-Protecting Data from the Start

April 14, 2025 | Justin Sherman, EPIC Scholar in Residence

Direct-to-consumer genetic testing company 23andMe has now filed for bankruptcy. “After a thorough evaluation of strategic alternatives, we have determined that a court-supervised sale process is the best path forward to maximize the value of the business,” its board chair said. It’s not just office chairs and corporate merch (from, yes, the “Gene Shop”) up for sale; a judge granted 23andMe permission to sell the trove of data it has gathered on people’s genetics, health, and ancestry, to the tune of more than 15 million customers.

The scenario now at hand—an untold number of potential buyers able to grab up the troves of genetic data collected and analyzed by 23andMe—is a forceful reminder of the need for better data privacy and security protections. Nothing can eliminate the possibility that a company collecting data goes bankrupt. But companies and policymakers can and should do better in mitigating the risks from any potential bankruptcy-data fallout ahead of time. Two steps are top of mind. Companies should implement data minimization practices to limit the collection of data in the first place. And legislators, regulators, and policymakers should mandate strong, comprehensive, federal data privacy and security requirements to safeguard data, including to govern its deletion and expiration.

It’s in the genes

Genetic data is different from many other types of data. Unlike your email address or even your mobile device ID, you can't click a few buttons or stop by the local electronics store and swap out your DNA. While your DNA can change as you age, it doesn't change by the hour or the week. Once a company (or anyone else, for that matter) has your genetic data, it has gotten hold of data with great longevity: genetic data doesn't change quickly on its own (compared to your online purchase transactions or your phone's latitude-longitude coordinates) and isn't easily changeable by the person in question (compared to a phone number or even a prescription medication).

Several other properties stand out. Your genetic information is unique to you (other than with an identical twin). This by and large means that the data—unlike, say, an IP address tied to multiple people in one residence—can theoretically be used by itself to identify you. Your genetic information, as even 23andMe pointed out on its website, is similar to that of your family. Someone with access to your genetic information, as scholarship and advocacy efforts have repeatedly discussed, can therefore make judgments about your biological relatives, too. Hence, protecting genetic privacy is truly never about just one individual. As my colleague Emily Tucker at Georgetown Law’s Center on Privacy & Technology recently noted, breaches, bankruptcies, and the like at companies such as 23andMe involve “significant risks not only for the individual who submits their DNA, but for everyone to whom they are biologically related.”

Genetic data, like many other kinds of data, is additionally subject to reidentification attacks, in which someone with access to your supposedly "anonymized" data can technically eviscerate the idea that the data cannot be tied back to you: linking it to you by name through statistical analysis, combining it with other datasets, and so forth. As with any dataset, "reidentification" is neither a one-size-fits-all nor a flawless process; it depends on many factors, including the data itself, the technologies the data-holder has to analyze the data, and the other datasets available to them. But the uniqueness and longevity of genetic data make reidentification attacks an even greater concern for people's privacy.

And lastly, companies, governments, and people can make phenotype inferences based on your partial genomic information—such as to draw conclusions about a disease you have, and by extension, one your biological relatives may have or be susceptible to as well.
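
To make the reidentification point above concrete, here is a minimal, hypothetical sketch. The records, names, and variant labels are invented, and real linkage attacks involve far larger datasets and more sophisticated statistical matching, but the underlying pattern—joining "anonymized" records to named records on shared attributes like birth year, ZIP code, and sex—is the same.

```python
# Toy illustration of a linkage-style reidentification attack.
# All data below is invented for illustration.

# "Anonymized" genetic records still carry quasi-identifiers.
anonymized_genetic_records = [
    {"record_id": "A1", "birth_year": 1987, "zip": "30318", "sex": "F",
     "variant_of_interest": "BRCA1 c.68_69delAG"},
    {"record_id": "A2", "birth_year": 1990, "zip": "98103", "sex": "M",
     "variant_of_interest": "APOE e4/e4"},
]

# A separate, public dataset that includes names.
public_named_records = [
    {"name": "Jane Doe", "birth_year": 1987, "zip": "30318", "sex": "F"},
    {"name": "John Roe", "birth_year": 1990, "zip": "98103", "sex": "M"},
    {"name": "Sam Poe",  "birth_year": 1990, "zip": "98103", "sex": "F"},
]

QUASI_IDENTIFIERS = ("birth_year", "zip", "sex")

def reidentify(genetic_rows, named_rows):
    """Link 'anonymized' rows back to names by matching quasi-identifiers."""
    matches = []
    for g in genetic_rows:
        key = tuple(g[k] for k in QUASI_IDENTIFIERS)
        candidates = [n["name"] for n in named_rows
                      if tuple(n[k] for k in QUASI_IDENTIFIERS) == key]
        # A single candidate means the record is effectively reidentified.
        if len(candidates) == 1:
            matches.append((candidates[0], g["variant_of_interest"]))
    return matches

print(reidentify(anonymized_genetic_records, public_named_records))
# [('Jane Doe', 'BRCA1 c.68_69delAG'), ('John Roe', 'APOE e4/e4')]
```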

The context is bankrupt

Privacy impacts and preferences already depend on context. For example, information you’re comfortable disclosing to one person in one setting might seem uncomfortable to share with a different person in a different setting. Your willingness and—given safety, stigma, and other considerations—ability to divulge information in one place (like religion, sexual orientation, or immigration status) can make a 180 in a different town or country. Suddenly, social structures, population majority-minority ratios, discrimination problems, laws and regulations, and other important details can change. Data types can impact the privacy context, too—think of the difference between disclosing to someone you know something visually observable, such as your rough height, and divulging something more hidden, such as your debts or sexual preferences.

This 23andMe situation is no different. The nature of genetic data means that all of the questions of concern when any data-collecting company goes bankrupt—who gets access to the data; what will or could they do with it; will there be any notification to consumers—become even more urgent due to the data's linkability (to a person), longevity (your DNA doesn't rapidly change), and wide reach (through inference of additional data and implication of all your relatives). Direct-to-consumer genetic testing companies also have people's names on file, because you send your name and other personal information to them when you submit your genetic sample; after all, you want to get your results, and the company needs to be able to send them to you. Somewhere in the company's systems, genetic data and related analysis are outright tied to your name and other personal information.

Conversely, the bankruptcy context matters for the privacy impacts to you. Data collected by a genetic testing company not necessarily for the purpose of selling it—although 23andMe did sell genetic information for years to pharma giant GlaxoSmithKline—can be suddenly put up for sale in a bankruptcy, now part of the explicit monetization of the business's assets. The entity that takes over the company is not up to the consumer, either; as with plenty of mergers and acquisitions of non-bankrupt businesses, a bankrupt genetic testing company could be purchased by anything from a socially beneficial, ethically minded health research institution to a predatory data analytics vendor to a private equity firm looking to squeeze every drop of value out of millions of customers' genetic data. Once the deal is done, the new entity in charge of the bankrupt firm's assets could decide to start sharing its genetic data troves with law enforcement with little or no disclosure to—or permission from—consumers. It could decide to monetize that same practice by selling genetic data to law enforcement. It could decide to drop its cybersecurity standards, or eliminate data deletion agreements, or begin transacting with third parties (like advertisers) that consumers would never have been comfortable with originally. Bankruptcy puts the use of consumers' data, and genetic data at that, even more outside their control.

Underlying both of these points is that it is extremely difficult for consumers to foresee how genetic data could be exploited years down the line—not by, for example, medical professionals doing socially beneficial research within relevant privacy and consent guardrails, but by unscrupulous companies, government agencies, and other actors, in new risk scenarios, with new levels of access to complementary datasets, with technologies that are nascent today or don't even exist yet. That future exploitation could come through a data breach, like the one that hit 23andMe in late 2023, through a scenario like bankruptcy, or even through what a company considers routine data collection.

Facial recognition provides a comparative example. As I wrote in a column for Barron’s in December 2023, “consumers posting pictures to social media in the early 2000s likely didn’t consider how in 2023 companies they’ve never heard of could scrape those images to build facial recognition for policing or even generate nonconsensual, fake sexual imagery. Yet, that happens today.” It’s difficult for privacy experts themselves, let alone many consumers, to forecast how the damage of genetic data breaches, leaks, and bankruptcies will unfold years down the line, in a potentially very different technological and privacy context.

Locking down the DNA

If you’ve previously sent your genetic information to 23andMe, or any of your biological relatives have done the same, you can follow online guides to request the deletion of the information. Beyond that, however, this incident underscores the urgency of at least two privacy measures.

The 23andMe bankruptcy is a reminder that all companies would do well to exercise good data minimization practices. As my EPIC colleagues Sara Geoghegan and Suzanne Bernstein have explained in the context of federal regulatory actions, data minimization is a longstanding privacy principle with a simple directive: companies should limit the collection, use, transfer, and retention of personal data to data and practices that are reasonably necessary. Data minimization means no needless collection of credit card data by a website that doesn't even use it. It means a mobile app not grabbing up geolocation pings it doesn't currently use just because that data might someday be useful for a new feature.
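
As a rough, hypothetical sketch of what minimization and retention limits can look like in practice, the snippet below drops extraneous fields at intake and purges raw genetic data after a retention window. The field names, 30-day window, and workflow are assumptions for illustration, not any company's actual practice or any legal standard.

```python
# Hypothetical sketch of data minimization and retention limits for a
# direct-to-consumer genetic test. Illustrative only.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Collect only what is reasonably necessary to run the test and return results.
NECESSARY_FIELDS = {"name", "mailing_address", "email", "sample_id"}

RETENTION_AFTER_RESULTS = timedelta(days=30)  # assumed retention window


@dataclass
class TestOrder:
    customer: dict          # only the necessary contact fields
    raw_sample_data: bytes  # sequenced data needed to compute results
    results_delivered_at: Optional[datetime] = None


def minimize_intake(submitted: dict) -> dict:
    """Drop any submitted fields that are not needed to deliver the test."""
    return {k: v for k, v in submitted.items() if k in NECESSARY_FIELDS}


def purge_if_expired(order: TestOrder, now: datetime) -> TestOrder:
    """Delete raw genetic data once results are delivered and the window passes."""
    if (order.results_delivered_at is not None
            and now - order.results_delivered_at > RETENTION_AFTER_RESULTS):
        order.raw_sample_data = b""  # purge; in practice, delete from storage
    return order


# Example: extraneous fields (employer, income) are never retained at intake,
# and raw data is purged once results are 45 days old.
order = TestOrder(
    customer=minimize_intake({
        "name": "Jane Doe", "email": "jane@example.com", "sample_id": "S-001",
        "mailing_address": "123 Main St", "employer": "Acme", "income": "n/a",
    }),
    raw_sample_data=b"...sequencer output...",
    results_delivered_at=datetime.now(timezone.utc) - timedelta(days=45),
)
order = purge_if_expired(order, datetime.now(timezone.utc))
print(order.customer)         # only the necessary fields survive intake
print(order.raw_sample_data)  # b"" -- purged after the retention window
```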

Applied to the 23andMe case, data minimization would mean collecting, using, transferring, and retaining only the information reasonably needed to complete a direct-to-consumer genetic test. Doing so could mean that a bankrupt 23andMe has collected less information about customers in the first place. It could also mean the company has purged plenty of people's genetic information from its systems after processing and delivering results. Both would limit the amount of personal data exposed, up for grabs, in a scenario like a company bankruptcy. Unfortunately, however, gaps in the Health Insurance Portability and Accountability Act (HIPAA), the Genetic Information Nondiscrimination Act (GINA), and the Affordable Care Act mean that such best practices are not required of 23andMe, and that these laws do not shield its customers.

These gaps lead to the second point. From a law and policy perspective, the genetic testing bankruptcy at hand underscores how urgently legislators, regulators, and policymakers should implement comprehensive data privacy and security protections. Certainly, healthcare research innovations—including genetic data-related research and innovations—that genuinely benefit society and appropriately protect people's privacy are vital. There are plenty of challenges in figuring out the right balances in genetic data collection, sharing, and use for healthcare research and clinical treatment purposes.

But it is also true, as several scholars at Brown University recently detailed at length, that the US approach to genetic data privacy and security has critical gaps when it comes to the protection of privacy, civil rights, and personal liberties. It is true that bankruptcies like 23andMe's raise urgent questions about genetic data discrimination, biometric surveillance, and law enforcement access that are inadequately dealt with under current law. It is true that plenty of companies offering commercial DNA analysis to the general public built their businesses by taking advantage of these weak legal protections in the first place, collecting volumes of genetic information for profit in the absence of strong data protections generally and robust safeguards for genetic information in particular. Even on the national security side, there are gaps and risks: a non-US purchaser looking to buy up 23andMe could trigger a review by the Committee on Foreign Investment in the United States (CFIUS), which would evaluate the security risks of people's genetic data being bought up. But current laws and regulations have enormous gaps: an American company purchasing the genetic data for harmful ends would not trigger such a review or anything similar. The United States needs to update and strengthen its genetic privacy laws.

For now, it counts for something that consumers who gave 23andMe their genetic data can request its deletion. Those whose data is in jeopardy should absolutely look into doing so. But this scenario—troves of genetic data, effectively for sale to the highest bidder; consumers left clicking deletion buttons on a bankrupt company's website—shows that consumers deserve much stronger protections to ensure their genetic information, in some cases their literal DNA, is seriously safeguarded.

This blog is cross-posted with Georgetown Law’s Center on Privacy & Technology.
