Analysis

NetChoice v. Bonta: An exacting level of scrutiny no privacy law could survive

January 15, 2024 | Megan Iorio, Senior Counsel

This is Part II in a series on NetChoice v. Bonta, a case where a district court judge enjoined enforcement of California’s Age-Appropriate Design Code because it likely violated the First Amendment. Read Part I for background on the case and an analysis of the judge’s decision to apply First Amendment scrutiny.

As a quick recap, the California Age-Appropriate Design Code protects kids online by requiring companies to assess the risks their data and design practices pose to kids and to refrain from certain harmful data and design practices. The law includes some transparency, data minimization, and safe design requirements. The law gives companies the option to either apply heightened privacy protections to just those users they estimate to be kids, or to apply the protections to all users.  

NetChoice, a Big Tech trade organization, sought an injunction against the AADC arguing, among other things, that the law violated their Big Tech members’ First Amendment rights to determine how to present content to users. The district court judge agreed with NetChoice and issued a sweeping decision that would make every privacy law—and much internet regulation generally—subject to heightened First Amendment review.

The judge in Bonta determined that commercial scrutiny should apply to all of the AADC’s challenged provisions. Commercial scrutiny is a form of intermediate scrutiny that applies to commercial speech. The government must show that its interest in the regulation is substantial, that the regulation directly advances the government’s interest, and that the amount of speech that is restricted is proportional to the government’s interest.

Commercial scrutiny is not supposed to be impossible to pass. Unlike strict scrutiny, commercial scrutiny does not require the government to choose the narrowest means of achieving its end—as long as the law does not restrict substantially more speech than necessary, the fit between the means and the ends is good enough.

The AADC should have passed First Amendment scrutiny. But the judge in Bonta said that none of its challenged provisions did. While the judge in Bonta claimed to apply commercial scrutiny to each provision of the AADC, in fact, she applied a much more exacting level of scrutiny—something more akin to strict scrutiny. This was partly due to the judge’s error in applying the standard, but also partly due to decades of courts conflating commercial scrutiny and strict scrutiny, which has left a somewhat confusing body of precedent. What is clear, though, is that whatever test the judge applied is a poor fit for assessing the speech interests at issue in the case, particularly with respect to the transparency requirements. 

The judge’s reasons for striking down the AADC’s provisions fell into three categories:

  1. Transparency and reporting requirements are unconstitutional because they are “ineffective.”
  2. Invasive and manipulative data and design practices aren’t that bad, so regulating them is unconstitutional.
  3. Age estimation and data protection laws unconstitutionally limit users’ access to lawful content. (EPIC filed an amicus brief in the Ninth Circuit dispelling this myth about the AADC.)

This blog post analyzes each of these reasons, explains why they are flawed, and shows how they could be used to invalidate other internet laws.

Bad Take #1: Transparency requirements are “ineffective” and therefore unconstitutional.

The AADC includes several transparency requirements. Bonta addresses the constitutionality of three of them: the requirements to conduct and report data protection impact assessments, or DPIAs; to use age-appropriate language in policy disclosures; and to not make promises in published policies that companies do not plan to keep. As explained in the first part of this blog series, the Bonta judge decided that all three of these provisions required commercial speech review because they compel companies to speak.

Under commercial speech review, a law “directly and materially advances” its end if there is an “immediate connection” or “direct link” between the means and the ends of the law and the provision will alleviate an alleged harm “to a material degree.” The judge in Bonta said that all of the AADC’s transparency requirements were unconstitutional because they were “ineffective” at achieving the government’s goal of protecting kids online and so did not “directly and materially advance” the government’s interest.

Part of the problem with the judge’s analysis in Bonta was that she ignored, dismissed, or misunderstood the evidence explaining why DPIAs and other transparency requirements advance the statute’s purpose. Transparency requirements admittedly do not do as much as direct regulation to advance consumer protection, but they do something—and that should be enough for the First Amendment. DPIA requirements in particular force companies to assess how their business practices may cause harm. Companies can’t prevent harms if they don’t know about them, so forcing companies to identify harms makes it more likely they will do something about them.

Another misstep was misreading commercial speech scrutiny’s requirement that a law advance a goal “to a material degree.” The judge in Bonta took this fuzzy standard to require the law to meet a certain threshold of effectiveness that only direct regulation can meet. Such a reading of commercial speech doctrine could render nearly every transparency measure unconstitutional. It also creates a lose-lose situation for internet regulation: Companies argue that transparency laws are unconstitutional because they are “ineffective” while at the same time arguing that direct regulations are unconstitutional because they are “too burdensome.”

This way of reading the “directly and materially advances” requirement also misses the point of this step of the First Amendment analysis. Laws that burden speech but do nothing to advance the government’s offered interest are considered suspect because failing to advance a legitimate interest suggests that the government’s actual interest was the suppression of unwanted speech. The judge in Bonta did not identify any suspect motivation behind the AADC’s transparency provisions, nor did she suggest that they were aimed at anything other than improving platform transparency. The way the judge applied this step also turns this part of the commercial speech test into a quasi “fit” analysis, except it misses a crucial part of the fit analysis: measuring whether the burden to speech is proportional to the ends the law achieves. A law that doesn’t impose much, if any, burden on speech shouldn’t have to meet a heightened standard of justification or impact.

More fundamentally, the regular commercial speech test is not an appropriate standard to assess transparency laws. The commercial scrutiny test was designed to assess laws that make it more difficult for companies to communicate truthful information to consumers through, e.g., advertisements. These laws need to meet a certain threshold of effectiveness, perhaps, to ensure that the law is not merely an excuse to burden disfavored speech. 

Transparency laws typically do not burden companies’ ability to speak to consumers. In fact, they do the exact opposite: they mandate or enhance companies’ truthful communications with consumers. They are commonplace in a wide range of industries precisely because they help consumers decide which products to buy and which services to use. Thus, to the extent that transparency laws implicate the First Amendment at all, they do not do so in the same way that mainstream commercial speech doctrine was designed to capture. 

That is why disclosure requirements are typically assessed under the more deferential standard from Zauderer v. Office of Disciplinary Counsel. Zauderer upheld the constitutionality of a law that compelled companies to disclose certain information about their products in their advertisements. The Zauderer test says that a disclosure requirement does not violate the First Amendment if it is (1) reasonably related to a government interest in protecting consumers from deception and (2) not unjustified or so unduly burdensome that it would chill speech. The Eleventh Circuit applied this test in NetChoice v. Moody to assess the constitutionality of disclosure requirements on social media companies, and the Supreme Court seems poised to apply the Zauderer test to social media disclosures when it decides Moody and NetChoice v. Paxton later this term. (UPDATE: this is exactly what the Court did.)

Zauderer’s requirements are more appropriately tailored to the speech implications of transparency laws and less vulnerable to abuse than the standard commercial scrutiny test. Zauderer only requires the means to be “reasonably related” to the ends—a relatively straightforward standard that simply asks whether there is a connection between the means and the ends and doesn’t invite judges to assess whether the means meet some fuzzy “material” threshold of effectiveness at achieving the ends. The Zauderer standard also recognizes that transparency measures do not typically burden speech and in fact promote truthful communications between companies and consumers, so such laws need only some justification, not a “substantial” one, and there need not be an inquiry into whether the justification is proportional to the burden. Only in the rare instance that complying with a transparency law would be “so unduly burdensome that it would chill speech,” like the law at issue in Moody, does a disclosure requirement fail this more deferential test.

Bad Take #2: Invasive and manipulative data and design practices aren’t that bad, so restricting them is unconstitutional.

Under commercial scrutiny, the government must show that a challenged law advances a substantial interest. The judge in Bonta found that most of the AADC’s duties and prohibitions did not advance a substantial government interest. This was due in part to the judge’s own error: she focused primarily on a single harm—exposing kids to harmful content—that the AADC’s duties and prohibitions are not meant to protect against. She then ignored many of the harms that the provisions do address, like invasion of privacy and manipulation. Worst of all, the judge found that the regulated practices are actually beneficial in some circumstances, and therefore that regulating them is unconstitutional.

This outcome, where a judge replaces the legislature’s assessment of harm with their own views about what is harmful and what is not, is one of the dangers of subjecting statutes to First Amendment scrutiny. Heightened scrutiny invites judges to assess the strength or importance of the harms the legislature chose to address. Although courts have recognized many government interests to be “substantial” under commercial scrutiny—including protecting against many privacy harms—less established harms and methods of addressing harm are vulnerable to attack.

The specific framing of the government’s burden in some commercial speech cases is especially concerning for consumer advocates: the law must protect against a harm that is “real, not merely conjectural.” This is the exact same phrase used to assess whether a plaintiff has suffered a concrete harm in the Article III standing context. Judges have often sided with corporations in finding that privacy and other online harms are not “real.” This is, in part, because a recent Supreme Court case, TransUnion v. Ramirez, limits the scope of concrete harms under Article III to harms analogous to those historically heard in American courts, and there isn’t always a clear analogy for online harms. But even before TransUnion, and even when judges could consider a broader record on harm—such as legislative findings and evidence of harm presented to the legislature—courts refused to recognize that many privacy laws protected against a real harm, rendering those protections unenforceable. It is almost inevitable that some of the assumptions and biases underlying Article III standing analysis in privacy and other online harms cases will seep into judges’ First Amendment determinations because of the similarities between the two inquiries.

That is exactly what happened in Bonta. In several places, the judge described her inquiry into the government’s substantial interest as a search for a “concrete harm.” For example, the judge found that requiring companies to enforce their own policies did not protect against a “concrete harm,” even though the court quoted the state’s expert saying that holding companies to their word ensures truthful communication with potential users who are considering whether to use their services—which is just the sort of substantial government interest commercial speech doctrine recognizes. 

The judge also did not think that tricking kids into opting into more intrusive data collection through dark patterns was a “concrete harm.” In a move directly reminiscent of Article III standing analysis, which generally falls back on monetary and physical harms, the judge considered, and then rejected, the idea that the harm caused by dark patterns is a monetary one. Even though the judge analyzed multiple potential harms from dark patterns, she did not consider the harm of manipulation, which, as Daniel Solove pointed out, is the most obvious candidate here. The record described how dark patterns manipulate kids into making decisions that are best for the company and not for kids, so the harm of manipulation was before the judge—the judge just ignored it. 

Especially troubling was the judge’s failure to recognize the privacy harms caused by unrestricted collection, use, and disclosure of personal information. The Supreme Court has recognized that privacy is a substantial government interest, most notably in Sorrell v. IMS Health—the case that the judge in Bonta cited to extensively in deciding that First Amendment scrutiny applied to all of the AADC’s provisions. 

But the judge in Bonta decided that invasive data practices could sometimes benefit consumers, and this fact undermined the government’s argument that the practices caused harm. For instance, the judge did not see how using personal information for multiple purposes could cause harm, and even if it did sometimes cause harm, it didn’t always cause harm. Requiring companies to minimize the amount of data they collect, sell, share, and retain on kids would have “throw[n] out the baby with the bathwater” because sometimes excessive data collection could lead to users finding content they would not have otherwise found. And when it came to profiling kids, the judge said that the practice “can be beneficial to minors,” particularly “LGBTQ+ youth” and “pregnant teenagers,” because it “may lead to fulfilling new interests that the consumer may not have otherwise thought to search out.” 

Tech companies can dream up any number of real or hypothetical ways that invasive data practices might have incidental benefits to consumers. For example, tech companies could say that limits on the sale of location data would prevent transit agencies from purchasing the data and using it to make decisions about new transit projects that benefit consumers. Companies could also argue that HIPAA’s limits on the disclosure of health data prevent many AI companies from obtaining data they could use to identify and treat patients’ illnesses. Legislatures that enact privacy laws recognize that the harm invasive data practices cause individuals far outweighs any benefits. First Amendment scrutiny allows judges to second-guess that determination, which, as we’ve seen in the Article III standing context, has had devastating results for the enforcement of privacy and transparency laws.

In fact, conceding that all privacy laws implicate the First Amendment would pose an even greater barrier to the enforcement of these laws than Article III standing doctrine. Companies cannot challenge the enforceability of a privacy law under Article III until they are sued, and the impact on enforcement is generally limited to a certain class of plaintiffs bringing claims under a private right of action. First Amendment claims, by contrast, can be brought facially, allowing companies to preemptively block all enforcement of a statute, including by attorneys general who do not need to show standing. First Amendment scrutiny, as the Bonta judge applied it, would be like Article III standing on steroids and would pose an existential threat to all privacy and transparency laws.

Bad Take #3: Age estimation and data protection requirements unconstitutionally restrict companies’ ability to present content to users.

As EPIC explained in the amicus brief we filed in December, the judge in Bonta erroneously determined that the age estimation and data protection requirements restrict users’ ability to access content. This is a misreading of the statute as well as a misreading of NetChoice’s burden in a facial challenge. The judge’s determination also gives dangerous credence to NetChoice’s argument that companies’ choices about how to present content to users—such as the choice to use personal information to determine what content to present to a specific user—are a form of protected speech. 

The AADC requires companies to either apply heightened privacy protections to users they estimate to be minors or to apply these protections to all users. The level of certainty required in the age estimate depends on the risks the companies’ data practices present to users: the higher the risk, the higher the certainty the company needs to have in its estimate. Thus, what age estimation method to use is a fact-specific inquiry that depends on the data practices of the company. Companies must first perform a DPIA to assess their level of risk, and then determine what method of age estimation is reasonable. There are many non-invasive age estimation methods available, including using data the company already collects to estimate age. Indeed, the age estimation provision’s certainty level seems keyed to a company’s ability to estimate age with existing data: the more data a company collects, and the more sensitive that data, the greater their ability to estimate age based on that data.

NetChoice did not try to apply the AADC’s age estimation provision to any specific fact scenario. Instead, it argued, in the abstract, that companies would use the most invasive form of age verification to ensure that they were complying with the law. This argument is purely speculative and cannot support striking down the provision in all of its applications. It is also a misreading of the statute. Companies are required to assess whether the method they choose to estimate age complies with the AADC—specifically, whether the method involves collecting and using the minimum data necessary to perform the estimate, and otherwise minimizes data risks. Thus, a company that uses invasive age verification when its existing data is sufficient to estimate age to the appropriate degree would be in violation of the AADC.

As for the data protection requirements, the judge in Bonta said that “if a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly ‘reduce the adult population … to reading only what is fit for children.’” But the AADC’s data protection provisions do not restrict access to content; they only restrict whether and how companies can use personal information to show users content. Specifically, the law limits how companies can use behavioral data, which is passively collected from users. Nothing in the law prohibits companies from showing users content they indicate they want to see. Companies can show users whatever content they like, and express any message they like, as long as they don’t engage in harmful data practices. 

Underlying the judge’s analysis, though, is something more dangerous: a nod to NetChoice’s broader First Amendment ambitions. In its briefing, NetChoice argued that the AADC’s data protection provisions violated the First Amendment because decisions about “how, under what conditions, and to whom content may be published” are protected editorial judgements. Data minimization requirements that restrict companies from collecting and using information that they would otherwise use to determine “how, under what conditions, and to whom content may be published” are thus presumptively unconstitutional, according to NetChoice.

This argument is dangerously overbroad and could make all platform regulation unconstitutional. EPIC explained why such a rule is dangerous in our amicus brief in the NetChoice v. Paxton and Moody v. NetChoice cases, where NetChoice is pushing the same rule in a different context. We urged the Supreme Court to adopt a much narrower rule: that content moderation decisions based on a company’s judgements about the content’s message implicate the First Amendment, but decisions based on users’ behavior do not because they do not express any message and regulating such decisions does not impact what messages the company can spread. (UPDATE: the Court’s guidance to the lower courts suggests they agree.)

***

The district court’s decision in the NetChoice v. Bonta case is deeply flawed. The judge misinterpreted the statute and constitutional precedent. She also applied the wrong constitutional tests, applied those tests incorrectly, adopted or nodded to dangerously overbroad rules, and did not hold NetChoice to its proper burden in a facial challenge. The case is now before the Ninth Circuit on appeal, where the decision will hopefully be overturned.
