NetChoice v. Bonta: The Case That Threatens the Future of Privacy
October 19, 2023
Last month, a California district court judge enjoined enforcement of the state’s Age-Appropriate Design Code, or AADC, based on a dangerously overbroad reading of the First Amendment. The weaponization of the Constitution in NetChoice v. Bonta should raise alarms for all privacy and consumer protection advocates.
California passed the Age-Appropriate Design Code last year in an effort to promote privacy protections for children online and to ensure that online products and services are designed in ways that create a safe space for children to learn, explore, and play. The AADC includes many requirements and obligations that are common among U.S. and international privacy frameworks, such as requirements to make privacy policies understandable to kids, to minimize the collection, sale, and use of kids’ data, to default to high privacy settings, and to prohibit companies from tricking kids into changing their privacy settings to make them more permissive.
The judge in Bonta found that the whole law required close review under the First Amendment based on an incorrect reading of a Supreme Court case called Sorrell v. IMS Health. The judge then said that all of the challenged provisions were unconstitutional by applying an exacting level of First Amendment scrutiny that no privacy law could pass. If the decision is allowed to stand, it would allow federal judges to substitute their own policy views on privacy law for those of the legislature, undermining internet governance, consumer protection, and even democracy itself.
Professor Neil Richards has called the type of reasoning in Bonta “Digital Lochner,” a reference to an early twentieth-century case that ushered in an age in which judges struck down important and popular economic regulations under the guise of constitutional Due Process. The Bonta decision would achieve the same effect through the First Amendment. It would allow judges to second-guess legislatures’ decisions to protect consumers from harms caused by tech companies, disrupting the popular push for tech accountability and paving the way for tech companies to make themselves immune from regulation as a matter of constitutional law.
The California Attorney General’s office appealed the district court decision to the Ninth Circuit yesterday, making this case one to watch. Because of the importance of this case to the future of tech regulation and accountability, we will be publishing a series of posts over the next few months digging into its significance.
Part I of this blog series explains how the Bonta court’s overly expansive view of the First Amendment’s scope would subject almost all tech regulation to heightened constitutional scrutiny.
Part II will explain the harsh scrutiny the judge in Bonta applied to the AADC and the implications for privacy law writ large.
The judge in Bonta did not carefully determine whether every provision of the AADC required heightened First Amendment review
Before a court can review a law under the First Amendment, it must first decide whether the statute requires First Amendment review at all. Under existing precedent, that determination turns on whether the statute burdens a recognized category of protected speech and on the extent to which that burden implicates the interests underlying the First Amendment.
Determining whether a statute implicates the First Amendment and requires heightened review is a crucial analytical step. Laws that do not trigger heightened scrutiny get reviewed under a very deferential standard that gives judges little room to question the legislature’s policy choices. But when a judge reviews a statute under the First Amendment under heightened scrutiny, they essentially put a legislature’s policymaking under a microscope. The state is required to justify to the judge the importance of the policy interests the legislature chose to address and whether the legislature chose an appropriate means of addressing those interests. If the judge doesn’t think that the legislature’s choices are justified enough, they can strike the law down.
Because constitutional review gives unelected and unaccountable judges the power to override the democratic process, it should only be used when absolutely necessary to protect people from being unfairly targeted by the state. When only part of a law is suspect, only that part of the law should be reviewed so that, if it is possible to enjoin just that part of the statute, the rest of the statute can survive. To that end, judges should analyze each challenged provision of a law to determine how it impacts speech and whether the impact justifies First Amendment review.
That is not what the Bonta judge did.
The AADC consists of several different kinds of provisions, some of which may require First Amendment review, but most of which do not. The provisions NetChoice challenged were:
- Basic data protection requirements. These provisions restrict the collection, use, and disclosure of user data. The provisions NetChoice challenged were a requirement that companies minimize the data they collect, use, and disclose to what is necessary to provide a service and a ban on unauthorized use of kids’ data. NetChoice even challenged a requirement that companies give kids high privacy settings by default, meaning kids can still change the settings to make them less protective.
- Safe design requirements. These provisions prohibit companies from using certain manipulative design techniques and require them to use other design techniques to mitigate harms specific to kids. The challenged manipulative design provisions were the prohibitions on tricking kids into selecting lower privacy settings through dark patterns and on profiling kids by default. NetChoice also challenged the provisions prohibiting companies from using kids’ data in a way they know or have reason to know harms kids and a requirement to either estimate kids’ ages or default all users to high privacy settings.
- Transparency & reporting requirements.
- The AADC requires companies to conduct Data Protection Impact Assessments, or DPIAs, to assess the risks associated with their products’ and services’ use of kids’ data. Companies are also required to come up with a plan to mitigate the risks found but are not required to follow through on the plan. This provision has received a lot of attention because it requires companies to assess the risks associated with their use of kids’ data to show kids harmful content. As EPIC wrote in its amicus brief in the district court, DPIAs are a common corporate accountability tool and are found in privacy frameworks around the world.
- In addition to the DPIA requirement, NetChoice challenged the AADC’s requirements that companies communicate their policies in age-appropriate language and that they enforce their policies as published.
Instead of going through these provisions one-by-one to decide whether they required First Amendment review, the Bonta judge analyzed the challenged “prohibitions” and “requirements.” Within these two categories, the judge basically had two analyses: one that applied to the data protection and privacy-by-design mandates, and one that applied to the transparency and reporting requirements.
By lumping most provisions together into crude categories, the judge glossed over constitutionally significant distinctions. The judge also completely ignored the dark patterns prohibition in this analysis, but later applied commercial speech scrutiny to the provision—even though commercial speech scrutiny does not apply to misleading speech.
More significantly, the judge in Bonta adopted two incredibly overbroad—and incorrect—rules for applying First Amendment scrutiny. The judge misconstrued the Supreme Court case Sorrell v. IMS Health to say, first, that all laws governing the “availability and use” of information require heightened First Amendment scrutiny; and second, that all laws requiring companies to produce or distribute words or information require heightened First Amendment review. By relying solely on an erroneous reading of Sorrell, the judge failed to seriously examine how First Amendment precedent applies to the business practices of tech companies.
Sorrell does not say that all data protection laws implicate the First Amendment
First, the judge in Bonta announced that Sorrell “unequivocally” decided how the First Amendment applies to privacy laws: a law that “restricts the availability and use of information by some speakers but not others, and for some purposes but not others, is a regulation of protected expression.” Basically, Bonta says that data is protected speech under the First Amendment and that judges must give all data protection laws a hard look unless they apply to everyone in every context.
That is not what Sorrell held. The protected speech category in Sorrell was marketing messages, not data. And the Court in Sorrell applied heightened scrutiny not because the law made “speaker-based” distinctions but because the law discriminated against one speaker’s viewpoint.
The law challenged in Sorrell restricted the sale and use of data about physicians’ prescribing histories. The Court declined to decide whether the data at issue itself was protected speech. Instead, the Court looked to how the restriction on the availability and use of the physicians’ data burdened protected speech—the marketing of brand-name drugs by pharmaceutical companies. The Court analogized restrictions on the availability and use of data to restrictions placed on access to ink or the use of newsracks in previous cases. Ink and newsracks were not themselves protected speech, but because they were essential to the creation or distribution of certain types of protected speech, restrictions on their availability and use could sometimes impact speech in ways that raised suspicions under the First Amendment.
The idea that data is sometimes an essential ingredient of protected speech and not protected speech itself tracks best with the interests underlying the First Amendment. The First Amendment was meant to protect against the suppression of acts or words that express a message, idea, or viewpoint. The user data tech companies collect does not express any kind of message, idea, or viewpoint, nor is the act of collecting user data expressive. But the data can sometimes be used to express messages or viewpoints. Privacy laws should not automatically be subject to First Amendment review, but when a law’s restrictions on the availability and use of data have the necessary effect of suppressing a particular type of message, they may require a closer look.
The judge in Bonta did not identify any reason to suspect that the AADC’s data protection provisions had the effect of suppressing any message. The only supposedly suspect effect the Bonta court identified was that the AADC only applied to for-profit companies and not government agencies or non-profits—that is, Bonta applied the so-called Sorrell “speaker-based” discrimination standard.
But there is no Sorrell “speaker-based” discrimination standard. The problem with the challenged law in Sorrell was not that it applied to some entities and not others. The problem, according to the majority, was that the burden on protected speech was placed entirely on pharmaceutical manufacturers marketing brand-name drugs while allowing virtually all other speakers—including those pushing generic drugs—to use the same data for their own marketing. In other words, the law discriminated against pharmaceutical manufacturers based on their viewpoint, which the First Amendment cannot abide without an iron-clad justification.
A legislature’s decision to regulate some entities and not others is not itself a reason for the judiciary to scrutinize a law under the First Amendment. All laws must make such choices. It is only when the distinctions that a statute draws raise suspicion that the law is an excuse to suppress a specific viewpoint that a law requires a closer look. The Bonta judge did not identify any reason why the AADC should regulate non-profits or the government, or why not regulating those entities suggests that the AADC might be a tool for the suppression of for-profit companies’ speech. Thus, the distinctions the AADC drew between for-profit companies and other entities were constitutionally irrelevant and should not have justified First Amendment review.
Sorrell also does not say that every law that requires companies to use words implicates the First Amendment
As for the AADC’s transparency and reporting requirements, the judge in Bonta applied an incredibly overbroad rule: any regulation that requires a person to produce or distribute words or information triggers heightened scrutiny. The judge said the AADC’s data protection impact assessment provision “requires a business to express its ideas and analysis,” which “regulate[s] the distribution of speech and therefore trigger[s] First Amendment scrutiny.” The rest of the AADC’s transparency and reporting requirements implicated the First Amendment and thus required heightened scrutiny because they “require businesses to affirmatively provide information to users, and by requiring speech necessarily regulate it.”
Since almost everything tech companies do involves using words or information in some way, practically all regulation of tech companies—and much of the rest of online commercial regulation—would trigger heightened scrutiny almost by default.
The only support the judge provided for this sweeping rule was a single sentence from Sorrell taken completely out of context. The sentence is this: “this Court has held that the creation and dissemination of information are speech within the meaning of the First Amendment.” The purpose of this sentence was to introduce a discussion of cases that raised issues related to the one in Sorrell: whether restricting the availability and use of information violated the First Amendment. But the cases the Court discussed, like Sorrell, were more about the negative impacts such restrictions had on forms of expression that are unambiguously protected by the First Amendment, such as news reporting and marketing.
Indeed, when the Eleventh Circuit recently faced the same question about whether the First Amendment applied to a disclosure requirement for tech companies in another case brought by NetChoice, NetChoice v. Moody, the court found that it did—not because the requirement compelled the companies to produce words, but because it would deter companies from engaging in protected speech. At issue in Moody was a requirement that social media companies provide an explanation to every user when the company removed the user’s content from the platform. The court said that the requirement implicated the First Amendment because the burden of providing explanations would discourage companies from removing content they would otherwise remove, and decisions about whether to remove content are editorial judgments protected by the First Amendment.
The only negative impacts on speech that the Bonta judge identified here were related to the requirement that companies “enforce published terms, policies, and community standards established by the business, including, but not limited to, privacy policies and those concerning children.” In the judge’s opinion, these requirements would “press private companies into service as government censors, thus violating the First Amendment by proxy.” But the AADC does not require that the companies’ policies reflect any government position on censorship, only their own. And while companies may be pressed to act as censors, it would only be to the extent that the companies promise to do so. Companies can avoid liability simply by not making promises they don’t intend to keep. The state confirmed during litigation that the provision can’t be used against companies that say they will decide whether to apply their policies on a case-by-case basis.
The Bonta court’s analysis here also seems to imply that promissory estoppel or misrepresentation claims against tech companies that promise to take an action to, say, block a bully or remove defamatory content and fail to follow through are possibly unconstitutional. But if there is no way to enforce a tech company’s statements or policies, companies can say whatever they want to lure users in without consequence. That would make it almost impossible to align user expectations with business practices, a key aim of consumer protection law.
Besides the policy enforcement provision, the Bonta court pointed to no other provision that had any impact on the regulated companies beyond requiring them to produce words. That cannot be the test for implicating the First Amendment. All transparency and reporting measures require regulated entities to produce words. Indeed, it could be argued that all regulation requires companies to produce words, because complying with the law requires communication, and communication requires words. The First Amendment would thus swallow up the whole of law, an absurd result that refutes the rule itself.
Next blog post: how the judge in Bonta found the AADC unconstitutional
The first blog post in this series on NetChoice v. Bonta analyzed the problems with the court’s overbroad First Amendment standard and how the decision could negatively impact current and future privacy and consumer protection laws.
The next blog post will examine how the judge in Bonta aggressively substituted their own policy views—and those of NetChoice—for those of the legislature to strike down every challenged provision of the AADC. The reasoning is not only wrong; it also shows how a judge can strike down an entire privacy framework by minimizing the harm invasive data practices cause consumers and by deferring to tech companies’ views on how the internet should work.