Analysis
Far From a Punt, SCOTUS’s NetChoice Decision Crushes Big Tech’s Big Litigation Dreams
July 16, 2024
Earlier this month, the Supreme Court sent NetChoice’s challenges against Texas’ and Florida’s content moderation laws back to the lower courts because NetChoice hadn’t met its burden of showing that the laws were unconstitutional in their entirety. Calling the decision a punt, as some commentators have, vastly underestimates its importance. The decision is a huge blow to Big Tech’s litigation strategy of requesting the broadest possible relief from regulation based on nothing more than vibes.
Over the last several years, tech companies—either directly or through their trade association, NetChoice—have asked courts to declare a series of regulations unconstitutional in their entirety, often before they have been enforced or any company has even tried to comply. These kinds of lawsuits, called facial challenges, are not the typical way courts have determined the constitutionality of statutes. More commonly, litigants bring “as-applied” challenges, where they argue that a specific application of a law violates the Constitution. This is for good reason. An as-applied challenge allows a judge to evaluate a law’s constitutionality with actual facts and enforcement situations in mind. This helps the judge to avoid issuing an overbroad ruling based on speculation. A facial challenge, on the other hand, requires a court to issue an authoritative interpretation of an entire statute in one fell swoop, guessing how the law might be applied in all possible situations.
Big Tech, and NetChoice in particular, has tried to exploit the handwavy nature of facial challenges to extract overbroad relief from courts. And they have been concerningly successful at convincing both courts and important civil society groups that they deserve relief. Last September, a district court judge granted NetChoice’s request to block the California Age-Appropriate Design Code in its entirety and in the process awarded Big Tech a golden get-out-of-regulation-free card by announcing an overbroad rule that would make all data protection laws presumptively unconstitutional under the First Amendment.
In a separate lawsuit, X Corp. challenged California’s AB 587, which requires social media companies to disclose certain information about their content moderation practices. While the district court judge in that case rightly denied X’s request for an injunction, X’s appeal of that decision has received support from civil liberties groups like the Reporters Committee for Freedom of the Press and the Electronic Frontier Foundation. EPIC filed amicus briefs in support of AG Bonta in both the NetChoice and X cases.
Neither NetChoice nor X comes close to meeting the facial challenge standard set out in the Moody/Paxton decision. Both cases are based on pure speculation about how companies may comply with the laws and how the laws may be enforced.
In the aftermath of Moody/Paxton, NetChoice and X should be prepared for some severe grilling on their failure to meet the required burden in a facial challenge. The appeals in both the NetChoice and X cases are currently before the Ninth Circuit (both hearings are on Wednesday, July 17, before the same three-judge panel). The district court’s dangerous decision in NetChoice v. Bonta should be vacated because the court did not apply the correct standard, and the X decision should be affirmed.
Big Tech Has a Big Burden in Facial Challenges
It is not often that all nine Supreme Court justices agree on something, but in the NetChoice cases, each one agreed that First Amendment facial challenges carry a heavy burden and that NetChoice did not come close to meeting that burden.
Justice Kagan, who wrote for the majority, proclaimed that facial challenges are “disfavored” because they “often rest on speculation” and “threaten to short circuit the democratic process by preventing duly enacted laws from being implemented in constitutional ways.” Because of the dangers facial challenges pose, the decision to challenge a statute on its face “comes at a cost.” That cost is a heightened evidentiary burden. In the First Amendment context, the challenger must lay out “a law’s full set of applications, evaluate which are constitutional and which are not, and compare the one to the other.” A law can be facially invalidated “only if the law’s unconstitutional applications substantially outweigh its constitutional ones.” Basically, a facial challenge cannot be an abstract attack on a statute; it must encompass every possible as-applied challenge—a dizzying burden.
That is not how NetChoice litigated the Texas and Florida cases. NetChoice “did not address the full range of activities the laws cover and measure the constitutional against the unconstitutional applications.” Instead, “they treated these cases more like as-applied claims than like facial ones,” focusing solely on “how the laws applied to Facebook’s News Feed and YouTube’s homepage.”
Remedying the record in these cases will be an onerous task for NetChoice. The Court said that NetChoice needs to answer “what activities, by what actors, do the laws prohibit or otherwise regulate?” Then, for each actor and each regulated activity, NetChoice needs to show “whether there is an intrusion on protected editorial discretion.” The breadth and variety of platforms and platform activities, combined with the Court’s granular, fact-specific approach to determining protected editorial discretion for online platform activities, suggest that the burden here will be crushing. As Justice Barrett noted in her concurrence, “dealing with a broad swath of varied platforms and functions in a facial challenge strikes me as a daunting, if not impossible, task.”
NetChoice’s strategy in the Supreme Court cases and in others has been to elide the differences among platforms, their features, and their activities, with the hope of securing as broad an immunity rule as possible for Big Tech. They have relied on sloppy statutory interpretation, barebones records, and haphazard interpretations of constitutional doctrine. Moving forward, lower courts must insist on careful statutory interpretation, robust records, and specificity in constitutional arguments—and, in their absence, throw the cases out. Lower courts are already responding to the Moody/Paxton decision by doing just that. Justice Jackson put it best:
“[L]ower courts must address these cases at the right level of specificity. The question is not whether an entire category of corporations (like social media companies) or a particular entity (like Facebook) is generally engaged in expression. Nor is it enough to say that a given activity (say, content moderation) for a particular service (the News Feed, for example) seems roughly analogous to a more familiar example from our precedent. . . . Even when evaluating a broad facial challenge, courts must make sure they carefully parse not only what entities are regulated, but how the regulated activities actually function before deciding if the activity in question constitutes expression and therefore comes within the First Amendment’s ambit.”
The Ninth Circuit will have the perfect opportunity to check Big Tech in two cases brought against California’s Attorney General: NetChoice v. Bonta and X v. Bonta.
NetChoice v. Bonta: The Injunction That Never Should Have Been
The district court’s decision to grant NetChoice an injunction against the California Age-Appropriate Design Code (CAAADC) because, in the court’s view, the law likely violates the First Amendment encapsulates pretty much everything the Supreme Court said was wrong with facial challenges: it hijacked the democratic process by blocking a law based on mere speculation about its application.
The CAAADC is the California legislature’s response to growing concerns about child safety online. Unlike laws passed in states like Arkansas and Texas, which force companies to verify the age of users before they can access social media and other websites, the CAAADC targets the data and design practices of tech companies. The law applies to any online service or feature “likely to be accessed by a child” and includes several different requirements: companies must complete data protection impact assessments, or DPIAs, before releasing new features; minimize data collection, use, and disclosure; apply privacy-protective default settings; not use dark patterns to subvert users’ privacy preferences; not mislead users about their policies; and provide users with a certain level of transparency. The law gives companies the option to either apply heightened privacy settings to all users or to apply those settings only to users they estimate to be minors. The level of certainty in the age estimate must be proportionate to the risks posed by the company’s data practices, as revealed by the mandated DPIAs.
From this high-level description of the law alone, it should be clear that the CAAADC is a complex regulation that applies to many different types of platforms, services, features, and activities, all of which, under the Supreme Court’s facial challenge standard, require separate analysis. In other words, the CAAADC is just the kind of law that Justice Barrett said seems “impossible” for a court to assess in a facial challenge.
NetChoice did not even attempt to assemble the requisite record. The trade organization admitted, in its briefing, that the CAAADC applies to “the vast majority of companies operating online,” listed ten categories of companies, and then proceeded to analyze the statute as if it applied to each of these companies in the same way. NetChoice did even less to identify which features and activities of these companies are regulated by the CAAADC, how each of these features or activities is expressive, and how the regulation burdens such expression. Features and activities like autoplay and newsfeeds were discussed only in the abstract as examples, which fails to provide the level of specificity required by the Moody/Paxton decision. Even in the abstract, NetChoice didn’t bother to explain how autoplay was expressive—it just assumed that it was. Finally, NetChoice didn’t weigh the unconstitutional applications against the constitutional ones. Instead, it relied on a few speculative, potentially unconstitutional applications to justify striking down the whole law.
Far from seeing its burden in a facial challenge as a disadvantage, NetChoice turned the impossibility of assessing the full scope of the CAAADC to its advantage. For example, NetChoice argued that some of the CAAADC’s provisions, like the age-estimation option, were too hard to apply, and so the statute must be either void for vagueness or an unconstitutional restriction on speech because companies would err on the side of caution and use invasive age verification. The district court ate this argument up and used it as one of the primary bases for enjoining the law.
But like all fact-sensitive standards of law, how the CAAADC’s provisions apply depends on specific facts about a company’s data and design practices. NetChoice did not even try to determine how the law would apply in a realistic fact scenario, let alone in each of the law’s possible applications, as the Moody/Paxton decision requires. Even the New York Times, whose amicus brief was influential in the lower court’s decision, described the CAAADC’s application to itself in very abstract terms and did not attempt to describe how the law might apply to its own specific data and design practices. Instead, NetChoice and its supporters flipped the facial challenge standard on its head: their failure to meet their evidentiary burden was proof that the law was unconstitutional.
The other way NetChoice used its burden to its advantage was through sloppy statutory interpretation. In the typical course of enforcement, statutes like the CAAADC are interpreted over many lawsuits, each of which would focus on the application of a specific set of provisions to a specific fact scenario. Hundreds of pages of briefing could be dedicated to parsing just one provision—even one word. Even in these more focused cases, ambiguity can be difficult to resolve.
In a pre-enforcement facial challenge like the one against the CAAADC, where no judge has ever interpreted the law before, a court has the impossible task of resolving all statutory interpretation questions at once on necessarily inadequate briefing. That increases the risk that the court will choose the simplest interpretation, especially if it comports with the court’s existing prejudices. The risk is especially high with a law like the CAAADC, whose subject is an emerging area of regulation and whose provisions require careful parsing to figure out how they work together. NetChoice exploited this risk: it offered a simple interpretation of the CAAADC’s key provisions as limits on what content can be shown to children, which is likely to trigger the prejudice of any judge suspicious of government censorship online. The problem is that NetChoice’s interpretation is wrong, and the correct answer is more complex.
Take, for instance, the unfairly derided DPIA provision. NetChoice claims that this provision requires companies to minimize kids’ exposure to harmful content. But that is not what the plain text of the law says. The CAAADC only requires companies to assess, at most, two things: how their data practices and how their design practices risk exposing kids to harmful content. Neither of these tasks is the same as asking companies to assess whether any particular piece of content on their service is harmful and to block access to it. The data practices assessment essentially asks how companies use kids’ data to select and rank content in a feed, which can increase the likelihood that kids are shown materials that feed into their insecurities and fears. The design practices assessment asks companies to assess what features they use to show users content, how those features work, and how they may present risks to kids.
As part of the design assessment, companies may have to determine whether including certain categories of content in feeds, or whether allowing all users to access all content on a site, risks exposing kids to harmful content. But assessing is all the companies have to do; they are not required to minimize any risk stemming from design alone. Section 1798.99.31(a)(2) of the CAAADC says that companies are only required to document and minimize the risks stemming from their data practices, not their more general design practices. In other words, the CAAADC requires companies to be aware of how their design might expose kids to harmful content, but they aren’t required by law to do anything about it. They do not need to block kids from accessing any content or stop showing kids any categories of content in their feeds. At most, companies only need to change how they use kids’ data to select and rank content. As the Supreme Court suggested in the Moody/Paxton decision, this kind of content-neutral regulation of data practices is likely not expressive and so does not trigger heightened First Amendment scrutiny.
Admittedly, this interpretation of the DPIA provision isn’t as simple as NetChoice’s, but it is the correct one. Unfortunately, the lower court unquestioningly adopted NetChoice’s strategic misreading of the statute instead of carefully parsing the law’s text—which is just what NetChoice was hoping for.
NetChoice clearly failed to meet the burden the Supreme Court set out for facial challenges in Moody/Paxton, and the Ninth Circuit panel should vacate the injunction that never should have been.
X v. Bonta: A Quasi-Jawboning Case That Does Not Work As a Facial Challenge
X Corp.’s challenge to California’s AB 587 rests on even less than NetChoice’s lawsuit against the CAAADC. AB 587 requires social media companies to disclose their terms of service, their content moderation policies for a few categories of content, and figures about their enforcement of these policies. Unlike the CAAADC, AB 587 has gone into effect, and companies are currently complying.
X sued to enjoin AB 587 in its entirety, arguing that it violated the First Amendment on its face. But X did not argue that complying with AB 587 has interfered with its speech by forcing it or any other company to express a message with which it disagrees. Indeed, X cannot argue this, because the law only requires companies to disclose their existing policies, not to adopt new ones. X also didn’t argue that complying with the law forced it to change how it moderates content. Instead, X’s argument against AB 587 is that the AG might try to use his authority to pressure companies to change their content moderation practices.
In essence, X is making a jawboning claim, and an extremely weak one at that. To support this claim, X proffered a single letter from the AG’s office, sent to the major social media companies and published online, that mentions AB 587 along with a number of other laws as obligations the companies have related to misinformation and disinformation on their services. The letter was sent before AB 587 went into effect and did not say that the AG planned to enforce the law if he didn’t like how companies were moderating election-related content.
Bonta’s letter falls far short of the coercive actions condemned in cases like Bantam Books and NRA v. Vullo, where officials made specific, private threats of enforcement to pressure companies to censor or dissociate from disfavored speech and speakers. At most, the AG’s letter can be read as pressure for companies to enforce their existing content moderation policies, which closely resembles the kind of activity the Supreme Court indicated did not amount to jawboning in Murthy v. Missouri.
Even if X were able to make out a strong jawboning claim, it would be an odd fit for its request for relief. Statutes are not invalidated on their face when they are misused, even in the First Amendment context. In NRA v. Vullo, for instance, the NRA requested retrospective relief from Vullo’s actions (i.e., damages); it did not ask the court to invalidate the laws used to coerce companies into dissociating from it. Even in Bantam Books, where the censorship commission responsible for the jawboning was established by statute and the plaintiffs requested that the statute be invalidated, the Supreme Court refused and instead enjoined the actions and procedures of the commission. That is because the harm in jawboning stems not from the law itself but from a specific use of it, and relief should be tailored to that use. Facially invalidating a law because it has been misused would be overkill. Facially invalidating a law because it might be misused would be ridiculous. The right time to challenge a coercive threat to use AB 587 would be when the threat is made, and the relief should be tailored to the harm.
In short, X falls far short of the evidentiary standard set out in Moody/Paxton. X does not show how complying with AB 587 forces it, or any other company, to express a message that it disagrees with or otherwise chills content moderation decisions. Nor does X show how enforcement of AB 587 would interfere with its expression other than in a purely speculative way. Even if there were a risk that AB 587 could be used to jawbone social media platforms, X would have to compare that risk to other potential enforcement applications, like enforcement against failure to file a report, filing an incomplete one, or getting caught in an obvious misrepresentation, all of which would be perfectly constitutional. Thus, even with a full record, X is unlikely to show that AB 587 is facially invalid.
***
In the next few months, there is going to be a big reckoning for Big Tech. Their strategy of bringing facial challenges based on barebones records and sloppy statutory interpretation is not going to fly in the post-Moody/Paxton world. Companies will have to dream smaller. They will likely need to focus more on as-applied challenges, and even then will need to build robust records with careful statutory and constitutional analysis. All of this is good for legislatures, who can feel more confident that the laws they pass to address Big Tech harms will not be thrown out based on little more than vibes.