Murthy v. Missouri and the Threat of Election Disinformation

March 21, 2024 | Grant Fergusson, Equal Justice Works Fellow, and Tom McBrien, EPIC Counsel

As the 2024 U.S. Presidential election heats up, we’re returning to the thorny problem of election disinformation through the lens of content moderation. This is the second post in our series on the U.S. election, so if you missed our first entry on the ways that generative AI is turbocharging election disinformation, check it out here.

Why content moderation, and why now? More than ever, people are getting their election news from online platforms like Facebook, X, and TikTok. At the same time, online election disinformation circumvents many of our country’s traditional election law safeguards, like campaign finance laws and regulations on political advertising. Online platforms and their content moderation policies are often the first line of defense for stopping the spread of election disinformation online. And this week, we saw a glimpse of what the Supreme Court thinks the government’s role can and should be in this fight.

In this blog post, we’ll explain just what’s at stake in Murthy v. Missouri, the recent Supreme Court case over whether the government can influence content moderation decisions, then show why the COVID-19 misinformation case may inadvertently change the way we deal with election disinformation.


Why Murthy Matters

On Monday, March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, a case about whether the Biden Administration’s communications with social media companies concerning their content moderation efforts were unconstitutional. In the case, a group of social media users and conservative states sued the Administration, alleging that it violated the First Amendment by effectively coercing online platforms to silence users. The specific practices they allege to be coercive include sharing information about elections and vaccines with the social media companies, requesting that the companies combat COVID-19 vaccine misinformation, and requesting that companies ban accounts impersonating President Biden’s family. This raises the question of how to tell when “jawboning”—when government officials attempt to encourage private companies to act without requiring them to do so—crosses the line into unconstitutional coercion. Under the First Amendment, the government generally cannot restrict private speech based on its content, but when can it suggest that online platforms do so?

Murthy is the first time the Supreme Court has considered the constitutionality of jawboning since a case called Bantam Books, Inc. v. Sullivan in 1963. That case established that government actors are free to inform and persuade private parties to censor speech, but once they cross the line into coercion, the First Amendment applies. Judges and commentators alike are split on whether the Biden Administration’s practice of information-sharing and coordination violates the Constitution, arguing variously that the Administration is (1) improperly coercing private action, (2) merely advising online platforms, or (3) even improperly deputizing online platforms to function as state actors. The case also raises difficult procedural questions for the Court, like standing, which this blog post won’t cover in detail.

Monday’s oral argument did little to clarify the future constitutionality of jawboning or government information-sharing more generally. Justices seemed to ask questions based on very different assumptions about what was relevant in the case.

On one side, Justice Clarence Thomas seemed to suggest that the implicit power dynamics at play when government actors contact platforms mean that any information-sharing or collaboration would be impermissibly coercive, triggering and automatically failing strict scrutiny. Platforms may feel compelled to alter their content moderation policies because of information shared by the government, meaning that government action would have led to the censorship of certain speech.

Many other Justices, such as Justices Barrett, Kagan, Kavanaugh, and Sotomayor, seemed to lean towards a fact-dependent, contextual approach to determining when persuasion crosses the line into coercion. Justices Kavanaugh and Kagan referenced their pre-SCOTUS careers as government officials who had tried to persuade journalists to write stories differently, and they seemed dismissive of the argument that they had violated the Constitution in those circumstances.

Taking a different tack, Justice Ketanji Brown Jackson suggested that even if the government’s actions did constitute coercion in this case, thus triggering strict scrutiny, the government’s compelling interest in ensuring effective communication of public health information during the pandemic may have been enough to survive First Amendment scrutiny.

Despite competing lines of questioning, the Murthy oral argument did highlight three murky areas within the First Amendment doctrine around jawboning.

First, the Justices and oralists grappled with several competing thresholds for unconstitutionality: what meaningfully separates coercion, significant encouragement, mere encouragement, and mere exhortations—and where should we draw the line of constitutionality? Ultimately, the oralists’ answers depended on the factual circumstances of government action. For example, the oralist for Missouri argued that the government could tell online platforms about an issue it is seeing but couldn’t encourage them to do anything about it. Similarly, the Court grappled with whether implicit or perceived threats of repercussions were enough to turn simple government information-sharing into improper coercion. For example, does it matter whether the government actor sharing information with a platform and exhorting them to do better is the head of an FBI department or, instead, a lowly agency clerk? According to Missouri’s oralist, no. According to the Government’s, yes.

Second, the Court coalesced around the notion that coercion could come either from a threat of consequences for noncompliance or from the promise of inducements for compliance. While this notion may not seem surprising at first glance, the examples the Court raised were whether implied threats of antitrust enforcement or efforts to reform Section 230 of the Communications Decency Act might suffice for coercion.

Third, Chief Justice Roberts and Justice Kagan expounded on the ways that government-platform communications can be bilateral and contradictory. For example, platform companies may feel pressure from multiple agencies, like the EPA and the Army Corps of Engineers, asking for competing and contradictory content decisions. Platforms often reach out to the government for advice and guidance on issues such as stopping the spread of terrorist content or child sexual abuse materials—when that’s the case, how does it change the calculus for whether the government’s response is considered coercive? And similarly, platforms may exercise their own considerable market power to influence public opinion on these issues and to advocate their own positions before agencies and lawmakers. Do the contours of improper government coercion change at all when an online platform is powerful enough to ignore the government?

While the oral argument sheds little light on how the Supreme Court will rule in Murthy v. Missouri, or whether it will reach the merits at all, the implications the case may have for content moderation and government efforts to combat disinformation warrant additional analysis.


What Murthy Means for Election Disinformation

Although Murthy focused largely on content moderation for COVID-19 misinformation, similar issues arise for election disinformation. At their core, disinformation campaigns of all types work by altering individuals’ perceptions and behavior; false information can convince someone to refuse the COVID-19 vaccine just as it can convince someone not to vote, for example. And unlike violations of campaign finance laws or advertising regulations, the impact of election disinformation depends entirely on how much public attention it gets. If an anonymous user posts AI-generated election disinformation to an audience of one, for example, the risk to U.S. election security and democracy is nearly nonexistent. Therefore, understanding Murthy and its recent oral arguments requires us to evaluate not only what the government should and should not do when it identifies specific instances of disinformation, but also how disinformation spreads online and influences platform users.

First, let’s consider the scenario championed by Missouri and Justice Thomas: one where the Supreme Court rules that any information-sharing about disinformation that could influence content moderation is prohibited. This is the scenario we found ourselves in last year. Although federal agencies began to share updates and information about election threats following reports of Russian interference in 2016, that information flow stopped in July 2023 while the courts considered Murthy. (The Supreme Court stayed the injunction blocking information-sharing in October 2023.)

Without government information channels, online platforms become the sole investigators of what disinformation exists on their platforms—and the sole arbiters of what disinformation remains on their platforms. Platforms may serve these functions well, even going so far as to share relevant information with one another and with governments. However, three issues emerge:

  1. State and federal officials have access to certain nonpublic information—information that platforms want—that could substantially improve platforms’ efforts to mitigate the impact of election disinformation, such as intelligence gleaned from classified foreign surveillance activities.
  2. Platforms may differ in their approaches to or definitions of election disinformation, allowing disinformation to spread on some platforms even while others do their best to reduce its spread.
  3. A platform-centric election disinformation paradigm puts companies in an important, quasi-governmental role even without government involvement.

What do we mean when we say platforms take on a quasi-governmental role? Many of the harms of disinformation are public harms: it deceives individual platform users, undermines public institutions like elections, and shifts public discourse. But disinformation does not necessarily harm the platform companies themselves; on the contrary, some misinformation and disinformation may increase user engagement and platform value. Disinformation online therefore creates negative externalities for society that platforms do not pay for. Without strong incentives to moderate aggressively, platforms may under-moderate election disinformation compared to a world in which platforms and government actors coordinate. At a time of growing distrust in American institutions, such under-moderation can do real harm.
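To make the externality point concrete, here is a toy sketch of the incentive problem, written in Python with entirely made-up numbers (nothing in it comes from the case record or describes any real platform): a platform that counts only its own engagement revenue and moderation costs will choose less moderation than a hypothetical social planner who also counts the public harm of leaving disinformation up.

    # Toy model of the externality point above. All numbers are invented for
    # illustration; they do not describe any real platform.

    def private_payoff(moderation: float) -> float:
        """The platform's own payoff: engagement revenue minus moderation cost."""
        engagement_revenue = 100 * (1 - 0.2 * moderation)  # moderating costs some engagement
        moderation_cost = 30 * moderation
        return engagement_revenue - moderation_cost

    def social_payoff(moderation: float) -> float:
        """Society's payoff: the platform's payoff minus the public harm of disinformation left up."""
        disinfo_harm = 80 * (1 - moderation)  # harm falls as moderation rises
        return private_payoff(moderation) - disinfo_harm

    levels = [i / 10 for i in range(11)]  # moderation effort from 0.0 to 1.0
    print(max(levels, key=private_payoff))  # 0.0 -- the platform prefers no moderation
    print(max(levels, key=social_payoff))   # 1.0 -- society prefers full moderation

The numbers are arbitrary, but the structural point is not: absent outside pressure or coordination, the privately optimal level of moderation sits below the socially optimal one.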


Moving Beyond Murthy

Next, let’s consider what could happen if the Supreme Court ultimately holds that federal and state officials can share information about election disinformation in some capacity. As the Justices suggested during the Murthy oral argument, the effect of government information-sharing and co-moderation will depend in part on which government agencies are involved. Information shared by the FBI may be perceived as more coercive than the same information shared by a state elections administrator, for example. But ultimately, coordination on content takedown decisions or fact-checking procedures will only go so far to mitigate the impact of election disinformation. Between narrow content policies and slow moderation processes for contested and political content, many instances of election disinformation will likely remain on platforms for weeks before they are taken down—plenty of time to spread across platforms and shape public perceptions. And even when moderation happens quickly, common forms of moderation, like fact-checking, provide only modest improvements to the accuracy of user beliefs.

Beyond jawboning and moderation decisions, combatting election disinformation will require new ways of handling algorithmic amplification. As mentioned above, the impact of any user-generated content in today’s attention-based platform ecosystem stems not only from its substance, but also from its reach. For example, users may repost election disinformation after it is taken down, and posts stemming from disinformation may continue to spread online through organic user engagement, algorithmic amplification, or coordinated inauthentic user behavior. To effectively combat election disinformation, then, we need stronger de-amplification techniques—and stronger regulations around the development and deployment of amplification algorithms. By focusing on the conduct of algorithmic design instead of user content, government regulators may be able to extend the reach of election disinformation mitigation techniques beyond the thorny First Amendment landscape in which Murthy treads.
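To make “de-amplification” concrete, here is a minimal, hypothetical sketch of what down-ranking could look like inside a feed-ranking step. Every name, score, and threshold below is our own illustrative assumption, not any platform’s actual system or API.

    # Hypothetical sketch of de-amplification: content flagged as likely election
    # disinformation stays on the platform but is down-ranked, limiting its reach.
    from dataclasses import dataclass

    DEAMP_FACTOR = 0.1    # assumed penalty multiplier for likely disinformation
    FLAG_THRESHOLD = 0.8  # assumed classifier-confidence cutoff for applying it

    @dataclass
    class Post:
        post_id: str
        engagement_score: float    # the platform's usual relevance/engagement signal
        disinfo_likelihood: float  # e.g., output of a (hypothetical) disinformation classifier

    def rank_feed(posts: list[Post]) -> list[Post]:
        """Rank posts by engagement, damping the reach of likely disinformation."""
        def adjusted_score(post: Post) -> float:
            if post.disinfo_likelihood >= FLAG_THRESHOLD:
                return post.engagement_score * DEAMP_FACTOR  # reduce reach, don't remove
            return post.engagement_score
        return sorted(posts, key=adjusted_score, reverse=True)

    feed = [
        Post("viral-disinfo", engagement_score=9.0, disinfo_likelihood=0.95),
        Post("ordinary-post", engagement_score=4.0, disinfo_likelihood=0.05),
    ]
    print([p.post_id for p in rank_feed(feed)])  # ['ordinary-post', 'viral-disinfo']

Real ranking systems are vastly more complex, but the basic lever is the same: reducing reach without removing speech, which is what distinguishes de-amplification from outright takedowns.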
