Analysis
In NetChoice Cases, Supreme Court Labels a Surprisingly Narrow Class of Online Platform Company Activities as Protected Expression
July 10, 2024
Last week, the Supreme Court issued a surprising landmark decision in two combined cases, Moody v. NetChoice and NetChoice v. Paxton. These cases dealt with the question of when platform companies’ curation of user-generated content receives First Amendment protections. While the Court decided the case on other grounds, it signaled that, in future cases, it would interpret a relatively narrow class of online platform curation practices as expressive: those that enforce a platform’s content and community guidelines. This means that platforms will bear a substantial burden to show that other curation choices are protected expression, leaving ample room for legislatures to regulate content-agnostic platform design activities.
In its decision, the Court made careful delineations among different platform company practices and asked thoughtful questions about what is expressive. The Justices emphasized that First Amendment protections for social media platforms must be decided on a narrow, case-by-case basis.
This was a blow to NetChoice—the tech industry trade association that brought the lawsuit—which had asked for an incredibly broad rule from the Court. EPIC had submitted an amicus brief in the case urging the Court to avoid issuing an overbroad decision that would unwittingly label non-expressive business practices as expressive speech, and the narrow, careful path is exactly what the Court chose.
Here is how the decision played out. Justice Kagan wrote the majority opinion. All of the Justices agreed on one narrow point: NetChoice had failed to establish that the laws are facially unconstitutional, and the lower courts would have to clean things up on remand. Justice Kagan's opinion then addressed the scope of protected editorial judgment, a discussion that four other Justices signed onto. Justices Barrett and Jackson (who joined Justice Kagan's opinion) also filed separate concurrences. Because their votes were necessary to form a majority, either one could represent the swing vote in a future case deciding the First Amendment issues raised here. Justice Barrett's narrower interpretation of the scope of protected editorial judgment is thus just as important as Justice Kagan's. There were also various less important concurrences and splits that will be described below.
This blog post will discuss three major takeaways. First, the Court recognized that platform companies engage in protected expression in the narrow set of circumstances in which they enforce their community and content guidelines. Because humans at the companies used their editorial discretion to develop those guidelines, enforcement decisions reflect an expressive judgment about whether individual pieces of content conform to those guidelines. Second, the Justices signaled that when companies use machine-learning algorithms to enforce community and content guidelines, a lack of human input or oversight may make the expression more attenuated and less deserving of constitutional protection. Third, when companies curate user-generated content using non-content-based signals such as user behavior, that curation is unlikely to be considered expressive.
The general guidance to pull from these cases is that courts need to drill into the expressiveness of a curatorial activity at a granular level of specificity. Not everything companies do to select and display content is inherently expressive.
Takeaway 1: A majority of Justices agreed that content moderation is expressive when it is based on a platform’s content and community guidelines.
A majority of the Justices agreed that only one type of platform activity is unquestionably expressive: removing or downranking content that violates a platform’s content and community guidelines. The laws at issue in the case interfered with this activity by preventing platform companies from removing or downranking content based on the views expressed therein.
The opinion tightly constrained expressiveness to enforcement of platforms’ content and community guidelines. As Justice Kagan’s opinion explained, “When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices.” (emphasis added).
The Justices placed this case within a line of First Amendment precedent that struck down regulations of speech curators’ editorial discretion. The Court has recognized, for example, that when newspaper editors decide which stories to run or when parade organizers decide which floats to permit, they are engaging in protected expression. Specifically, these curators are acting expressively because they are determining how each individual piece of content—a news story or parade group—affects the message of the overall newspaper or parade.
The Justices said that platforms act similarly to newspaper editors and other curators when the platforms promulgate and enforce community and content guidelines. When a platform’s employees decide, for example, not to allow pro-Nazi content on their platform, and remove content they believe is pro-Nazi, these decisions represent expressive value judgments that receive First Amendment protection. A law that would require the platform to include pro-Nazi speech in users’ feeds when that content violates the platform’s guidelines thus interferes with the platform’s First Amendment rights.
A large majority of the Justices also explicitly recognized that a company's guidelines-based curation activities may still be regulated consistent with the First Amendment so long as the government's interest is valid. Texas and Florida justified the laws at issue by citing a need to balance the expressive environment online, but that justification has long been anathema to the First Amendment. However, the majority opinion and Justice Alito's concurrence recognized that other justifications may change the constitutional equation. They noted that the government has historically been allowed to regulate expressive practices in the interest of promoting competition or consumer choice. This same reasoning could permit future platform regulations, such as design mandates that give users more control over what they see in their feeds or interoperability mandates that allow third parties to offer alternative curation.
Ultimately, the majority strictly limited its recognition of expressive activities to content moderation based on community guidelines and said everything else will require its own analysis. As will be explored in the next takeaway, the Justices specifically signaled that algorithmic enforcement of community guidelines could change their analysis of the First Amendment issues at stake.
Takeaway 2: Machine-learning algorithms that enforce content and community guidelines may receive less constitutional protection.
Interestingly, many of the Justices signaled that how a company enforces its content and community guidelines may change the constitutional analysis. Justice Barrett, along with Justices Alito, Thomas, and Gorsuch, noted that humans have First Amendment rights to evaluate whether a given piece of content comports with the overall curatorial message being expressed. So, any analysis of First Amendment protections for machine decisions needs to consider how closely those decisions reflect human choices and evaluations. Based on this logic, the Justices noted that implementing black-box algorithms to identify and moderate non-conforming content may receive less constitutional protection because this activity is more attenuated from human-developed editorial discretion. This plays into a long-running debate about whether and when using machine-learning algorithms is expressive.
The Justices, while largely supportive of the idea that platforms sometimes act like newspaper editors, were impressively attuned to the differences between these two types of curators. Newspaper editors read and edit each story carefully, giving them intimate knowledge of each piece's context and viewpoint. This understanding feeds into their expressive decisions about which ideas and beliefs deserve publication. But most online platforms use algorithms to label and moderate user-generated content. These black-box algorithms don't truly "know" anything about the content, and they often mislabel or misunderstand users' speech. The Justices, without deciding the issue, raised questions about whether these activities should receive the same degree of constitutional protection given their attenuation from human decision-making.
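To make that distinction concrete, below is a minimal, hypothetical Python sketch contrasting the two enforcement paths described above. Every name in it (BANNED_TOPICS, OpaqueClassifier, and so on) is an illustrative stand-in rather than any platform's actual system: the first function enforces a rule that human policy staff wrote down, while the second delegates the removal judgment to a model whose criteria no human specified.

```python
import random

# A human-authored content guideline (illustrative): policy staff decided
# this category of content does not belong on the platform.
BANNED_TOPICS = {"pro-nazi propaganda"}

def human_rule_moderation(post_topic: str) -> bool:
    """Removal decision directly traceable to a human editorial judgment:
    staff wrote the guideline, and this check enforces exactly that rule."""
    return post_topic in BANNED_TOPICS

class OpaqueClassifier:
    """Stand-in for a black-box model whose decision criteria are learned
    from data rather than specified by policy staff. Randomness fakes
    that opacity here, purely for illustration."""
    def predict(self, text: str) -> bool:
        # Even the model's builders may not be able to say why it flags a post.
        return random.random() < 0.1

def ml_moderation(text: str, model: OpaqueClassifier) -> bool:
    """Removal decision attenuated from any specific human choice."""
    return model.predict(text)

print(human_rule_moderation("pro-nazi propaganda"))            # True: the human rule applied
print(ml_moderation("an ambiguous post", OpaqueClassifier()))  # opaque outcome
```

On the Justices' logic, the first path embodies a human's "choice not to propound a particular point of view," while the second may be too attenuated from any such choice to receive the same protection.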
Justice Barrett's concurrence touching on this point was especially noteworthy because her vote will likely be necessary to form a majority opinion in future First Amendment cases raising this issue. As she wrote in her concurrence,
“What if a platform’s owners hand the reins to an AI tool and ask it simply to remove ‘hateful’ content? If the AI relies on large language models to determine what is ‘hateful’ and should be removed, has a human being with First Amendment rights made an inherently expressive ‘choice . . . not to propound a particular point of view’? In other words, technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right to ‘decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence.’”
Similarly, Justice Alito wrote in his concurrence, “Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the ‘curation’ and ‘content moderation’ carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. . . . And when AI algorithms make a decision, ‘even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.’ Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?”
Overall, the Justices were keenly aware that online platforms' use of black-box algorithms to enforce platform guidelines may change the constitutional equation in future cases. As the next takeaway will explain, the Justices were even more skeptical of another tech company claim: that deploying algorithms to curate content based on factors other than the platform's guidelines is an expressive act.
Takeaway 3: The Court was skeptical that using algorithms to curate content outside a platform's published guidelines, such as recommending content based on user behavior or other non-content signals, is expressive.
The Justices also signaled that a platform company is likely not engaging in protected expression when it uses content-agnostic algorithms to curate content. This signal is crucially important for laws that would regulate that aspect of platform companies' curation practices.
Platform companies use far more than just content and community guidelines to rank and deliver content. They also use algorithms that rely on content-agnostic signals, such as user behavior. Recommending content based on user behavior (often measured as "engagement") has been shown to have a variety of negative side effects and is receiving increasing academic scrutiny. Legislators are increasingly looking to regulate some of these practices through laws such as New York's SAFE for Kids Act.
The Court’s opinion juxtaposed content-based algorithms that enforce a platform’s content and community guidelines with content-agnostic algorithms that, for example, recommend content based on a user’s past behavior. To the Court, the former seemed more likely to be expressive because it enforces the company’s human-developed, content-based guidelines, whereas the latter was not tied to any human message. As Justice Kagan’s opinion explained, “We therefore do not deal here with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards.”
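To illustrate the line the Court drew, here is a minimal, hypothetical Python sketch of the two feed types. The names (Post, GUIDELINE_VIOLATIONS, engagement_score) are illustrative assumptions, not any platform's actual code: the first feed filters against an independent, human-adopted content standard, while the second ranks posts purely on behavioral signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    engagement_score: float  # behavioral signal: clicks, watch time, etc.

# A human-written content standard (illustrative).
GUIDELINE_VIOLATIONS = {"pro-nazi propaganda"}

def guideline_based_feed(posts: list[Post]) -> list[Post]:
    """Content-based curation: the feed enforces an independent content
    standard that humans adopted. The Court treated this as likely expressive."""
    return [p for p in posts if p.topic not in GUIDELINE_VIOLATIONS]

def engagement_based_feed(posts: list[Post]) -> list[Post]:
    """Content-agnostic curation: ranking responds only to how users act
    online, with no regard to any content standard. The Justices suggested
    this may not be expressive at all."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```

Under the Court's reasoning, only the first function enforces "independent content standards"; the second gives users "the content they appear to want" based solely on how they act online.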
Justice Barrett also paid attention to this point. She explained,
“But what if a platform’s algorithm just presents automatically to each user whatever the algorithm thinks the user will like—e.g., content similar to posts with which the user previously engaged? The First Amendment implications of the Florida and Texas laws might be different for that kind of algorithm.”
Similarly, Justice Alito noted, “Because not all compilers express a message of their own, not all compilations are protected by the First Amendment. Instead, the First Amendment protects only those compilations that are ‘inherently expressive’ in their own right, meaning that they select and present speech created by other persons in order ‘to spread [the compiler’s] own message.’” This directly contradicts NetChoice’s argument that any compilation it makes, and thus any activity involved in making that compilation, is automatically protected expression.
———
The general guidance that can be pulled from these cases is that courts need to drill into the expressiveness of a curatorial activity at a granular level of specificity. Platforms will need to explain which algorithms are impacted by a challenged law, how those algorithms work, and why they are expressive—all questions that companies have thus far avoided in ongoing litigation. Not everything companies do to select and display content is inherently expressive. The Court signaled that only a single, narrow category of curatorial activities—ones that reflect the company's content policies—resembles the kind of editorial judgment protected under the First Amendment. For every other activity, platforms will have to advance novel constitutional theories that will be subject to rigorous questioning when they arrive at the Supreme Court.