In Gonzalez v. Google, the Supreme Court Should Recognize That a Narrow Reading of Section 230 Will Help Achieve a Better Internet

February 17, 2023 | Megan Iorio, Tom McBrien, and Grant Fergusson

Section 230 of the Communications Decency Act was meant to encourage tech companies to monitor their services for harmful user content, but it has largely done the opposite. Section 230 says, in part, that a tech company that provides an internet service “shall not be treated as the publisher or speaker” of user-generated content. For years, tech companies have used this single sentence to ignore abuse on their services and to avoid accountability for their own harmful conduct simply because that conduct can be construed as publishing user content.

On February 21, the Supreme Court will hear a case that could change all that. Gonzalez v. Google is about whether Google can use Section 230 to escape a lawsuit alleging that YouTube’s targeted recommendations of ISIS videos violated the Anti-Terrorism Act. The outcome of the case will have broad implications for when people can hold tech companies responsible for their harmful design decisions.

EPIC filed an amicus brief in the case arguing that Section 230 does not allow tech companies to automatically escape liability whenever their conduct can be construed as publishing user content; rather, Section 230 simply says that a tech company should not face the same liability as the user that posted the harmful content. If the Court adopts this view, tech companies would finally be forced to answer for a variety of harms they have long evaded. (And no, it would not break the internet.)

This blog post explains what Section 230 is, how it morphed into tech companies’ all-purpose immunity shield, how tech companies have used the law to avoid accountability, how algorithms harm, and why a restrained Section 230 won’t break the internet.

What is Section 230?

In the early-to-mid-1990s, internet services like AOL began to host online message boards that attracted thousands of user posts a day. Soon enough, people began to sue these internet services for hosting content that caused them harm. These lawsuits tried to take theories of liability from the traditional publishing world and apply them to the internet.

Under traditional defamation law, for instance, the original speaker of a defamatory statement is liable for defamation, and so is anyone who repeats the defamatory statement. So, a person who is defamed by an article in a newspaper can sue the source, the journalist, the newspaper, and any newsstand that carries the paper for defamation. But there is one caveat: the plaintiff has to prove more to hold the newsstand liable than to hold any of the other actors liable. The newsstand must have known it was distributing defamatory material; the newspaper (and everyone else upstream) need not have known. In other words, speakers and publishers can be held liable under the same theory of liability, while distributors can only be held liable under a different, harder-to-prove theory.

Plaintiffs in cases against tech companies that hosted defamatory user-generated content tried to analogize the companies to newspapers instead of newsstands. Early attempts failed because many message boards at the time operated like passive conduits for information—more like newsstands than newspapers, according to the courts. But some internet companies had begun to moderate user content and prohibit sexual, violent, or harassing content. When one such company, Prodigy, was sued in New York State court for defamation, the court said that, because the company exercised editorial control over the content on its message boards, it was more like a newspaper than a newsstand and could be held liable as a publisher instead of a distributor of the defamatory content. In other words, the Prodigy case made it easier for plaintiffs to sue tech companies that tried to maintain non-abusive platforms than companies that turned a blind eye to abuse.

Members of Congress rightly saw that the Prodigy case created the wrong incentives: if moderating their services led to increased liability, tech companies would refuse to moderate, and abusive content would proliferate online. Section 230 was the answer.

Section 230 has two major provisions meant to encourage tech companies to moderate their internet services. One provision, 230(c)(2), allows tech companies to avoid litigating every decision to remove or otherwise restrict access to objectionable content as long as the company takes action in good faith. The other, 230(c)(1), overturns the rule from the Prodigy case. It says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision severs the link between acting like a publisher and being treated like one for liability purposes. It allows tech companies to moderate user content without fear that they will face the same liability as the users that post harmful content.

As we suggest in our amicus brief, a simple test for whether a claim treats a company as the publisher or speaker of user-generated content is to ask whether the claim could be brought against the user that posted the content. If a user posts defamatory content, the tech company can’t be sued for defamation. If a user sends harassing messages that inflict emotional distress, the tech company can’t be sued for intentional infliction of emotional distress just for hosting those messages. This protects tech companies from a wide range of liability.

But that is not how courts have interpreted Section 230. Instead, Section 230 has become tech companies’ go-to defense whenever they are charged with causing harm.

The Distortion of Section 230

Almost immediately, Section 230 morphed from a limited-purpose tool that prevented excessive tort liability for user-generated content into an all-purpose immunity shield for claims involving third-party content. The culprit was a Fourth Circuit opinion in a case called Zeran v. AOL.

Zeran sued AOL after an unknown user repeatedly posted his phone number alongside an advertisement for tasteless t-shirts following the Oklahoma City bombing, which resulted in Zeran receiving a continuous stream of harassing phone calls. One of the claims Zeran brought against AOL was defamation. AOL invoked the newly enacted Section 230 to get the suit dismissed at the outset. Zeran argued that, even if he couldn’t hold AOL liable as a publisher, Section 230 left open distributor liability, and since AOL knew about the defamatory posts, AOL could be held liable as a distributor.

The Fourth Circuit rejected Zeran’s argument about distributor liability. The court said that distributors were a subset of publishers because distributors—along with everyone else who can be charged with defamation—publish the offending information.

The scope of the term “publisher” will be one of the major points of contention during the Gonzalez oral argument. But including distributors within the category of publishers does not radically enlarge what tech companies could be held accountable for. If the Fourth Circuit had stopped there, the appropriate test for whether a claim treated a company as a publisher still would have been whether the claim could be brought against the original poster—a fact the Zeran court acknowledged.

But the Fourth Circuit did not stop there. Instead, the court said that Section 230 precludes suits that seek to “hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.” This was wrong: Section 230 doesn’t give tech companies immunity for what a publisher does; it prevents tech companies from facing claims under the legal theory that they were the ones who published the harmful content.

The textual support for this test—the “traditional editorial functions” test—will be a focus of questioning during the Gonzalez oral argument. Justice Thomas has already signaled that courts should abandon it as atextual. There is another reason to abandon it: as long as courts ask only whether a claim in some way casts a tech company in the role of a publisher of third-party content, Section 230 will be hopelessly overbroad.

Indeed, tech companies have used this part of Zeran to radically expand their immunity. Much of what an internet service does is display third-party content, and tech companies argue that displaying third-party content reflects a decision “whether to publish, withdraw, postpone, or alter content.” As a result, many claims that involve third-party content in some way have become off limits, even if the harm targeted by the claim is caused, in whole or in part, by the tech company.

How Tech Companies Have Used Section 230 to Avoid Accountability

One significant category of liability that companies have used Section 230 to avoid is liability for defective product design. For example, a man brought a lawsuit against Grindr for failing to stop his ex-boyfriend from using the app to send a continuous stream of men to his apartment with promises of sex. The suit alleged that Grindr had a defective design because it lacked basic safety features to prevent harassment and impersonation—for instance, the company said it had no way to block the ex-boyfriend’s IP address, even though other apps had such a capability. The court found that the lawsuit treated Grindr as the publisher of the ex-boyfriend’s content because the ex-boyfriend’s creation of a fake profile and his communications with Grindr users were but-for causes of the claims in the lawsuit. The lesson: apps could fail to build basic safety features and ignore pleas for help from people being harassed by their users—the exact opposite of what Section 230 intended.
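
To make the “reasonable alternative design” point concrete, here is a deliberately simplified, hypothetical sketch of the kind of safeguard at issue: refusing new profiles tied to an identifier that has already been reported for impersonation. The function names and data structures are invented for illustration; this is not Grindr’s code or any real app’s implementation.

    # Hypothetical illustration only. The complaint alleged that Grindr said it
    # had no way to block the harasser's IP address; this toy block list shows
    # the kind of capability at issue. All names here are invented.

    blocked_ips = set()

    def block_reported_ip(ip_address):
        """Record an IP address tied to an account reported for impersonation."""
        blocked_ips.add(ip_address)

    def allow_profile_creation(ip_address):
        """Refuse new profiles created from a blocked IP address."""
        return ip_address not in blocked_ips

A real system would have to deal with shared and changing IP addresses, among other complications; the point is only that whether to build such safeguards at all is a design choice the platform makes.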

Other companies have escaped product liability claims because the service feature at issue was a “neutral tool” that wasn’t explicitly designed to do harm. But “neutral tool” appears nowhere in the text of Section 230, which is why this test is likely to be the target of harsh criticism from the Supreme Court. The “neutral tool” test also misunderstands products liability law—it doesn’t matter whether the product was designed to cause harm, only that it does cause harm and that there was a reasonable alternative design.

Besides product liability claims, companies have been able to escape suits alleging that they published inaccurate background information about people in violation of the Fair Credit Reporting Act because that information came from third parties. Companies have also been able to avoid suits alleging that they used people’s likenesses to sell access to their services. Backpage was even able to escape a suit that alleged that it designed its website to facilitate sex trafficking in violation of consumer and sex trafficking laws. The dismissal prompted Congress to amend Section 230 to allow certain sex trafficking claims.

How Algorithms Harm

The need to hold tech companies responsible for how they design their products is only growing in importance as companies use more and more automated decision-making and machine learning systems to execute important functions. Advertisement-targeting algorithms can facilitate racial discrimination, violating laws like the Fair Housing Act. Content moderation algorithms can disproportionately silence Black users while permitting certain hate speech and child sexual abuse material to remain—even against the wishes of victims. Given the expansive interpretations of Section 230 that prevail today, platforms could be immunized from liability for many of these harms without courts ever grappling with how the platforms’ own design choices caused them. That’s a problem.

These harms flow from platform design decisions and flawed training data, not just from hosting harmful user content, which is why Section 230 should not automatically immunize them. Every time a platform decides what factors a recommendation algorithm should consider or what content a moderation system should remove, it changes the environment in which users post—it incentivizes users to post different types of content in different formats aimed at different audiences. And when these same platforms train their algorithms on unfiltered and biased data, they develop systems that go beyond merely shepherding content from user to user. These platform-side decisions change what content is created and how users interact with it, producing harms that the user content alone would not have caused.
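
To see why these are platform-side choices rather than anything inherent in a user’s post, consider a deliberately simplified, hypothetical ranking function. The field names, models, and weights below are invented for illustration and do not describe YouTube’s or any real recommendation system.

    # Hypothetical illustration only: a toy recommendation score showing that
    # which signals get weighted (raw engagement vs. the platform's own safety
    # signal) is a design decision made in the platform's code, not in the
    # user's content. Fields and weights are invented.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # output of the platform's own model
        safety_flag_score: float     # output of the platform's own moderation model

    def recommendation_score(post, engagement_weight=1.0, safety_penalty=0.0):
        """Higher scores are shown to more users.

        With safety_penalty left at zero, the system amplifies whatever
        engages users, harmful or not; choosing that weight is the
        platform's decision, not the poster's.
        """
        return (engagement_weight * post.predicted_engagement
                - safety_penalty * post.safety_flag_score)

Training the underlying models on unfiltered, biased data is a similar platform-side choice, made long before any particular piece of user content is ranked.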

If Gonzalez wins, it will not be a death knell for social media algorithms. Without Section 230(c)(1)’s broad immunity, platforms will need to grapple with a wider range of substantive legal questions about how their platforms operate and defend their decisions on the merits in court. Some cases will lead to liability, forcing platforms to think more carefully about the ways their algorithms can cause harm. But some—perhaps even most—cases will not, with platforms bolstered by other provisions like Section 230(c)(2) and other legal doctrines like the First Amendment.

Returning Section 230 to Its Original Meaning Won’t Break the Internet

Tech companies warn that cutting back on broad interpretations of Section 230 would “break the internet.” But these concerns ignore the many legal protections that tech companies already enjoy. They also ignore the ways that the internet is already broken.

First, if a tech company’s actions don’t violate the law, it can still get out of lawsuits early. The Gonzalez case is a prime example. Google argued below that Gonzalez’s allegations don’t state a claim under the Anti-Terrorism Act (ATA) because there isn’t a close enough connection between Google’s alleged conduct and terrorist acts. Google is likely correct on that count, so it doesn’t really need Section 230 to protect it. A merits decision in favor of Google (or Twitter in the companion case, Twitter v. Taamneh) would make similar ATA claims against tech companies much less likely, further obviating the need for Section 230. The same thing would happen in other contexts: a few tech companies would have to face early merits decisions in a few unmeritorious cases, but then plaintiffs would stop filing those cases, or the cases would be quickly dismissed.

Artful pleading is also not as much of a problem as Google and its amici make it out to be. Take the type of products liability suit brought in Herrick v. Grindr. To make a products liability claim, plaintiffs can’t just show that a product caused harm. They must show that there is a reasonable alternative design of the product that would have avoided the harm, or that the product’s design violated a reasonable consumer’s expectations. Herrick did this by showing that other hook-up apps had safety features that Grindr refused to implement. A defamation suit couldn’t successfully masquerade as a products liability suit because the plaintiff has to prove different things in each case.

Another powerful protection for tech companies is the First Amendment, which already prevents many of the types of suits that tech companies claim they need a broad reading of Section 230 to avoid. Tech companies warn that without a broad Section 230, they could face legal liability for hosting and transmitting controversial speech that crosses legal lines, so they will have to be overly censorious. But the First Amendment already protects them by invalidating hypothetical laws that would criminalize sharing speech about hot-button topics such as abortion or vigorous political speech adjacent to violent topics. For example, in a recent case, Volokh v. James, a court issued an injunction against a New York law requiring online platforms to develop reporting mechanisms and public policies for responding to hateful speech. The court relied on the First Amendment, not Section 230, to find that the law would impermissibly chill speech and give the government too much power to decide what speech is proper and improper.

Defenders of a broad interpretation of Section 230 also ignore the fact that the internet is already broken in many important ways. The internet is a painful and dangerous place for many people: countless people have their intimate images shared widely without their consent; people use the internet to stalk and harm others; people’s credit reports are riddled with errors that may prevent them from getting approved for housing, mortgages, and jobs; people advocating for causes like the Black Lives Matter movement or Palestinian rights are heavily censored without recourse. The internet is not a utopia at risk of being destroyed, but a place in desperate need of certain reforms that Section 230 currently blocks.

The internet is a complicated place, full of wonderful elements as well as toxic and harmful ones. Tech companies shape and control the internet. By providing nearly blanket protection for internet companies for their negligent or even malicious decisions, overly broad readings of Section 230 remove an important incentive that would make the online world a healthier place.
