Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services like Google from civil lawsuits when they make targeted recommendations of harmful content provided by their users.
One of the most important laws that has shaped how the internet has developed is Section 230 of the Communications Decency Act, 47 U.S.C. § 230. This law, originally passed in 1996, immunizes internet companies from very specific types of legal claims. But as internet companies have grown more sophisticated and powerful, overbroad interpretations of the law have immunized wide swaths of their harmful behavior. Section 230 is an important law that can and should provide a large amount of protection to internet companies so that they do not chill free expression, but it was never meant to completely immunize them from the harms they increasingly cause.
Section 230(c)(1) says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In effect, this means that a court must dismiss lawsuits that treat the website owner as the publisher of illegal information posted on its site by third parties. This rule seems straightforward, but courts have struggled to apply it and have erred on the side of overbroad interpretations. The real question is what it means to “treat” the internet company “as the publisher or speaker” of harmful information posted to its site. To understand that, we must look at why Congress passed Section 230 and what problem it was trying to solve at the time.
When Congress passed Section 230, it was responding to a case in which a message board was held liable for defamation because one of its users posted defamatory statements on it. In defamation law, a publisher of defamatory material—like a newspaper—is just as liable as the person who originally wrote it, such as a reporter. In the case to which Congress was responding, the court reasoned that the message board was like a newspaper—and thus liable for defamation—because the company’s content moderation system resembled the editorial functions that newspapers perform. This outcome disturbed Congress, which believed that websites should use content moderation to keep violence, pornography, and other offensive material off their sites. The perverse result was that a website trying to do the right thing was held liable while a website that did no moderation at all escaped scot-free. So Congress passed Section 230 to bar civil lawsuits, like defamation suits, that would hold a website liable for something one of its users posted.
Publication-based claims like defamation give a very specific legal meaning to “treating a defendant as a publisher or speaker of third-party information.” Under defamation law, any party that plays a role in editing and publishing a piece of information can be held liable if that information is defamatory, because they should have realized it was unlawful to publish. In the decades since Section 230 was enacted, however, many courts have adopted a broader interpretation that immunizes a range of “traditional editorial functions,” such as reviewing, editing, and deciding whether to publish material. Today, internet companies are generally immune from liability unless they materially contribute to illegal content. The Supreme Court, however, has never interpreted Section 230, and some Justices, including Justice Thomas, have signaled a willingness to read the statute more narrowly than the lower courts have.
Gonzalez v. Google LLC, No. 21-1333, is the first Supreme Court case to consider the scope of Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which immunizes internet companies against claims treating them as the publisher or speaker of user content. In 2015, Nohemi Gonzalez was studying abroad in Paris, France, when she was killed during a series of ISIS attacks on civilians. Her estate sued Google, alleging that the company aided and abetted ISIS in violation of the Antiterrorism Act, 18 U.S.C. § 2333, by permitting ISIS to post videos on YouTube, recommending those videos to vulnerable users, and sharing advertising revenue with ISIS. The plaintiffs argued that YouTube’s services were central to the growth of ISIS activity because its recommendation algorithm pushed ISIS recruitment videos to users who were susceptible to ISIS messaging. Seeking dismissal, Google asserted that Section 230 immunized it from the plaintiffs’ claims because the plaintiffs sought to hold Google liable for harms caused by content posted by ISIS.
The district court ruled in Google’s favor, holding that Section 230 immunized Google from the plaintiffs’ claims because they sought to hold Google liable for harms caused by ISIS content. The Ninth Circuit, hearing the case alongside Twitter v. Taamneh and Clayborn v. Twitter, largely affirmed the district court’s ruling but held that Section 230 did not bar one claim: the allegation that Google shared advertising revenue with ISIS. The case is now before the Supreme Court, which has limited its review to whether Section 230 immunizes internet companies when they recommend harmful content to users.
On December 7, 2022, EPIC filed an amicus brief in support of neither party, urging the Supreme Court to interpret Section 230 in a way consistent with its original purpose: immunizing internet companies from claims based on publishing third-party content, but not from claims alleging harmful product design or claims brought under statutes that target non-publishing conduct, such as the Fair Housing Act, 42 U.S.C. §§ 3601 et seq., and the Fair Credit Reporting Act, 15 U.S.C. § 1681a(f). EPIC argued that a simple rule, consistent with Section 230’s original purpose, would resolve many of the difficult cases that implicate the statute: “[Can] the claim also be brought against the original speaker or publisher, whom Section 230 calls the ‘information content provider’?” By requiring internet companies to show that (1) the claim alleges that they published or spoke information and (2) they did not provide, create, or develop that information, EPIC’s suggested interpretation would cover the types of harms that Congress had in mind when enacting the statute without immunizing harms caused by internet companies themselves, including those caused by harmfully designed products and algorithms, as well as violations of reporting and data privacy obligations under state and federal law.