Supreme Court Avoids Major Section 230 Ruling in Case About Digital Platforms Recommending Terrorist Content

May 18, 2023

The Supreme Court has declined to address whether Section 230 of the Communications Decency Act, a law that encourages tech companies to moderate content on their platforms, immunizes companies like Google and Twitter from lawsuits alleging that their recommendation algorithms promoted terrorist activity. In a pair of decisions released today—Gonzalez v. Google and Twitter v. Taamneh—the Court found that the families of ISIS attack victims failed to allege that either tech company had knowingly assisted the terrorist group under the Anti-Terrorism Act. Without a plausible claim under the Act, the Court had no reason to decide whether Section 230 immunizes tech companies from liability for harms caused or facilitated by their recommendation algorithms.

By deciding both Gonzalez and Taamneh based on the viability of each plaintiff’s claims under the Anti-Terrorism Act rather than on Section 230 immunity grounds, the Supreme Court showed that tech companies can survive—and even win—cases without resorting to overbroad interpretations of Section 230. However, the Court’s narrow focus on the Anti-Terrorism Act in both cases ignores a variety of other ways that tech companies and their algorithms can facilitate harm. For example, an online platform’s advertising algorithms can discriminate against users of color, and its content moderation algorithms can silence Black users while permitting hate speech and child sexual abuse material to remain. While these examples involve user content, tech companies play a role in mitigating or exacerbating the harm in each case.

EPIC filed an amicus brief in Gonzalez v. Google, arguing that Section 230 does not permit tech companies to automatically escape liability whenever user content is involved. Rather than assume that a tech company is immune when a claim involves user content, EPIC argued that courts should instead ask whether a plaintiff’s claim could be brought against the user who created the content—or whether some portion of the harm could be attributed only to the tech company. Our test would still protect tech companies from a wide range of liability, but it would also incentivize those companies to monitor and mitigate the ways that their platforms encourage, facilitate, or exacerbate online harms.
