In Anderson v. TikTok, the Third Circuit Applies Questionable First Amendment Reasoning to Arrive at the Correct Section 230 Outcome
October 10, 2024

On August 27, the Third Circuit issued a decision in Anderson v. TikTok, an important case at the intersection of online platform accountability, Section 230, and the First Amendment. The case centered on whether TikTok could invoke Section 230 of the Communications Decency Act to dismiss a lawsuit claiming that TikTok’s recommendation algorithm played a role in a child user’s accidental death.
The opinion correctly held that Section 230 did not bar the claims to the extent that they alleged TikTok’s own algorithmic design caused the tragic circumstances, but it arrived at its decision through a questionable First Amendment analysis instead of careful Section 230 reasoning. TikTok is now asking the entire Third Circuit to re-hear the case and overturn the initial ruling.
This blog post will walk through the background of the case, what was notable about the court’s opinion, and why it matters for the ongoing project to hold platform companies accountable in a way that benefits users and society.
Case Background
The case stems from a horrible situation in which a ten-year-old TikTok user accidentally killed herself while attempting to complete a TikTok challenge called the “Blackout Challenge.” TikTok challenges are viral phenomena in which users record and share videos of themselves engaging in funny, shocking, strange, or otherwise interesting activities. The Blackout Challenge prompted users to share videos of themselves choking themselves until they passed out. Multiple children have injured or accidentally killed themselves attempting the challenge.
The family of the young girl who died attempting the Blackout Challenge claims that TikTok should be held responsible for her death. They argue that TikTok was negligent in how it ran its platform and that its recommendation algorithm is an unreasonably dangerous product.
The Relevance of Section 230 to the Case
TikTok attempted to get the court to throw out the case by invoking a law called Section 230. The most relevant portion of the law reads, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Since the law was passed, litigants and courts have disagreed sharply over what exactly those words mean. To simplify slightly, Section 230 was meant to prevent lawsuits attempting to hold online services directly liable for content posted by users, but it was not meant to immunize those services when their own bad behavior harms users.
As EPIC has explained, Section 230 offers a crucial, but limited, protection to tech companies, one that they have been trying to stretch into a get-out-of-jail-free card for decades. Section 230 is powerful because once a judge decides that a lawsuit triggers Section 230, the judge must dismiss the case, regardless of whether they believe the company acted wrongfully. But Section 230’s scope is limited because it only blocks lawsuits that fit a specific pattern: seeking to hold a platform liable for harms caused solely by user-generated content on that platform. The hotly debated question in cases across the country is whether Section 230 applies when harmful user-generated content is related to a harm but the company’s own bad behavior is what caused it.
Different courts of appeals have developed different tests for determining whether Section 230 applies to a given lawsuit. Some are better than others. For example, the Ninth Circuit’s detailed, duties-based Section 230 test is very good at distinguishing claims that should be barred from claims that should not. The Ninth Circuit asks, for each claim, whether the claim (1) is against an interactive computer service provider, (2) treats that provider like a publisher, and (3) seeks to hold the provider liable for content developed by a third party such as a user. Within each of these prongs, the Ninth Circuit has developed sub-tests to guide the analysis. If the answer to each prong is “yes,” then Section 230 bars the claim. By contrast, the Second Circuit’s overbroad Section 230 test merely asks whether the lawsuit is in any way related to harmful third-party content, which sweeps in many claims Section 230 was never meant to prohibit, since third-party content is related to almost everything an online platform does.
As EPIC has explained at length in blog posts and amicus briefs, interpreting Section 230’s scope correctly is a critically important task for courts. Overly narrow and overly broad interpretations of Section 230 harm users in different but serious ways. Overly narrow interpretations can force online platforms to choose between two bad options: forgoing content moderation altogether or censoring their users. This is the exact situation that Congress wanted to avoid when it passed Section 230. But interpreting the law too broadly, as tech companies consistently ask courts to do, has equally harmful effects. It prevents people who have been harmed by foreseeable, avoidable, and irresponsible tech company behavior from getting any relief. It removes an important incentive for tech companies to design healthier and more useful platforms, and it prevents the development of useful caselaw that can distinguish between good and bad claims against tech companies.
Analyzing the Third Circuit’s Opinion Part I: The Correct Outcome
In Anderson v. TikTok, the Third Circuit ruled that the lawsuit against TikTok did not trigger Section 230 because the claims sought to hold TikTok accountable for its own harmful behavior (bad algorithmic design), not merely for hosting objectionable content created by users. In other words, the Andersons are seeking to hold TikTok liable for knowingly designing a system that harmfully force-feeds videos to its most vulnerable users, not for simply hosting harmful videos.
The three-prong framework that many courts of appeals use to evaluate Section 230 cases confirms that this outcome is the correct one. Prong three asks whether the alleged harm was caused by harmful third-party materials or, instead, by the defendant’s own speech or conduct. The Andersons’ claim is that TikTok’s algorithm led to Ms. Anderson’s death because it repeatedly showed her Blackout Challenge videos, which led her to try the challenge and die. The claim is not based on the simple fact that Ms. Anderson saw a Blackout Challenge video on the site; if it were, Section 230 would bar it. Instead, it is TikTok’s own conduct that is alleged to have caused the harm, and TikTok should have to defend itself on the merits rather than using Section 230 to dismiss the claim.
It is important to note that the decision here only determines that TikTok should not receive Section 230 protection, not that it should ultimately be found liable. TikTok still has many legal protections against liability in the case, including arguments on the merits of the products liability claims and First Amendment defenses.
When Section 230 does not block courts from evaluating these kinds of arguments, everyone benefits from judicial opinions that answer these important questions. That is exactly what happened in a recent pair of Supreme Court Section 230 cases, in which the Court declined to reach the Section 230 arguments because it could clearly explain why the defendants were not liable for the harms the plaintiffs alleged.
Analyzing the Third Circuit’s Opinion Part II: The Questionable Process
However, the way the Third Circuit reached its answer lacked the rigor that some other courts have shown when evaluating Section 230 cases, and the court made some questionable assumptions about how the First Amendment would apply to the case.
Instead of using the normal three-prong test, the Third Circuit relied on two recent Supreme Court cases about the First Amendment to rule that Section 230 did not block the claims in this case. In Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court opined that online platforms are engaging in protected expression when they enforce their community and content moderation guidelines to remove or downrank objectionable content in users’ newsfeeds. In other words, engaging in values-based content moderation is speech, largely because it resembles what newspaper editors do when they decide not to run a story they disagree with. The Court explicitly left for another day the question of whether non-values-based content moderation—such as using engagement-maximizing algorithms to decide who sees what content—is also protected expression. It is exactly this latter type of content moderation that is at issue in this case. TikTok’s algorithm is famously automated and based on maximizing users’ time spent on the platform. It is almost certain that it was the engagement-maximizing algorithm that repeatedly showed Blackout Challenge videos to Ms. Anderson.
The Third Circuit ignored the narrowing language in Moody and Paxton, interpreting the cases to mean that all activities undertaken to construct a social media newsfeed are speech. Under this interpretation, TikTok’s own speech (newsfeed creation) caused the harm in this case, not speech created by a user, so Section 230 does not apply.
This reliance on labeling engagement-maximizing algorithms as speech was unnecessary. If this case had been brought in a circuit that uses the three-prong test, the outcome could have been the same: those circuits have repeatedly ruled that, under prong three, claims premised on a company’s own conduct are not barred by Section 230. There was no need to label that conduct as speech in order to rule that Section 230 did not apply.
Why It Matters
It is important for courts to not over-label business activities as speech because once something has been labeled as speech, it becomes much harder for democratically elected legislatures to regulate that activity. While it would be dangerous to give the government too much control over the speech-like functions of online platforms, it is also dangerous to improperly deny the government the ability to regulate platforms’ harmful, non-speech business activities.
Here, there are serious reasons to doubt that implementing a black-box engagement-maximizing algorithm is expressive in the way that implementing a process to remove pro-Nazi hate speech would be. That is why the Supreme Court declined to decide that question in Moody and Paxton. If legislatures, fearing the addictive effects of engagement-maximizing algorithms, wish to regulate the practice, labeling the use of such algorithms as protected speech would make that regulation nearly impossible.
Exactly this type of faulty and overbroad First Amendment reasoning was used to strike down privacy provisions in the California Age-Appropriate Design Code that would have prohibited selling children’s data to data brokers or tracking children’s location without consent. Luckily, the Ninth Circuit reversed that opinion, but the close call demonstrates how courts’ willingness to engage with these nuances is crucial to the ability to pass important consumer protection and privacy laws.
To walk this careful line, it is important to think carefully about which activities actually are speech and which are not. The Supreme Court did so in Moody and Paxton, but the Third Circuit erased this nuance in its description of those cases. While the effects of this erasure may be limited for now, given that Anderson v. TikTok is not a First Amendment case, it is important to be on guard against this kind of overbroad reasoning in future cases.
