Design-Based Lawsuits Against Platform Companies Reveal Fault Lines in Courts’ Section 230 Interpretations
November 1, 2024
Over the past year, attorneys general from across the country have filed lawsuits against Meta alleging, among other things, that the company violated consumer-protection laws by using dark pattern features to addict and manipulate users on its platforms. Dozens of the AGs’ suits have been swept up into a multi-district litigation (MDL) being heard in the Northern District of California, while other suits are being litigated independently in state courts. Meta has moved to dismiss claims in all of the suits based on Section 230 as well as the First Amendment and other doctrines. Recently, the judges in two of these cases—Massachusetts v. Meta and the Social Media MDL—came to nearly opposite conclusions about whether Section 230 blocks certain claims against Meta. Analyzing the differences between the courts’ orders shows an important trend in Section 230 litigation: while some courts are increasingly capable of applying a nuanced, careful Section 230 analysis, others remain hopelessly confused and apply Section 230 in a harmfully overbroad way.
This blog post will provide background on the cases, compare the courts’ analyses in them, and explain what they might mean for the ongoing project to regulate and hold social media companies accountable in ways that prevent their worst abuses while respecting the privacy and speech rights of their users.
Litigation Over Social Media Addictive Design
Kids’ use of social media platforms has increased drastically at the same time that kids have been experiencing skyrocketing rates of depression and anxiety, hours of lost sleep per night, stunted attention spans, and more. Not all of these ills can be blamed solely on the effects of social media, given that other factors are at play and proving causality in this complicated area is difficult. But companies’ harmful platform design decisions clearly amplify and profit from these issues. Whistleblower documents, confessions from regretful ex-social-media executives, and independent research have shown that the companies have seen and feared the ways certain platform features harm children, but they have implemented the features anyway in pursuit of more revenue.
Many of the lawsuits stemming from these harms allege that the companies broke the law in various ways, such as by designing unsafe products (“design defect claims”), violating states’ consumer protection laws (“consumer protection claims”), misrepresenting their activities to users (“misrepresentation claims”), failing to warn users about known dangers of certain platform features (“failure-to-warn claims”), and others.
Massachusetts v. Meta was brought by Massachusetts’s attorney general against Meta and is being heard in Massachusetts state court. The court issued its opinion on Friday, Oct. 18. In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation (“Social Media MDL”) is a large multi-district litigation that has consolidated hundreds of lawsuits brought on behalf of children, school districts, state attorneys general, and other parties across the country against social media companies such as Meta, TikTok, and Snap. It is being heard in the federal district court for the Northern District of California, where the judge issued an order on the AGs’ claims just a couple of weeks ago.
Overbroad Interpretations of Section 230 Threaten to Stymie Platform Accountability
As the lawsuits proceed through the courts, social media companies have relied on their usual weapons to try to escape accountability, including overbroad interpretations of Section 230.
This raises a crucially important question for the future of the internet and society: can users sue tech companies that knowingly design their platforms in ways that hurt users but boost profits?
The answer should clearly be “yes,” but overbroad Section 230 interpretations may be a barrier to platform accountability.
As EPIC has explained, Section 230 provides important but limited protections to social media companies in order to protect the rights of their users, but it has been consistently abused by the companies in order to escape accountability for avoidable and foreseeable harms they cause.
Right and Wrong Ways to Analyze Section 230
Most courts use a three-prong test to analyze whether Section 230 applies to a given claim. The Massachusetts court labeled “prong 2” as “prong 3” and vice versa, but I will follow the typical prong ordering below:
- Prong 1: Is the defendant an interactive computer service (ICS) provider?
- Prong 2: Does the claim treat the defendant as a publisher?
- Prong 3: Does the claim try to hold the defendant liable for information provided by a third party like a user?
If the answer to all three prongs is “yes,” then the claim is prohibited by Section 230. If the answer to any prong is “no,” then the claim survives.
The three-prong Section 230 test is tightly tied to Section 230’s statutory language and the purpose of the law. Each prong serves a specific purpose. Prong 1 is rarely disputed, but prongs 2 and 3 are.
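The logic of the test can be summarized as a simple conjunction: Section 230 bars a claim only when all three prongs are satisfied. A minimal sketch (purely illustrative, not legal authority; the function name and boolean inputs are my own invention):

```python
def section_230_bars_claim(is_ics_provider: bool,
                           treats_as_publisher: bool,
                           liable_for_third_party_content: bool) -> bool:
    """Illustrative sketch of the three-prong Section 230 test.

    Returns True (claim barred) only if ALL three prongs are "yes";
    a "no" on any single prong means the claim survives.
    """
    return (is_ics_provider
            and treats_as_publisher
            and liable_for_third_party_content)

# A design-defect claim that does not treat the company as a publisher
# (prong 2 is "no") survives even if prongs 1 and 3 are satisfied.
assert section_230_bars_claim(True, False, True) is False
```

The sketch makes the structural point that matters for the cases below: a court cannot stop after one prong, because any single “no” defeats immunity.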
Prong 2: Confusion Over What It Means to Treat a Defendant “As a Publisher”
Prong 2 asks whether a claim treats the defendant like a publisher. This has a very specific meaning in the law, but it has been one of the biggest areas of confusion and misinterpretation.
Treating a defendant like a publisher means seeking to hold it liable for the content-based harm of what it published. For an analogy, a New York Times editor has a publisher’s duty not to publish a defamatory article about me. Section 230 would prohibit a claim against a tech company alleging it has this same duty. A New York Times editor also has the same duty as anybody else not to roll up a copy of the newspaper and whack me in the face with it. Section 230 would not prohibit a claim against a tech company alleging it has a similar duty, because this is not a publisher’s duty, even if published content is in some way involved. This latter example is exactly what most design defect and consumer protection claims are doing: seeking to hold tech companies liable, in their capacity as businesses, for harms that occur because of the irresponsible ways they design their services, not for harms caused by the content of the information they spread.
Understanding this distinction is crucially important but difficult because many online harms can appear to be a combination of a tech company’s design of its platform and the harmful content that users post. What’s needed is a good analytical framework for making these distinctions.
The Ninth Circuit’s duties-based framework for prong 2 is adept at distinguishing claims that treat defendants as a publisher from claims that don’t. To test whether this alleged duty is that of a publisher or not, the Ninth Circuit asks two questions. First, what actions could the defendant have taken to comply with the alleged duty? If the only choices were to engage in publishing activities, such as monitoring for harmful content and editing or removing it, then it is likely a publisher’s duty. Second, why is the defendant alleged to have this duty? If it is simply because they publish information, then it is likely a publisher’s duty. But if it’s for another reason, such as the fact that they promised they would remove a piece of harmful content, or because they have a duty to design a product that is safe for users, then it is not a publisher’s duty.
Other courts have developed inferior tests that tend to over-apply Section 230, immunizing companies from suits about their own harmful business activities instead of users’ harmful posts.
For example, the Second Circuit uses a “but-for” test for prong 2 that applies Section 230 any time harmful content is a but-for cause of the harm. This test simply asks, “Is harmful content in any way related to the lawsuit? If so, the suit treats the defendant as a publisher.” This test is legally wrong because it ignores the history and purpose of Section 230. Even worse, it leaves users with no recourse when tech companies harm them in foreseeable ways because almost every online harm can be tied to harmful content in some way. Many other circuits have rejected the but-for test, warning that it risks creating “a lawless no-man’s-land on the internet,” but in many areas, it lives on.
Prong 3: An Important Part of the Test, But Not the Whole Test
Prong 3 asks a different question: whether the harmful content was created by a third party or, instead, by the defendant itself. Section 230 only prevents claims seeking to hold a defendant liable for a third party’s harmful content. In certain cases, companies have been found to lack Section 230 protection because they themselves contributed to the harmful third-party content. The usual question is whether, through their actions, the defendants made a “material contribution” to the harmful content. For example, when a roommate-matching company designed its website in a way that forced users to rank potential roommates based on protected characteristics such as race and sexuality, the website was found to have materially contributed to the discriminatory postings because it gave users no other options.
Courts struggle less to apply prong 3 than they do to apply prong 2. But another problem arises when courts overemphasize prong 3 to the point that they decline to conduct a prong 2 analysis at all. Remember: if a plaintiff establishes that their claim does not treat the defendant as a publisher (prong 2), or if they establish that the defendant authored or materially contributed to the harmful content (prong 3), Section 230 does not apply. It’s important to conduct both analyses to make this determination. Just because a company did not contribute to what made a piece of content harmful (prong 3) does not mean it is being sued over the harm caused by that content (prong 2).
Often, however, courts do an improper “features test.” They only analyze prong 3, ignoring prong 2, and incorrectly rule that Section 230 wholesale protects certain features. In many old Section 230 cases, plaintiffs did not make prong 2 arguments at all, resting their entire case on the idea that a defendant’s website design materially contributed to harmful content. In many of those cases, courts ruled that Section 230 applied because the feature was just a neutral tool that did not contribute to the harm under prong 3, and the plaintiff had already conceded or lost on prong 2. The problem is when modern courts interpret those cases to mean that Section 230 applies whenever the same neutral features are involved, whether or not the plaintiff prevails on prong 2.
Like the prong 2 “but-for test,” the prong 3 “features test” applies Section 230 too broadly. Section 230 prohibits certain types of claims; it does not wholesale protect certain kinds of features or activities. A company can, for example, face liability for its publishing activities notwithstanding Section 230 if it broke a promise related to them.
Contrasting the Massachusetts Court’s Correct Opinion with the MDL Court’s Very Wrong One
Within the past couple of weeks, two courts have reached almost diametrically opposed conclusions on whether Section 230 prohibits the AGs’ consumer protection claims. In Massachusetts v. Meta Platforms, the Massachusetts state court applied a careful, nuanced framework to determine that none of the claims was barred by Section 230. In the Social Media MDL, the federal district court dismissed a large portion of the consumer protection claims on Section 230 grounds, though it permitted some claims to survive.
The MDL court completely misunderstood Ninth Circuit precedent. The district court judge ignored an important part of the Circuit’s Section 230 analysis and ruled that entire classes of features were immunized, instead of analyzing whether claims were immunized, as required by the Ninth Circuit.
Where the MDL Court Went Wrong
One major point of conflict in the MDL and Massachusetts decisions was whether claims about content-neutral tools that do not materially contribute to a harm are immunized under Section 230. The MDL judge said that they were, while the Massachusetts judge correctly recognized that they were not.
For example, the plaintiffs claimed the companies acted in a harmful way by incorporating Intermittent Variable Rewards (IVRs), borrowed from the gambling industry, into their notification delivery systems. This means the platforms do not serve notifications such as “likes” to users in real time, but instead in artificially spaced patterns called IVRs that trigger larger dopamine responses. IVRs leave users feeling more anxiety, spending more time on-platform, and returning to the platform more often.
The MDL judge got it wrong in part because she applied the “features test” to the claims instead of conducting the careful prong-by-prong analysis required in the Ninth Circuit. The judge’s order cited a prong 3 case about notification delivery called Dyroff to rule “where notifications are made to alert users to third-party content, Section 230 bars plaintiffs’ product defect claims.” But the argument made in Dyroff was completely different: There, the plaintiffs made no prong 2 argument, instead arguing that, just by delivering notifications to users, a website was responsible when one of those notifications was for a drug deal that ultimately went wrong.
As the Massachusetts court remarked, “[I]nternet companies do not enjoy absolute immunity from all claims related to their content-neutral tools; liability may arise from such tools provided plaintiffs’ claims do not blame them for the content that third parties generate with those tools.”
It is a common big tech tactic to urge courts to ignore or conflate prongs 2 and 3. With its more detailed and correct analysis, the Massachusetts court was able to identify and push back on misleading tech company arguments. It noted: “In its briefing, Meta did not separately analyze Prongs 2 and 3, but conflated the analysis after noting that the two prongs tend to overlap. This overlap does not mean that the distinction between the prongs should necessarily be ignored. I analyze each prong separately.”
It therefore pushed back on a similar citation to Dyroff, explaining, “In support of its position, Meta cites several decisions from around the country. However, in each, Prong 2 was conceded, undisputed, or satisfied because the action alleged harm caused by third party content.”
Interestingly, the Massachusetts court even called out the MDL court: “Meta also relies on a recent decision by a federal district court in California, which concluded that claims against Instagram and other platforms were barred insofar as they were based on notifications, infinite scroll, autoplay, and ephemeral postings. However, that decision does not explicitly conduct a Prong 2 analysis, and I do not find it otherwise persuasive as it pertains to those features.”
The other major conflict between the opinions was that the MDL court applied the forbidden “but-for” test to some claims, while the Massachusetts court did not. For example, the MDL court ruled that any claims about features that cause users to spend more time on the platform than they otherwise want to, such as autoplay and infinite scroll, are prohibited by Section 230. The MDL court thought these claims treated Meta as a publisher because the claims “would necessarily require defendants to publish less third-party content.” Putting aside the fact that this isn’t factually true, it is also a perfect example of using the but-for test. These claims are about harmful delivery of content, not about delivery of harmful content. The Massachusetts court got it right: “The Commonwealth alleges physical and mental harm to young users from Instagram’s design features themselves, which purportedly cause addictive use, and not from the viewing of any specific third-party content or from design choices that contributed to the development or creation of that content. In other words, the alleged harm occurs regardless of the content that users see.”
The MDL court repeated its errors for many of the features forming the basis of the plaintiffs’ design defect claims.
What the MDL Court Got Right
Luckily, the MDL court did get a few things right. It ruled that the plaintiffs’ misrepresentation claims and failure-to-warn claims survived Section 230 and could proceed. This was an easier analysis for the court because these claims are factually very similar to other claims that the Ninth Circuit has held to clearly survive Section 230. It also ruled that claims about a narrow class of design features could proceed.
What It Means Going Forward
These orders will, of course, have implications for the cases themselves, and also for the ongoing debate over platform company accountability.
While the MDL court’s orders used a faulty analysis, some claims were allowed to proceed. The plaintiffs may therefore be able to achieve some restitution for the harms that befell them. This is especially true if the social media company defendants settle to avoid discovery and further litigation.
The damage from the MDL court’s bad orders may be easy to contain. This is a district court order, not an appellate or Supreme Court opinion, so it does not create bad binding precedent. Also, if the claims that did survive incentivize the defendants to enter a settlement agreement, they may agree to rein in some of their more harmful business practices.
In general, however, these very different outcomes show that many courts are confused about how to apply Section 230 to design-based claims. Some courts, by applying a robust prong 2 analysis, increasingly recognize that many design claims do not treat tech companies as publishers. But other courts are stuck in the past, applying precedent that only analyzed prong 3.