Analysis
It’s Time for Courts to Tackle AI Harms with Product Liability
July 25, 2024

If you’ve been following the news, you may know that the Supreme Court recently overturned a cornerstone of federal agency regulation—“Chevron deference”—in Loper Bright Enterprises v. Raimondo. Together with two other recent Supreme Court decisions (SEC v. Jarkesy and Corner Post, Inc. v. Board of Governors of the Federal Reserve System), this Supreme Court term has been, in EPIC Director of Litigation John Davisson’s words, “a calculated blow to the power of federal agencies to protect the public from harms posed by emerging technologies, including AI.”
Where exactly do these recent Supreme Court cases leave federal AI regulation, and how might we still be able to protect the public from AI harms? In this blog, join us as we delve deeper into the recent Supreme Court cases, explore promising regulatory measures overseas, and offer one path forward for AI regulation: product liability lawsuits.
I. The State of AI Regulation After Loper Bright
Our story starts in 1984, in the case Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., which answered the crucial question of whether judges or executive agency staff get more of a say in interpreting ambiguous laws. There, the Supreme Court was asked to judge whether it was permissible for the EPA to interpret a term in the Clean Air Act very narrowly, thus reducing environmental protections. Writing for a unanimous court, Justice John Paul Stevens held that courts should generally defer to agencies’ interpretations of ambiguous statutory language when (1) the agency’s interpretation is reasonable and (2) Congress has not spoken to the precise issue. That ruling, which started as a blow to environmental groups, laid the groundwork for forty years of “Chevron deference”—a relatively deferential posture that courts take in cases involving challenges to agency interpretations of ambiguous statutes. The reasoning behind the deference? Agencies have technical experts with a greater understanding than judges of new issues and technologies impacting an agency’s regulatory mandate.
The Supreme Court’s recent Loper Bright decision overturns Chevron, requiring courts to independently interpret statutory language without deferring to agencies. Agencies can still provide their technical and factual expertise, which courts can consider under a doctrine called Skidmore deference, but now courts will be freer to insert their own (often non-expert) views on whether agency interpretations and regulations concerning new technologies are the best way to carry out a statute’s requirements. An agency’s ability to effectively regulate new technologies like AI will now depend, in part, on what a judge thinks is the right approach—or whether a judge thinks the agency even has the authority to regulate.
Loper Bright’s companion cases, Jarkesy and Corner Post, increase the likelihood that agency interpretations and regulations will be challenged in court. In SEC v. Jarkesy, the Court was asked to decide whether the Seventh Amendment, which protects the right to a jury trial in certain civil cases, prohibited the SEC from fining a defendant for securities fraud without a formal jury trial. Holding that it did, Chief Justice Roberts effectively restricted the ways that agencies can enforce their regulations without going through lengthy (and costly) litigation in a post-Loper Bright world. In Corner Post, Inc. v. Board of Governors of the Federal Reserve System, the Court opened the door to new litigation even wider, holding that the six-year window for challenging agency action under the Administrative Procedure Act begins when the plaintiff is injured—even if the underlying agency action was finalized decades ago. In layman’s terms, plaintiffs who are impacted by an agency’s regulations or enforcement actions—even actions that were finalized long before the plaintiff was born (or, for companies, created)—can now bring lawsuits over those actions if they were impacted within the last six years.
What does this all mean? Without a dedicated digital regulatory agency, the federal government must rely on other agencies to regulate AI through their existing regulatory authorities. After the recent Supreme Court term, however, federal AI regulation is in limbo: while agencies like the FTC, FCC, and CFPB are still moving forward with important regulatory measures to protect the public from AI harms, their ability to effectively carry out these measures in the future is now uncertain. While many judges may continue to agree with and uphold agency interpretations and regulatory decisions, many others may strike down longstanding agency regulations or restrict what agencies can do altogether. And with cases like Jarkesy and Corner Post restricting what agencies can do while opening new paths to lawsuits against them, the likelihood that agencies’ authority to regulate new technologies like AI will be challenged in court will only increase.
While agencies can and should continue to fight for consumer protections, their future as the main avenue for AI regulation is unclear. It’s time that we consider additional avenues forward.
II. What We Can Learn from Europe’s Approach to AI
In the United States, the lack of domestic regulation allows AI developers to run roughshod over privacy rights, civil rights, and civil liberties. And while a number of states have passed piecemeal laws to rein in particular AI harms, such as election deepfakes, only one state has thus far passed a general AI law attempting to address the broader concerns surrounding the industry. In the absence of statutory guidance, American lawyers have pulled from various existing legal doctrines, such as intellectual property law, to protect individuals from AI harms. However, these lawsuits typically proceed at the individual level, and the remedies they yield fall short of meaningful, systemic change.
AI regulation is different abroad. In Europe, for example, the European Union has focused on sweeping, industry-wide AI issues by passing the EU’s Artificial Intelligence Act, a law focusing on rigorous AI risk management practices and transparency measures so that government actors and the public can hold companies accountable for defective and harmful AI products. As with the GDPR, the international community is looking to the EU’s AI Act as a test case.
The EU AI Act tackles AI risks in three ways. First, the AI Act’s harms-based approach centers the tangible risks of the technology and actively weighs the pros and cons of a wide variety of automated technologies. Second, the AI Act creates a direct liability chain from the provider (i.e., the developer) to the deployer and anyone in between, ensuring that the appropriate entities are held accountable for the harms they cause to affected individuals. Finally, AI products classified as high-risk under the Act must satisfy its risk management requirements before they can receive EU product safety certification, creating a high bar for entry into the market. This blogpost focuses on key themes of the AI Act; for a comprehensive look at the Act’s provisions, see our previous blogposts summarizing and analyzing the strengths and weaknesses of those provisions.
The EU AI Act’s harms-based approach acknowledges the risks and limitations of different kinds of AI systems, ensuring that civil rights and civil liberties are considered. The law prohibits the sale, import, and export of AI systems on the EU market when the risks to fundamental rights entirely outweigh any positive benefits. When the risks to fundamental rights are significant but the technology is deemed sufficiently beneficial, the law requires both providers (e.g., developers or entities who have an AI system developed on their behalf) and deployers (e.g., the entities using the AI system) to engage in strong risk management practices to mitigate those risks, such as human oversight, data governance procedures, and AI output accuracy testing. For example, the use of AI systems by public authorities to evaluate the trustworthiness of individuals based on social behavior or personal and personality characteristics is prohibited entirely because of the fundamental threat such scoring poses to free expression and privacy rights. On the other hand, algorithms that help job recruiters parse resumes and job applications can greatly speed up recruitment and help companies find candidates suited to their needs faster. However, the pervasive discrimination and bias issues in AI mean that job application screening tools need to be closely scrutinized to ensure that minority groups are not structurally barred from employment opportunities. For that reason, job application screening tools are classified as high-risk AI systems and are subject to the strong risk management requirements of the AI Act. These requirements include human oversight by design, risk management built into the development and deployment of an AI system, fundamental rights impact assessments, quality management systems throughout the lifecycle of the system, transparency and training between providers and deployers, and a built-in duty of information and corrective action.
The AI Act also creates a direct liability chain to ensure that developers and deployers are all held accountable for the harms their AI systems cause. Providers have obligations to engage in risk management throughout the lifecycle of the AI system, including during deployment. A provider must also train deployers on how to use its AI system as intended—including training on the system’s limitations—and retain technical documentation throughout deployment. Deployers similarly must engage in risk management and quality management practices, such as built-in human oversight for high-risk systems. Finally, if an entity takes an open-source AI system from the internet and makes a significant change to it, that entity becomes the provider under the AI Act, and the original provider is released from its obligations. It is unclear, however, whether the original provider would be released from its obligations if an entity merely took the AI system wholesale and deployed it. Regardless, there is a system in place that acknowledges open-source AI and the challenges of regulating open-source software.
To put an AI system on the EU market at all, developers of high-risk AI systems must satisfy all of the requirements above and complete a conformity assessment demonstrating that their AI systems comply with the AI Act. When an AI system is deemed compliant, it receives an EU product certification tag.
One final point: the EU has already proposed a follow-up regulation to address gaps relating to open-source AI and generative AI. The world is watching, and a few countries, such as Chile and Brazil, have begun the process of incorporating the AI Act’s structure into their own regulatory systems. Other countries that have historically taken a wait-and-see approach, such as Japan, are beginning to look toward risk management systems as a path forward. And while the AI Act’s harms-based approach has its flaws, its focus on liability and on harms to civil rights and civil liberties offers one particularly promising path forward for countries willing to emulate it.
III. The Promise and Plausibility of AI Product Liability
The EU’s approach offers us a comparative vision for what regulating AI could look like in a post-Loper Bright world. While U.S. legislative and regulatory oversight are still necessary, the EU AI Act’s inclusion of developer and deployer liability for AI product harms suggests that U.S. courts could play a larger role in AI regulation as well.
To start, the EU’s approach to AI liability isn’t just a convenient analog to how foreign jurisdictions handle AI harms; it’s a feasible framework for adjudicating AI harms under U.S. law too. The United States has its own long history of applying product liability theories to new technological harms—a history going back as far as nineteenth-century contract law. In the intervening two centuries, product liability has routinely evolved to address new technological harms: industrial factory injuries, car accidents, and so on. As the EU AI Act suggests, tackling AI harms is no different. Today, we face a wave of new, automated products that routinely exhibit errors, biases, and other defects caused by irresponsible AI training and development: major data brokers provide fraud detection systems that erroneously block individuals from receiving public benefits; generative AI systems produce nonsense and generate nonconsensual sexual images; automated facial recognition systems exacerbate racial discrimination in policing; and the list goes on. These harms can and should be remedied in U.S. courts—but how?
In the United States, the evolution of product liability has led many courts to use a “risk-utility” test for determining liability. The risk-utility test is a simple cost-benefit analysis: to determine whether a developer, manufacturer, or seller should be liable for harms caused by their products, courts weigh the risk of product defects against the costs of taking additional safety precautions. If a court thinks the cost of implementing a precaution outweighs the risk of harm that the precaution is designed to prevent, then there’s no liability. As legal scholar Catherine Sharkey has argued, the goal of this modern risk-utility view of product liability is identifying the cheapest cost avoider of a product’s harm—and then making them do something to prevent the harm.
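One classic way to formalize this balancing test is the Learned Hand formula, which courts and scholars often use as shorthand for risk-utility analysis:

B < P × L

Here B is the burden (cost) of taking a precaution, P is the probability that the harm occurs without it, and L is the magnitude of the resulting loss. If B is less than P × L, the precaution is cost-justified, and the party who skipped it is a natural target for liability. To use purely illustrative numbers (not drawn from any actual case): if a $1 million audit of a benefits-screening system would avert a 10 percent chance of $50 million in wrongful benefit denials, the expected harm is $5 million, and the $1 million precaution is plainly the cheaper choice.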
For many of the AI harms we see today—privacy violations, inaccuracies, bias, and more—AI developers are not only the cheapest cost avoiders of harm, but also the quickest. Simple additions to the AI development cycle—design changes, training data quality controls, system audits, and more—would effectively mitigate many of the harms we see today, yet time and time again, these easy-to-avoid AI harms fall on the public without AI developers facing any pressure to change their practices. Instead, developers push for deregulation, voluntary commitments, and voluntary AI risk management frameworks.
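To make “training data quality controls” concrete, here is a minimal, hypothetical sketch in Python of the kind of pre-training check a developer could run before a screening or eligibility model is ever built. The field names, group labels, and thresholds below are our own illustrative assumptions, not any company’s actual pipeline or a regulatory standard.

from collections import Counter

def audit_training_data(records, group_field="group", min_group_share=0.05):
    """Flag basic data-quality problems before a model is ever trained.

    records: a list of dicts. The field names and the default 5% representation
    threshold are illustrative assumptions, not a legal or regulatory standard.
    """
    issues = []

    # 1. Missing labels: unlabeled rows are a cheap-to-catch source of model error.
    unlabeled = sum(1 for r in records if r.get("label") is None)
    if unlabeled:
        issues.append(f"{unlabeled} record(s) missing a label")

    # 2. Exact duplicates: repeated rows silently over-weight some examples.
    fingerprints = Counter(tuple(sorted(r.items())) for r in records)
    duplicates = sum(count - 1 for count in fingerprints.values() if count > 1)
    if duplicates:
        issues.append(f"{duplicates} duplicate record(s)")

    # 3. Representation: a severely under-represented group is a red flag for
    #    downstream bias in tools like resume or benefits screening.
    total = len(records) or 1
    groups = Counter(r.get(group_field, "unknown") for r in records)
    for group, count in groups.items():
        if count / total < min_group_share:
            issues.append(f"group '{group}' is only {count / total:.1%} of the data")

    return issues

if __name__ == "__main__":
    sample = [
        {"group": "A", "label": 1, "years_experience": 4},
        {"group": "A", "label": 1, "years_experience": 4},     # exact duplicate
        {"group": "A", "label": None, "years_experience": 7},  # missing label
        {"group": "B", "label": 0, "years_experience": 2},
    ]
    for problem in audit_training_data(sample, min_group_share=0.30):
        print("FLAG:", problem)

Checks like these cost little relative to the harms they help catch, which is precisely the cheapest-cost-avoider point: the developer is the party best positioned to run them before anyone is hurt.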
Congress and federal agencies still have much they can do to protect the public from the worst AI harms, but they aren’t the only parties who can remedy AI’s worst impacts. In a post-Loper Bright world, we can and should push the judicial branch to tackle AI harms too. Negligently designed AI and automated systems are actively harming millions of people today: families kicked off welfare rolls, Black and Brown communities facing discrimination, minors subjected to nonconsensual deepfakes, and more. The public should be able to redress those harms in court, and AI product liability offers one promising path to do that.
