EPIC Asks Federal Court to Rule that Narrow Regulation of Misleading Elections-Related Deepfakes Is Facially Constitutional
April 23, 2025

In an amicus brief filed yesterday in Kohls v. Bonta, EPIC—represented by Harvard Law School’s Cyberlaw Clinic—asked the U.S. District Court for the Eastern District of California to issue a narrow ruling recognizing that California’s “Deceptive Media in Advertisements” law (AB 2839) is facially constitutional. This case requires the court to carefully weigh two important values: free speech and fair democratic elections. EPIC’s brief seeks to aid the court in this task by illuminating the aspects of deepfakes that make them dangerous to election integrity and worthy of regulation consistent with the First Amendment.
Generative AI tools and their outputs, including deepfakes, are becoming increasingly widespread. Deepfakes can cause harm in a variety of ways, including by misinforming people about elections. For example, deepfakes can be used to portray candidates for elected office doing or saying things they did not do, to falsely depict elections officials tampering with voting machines, and more. Since 2020, several states, including California, have passed laws targeting deepfake content used to deceive voters.
EPIC’s brief explains why it is important to recognize California’s law as facially constitutional. Striking down otherwise-constitutional laws because of a remote chance of unconstitutional enforcement is not how facial challenges are supposed to work. Such rulings short-circuit the democratic process and leave legislatures powerless to stop foreseeable harms. When a facially constitutional law is enforced in an unconstitutional way, harmed persons can sue to vindicate their First Amendment rights.
EPIC’s brief then urges the court to think carefully about deepfakes’ unique nature when engaging in its First Amendment analysis. For instance, deepfakes resemble many forms of speech that courts have recognized can be regulated consistent with the First Amendment: defamation, impersonation of an elections official, and misappropriation of a person’s likeness. And unlike other types of harmful speech, deepfakes cannot easily be defeated by truthful counter-speech, especially on today’s engagement-maximizing social media platforms.
EPIC regularly submits amicus briefs to help courts protect free speech while allowing legislatures to rein in tech company excesses, and it works to protect democratic values and human rights from irresponsible uses of AI.
