Analysis

California Legislative Session Roundup: Which Key Privacy and AI Bills Were Enacted and Which Were Vetoed? 

October 11, 2024 | Kara Williams, EPIC Law Fellow

The California Legislature was once again busy this session working on numerous tech-focused bills, and legislators passed several bills on both privacy and AI as their session wrapped up Aug. 31. Governor Gavin Newsom signed many of these bills into law throughout the month of September, but he also vetoed three major tech bills. Below is a rundown of the major privacy and AI bills that were signed into law this session and a brief explanation of what each law does. Following that is a summary of each of the vetoed bills along with Gov. Newsom’s reasoning for returning these bills without his signature.  

Signed

Defines “artificial intelligence” under California law (AB 2885) 
Effective date: January 1, 2025 

AB 2885 formally defines the term “artificial intelligence” for purposes of California law as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”  

Watermarking requirements for AI-generated content (SB 942) 
Effective date: January 1, 2026 

SB 942, known as the California AI Transparency Act, places labeling requirements on developers of certain generative AI systems. Specifically, the law mandates that covered generative AI developers label all image, video, and audio content created by their systems with an invisible, difficult-to-remove watermark. The law also requires these developers to offer users the ability to place a visible watermark on AI-generated or AI-modified image, video, or audio content. Finally, the law requires covered AI developers to make a free AI detection tool available to users that informs them whether specific content was created or altered by their generative AI system.  

Transparency in generative AI (AB 2013)  
Effective date: January 1, 2026 

AB 2013 places transparency requirements on the developers of generative AI. The law requires developers of any generative AI system that is made publicly available to publish documentation on training data for the AI system on their website. The disclosure must be a high-level summary of the datasets used to train generative AI, including whether any data is personal data, synthetic data, or subject to copyright, trademark, or patents. 

Government use of generative AI (SB 896) 
Effective date: January 1, 2025 

SB 896, known as the Generative Artificial Intelligence Accountability Act, tasks the California Office of Emergency Services with conducting an annual analysis of the risks posed to the state’s critical infrastructure by the use of generative artificial intelligence. It also requires state entities that use generative AI to communicate with individuals about government services or benefits to disclose that use and to provide instructions for how to contact a human.  

AI-generated content related to elections (AB 2655, AB 2355, AB 2839) 
Effective date: January 1, 2025, for AB 2655 and 2355; immediately for AB 2839  

California has three new laws seeking to target AI-generated deepfake content related to elections. AB 2655, known as the Defending Democracy from Deepfake Deception Act, requires large online platforms to both block materially deceptive content related to elections in California and label fake or false content as “manipulated” and “not authentic” during certain timeframes before and after an election. The law also requires these online platforms to implement mechanisms for users to report deceptive content that should have been blocked or labeled but was not.  

AB 2355 requires any political advertisement that was generated or substantially altered using AI to be labeled with a disclosure.  

AB 2839 prohibits anyone from knowingly distributing an elections-related advertisement or communication that contains “materially deceptive content,” including deepfakes, within certain timeframes before and after an election. A federal judge in California blocked this law in a ruling on Oct. 2, citing First Amendment concerns.   

AI in robocalls (AB 2905) 
Effective date: January 1, 2025 

AB 2905 requires robocalls to disclose if the recording uses a voice that was generated or significantly altered by generative AI. 

AI-generated nonconsensual intimate images (SB 926, SB 981) 
Effective date: January 1, 2025 (both) 

SB 926 and SB 981 both address nonconsensual intimate image deepfakes. SB 926 makes it illegal for individuals to create or distribute nonconsensual deepfakes, and SB 981 places specific obligations on social media companies regarding nonconsensual deepfakes on their platforms.  

Specifically, SB 926 criminalizes nonconsensual intimate image deepfakes by making it a crime to create and distribute realistic-appearing deepfake images if the creator or distributor should know that creation or distribution of the image would cause emotional distress and it does, in fact, cause that distress. SB 981 requires social media platforms to offer an accessible way for users to report nonconsensual intimate image deepfakes, lays out specific timeframes for platforms to investigate these reports, and requires platforms to temporarily block the image from public view pending their investigation and then remove the image if the platform determines it is a nonconsensual intimate image deepfake.  

AI-generated child sexual abuse material (AB 1831, SB 1381) 
Effective date: January 1, 2025 (both) 

AB 1831 and SB 1381 both expand California’s laws against child sexual abuse material (CSAM) to also apply to content generated or digitally altered by artificial intelligence.  

Protections for the entertainment industry (AB 2602, AB 1836) 
Effective date: January 1, 2025 (both)  

AB 2602 and AB 1836 both provide protections for actors and others working in the entertainment industry. These laws require studios to obtain consent from actors, or from the estates of deceased actors, before creating or using digital replicas of their voices or likenesses.  

AI in health care (AB 3030) 
Effective date: January 1, 2025 

AB 3030 requires health care providers to disclose to patients when they use generative AI to generate communications to patients about their health status. These providers must also give patients clear instructions for how to contact a human.  

Protections for minors on addictive feeds (SB 976)  
Effective date: January 1, 2027 

SB 976 prohibits operators of an “addictive internet-based service or application” from providing an addictive feed to a user unless the operator has actual knowledge that the user is over 18 years of age or obtains verifiable parental consent. It also prohibits operators from sending push notifications to users between midnight and 6 a.m. unless the operator has actual knowledge that the user is over 18 or obtains verifiable parental consent. The law additionally directs the Attorney General to promulgate regulations for how a covered operator should determine whether a user is a minor. This law is similar to the NY SAFE for Kids Act.

Clarifies definition of personal information (AB 1008) 
Effective date: January 1, 2025 

AB 1008 amends the California Consumer Privacy Act (CCPA) to specify that personal information can exist in various formats, including artificial intelligence systems.  

Protections for neural data (SB 1223)  
Effective date: January 1, 2025 

SB 1223 amends the CCPA to add neural data as a category of sensitive personal information. This amendment ensures that consumers’ neural data receives the same heightened protections as all other forms of sensitive information. California follows Colorado, which enacted a similar law in April, to become the second state to enact such protections for neural data.  

Vetoed

Browser requirements on opt-out preference signals (AB 3048) 

AB 3048 would have required browsers and operating systems to include a setting allowing users to send an opt-out preference signal to businesses. The bill would also have granted rulemaking authority to the California Privacy Protection Agency to ensure the law could keep pace with evolving technology.  
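For technical context, the most widely deployed opt-out preference signal today is the Global Privacy Control (GPC), which participating browsers and extensions transmit as a simple HTTP request header (and expose to scripts as navigator.globalPrivacyControl). The sketch below is not drawn from the bill text; it is a minimal, illustrative example of how a website might detect such a signal on the receiving end, with a hypothetical handler and header-based check.

```typescript
// Minimal sketch (assumption: the opt-out preference signal in question is
// GPC-style, sent as the "Sec-GPC: 1" request header). The handling logic is
// hypothetical and only illustrates detection, not full CCPA compliance.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Returns true when the request carries a GPC opt-out preference signal.
function hasOptOutSignal(req: IncomingMessage): boolean {
  // Node lowercases header names; GPC-capable browsers send "Sec-GPC: 1".
  return req.headers["sec-gpc"] === "1";
}

createServer((req: IncomingMessage, res: ServerResponse) => {
  if (hasOptOutSignal(req)) {
    // A covered business would treat this as a request to opt out of the
    // sale or sharing of personal information; the response header here is
    // purely illustrative.
    res.setHeader("X-Opt-Out-Honored", "true");
  }
  res.end("ok");
}).listen(8080);
```

AB 3048 addressed the sending side of this exchange: it would have required browsers and operating systems themselves to offer a setting that emits such a signal, rather than leaving users to install extensions or switch to browsers that support it.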

In his veto statement, Gov. Newsom expressed hesitation over the technical feasibility of requiring browsers and operating systems to include an opt-out mechanism and said these questions should be “first addressed by developers, rather than regulators.”  

EPIC supported this legislation and, along with several other organizations, sent a letter to the committee considering the bill noting widespread civil society support.  

Safety requirements for frontier models (SB 1047) 

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have placed safety requirements on developers of the largest AI models. These safety obligations would have required developers to implement the capability for a full shutdown of the AI model, document and maintain safety and security protocols, retain an independent auditor annually to assess compliance with this bill, and report any safety incidents associated with an AI model to the Attorney General.  

The bill would also have prohibited developers from making a model publicly available if it posed an unreasonable risk of causing or enabling a critical harm, such as the creation of a nuclear weapon or a cyberattack on critical infrastructure.  

This bill garnered massive attention nationwide, including fierce opposition from portions of the tech industry, most notably OpenAI, and some politicians, including former House Speaker Nancy Pelosi. In vetoing SB 1047, Gov. Newsom expressed concern that the bill might stifle AI innovation without effectively addressing the technology’s potential risks. He cited as problematic that the bill covered only large foundation AI models and did not take into account the context in which a model is deployed. 

Greater protection for minors’ personal data (AB 1949) 

AB 1949 would have amended the CCPA to provide increased protection for minors. Currently, the CCPA prohibits a business from selling or sharing personal information if it has actual knowledge that a user is under 16 years of age. AB 1949 would have amended the age determination mechanism and extended protections to more minors by prohibiting the sale, sharing, use, and disclosure of information of consumers under 18 years of age, unless the minor (for those 13 and older) or the minor’s parent (for those under 13) authorizes the sale or sharing of their data.  

Gov. Newsom expressed concern in his veto statement that this bill would require businesses to determine whether consumers were minors or adults at the time of collecting their data.  
