Update: Tracking of Real Algorithmic Harms in California as Enforcement Continues

April 3, 2025 | Mayu Tobin-Miyaji, Law Fellow

As part of our project Assessing the Assessments—supported by the Rose Foundation—EPIC is continuing to track instances of real AI harms that are relevant to enforcement of the California Consumer Privacy Act. EPIC has tracked these harms since the project began in 2023 and published an update in 2024.

The examples below draw on two taxonomies of AI harms to describe the types of harms done: 

  • Danielle Citron’s and Daniel Solove’s Typology of Privacy Harms, comprising physical, economic, reputational, psychological, autonomy, discrimination, and relationship harms; and 
  • Joy Buolamwini’s Taxonomy of Algorithmic Harms, comprising loss of opportunity, economic loss, and social stigmatization, including loss of liberty, increased surveillance, stereotype reinforcement, and other dignitary harms. 

These taxonomies do not necessarily cover all types of AI harms. EPIC uses these taxonomies to help readers visualize and contextualize AI harms without limiting the types and variety of AI harms that readers consider.

While many of the examples focus on California, the effects of an AI system documented in one jurisdiction will often apply to other states and countries as well.

Example: School security
Explanation: Schools installed sensors that detect vaping and smoking, along with gunshot and aggression detection technology that has high rates of false positives.
Citation: Sensors detect vaping in bathrooms at Modesto high schools; Orwell’s Classroom: Psychological Surveillance in K-12 Schools
Types of Harm: Individual (psychological, autonomy, discrimination)

Example: School security
Explanation: A day in a California college student’s life involves ubiquitous and unavoidable data tracking, including learning management systems, automated license plate readers, student IDs swiped to enter buildings, campus wifi, remote proctoring software, and campus security cameras.
Citation: He Wanted Privacy. His College Gave Him None
Types of Harm: Individual (psychological, autonomy, political & community participation)

Example: Remote proctoring
Explanation: California colleges continue to use remote proctoring despite a federal district court decision holding that room scans performed by some remote proctoring software are unconstitutional under the Fourth Amendment.
Citation: California colleges still use remote proctoring despite court decision
Types of Harm: Individual (psychological, autonomy, political & community participation)

Example: Generative AI in education
Explanation: Schools’ use of generative AI for college and career counseling may be eroding interpersonal opportunities to connect and provide guidance.
Citation: AI Chatbots Can Cushion the High School Counselor Shortage — But Are They Bad for Students?
Types of Harm: Individual (autonomy); Societal (public education)

Example: Generative AI in education
Explanation: Teachers in California use AI to grade student assignments and provide feedback with little oversight or knowledge by districts or parents.
Citation: California’s two biggest school districts botched AI deals. Here are lessons from their mistakes.
Types of Harm: Individual (autonomy); Societal (public education)

Example: Website filtering in schools
Explanation: Schools’ subjective and unchecked website filters block sources and content needed for classwork and disproportionately filter content related to reproductive health, LGBTQ+ issues, and people of color.
Citation: Online censorship in schools is ‘more pervasive’ than expected, new data shows
Types of Harm: Individual (autonomy (chilling effects), discrimination, political & community participation); Societal (public education, equality)

Example: Generative AI in education
Explanation: A generative AI tutor from Khan Academy showed significant racial and gender biases.
Citation: How Harmful Are AI’s Biases on Diverse Student Populations?
Types of Harm: Individual (discrimination, autonomy); Societal (public education, equality)

Example: Surveillance pricing
Explanation: AI-based surveillance pricing uses customers’ personal data to set prices, leading to unfair price gouging and privacy violations.
Citation: AI Can Rip You Off. Here’s How California Lawmakers Want to Stop Price Discrimination
Types of Harm: Individual (economic, autonomy); Societal (equality)

Example: Law enforcement
Explanation: AI-powered gunshot detection technology from Flock Safety had 34% confirmed false positives from February 2023 to July 2023, despite the company claiming it is 90% accurate.
Citation: The Mystery of AI Gunshot-Detection Accuracy Is Finally Unraveling
Types of Harm: Individual (safety, autonomy); Societal (equality, rule of law)

Example: Automated license plate readers
Explanation: California police agencies continue to share driver locations gleaned from automated license plate readers with anti-abortion states despite California law prohibiting such data sharing.
Citation: Dozens of Police Agencies in California Are Still Sharing Driver Locations with Anti-Abortion States. We’re Fighting Back.
Types of Harm: Individual (psychological, autonomy); Societal (rule of law)

Example: Facial recognition
Explanation: Police officers in San Francisco, a city that banned police use of facial recognition technology, worked around the prohibition by repeatedly asking neighboring municipalities to run photos through facial recognition programs.
Citation: These cities bar facial recognition tech. Police still found ways to access it.
Types of Harm: Individual (autonomy); Societal (equality, rule of law)

Example: Employment decisions
Explanation: Ride-hailing apps systematically use AI-driven deactivation processes that punish drivers for reporting safety incidents and refuse to provide effective appeals processes. This especially affects Black, Latine, and immigrant drivers.
Citation: Driven Out by AI: How Uber’s Deactivations Force Drivers Into Chatbot Hell and Financial Crisis
Types of Harm: Individual (economic, psychological, autonomy, discrimination, political & community participation); Societal (equality)

Example: Hiring decisions
Explanation: A plaintiff alleged that Workday’s algorithmic employment applicant screening tool discriminated against him based on race, age, and disability.
Citation: California Court Finds that HR Vendors Using Artificial Intelligence Can Be Liable for Discrimination Claims from Their Customers’ Job Applicants
Types of Harm: Individual (economic, psychological, autonomy, discrimination, political & community participation); Societal (equality)

Example: Tenant screening
Explanation: 37% of surveyed California landlords reported following the automated recommendation with no review, likely leading to discriminatory outcomes for applicants.
Citation: Screened Out of Housing: How AI-Powered Tenant Screening Hurts Renters
Types of Harm: Individual (economic, psychological, autonomy, discrimination); Societal (equality)

Example: Autopilot in autonomous vehicles
Explanation: The National Highway Traffic Safety Administration found that Tesla’s Autopilot, and its more advanced Full Self-Driving, did not adequately ensure that drivers maintained sufficient attention on driving, which can have fatal results.
Citation: Tesla’s Autopilot and Full Self-Driving linked to hundreds of crashes, dozens of deaths
Types of Harm: Individual (physical, economic)

Example: Autopilot in autonomous vehicles
Explanation: A test showed that a Tesla on Autopilot will crash into a wall painted to look like a road because of Tesla’s decision not to use radar sensors.
Citation: Man Tests If Tesla Autopilot Will Crash Into Wall Painted to Look Like Road
Types of Harm: Individual (physical, economic)

Example: Generative AI
Explanation: The average amount of time a user spends on Character AI, an AI companion app based in California, is longer than on TikTok, Instagram, or ChatGPT. AI companions can be toxic, encourage unhealthy and dangerous behavior such as self-harm or hurting others, or encourage hypersexual conversations.
Citation: AI friendships claim to cure loneliness. Some are ending in suicide.
Types of Harm: Individual (physical, psychological); Societal (dis- and misinformation)

Example: Healthcare decisions
Explanation: A contractor hired by major American health insurance companies used an AI-based system to adjust reviews of prior authorizations for medical care, increasing delays and denials.
Citation: “Not Medically Necessary”: Inside the Company Helping America’s Biggest Health Insurers Deny Coverage for Care
Types of Harm: Individual (physical, economic, psychological); Societal (equality, public health)
