Privacy & Racial Justice
Background
Marginalized communities are disproportionately harmed by data collection practices and privacy abuses by both the government and the private sector. Communities of color are especially targeted, discriminated against, and exploited through surveillance, policing, and algorithmic bias.
Protecting privacy means promoting equity and ensuring that civil and human rights are at the forefront of systems design and implementation. But the use of automated decision-making tools, facial recognition, and various other surveillance technologies has targeted and often criminalized Black, Brown, and Indigenous communities.
In the post-9/11 landscape, the government has systematically expanded its mass surveillance. Many people of color have been surveilled at one point or another in the name of “national security” or “public safety.” Racial justice activists have historically been targeted by law enforcement. More recently, the Federal Bureau of Investigation has tracked Black Lives Matter activists and deployed a variety of surveillance technologies, including facial recognition, to monitor antiracism protests.
Facial recognition technology misidentifies and misclassifies people of color. Its use has led to a disproportionate number of false arrests and wrongful incarcerations of Black men. In the private sector, corporations use facial recognition in hiring and to monitor their workers’ movements and productivity.
Online platforms and other service providers use consumer data to discriminate against people of color and deny them opportunities in housing, employment, commerce, and other services. Companies, however, are not transparent about their data collection practices or about how they use personally identifiable information and other data points in automated decision-making tools. In healthcare, for instance, the use of AI in clinical decision-making has been found to worsen disparities for people of color, even when the system was meant to help all patients.
Online ad targeting can be discriminatory when it promotes jobs, goods, and other opportunities only to select groups of people. For example, online ads disproportionately suggest that Black people may have arrest records. Ad-driven social media platforms like Facebook have used discriminatory algorithms that can exclude certain demographic and racial groups from seeing employment and housing ads.
Racial Bias is Embedded in Tech
Even when not intentional, structural racism is embedded in the design and implementation of technology. Unrepresentative datasets, biased training data, the implicit biases of coders and designers, and a lack of transparency and accountability are among the factors that embed racism in technology.
Facial Recognition Technology
The government, especially law enforcement, and the private sector have increasingly relied on the use of facial recognition technology. Most of this technology, however, is flawed and prone to bias.
A U.S. National Institute of Standards and Technology (NIST) study found that US-developed facial recognition technology exhibited higher false positive rates in one-to-one matching for Black, Native American, and Asian faces. One-to-one matching compares a photo against another photo to determine whether they show the same person. In one-to-many matching, where a photo is compared against many photos in a database, facial recognition systems had higher rates of false positives for Black women. These higher false positive rates could put Black women at the greatest risk of being falsely accused of a crime. In a separate study, commercial facial recognition systems took longer to match and were less accurate on darker skin tones.
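To make the one-to-one metric concrete, here is a minimal sketch of how a per-group false positive rate is computed in face verification. The similarity scores, threshold, and group labels are hypothetical illustrations, not NIST’s data or methodology.

```python
# Minimal sketch: per-group false positive rates in one-to-one face
# verification. All scores, labels, and the threshold are hypothetical.
from collections import defaultdict

# Each trial: (demographic group, similarity score, whether the two
# photos actually show the same person). A false positive is an
# impostor pair (same_person=False) whose score clears the threshold.
trials = [
    ("group_a", 0.42, False), ("group_a", 0.65, False), ("group_a", 0.97, True),
    ("group_b", 0.88, False), ("group_b", 0.71, False), ("group_b", 0.95, True),
]
THRESHOLD = 0.80  # one global threshold, as deployed systems typically use

impostor_hits = defaultdict(list)
for group, score, same_person in trials:
    if not same_person:  # impostor comparisons only
        impostor_hits[group].append(score >= THRESHOLD)

for group, hits in sorted(impostor_hits.items()):
    fpr = sum(hits) / len(hits)
    print(f"{group}: false positive rate = {fpr:.2f} ({sum(hits)}/{len(hits)})")
```

The point the NIST finding illustrates is that a single operating threshold does not yield uniform error rates: a threshold tuned on one population can quietly produce far more false matches for another.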
Automated Decision-Making Tools
Automated decision-making systems are tools that use statistical analysis, artificial intelligence (AI), and machine learning algorithms to aid decision-making. In both the public and private sectors, these tools are used in high-stakes decisions around housing, insurance, lending, employment, criminal justice, access to public benefits, and education. Algorithmic racism occurs when AI, machine learning, and other big data practices generate outcomes that reproduce racial disparities and exacerbate inequities.
Disparity is built into these systems from start to finish. Among the factors that reinforce bias: programmers have historically been white and male; the training data used to teach algorithms to make decisions is incomplete or unrepresentative; the datasets used in automated determinations often reflect existing inequalities; and the resulting tools produce flawed outcomes.
The use of automated decision-making tools to make predictions and recommendations or to aid decisions often reinforces bias and discrimination, subjecting people to algorithmic racism. Communities of color are disproportionately harmed by these tools: people are unfairly denied opportunity and access, and they face more punitive outcomes in the criminal justice system.
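One common way auditors screen such tools for the disparities described above is the “four-fifths rule” from U.S. employment discrimination analysis: if one group’s favorable-outcome rate falls below 80% of the most-favored group’s rate, the tool warrants scrutiny. Below is a minimal sketch of that check; the outcome data and group names are hypothetical.

```python
# Minimal sketch: screening an automated decision tool for disparate
# impact with the four-fifths rule. All outcome data is hypothetical.

def selection_rate(decisions):
    """Fraction of people who received the favorable outcome (1)."""
    return sum(decisions) / len(decisions)

# 1 = favorable (e.g., loan approved, resume passed a screen), 0 = not.
outcomes_by_group = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favorable
}

rates = {g: selection_rate(d) for g, d in outcomes_by_group.items()}
reference = max(rates.values())  # the most-favored group's rate

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Passing this screen does not make a tool fair; it is one coarse statistical check, and it says nothing about why a disparity arose, whether from biased training data, proxy variables, or flawed design.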
Targeted Surveillance Impedes Racial Justice
People of color are disproportionately subjected to surveillance and tracking. These systems of surveillance erode personal privacy and marginalize people of color.
Moreover, the lack of regulation of surveillance technology and its use results in the exploitation of people of color. For example, companies develop surveillance tools that collect large swaths of data on people and share this data with law enforcement, where it is often misused against minority communities.
Law Enforcement Surveillance
Flawed facial recognition technology perpetuates racial bias in the criminal justice system. Predictive policing systems also reinforce bias and can lead to over-policing in communities of color.
The federal government and local law enforcement use facial recognition technology to surveil people of color, often secretly scanning and identifying individuals and tracking their whereabouts through cameras equipped with the technology. People of color are also disproportionately enrolled in the biometric databases used in criminal investigations.
The fight for racial justice is undermined when law enforcement uses surveillance tools against protestors. Covert surveillance of protestors chills free speech and disrupts organizing efforts. Law enforcement agencies use AI tools to monitor social media platforms and identify protest organizers and participants. They also deploy drones and automated license plate readers to monitor protests.
How EPIC is Promoting Racial Justice
EPIC supports transparency and accountability in the deployment of AI, algorithmic decision-making tools, and other surveillance technology. We center equity and fairness in our advocacy work. Several of EPIC’s program areas focus on addressing bias and promoting human rights in privacy, including:
- EPIC launched a campaign to Ban Face Surveillance, calling for a moratorium on the use of facial recognition technology for mass surveillance in all countries.
- EPIC’s Screening and Scoring Project produces comprehensive resources that identify instances of screening and scoring in everyday life, articulates common issues with these tools, analyzes potential violations of existing law, and works to protect the public from algorithmic harm.
- EPIC’s AI and Human Rights Project advocates for the transparent, equitable, and commonsense development of AI policy and regulation.
- EPIC has also partnered with civil rights, consumer, and racial justice organizations on coalition letters urging Congress, agencies like the Federal Trade Commission and DHS, and the White House to address issues ranging from the federal government’s use of facial recognition technology, to the study of data and discrimination in agency enforcement investigations, to action against the unconstitutional surveillance of racial justice protestors.
Recent Documents on Privacy & Racial Justice
- Amicus Briefs: New Jersey v. Arteaga, New Jersey Superior Court Appellate Division. Whether a defendant who was identified using a facial recognition system is entitled to detailed discovery on the system and the specifics of how he was identified.
- APA Comments: NYPD POST Act Disclosures
Resources
- Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities, Rashida Richardson (2022)
- Privacy as Civil Right, Alvaro Bedoya (2020)
- Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, Rashida Richardson, Jason Schultz, and Kate Crawford (2019)
- A Taxonomy of Police Technology’s Racial Inequity Problems, Laura Moy (2019)
- Race After Technology, Ruha Benjamin (2019)
- Dark Matters: On the Surveillance of Blackness, Simone Browne (2015)
- Discrimination in Online Ad Delivery, Latanya Sweeney (2013)