Government Use of AI
Background
Governments at all levels are using AI and automated decision-making systems to expand or replace law enforcement functions, assist in public benefit decisions, determine housing eligibility, and more. These systems are often faulty and opaque, yet they remain poorly regulated.
The adoption of AI by government agencies is now widespread, covering a broad range of functions from surveillance monitoring to screening, scoring, transcription, and more. For example, a February 2020 report found that nearly half of the 142 federal agencies studied had “experimented with AI and related machine learning tools,” and by 2023, a GAO report identified roughly 1,200 distinct AI use cases across federal agencies. Despite new and ongoing attempts to regulate government use and procurement of AI, many of the AI tools procured by federal, state, and local agencies have proven deeply flawed. And you likely interact with these AI tools without knowing it: they are used in law enforcement and the broader criminal legal cycle, in public benefit administration, in housing processes, and more. Certain states have pending legislation that would improve the transparency and accountability of these tools statewide, but none has passed yet.
AI and the Criminal Legal Cycle
In the criminal justice system, the deployment of AI and other algorithmic decision-making tools tends to exacerbate discriminatory policing patterns that already disadvantage minorities. A 2019 National Institute of Standards and Technology (“NIST”) study of facial recognition tools, which are typically AI-based, found that the systems were up to 100 times more likely to return a false positive for a non-white person than for a white person. Specifically, NIST found that “for one-to-many matching, the team saw higher rates of false positives for African American females,” a finding that is “particularly important because the consequences could include false accusations.” A 2022 NIST study reaffirmed these discriminatory findings, even when the facial recognition system used high-quality photos for both the probe image and the images in the dataset. A separate study by Stanford University and MIT, which examined three widely deployed commercial facial recognition tools, found an error rate of 34.7% for dark-skinned women compared to 0.8% for light-skinned men. Similarly, a review of Rekognition, an Amazon-owned facial recognition system marketed to law enforcement, revealed indications of racial bias: the system falsely matched 28 members of U.S. Congress to mugshots of people who had been arrested. The same troubling racial and gender bias appears time and time again: AI and automated decision-making tools used in pretrial dispositions, sentencing, and prison settings often yield inaccurate or biased results that perpetuate existing inequalities.
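To illustrate how disparities like these are quantified, the short sketch below computes a per-group false positive rate for a one-to-many face search, the kind of metric NIST reports. The counts are purely hypothetical placeholders, not figures from any of the studies above.

```python
# Minimal sketch of how a false positive rate disparity is measured for a
# one-to-many face search system. All counts are hypothetical, not NIST data.

def false_positive_rate(false_positives: int, non_matching_searches: int) -> float:
    """Share of searches with no true match in the gallery that still returned a candidate."""
    return false_positives / non_matching_searches

# Hypothetical per-group counts: searches where the person was NOT in the
# gallery, and how many of those searches nonetheless returned a "match".
groups = {
    "Group A": {"non_matching_searches": 10_000, "false_positives": 10},
    "Group B": {"non_matching_searches": 10_000, "false_positives": 500},
}

rates = {name: false_positive_rate(g["false_positives"], g["non_matching_searches"])
         for name, g in groups.items()}

for name, rate in rates.items():
    print(f"{name}: false positive rate = {rate:.2%}")

# A disparity like the one NIST reported shows up as a large ratio between groups.
print(f"Ratio (B/A): {rates['Group B'] / rates['Group A']:.0f}x")
```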
Many of the AI tools used in the criminal legal system also exacerbate passive surveillance and racialized policing trends. In 2023, for example, EPIC urged the Department of Justice to investigate the discriminatory effects of automated acoustic gunshot detection systems. Extensive research, including a 2021 study by Chicago's Office of Inspector General, has shown that these systems, which cities have procured and placed disproportionately in majority-minority neighborhoods, produce tens of thousands of false positives that increase police activity and lead to false arrests.
AI and Public Benefits
Since 2021, EPIC has used state Freedom of Information Act (FOIA) requests to obtain detailed records about state and local use of AI, including AI systems developed by private companies. The results of our research are concerning: across the country, state and local governments are experimenting with AI tools that outsource important government decisions to private companies, all without public input or oversight. In D.C., for example, 20 different agencies use AI and automated decision-making for important decisions like who should receive public benefits or access public housing.
While some form of automation is nothing new to public benefits programs, the scope and sophistication of the AI tools used in these programs have increased in recent years, in part due to increased demand for benefits during the COVID-19 pandemic. These tools are designed to automate, assist, or replace human decision-making for a variety of tasks, including eligibility determinations, fraud detection, and identity verification. And when these tools produce errors, they can dramatically impact people's lives: automated public benefits decisions have incorrectly rejected eligible applicants, spurred improper fraud allegations and overpayment collection proceedings, and cost state governments millions.
Unfortunately, many of the AI systems on which state and local governments rely do produce errors and biases. The accuracy, reliability, and effectiveness of an AI system depend on the data used to train and operate the system, the analytic technique used to produce system outputs, and the system's programmed risk tolerance. Without proper safeguards and oversight, AI systems can produce outputs that are flawed, biased, or overly simplistic. For example, a Thomson Reuters-backed fraud detection system used in California drew on a broad range of public records and commercial data that contained errors and reflected historical bias; when California audited the system in 2022, auditors found that it had incorrectly flagged 600,000 eligible claimants as fraudulent. The system was only 46% accurate.
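The sketch below makes concrete what an accuracy figure like this means for the people who are flagged: it computes the share of flagged claimants whose claims were actually fraudulent. The total number of flagged claims is an assumed, illustrative figure chosen to be roughly consistent with the numbers cited above; it is not drawn from the California audit.

```python
# Illustrative sketch: what an accuracy figure like "46%" means for people
# flagged as fraudulent. The total below is an assumption for illustration,
# not actual audit data; only the 600,000 figure is cited above.

def flagging_precision(total_flagged: int, wrongly_flagged: int) -> float:
    """Share of flagged claimants whose claims were actually fraudulent."""
    correctly_flagged = total_flagged - wrongly_flagged
    return correctly_flagged / total_flagged

total_flagged = 1_110_000   # assumed number of claims the system flagged
wrongly_flagged = 600_000   # eligible claimants incorrectly flagged (figure cited above)

precision = flagging_precision(total_flagged, wrongly_flagged)
print(f"Share of flags that were correct: {precision:.0%}")        # prints about 46%
print(f"Eligible people swept into fraud review: {wrongly_flagged:,}")
```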
Agencies have begun to publish guidance for AI in public benefits programs, but without more transparency and safeguards in place, these AI tools will continue to produce errors and biases that disproportionately harm marginalized communities.
AI and Housing
In housing, several kinds of automated decision-making systems are in use: facial recognition and other biometric collection and analysis; algorithms that decide whether someone can afford a mortgage or rent; profiles assembled from opaque sources into “trustworthiness” reports; and other forms of scoring and screening.
For lending, a rule blocked by a federal court in 2020 would have created a defense to a discrimination claim under the Fair Housing Act where the “predictive analysis” tools used for lending decisions were not “overly restrictive on a protected class” or where they “accurately assessed risk.” In granting a preliminary injunction against the rule, the judge explained that the regulation would “run the risk of effectively neutering disparate impact liability under the Fair Housing Act.” In comments to the federal housing agency, EPIC and several other organizations warned that providing such a safe harbor for the use of algorithms in housing, without imposing transparency, accountability, or data protection requirements, would exacerbate harms to individuals subject to discrimination. The Alliance for Housing Justice called the rule “a vague, ambiguous exemption for predictive models that appears to confuse the concepts of disparate impact and intentional discrimination.”
For biometric identification, the No Biometric Barriers Housing Act, introduced in 2019 by Senator Cory Booker (D-NJ) and Representatives Yvette D. Clarke (D-NY), Ayanna Pressley (D-MA), and Rashida Tlaib (D-MI), would prohibit the use of facial and other biometric recognition in most federally funded public housing and would direct the Department of Housing and Urban Development (HUD) to submit a report to Congress about the technology's impact on tenants.
Screening tools are used by landlords and housing authorities to help decide whether to accept a prospective tenant's rental application. Companies that offer tenant screening tools collect, store, and select records for housing providers to use in evaluating tenants. These tenant screening reports often contain errors and misleading information, and there is little oversight of the companies' record collection or matching practices. There are three major sources of error in tenant screening reports: missing information in the underlying records, record matching errors, and failure to update records databases. First, court records often lack enough information to be accurately matched to individuals. Second, screening reports frequently match records to the wrong applicants: companies may use overbroad search and matching practices, often matching records based only on first and last name, which can attribute someone else's record to an applicant. Third, companies may collect or use outdated records that do not reflect case updates such as dismissals or record sealing.
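As a rough illustration of why name-only matching is error-prone, the hypothetical sketch below contrasts matching court records on first and last name alone with matching that also requires a date of birth; the records, names, and fields are invented for this example and do not describe any particular screening company's system.

```python
# Hypothetical illustration of tenant-screening record matching.
# Matching on name alone pulls in records that belong to a different person;
# requiring an additional identifier (here, date of birth) avoids that error.

from dataclasses import dataclass

@dataclass
class CourtRecord:
    first: str
    last: str
    dob: str        # date of birth as listed in the court file
    outcome: str    # e.g., "eviction filed", "dismissed"

records = [
    CourtRecord("Maria", "Garcia", "1968-11-03", "eviction filed"),  # a different Maria Garcia
    CourtRecord("Maria", "Garcia", "1990-04-12", "dismissed"),       # the applicant's own record
]

applicant = {"first": "Maria", "last": "Garcia", "dob": "1990-04-12"}

# Name-only matching attributes both records to the applicant, including an
# eviction filing that belongs to someone else.
name_only = [r for r in records
             if r.first == applicant["first"] and r.last == applicant["last"]]

# Requiring date of birth as well keeps only the record that is actually hers.
with_dob = [r for r in name_only if r.dob == applicant["dob"]]

print(f"Name-only matches: {len(name_only)}")   # 2 (overbroad)
print(f"Name + DOB matches: {len(with_dob)}")   # 1
```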
In 2024, EPIC filed a lawsuit on behalf of the National Association of Consumer Advocates (NACA) challenging one automated tenant screening company, RentGrow, over unfair trade practices tied to inaccurate records and insufficient oversight over its automated systems.
AI Procurement
With government agencies using a larger and more sophisticated array of AI tools, AI procurement has become one popular path toward greater oversight of government AI use. In 2024, for example, the federal Office of Management and Budget released Memo M-24-18, outlining government-wide policies for procuring AI systems, and California released its own state procurement guidelines for generative AI systems.
Procurement is particularly effective at regulating government AI systems because companies bidding for government contracts must (1) disclose information they may be unwilling to disclose publicly and (2) agree to contractual terms that can include strong oversight requirements. For example, after Michigan scrapped a faulty unemployment insurance fraud detection system, it required the replacement vendor to agree to a “source code escrow” provision, wherein an independent auditor could monitor whether the new system was accurate.
For more information on the challenges surrounding government use of procured AI systems and the potential for AI procurement reform, see EPIC’s 2023 report, Outsourced & Automated.
AI and Regulatory Enforcement Assistance
Many U.S. agencies, including the Securities and Exchange Commission, the Internal Revenue Service, and the Department of the Treasury, use algorithms to help direct enforcement resources toward potential fraud cases. These programs are detailed further in Government by Algorithm, a 2020 report from the Administrative Conference of the United States (ACUS).
Recent Documents on Government Use of AI
Privacy Cases
NACA v. RentGrow
DC Superior Court
Challenging tenant screening company RentGrow's unfair and deceptive practices in producing automated tenant screening reports
Publications
Outsourced & Automated: How AI Companies Have Taken Over Government Decision-Making
Building on two years of state contracting research, EPIC publishes a new report on the oft-forgotten world of government AI procurement.
Resources
- The Automated Administrative State: A Crisis of Legitimacy | Ryan Calo & Danielle Citron | 2021
- Best Practices for Government Procurement of Data-Driven Technologies | Rashida Richardson | 2021
- AI Procurement in a Box: AI Government Procurement Guidelines | World Economic Forum | 2020
- Assembling Accountability: Algorithmic Impact Assessment for the Public Interest | Emanuel Moss et al. | 2021
- Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission | Rebecca Kelly Slaughter et al. | 2021
- The Right to Privacy in the Digital Age | United Nations High Comm'r for Human Rights | 2021
- The False Comfort of Human Oversight as an Antidote to A.I. Harm | Ben Green & Amba Kak | 2021
- Suspect Development Systems: Databasing Marginality and Enforcing Discipline | Amba Kak & Rashida Richardson | Forthcoming 2022
- Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities | Rashida Richardson | Forthcoming 2022