Artificial Intelligence and Human Rights
Commercial AI Use
Background
The use of AI and automated decision-making tools in commerce is wide-ranging and has significant impacts on the public. There is significant crossover with Government Use of AI, as governments often procure commercial products for their own use.
Many of these commercial uses have been shown to disproportionately affect Black people, poor people, and people with disabilities. More broadly, the rapid proliferation of these tools has been used to justify increased data collection and sharing, which is not yet meaningfully regulated in the U.S. or anywhere else in the world. In light of this, EPIC has recently petitioned the Federal Trade Commission to conduct a rulemaking “concerning the use of artificial intelligence in commerce” generally and has filed complaints urging investigations into hiring algorithms, housing screening algorithms, and scoring algorithms.
In a 2021 article, FTC Commissioner Rebecca Slaughter wrote: “As a Commissioner at the Federal Trade Commission—an agency whose mission is to protect consumers from unfair or deceptive practices and to promote competition in the marketplace—I have a front-row seat to the use and abuse of AI and algorithms. In this role, I see firsthand that the problems posed by algorithms are both nuanced and context-specific. Because many of the flaws of algorithmic decision-making have long-standing analogs, related to both human decision-making and other technical processes, the FTC has a body of enforcement experience from which we can and should draw.”
In a 2021 report, the United Nations High Commissioner for Human Rights called on governments to “ban AI applications that cannot be operated in compliance with international human rights law and impose moratoriums on the sale and use of AI systems that carry a high risk for the enjoyment of human rights, unless and until adequate safeguards to protect human rights are in place.” The report also stresses the need for comprehensive data protection legislation in addition to a regulatory approach to AI that prioritizes the protection of human rights. UN High Commissioner for Human Rights Michelle Bachelet explained: “The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks.”
EPIC has long advocated for comprehensive data protection legislation, moratoriums on particularly dangerous tools, and commonsense AI regulation to protect the public.
Without requirements for transparency, notice, access, nondiscrimination, data minimization, and deletion, consumers are at high risk of adverse consequences.
Sector-Specific Examples of Commercial AI
Employment
AI and algorithmic tools used in hiring include resume scanners, face and voice analysis of interviews, and games that measure an applicant’s “fit” with a company. Once hired, employees are regularly surveilled by their employers, whether in person (e.g., surveillance cameras, productivity trackers) or remote (e.g., keystroke trackers, use limitations). EPIC filed a Federal Trade Commission complaint targeting HireVue’s use of opaque algorithms and facial recognition, arguing that HireVue’s AI tools—which the company claimed could measure the “cognitive ability,” “psychological traits,” “emotional intelligence,” and “social aptitudes” of job candidates—were unproven, invasive, and prone to bias.
Education
Algorithms in education are used both before and during enrollment. Prior to enrollment and throughout the admissions process, programs evaluate applications and help “manage enrollment.” Once students are enrolled, “student success” predictors, which often use race as a major input, help dictate guidance on classes and majors, while invasive and questionable proctoring tools surveil students. On December 9, 2020, EPIC filed a complaint with the Office of the Attorney General for the District of Columbia alleging that five major providers of online test proctoring services have engaged in unfair and deceptive trade practices in violation of the D.C. Consumer Protection Procedures Act (DCCPPA) and the Federal Trade Commission Act. Specifically, EPIC’s complaint charges that Respondus, ProctorU, Proctorio, Examity, and Honorlock have engaged in excessive collection of students’ biometric and other personal data and have routinely relied on opaque, unproven, and potentially biased AI analysis to detect alleged signs of cheating.
Healthcare
Algorithms and automated decision-making systems are used to prioritize patients for treatments such as kidney replacement, estimate morbidity risk for patients with COVID-19, and inform the administration of pain medicine, among other functions.
Housing
In housing, algorithms and automated decision-making systems are widely used to screen and surveil tenants, and biometric entry systems are increasingly common. The No Biometric Barriers Housing Act, introduced in 2019 by Senator Cory Booker (D-NJ) and Congresswomen Yvette D. Clarke (D-NY), Ayanna Pressley (D-MA), and Rashida Tlaib (D-MI), would prohibit the use of facial and biometric recognition in most federally funded public housing.
Public Benefits
Technology companies create algorithmic tools that surveil public benefits recipients, collecting and analyzing their data to predict the likelihood that a given recipient is committing fraud. Through a freedom of information request, EPIC obtained records about the D.C. Department of Human Services’ use of automated systems to track and assign “risk score[s]” to recipients of public benefits. The documents show that DCDHS has contracted with Pondera, a Thomson Reuters subsidiary, for case management software and a tool known as “Fraudcaster.” Fraudcaster tracks location history and other information about people receiving public benefits, combining this information with “DHS data and pre-integrated third-party data sets” to yield supposed risk scores. Factors that may cause the system to label someone as riskier include “travel[ing] long distances to retailers” and “display[ing] suspect activity.”
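The basic mechanism of a weighted-factor risk score is easy to sketch, even though vendors keep the details opaque. The Python example below is purely hypothetical: the factor names, weights, and threshold are invented for illustration and do not reflect how Fraudcaster actually works.

```python
# Hypothetical sketch of a weighted-factor risk score.
# All factor names, weights, and the threshold are invented for
# illustration; this is NOT Pondera's or any vendor's actual model.

FACTOR_WEIGHTS = {
    "travels_long_distances_to_retailers": 2,
    "displays_suspect_activity": 3,
    "appears_in_third_party_data_set": 2,
}

FLAG_THRESHOLD = 4  # invented cutoff for labeling a recipient "risky"

def risk_score(case: dict) -> int:
    """Sum the weights of every factor present in a case record."""
    return sum(weight for factor, weight in FACTOR_WEIGHTS.items() if case.get(factor))

def is_flagged(case: dict) -> bool:
    """Flag a recipient for fraud review once the score crosses the threshold."""
    return risk_score(case) >= FLAG_THRESHOLD

# A recipient who merely lives far from retailers and appears in a
# purchased data set crosses the threshold, though neither factor proves fraud.
example = {
    "travels_long_distances_to_retailers": True,
    "appears_in_third_party_data_set": True,
}
print(risk_score(example), is_flagged(example))  # prints: 4 True
```

Even this toy version shows the due-process concern: a recipient can be flagged by factors that prove nothing about fraud, under weights and thresholds they never see.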
Insurance
AI is used in all facets of insurance: estimating risk, personalizing rates, administering claims, and more. Insurance companies have deployed untested or questionable AI to predict insurance fraud and respond to claims.
Criminal Legal Cycle
Algorithms used for pretrial and trial risk assessments, predictive policing tools, and gang databases are most commonly developed by commercial firms. Additionally, companies like Amazon and Citizen offer products that surveil individuals, crowdsource reports of crime, and partner with local law enforcement.
Our Commercial AI Use Experts
- Calli Schroeder, EPIC Senior Counsel and Global Privacy Counsel
- Grant Fergusson, EPIC Counsel
- John Davisson, EPIC Senior Counsel and Director of Litigation
Recent Documents on Commercial AI Use
- Amicus Briefs: McCarthy v. Amazon
US Court of Appeals for the Ninth Circuit
Whether Amazon, the world’s largest online retailer, can be sued under product liability law for selling sodium nitrite to minors who used the compound to commit suicide.
Resources
- Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the FTC
Rebecca Kelly Slaughter | 2021
- The right to privacy in the digital age
UN High Commissioner for Human Rights | 2021
- The False Comfort of Human Oversight as an Antidote to A.I. Harm
Ben Green, Amba Kak | 2021
- Suspect Development Systems: Databasing Marginality and Enforcing Discipline
Amba Kak, Rashida Richardson | forthcoming 2022
- Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities
Rashida Richardson | 2022
- Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products
Inioluwa Deborah Raji, Joy Buolamwini | 2019
- The Undue Influence of Surveillance Technology Vendors on Policing
Elizabeth Joh | 2017
- Risk-Needs Assessment: Constitutional and Ethical Challenges
Melissa Hamilton | 2016
- The Scored Society: Due Process for Automated Predictions
Danielle Keats Citron, Frank Pasquale | 2014