EEOC and DOJ Put Employers on Notice of Algorithmic Discrimination Risks
June 3, 2022
In response to the growing problem of disability discrimination facilitated by algorithms, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) Civil Rights Division released guidance to employers last month clarifying how the Americans with Disabilities Act (ADA) restricts the use of automated decision-making tools in hiring and employment settings.
The EEOC’s technical guidance document, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” is part of the agency’s initiative on AI and algorithmic fairness announced last year. The document offers specific direction to employers on how to avoid violating the ADA and the rights of individuals with disabilities.
On the same day, the DOJ Civil Rights Division released guidance on “how algorithms and artificial intelligence can lead to disability discrimination in hiring.” The DOJ’s document describes the market for employment algorithms and explains employers’ obligations under the ADA.
The EEOC’s document begins by laying out common algorithmic tools marketed for use in hiring and employment, including automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.
The EEOC outlines where things can go wrong, identifying three major pitfalls:
- Failing to provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by an algorithm.
- Relying on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. “Screen out” occurs when a disability prevents a job applicant or employee from meeting—or lowers their performance on—a selection criterion, and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether.
- Using an algorithmic decision-making tool that poses “disability-related inquiries” or seeks information that qualifies as a “medical examination” before giving the candidate a conditional offer of employment. This type of violation may occur even if the individual does not have a disability. An assessment includes “disability-related inquiries” if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether an applicant or employee is an individual with a disability. It qualifies as a “medical examination” if it seeks information about an individual’s physical or mental impairments or health.
The EEOC recommends specific steps as part of “promising practices for employers,” including:
- Transparency
- “[D]escribe, in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating.”
- Notice
- Inform all job applicants and employees who are being rated that reasonable accommodations are available for individuals with disabilities, and provide clear and accessible instructions for requesting such accommodations.
- Purpose specification
- “[E]nsure that the algorithmic decision-making tools only measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation.”
- Ensure that necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications.
- Due diligence
- When considering vendors and prior to purchase, ask the vendor to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.
The DOJ’s principles are less specific than the EEOC’s technical guidance but provide clear warnings for entities relying on algorithms or other automated decision-making technology, stating that “employers must consider how their tool could impact different disabilities.”
Studies vary, and it is not clear which tools companies are using, which are most widely used, and, critically, how those tools work. Last year, the Data & Trust Alliance was created with the stated goal of mitigating potential bias in hiring algorithms. The 23 members of that group collectively employ several million people and represent some of the largest companies in the world, including Meta, American Express, Deloitte, the NFL, UPS, Starbucks, GM, Comcast NBC Universal, and Nike. Although this attempt at self-regulation is insufficient on its own, the effort is a step toward compliance with the types of guidelines set out by the agencies.
Together, the agencies’ guidance puts more responsibility on the entities implementing algorithms, because that is where the impact is felt. The guidance reflects transparency, purpose specification, data minimization, and civil rights protections. It also puts the onus on companies not only to consider the tools they plan to adopt in the future, but also to demand more of their vendors now.
EPIC commends the EEOC and DOJ for taking action to prevent algorithmic discrimination and urges the agencies to bring enforcement actions swiftly to protect people. EPIC has long advocated for AI regulation to stop discrimination and promote transparency. Below is some of EPIC’s previous work related to AI and algorithmic fairness:
- In 2020, EPIC petitioned the FTC to conduct a rulemaking on commercial uses of AI, including protections against discrimination and unfair bias.
- In 2020, EPIC filed a complaint with the Office of the Attorney General for the District of Columbia alleging that five major providers of online test proctoring services have engaged in unfair and deceptive trade practices in violation of the D.C. Consumer Protection Procedures Act (DCCPPA) and the Federal Trade Commission Act. Specifically, EPIC’s complaint charges that Respondus, ProctorU, Proctorio, Examity, and Honorlock have engaged in excessive collection of students’ biometric and other personal data and have routinely relied on opaque, unproven, and potentially biased AI analysis to detect alleged signs of cheating.
- In 2019, EPIC filed a complaint with the FTC asking the Commission to investigate HireVue’s use of opaque, unproven AI and to require baseline protections for AI use.
- In 2022, EPIC and Consumer Reports published a white paper that urges the FTC to use its broad unfairness authority to establish a data privacy rule. The paper further encourages the FTC to adopt data transparency obligations for primary use of data; civil rights protections over discriminatory data processing; nondiscrimination rules, so that users cannot be charged for making privacy choices; data security obligations; access, portability, correction, and deletion rights; and to prohibit the use of dark patterns with respect to data processing.
- In 2020, EPIC filed a complaint with the FTC alleging that Airbnb has committed unfair and deceptive practices in violation of the FTC Act and the Fair Credit Reporting Act. Airbnb secretly rates customers’ “trustworthiness” based on a patent that considers such factors as “authoring online content with negative language.” The company’s opaque, proprietary algorithm also considers “posts on the person’s social network account” as well as the individual’s relationships with others, and adjusts the “trustworthiness” score based on the scores of those associations. EPIC said the company failed to comply with “established public policies” for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI.
- In 2022, EPIC and a coalition led by Fight For the Future called on videoconferencing company Zoom to halt plans to develop and incorporate emotion tracking software into its platform. The software claims to be able to identify the emotions an individual is experiencing based on their face and voice. The coalition argued that emotion recognition is harmful because it is based in pseudoscience, subject to racial biases, can be used to manipulate or punish individuals, and presents serious data security risks.
- In 2022, an EPIC-led coalition, along with the Algorithmic Justice League, Fight for the Future, and over 40 other groups, urged federal and state agencies to end the use of ID.me and face verification for access to government benefits and services. The IRS later dropped its plan to use ID.me after criticism from members of Congress, EPIC, and many others. The coalition praised the decision by the IRS to end the use of ID.me and called “on other federal and state government agencies using or considering use of ID.me to follow suit and cancel the use of ID.me and other facial verification tools.”