Comments of EPIC on the EEOC’s Draft Strategic Enforcement Plan for 2023–2027
EEOC-2022-0006
The Electronic Privacy Information Center (EPIC) submits these comments in response to the U.S. Equal Employment Opportunity Commission's (EEOC) request for information about its Draft Strategic Enforcement Plan, published on January 10, 2023.[1]
EPIC is a public interest research center in Washington, D.C. Established in 1994, EPIC focuses public attention on emerging privacy and civil liberties issues and works to secure the fundamental right to privacy in the digital age for everyone through advocacy, research, and litigation.[2] EPIC has long advocated for data minimization, algorithmic accountability, and human rights safeguards on the use of artificial intelligence.[3] EPIC has also advocated for the regulation of AI hiring tools: in 2019, EPIC filed a complaint with the Federal Trade Commission (FTC) alleging that HireVue, a recruiting company, engaged in unfair and deceptive trade practices through its use of facial recognition technology and opaque AI.[4]
In these comments, EPIC conveys its support for the Commission’s enforcement focus on technology-related employment discrimination. EPIC provides further details about the prevalence of automated decision-making in hiring and employment settings, the harms associated with the use of such technology, and the inconsistency and weakness of most audits of these systems. EPIC also provides specific recommendations for a possible Commission inquiry into the use of automated decision-making technologies in employment settings.
I. EPIC supports the EEOC’s decision to focus enforcement resources on technology-related employment discrimination.
a. Automated decision-making is ubiquitous in hiring and employment.
The use of automated decision-making systems in hiring is common and growing, but these tools can facilitate or exacerbate harmful discrimination. These targeting and profiling systems are designed to divide, segment, and score individuals based on their characteristics and behavior. In many cases, people are sorted and scored in ways that reflect and entrench systemic bias. Extensive research has shown racial and gender bias in automated decision-making systems.[5]
Algorithmic discrimination affects applicants and employers both directly and indirectly. Algorithms can influence which jobs are shown to an applicant searching for employment.[6] Even if a job is shown to the job seeker, other algorithms may predict the candidate's desired salary and assess whether the candidate meets minimum qualifications.[7] These predictive systems often rely on prior hiring data, which can reinforce existing institutional biases.[8] Employers are indirectly affected because they can lose access to highly qualified applicants who are screened out on discriminatory grounds. The use of biometric identification and evaluation systems in employment and other settings also poses a significant risk of discrimination on the basis of protected characteristics.[9]
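To make this mechanism concrete, consider the following minimal sketch. It is a hypothetical illustration, not a reconstruction of any vendor's actual system: a screening model is trained on simulated historical hiring decisions that penalized one group, with the protected attribute itself withheld from the model's inputs. Because a seemingly neutral feature correlates with group membership, the model reproduces the historical disparity anyway.

```python
# Hypothetical illustration: a screening model trained on biased historical
# hiring decisions reproduces that bias through a correlated proxy feature,
# even though the protected attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0 or 1); withheld from the model's inputs.
group = rng.integers(0, 2, size=n)

# A legitimate qualification score, identically distributed across groups.
skill = rng.normal(0.0, 1.0, size=n)

# A facially neutral feature (think zip code or alma mater, numerically
# encoded) that happens to correlate with group membership.
proxy = group + rng.normal(0.0, 0.5, size=n)

# Simulated historical decisions: driven by skill, but penalizing group 1.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train only on the "neutral" features: skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model's predicted selection rates still differ sharply by group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted selection rate = {pred[group == g].mean():.2%}")
```

Dropping the protected attribute from the model's inputs does not cure the problem; the correlated proxy carries the bias through.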
i. Prevalence
The use of automated decision-making in employment settings is sweeping. Hiring technology is often marketed to employers as a way to save time and money, increase efficiency and objectivity, or decrease bias. Automated decision-making tools are used in recruitment and hiring, establishing terms and conditions for employment, performance management, productivity monitoring, and promotion decisions.
The prevalence of algorithmic discrimination in U.S. commerce as a whole is hard to quantify precisely because commercial algorithms are often treated as proprietary information.[10] But as Chair Charlotte Burrows has noted, more than 80% of employers are using AI in some form in their employment and work decision-making.[11] HireVue—a service that uses a proprietary automated decision-making system to evaluate the fitness of job candidates based in part on biometric data—has over 700 corporate customers, including major companies like Amazon, Carnival Cruise Lines, Cathay Pacific, Delta, T-Mobile, BP, Sherwin Williams, Kraft Heinz, the Boston Red Sox, Rackspace, Unilever, Emirates Group, Black Angus, TMX Finance, Maggiano's Little Italy, and Beacon Health System.[12] Nonprofits and public sector employers also use HireVue's assessment services, including Atlanta Public Schools and the Thurgood Marshall College Fund.[13] Talview Inc., a competitor to HireVue, offers a similar suite of automated resume scanning, "AI video interviews with behavioral insights," and "[o]nline assessments."[14] And Affectiva, Inc. "analyzes human states in context" using "computer vision, speech analytics, deep learning and a lot of data."[15]
As early as 2017, 13% of human resource managers surveyed by Harris said they were already seeing evidence of automated decision-making becoming a regular part of HR, and 55% said it would become one within five years.[16] In 2018, 63% of talent acquisition professionals surveyed by Korn Ferry said that AI had changed the way recruiting is done at their company.[17] In a 2022 survey by the Society for Human Resource Management, nearly one in four organizations reported using AI tools or automation to support HR-related activities.[18] Over the next five years, 25% of organizations surveyed planned to start using or expand their use of AI tools or automation in recruitment and hiring, while 20% planned to do the same for performance management.[19]
In the 2022 Society for Human Resource Management survey, 92% of organizations using AI tools sourced some or all of these tools from a third-party vendor.[20] The survey also noted that only 40% of the vendors supplying these tools are "very transparent" about the steps taken to ensure that the tools prevent or protect against bias or discrimination.[21] Meanwhile, legal, media, and scholarly sources demonstrate that algorithmic discrimination based on protected characteristics is widespread.[22]
ii. Harms
Automated decision-making technologies can cause myriad harms, from discrimination to reputational and dignitary harms to loss of opportunity and financial harms.
Automated decision-making systems can facilitate or exacerbate discrimination harms. Research has established racial and gender bias in advertisement delivery on social media[23] and racial, ethnic, and gender bias in all parts of the job acquisition process, from search[24] to resume screening[25] to interviewing, onboarding, promotion, and firing.
Automated decision-making systems can also cause or exacerbate less tangible harms to reputation and dignity. These dignitary harms occur when flaws or biases in automated decision-making systems negatively affect how applicants and workers are treated by, and compared to, their peers, often in hidden and unavoidable ways.
The use of untested and unproven automated decision-making systems can also lead to a loss of opportunity and related financial harm. EPIC previously filed an FTC complaint highlighting the unfairness of HireVue’s screening practices.[26] When a job candidate seeks employment at a company that uses HireVue’s algorithmic assessment services, HireVue administers an automated interview and/or an online “game-based challenge[]” to the candidate.[27] HireVue collects “tens of thousands of data points”[28] from each interview and a “rich and complex” array of data from each “psychometric game[.]”[29] HireVue then inputs these personal data points into “predictive algorithms”[30] that allegedly determine each job candidate’s “employability,” “cognitive ability,” “psychological traits,” “emotional intelligence,” and “social aptitudes.”[31] But HireVue does not give candidates access to the training data, factors, logic, or techniques used to generate each algorithmic assessment. In some cases, even HireVue is unaware of the basis for an assessment.[32] Faulty results from these assessments can jeopardize the integrity of hiring and firing decisions. This has a direct effect on people’s salaries and ability to obtain and hold consistent employment.
b. The audits used to evaluate these automated decision-making systems are inadequate and inconsistent.
Audits measuring the accuracy and bias of automated decision-making tools are not universal, consistent, or even required under existing law. Companies often conduct audits only when forced to or after extensive harm has been publicized—and even then, the audits they perform may be insufficient or opaque.
In 2021, HireVue announced that it had undergone two audits by O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) but did not freely release the audits in full.[33] The audits came after scrutiny of the company's use of opaque facial recognition and voice analysis in its interview software, prompted in part by EPIC's FTC complaint about those practices.[34] Although members of the public could access summaries of the audits on HireVue's website, HireVue required readers to disclose personal information to view each summary and to commit not to reproduce any part of it.[35] And at least one of the audits was narrowly tailored to a specific use case of HireVue's platform, not the clean bill of health the company implied it was.[36] Further, key details about the algorithms used to make judgments in the hiring process remain secret from the applicants being evaluated. These examples illustrate a broader phenomenon: incomplete, constrained, or misleading audits and impact assessments that give a false appearance of meaningful transparency or accountability.[37]
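To ground what even a baseline audit involves, the sketch below computes the adverse impact ratio described in the EEOC's Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)), under which a selection rate for any group that is less than four-fifths (80%) of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The applicant counts are hypothetical, and this is a simplified illustration rather than any auditor's actual methodology.

```python
# Simplified illustration of an adverse impact ("four-fifths rule") check,
# following the EEOC's Uniform Guidelines on Employee Selection Procedures
# (29 C.F.R. 1607.4(D)). All applicant counts below are hypothetical.

# Outcomes of an automated screening tool, broken out by group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 200},  # rate 0.50
    "group_b": {"applicants": 300, "selected": 105},  # rate 0.35
}

rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "adverse impact indicated" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.2f}, "
          f"{ratio:.0%} of the highest group's rate -> {flag}")
```

A check this simple is necessary but nowhere near sufficient: it says nothing about the validity of the underlying assessment or about the use cases an audit omits, which is precisely how a narrowly scoped audit can be misread as a clean bill of health.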
II. The EEOC should conduct a broad-ranging inquiry into the use of automated decision-making technologies in employment settings and publish the results.
Building on the Commission’s ongoing Artificial Intelligence and Algorithmic Fairness Initiative, the EEOC should use its investigatory and enforcement powers (1) to identify trends and harms related to the use of automated decision-making in employment settings; (2) to make its findings public to the extent allowed by law; and (3) to take action against companies that produce or rely on discriminatory or unvalidated automated decision-making systems for employment purposes.
EPIC recommends the following series of questions as a guide for the EEOC's inquiry. Although responsive research and data already exist for many of these questions, the EEOC's legal authority and focused attention promise to yield uniquely insightful and comprehensive information concerning the development and use of automated decision-making tools.
- In what circumstances do employers use automated decision-making technologies to make employment-related predictions, recommendations, or decisions?
- What types of automated decision-making technologies are used to make these predictions, recommendations, and decisions?
- What data is used to make these predictions, recommendations, and decisions?
- How is the personal data used to make these predictions, recommendations, and decisions safeguarded?
- What are the most widely used vendors of these technologies across different product types and industries?
- What representations or disclosures (if any) do employers make to applicants and workers concerning the employers’ use of automated decision-making technologies?
- What ability (if any) do applicants and workers have to opt out of automated decision-making technologies or to limit their use?
- How do employers and vendors attempt to identify bias and discrimination arising from the automated decision-making technologies they use?
- To what extent are automated decision-making technologies evaluated by the vendors that develop them? By the employers that use them? By independent auditors?
- What are the most widely used independent auditors of automated decision-making technologies used in employment settings?
- What information do independent auditors rely on to evaluate automated decision-making technologies? Do auditors ever decline to evaluate a system if they are denied access to certain information?
- What tests and audits are performed by employers, vendors, and auditors? At what point in the lifecycle of an automated decision-making system and with what frequency are such tests performed?
- To what extent do employers rely on the representations of vendors that the automated decision-making tools are free from bias or discrimination?
- How do employers, vendors, and auditors measure bias and discrimination?
- How are the results of audits and tests recorded and formatted?
- How are the results of audits and tests incorporated into employers’ decision-making processes?
- To what extent are audits and test results made available to applicants, employees, or other members of the public?
- How do employers respond if they identify bias or discrimination in the predictions, recommendations, or decisions generated by automated decision-making technologies they use?
- How, if at all, do employers notify individuals who may have been adversely affected by the employer’s use of an automated decision-making technology that was later found to exhibit bias or discrimination? What information are those individuals given access to? What recourse do employers provide to those individuals?
III. Conclusion
We applaud the Commission's continued commitment to advancing equal employment opportunity for all and its recognition of the challenges of addressing the national call for racial and economic justice. The Commission's reaffirmed subject matter priorities highlight the ever-growing prevalence of automated decision-making systems in job advertisements, recruitment, hiring, and other employment decisions. The use of these systems threatens to exacerbate discrimination in the workplace, and the Commission should investigate these practices and make its findings public.
We appreciate the opportunity to comment and are eager to engage further to address issues arising from the use of automated decision-making technology in employment settings. For further questions, please contact EPIC Senior Counsel Ben Winters at [email protected].
Respectfully submitted,
/s/ John Davisson
John Davisson
Director of Litigation
/s/ Enid Zhou
Enid Zhou
Senior Counsel
/s/ Ben Winters
Ben Winters
Senior Counsel
[1] Draft Strategic Enforcement Plan, 88 Fed. Reg. 1,379 (Jan. 10, 2023), https://www.federalregister.gov/documents/2023/01/10/2023-00283/draft-strategic-enforcement-plan.
[2] EPIC, About Us (2023), https://epic.org/about/.
[3] See EPIC, AI & Human Rights (2023), https://epic.org/issues/ai/.
[4] EPIC, In re HireVue (2019), https://epic.org/documents/in-re-hirevue/.
[5] Abeba Birhane, The Impossibility of Automating Ambiguity, 27 Artificial Life 44, 46 (2021), https://direct.mit.edu/artl/article-abstract/27/1/44/101872/The-Impossibility-of-Automating-Ambiguity (noting that predictive algorithms rely on historical data that reproduces harmful trends for marginalized individuals); Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn (2018), https://www.upturn.org/work/help-wanted/ (concluding that bias will arise in predictive hiring tools by default if there are no active measures to mitigate them); Muhammad Ali et al., Discrimination Through Optimization: How Facebook's Ad Delivery Can Lead to Skewed Outcomes 2 (2019), https://arxiv.org/abs/1904.02095 (demonstrating a skew in Facebook's ad delivery process along racial and gender lines for employment and housing ads despite inclusive targeting parameters).
[6] Miranda Bogen & Aaron Rieke, Upturn, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias 21 (2018), https://www.upturn.org/work/help-wanted/.
[7] Id. at 26, 39.
[8] Id. at 28–29.
[9] See generally Kerri Thompson, Countenancing Employment Discrimination: Facial Recognition in Background Checks, 8 Tex. A&M L. Rev. 63 (2020).
[10] See Simson Garfinkel, A Peek at Proprietary Algorithms, 105 Am. Scientist 326 (2017), https://www.americanscientist.org/article/a-peek-at-proprietary-algorithms.
[11] Lindsey Wagner, Artificial Intelligence in the Workplace, ABA (June 10, 2022), https://www.americanbar.org/groups/labor_law/publications/labor_employment_law_news/spring-2022/ai-in-the-workplace/.
[12] Customer Stories, HireVue (2023), https://www.hirevue.com/case-studies.
[13] Id.
[14] Talview (2023), https://www.talview.com/.
[15] Affectiva (2023), https://www.affectiva.com/.
[16] See CareerBuilder, More Than Half of HR Managers Say Artificial Intelligence Will Become a Regular Part of HR in Next 5 Years, PR Newswire (May 18, 2017), https://www.prnewswire.com/news-releases/more-than-half-of-hr-managers-say-artificial-intelligence-will-become-a-regular-part-of-hr-in-next-5-years-300458775.html.
[17] Korn Ferry Global Survey: Artificial Intelligence (AI) Reshaping the Role of the Recruiter, Korn Ferry (Jan. 18, 2018), https://www.kornferry.com/about-us/press/korn-ferry-global-survey-artificial-intelligence-reshaping-the-role-of-the-recruiter.
[18] Society for Human Resource Management (SHRM), Automation & AI in HR 3 (2022), https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf.
[19] Id. at 4.
[20] Id. at 7.
[21] Id.
[22] See generally Anirudh VK, How Is AI Changing the Finance, Healthcare, HR, and Marketing Industries?, Spiceworks (Feb. 10, 2022), https://www.spiceworks.com/finance/fintech/articles/how-is-ai-changing-industries/; Benjamin Cheatham et al., Confronting the Risks of Artificial Intelligence, McKinsey & Co. (Apr. 26, 2019), https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence.
[23] Cody Mello-Klein, Facebook's Ad Delivery Algorithm Is Discriminating Based on Race, Gender and Age in Photos, Northeastern Researchers Find, Northeastern Global News (Oct. 25, 2022), https://news.northeastern.edu/2022/10/25/facebook-algorithm-discrimination/.
[24] Amit Datta et al., Automated Experiments and Privacy Settings: A Tale of Opacity, Choice, and Discrimination, arXiv 17 (Mar. 18, 2015), https://arxiv.org/pdf/1408.6491.pdf; Sheridan Wall & Hilke Schellmann, LinkedIn’s job-matching AI was biased. The company’s solution? More AI, MIT Tech. Rev. (June 23, 2021), https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/.
[25] Amani Carter & Rangita de Silva de Alwis, Unmasking Coded Bias: Why We Need Inclusion and Equity in AI 11 (2021), https://www.law.upenn.edu/live/files/11528-unmasking-coded-bias (“Evidence suggests resumes containing minority racial cues, such as a distinctively Black name[,] lead to thirty to fifty percent fewer callbacks from employers than do otherwise equivalent resumes without such cues.”).
[26] Complaint and Request for Investigation, Injunction, and Other Relief, In re HireVue (Nov. 6, 2019), https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf [hereinafter EPIC HireVue Complaint].
[27] Id.
[28] How to Prepare for Your HireVue Assessment, HireVue (Apr. 16, 2019), https://www.hirevue.com/blog/how-to-prepare-for-your-hirevue-assessment; Nathan Mondragon et al., HireVue, The Next Generation of Assessments 6 (2019).
[29] Mondragon et al., supra note 28, at 5.
[30] Id. at 7.
[31] HireVue, supra note 28; Mondragon et al., supra note 28, at 6.
[32] Drew Harwell, A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job, Wash. Post (Oct. 25, 2019), https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/.
[33] See EPIC HireVue Complaint, supra note 26; HireVue, supra note 28; Mondragon et al., supra note 28.
[34] EPIC HireVue Complaint, supra note 26.
[35] Download Report, HireVue (2021), https://www.hirevue.com/resources/orcaa-report. To access the report, the website requires entry of a first name, last name, work email, and company name, and displays the following notice above the "Submit" button: "Sharing your information helps us understand who is reading our research. The report you are downloading is being made available for review only. By downloading this document, you acknowledge and agree this report is the sole and exclusive intellectual property of HireVue, Inc., and you agree you shall not use, copy, excerpt, reproduce, distribute, display, publish, etc. the contents of this report in whole, or in part, for any purpose not expressly authorized in writing by HireVue, Inc."
[36] See Alex C. Engler, Independent Auditors Are Struggling to Hold AI Companies Accountable, Fast Company (Jan. 26, 2021), https://www.fastcompany.com/90597594/ai-algorithm-auditing-hirevue (“[H]aving viewed a copy of the ORCAA audit, I don’t believe it supports the conclusion that all of HireVue’s assessments are unbiased. The audit was narrowly focused on a specific use case, and it didn’t examine the assessments for which HireVue has been criticized, which include facial analysis and employee performance predictions.”).
[37] See Mona Sloane, The Algorithmic Auditing Trap, Medium (Mar. 17, 2021), https://onezero.medium.com/the-algorithmic-auditing-trap-9a6f2d4d461d.