Testimony on The Stop Discrimination by Algorithms Act (D.C.)
October 6, 2022
The Honorable Robert White, Chair
D.C. Council Committee on Government Operations & Facilities
1350 Pennsylvania Avenue, NW
Washington, D.C. 20004
Dear Chair White and Members of the Committee:
EPIC writes in strong support of Council Bill B24-0558, The Stop Discrimination by Algorithms Act (“SDAA”). The bill would be a landmark step in establishing baseline protections against algorithmic harm in the U.S. and limiting the use of the most dangerous automated decision-making systems against District residents. Critically, it will bring civil rights protections explicitly into the 21st century. This bill is clear, does not put an unreasonable onus on businesses, and will protect consumers.
The Electronic Privacy Information Center (EPIC) is a public interest research center established in 1994 to focus public attention on emerging privacy and civil liberties issues.[1] EPIC has promoted algorithmic transparency for years and has litigated several cases on the front lines of AI use in the federal government.[2] EPIC successfully sued U.S. Customs and Border Protection to obtain documents relating to its use of secret analytic tools to generate “risk assessments” of U.S. travelers.[3] EPIC compelled the Department of Homeland Security to produce documents related to a program that assesses “physiological and behavioral signals” to determine the probability that an individual might commit a crime.[4] EPIC successfully sued the Department of Justice to produce documents concerning the use of “evidence-based risk assessment tools” in the criminal justice system.[5] EPIC also petitioned the Federal Trade Commission to issue regulations on commercial AI use, which the Commission is currently exploring as part of a broader commercial surveillance rulemaking,[6] and submitted FTC complaints against Airbnb and HireVue for unfair and deceptive practices pertaining to their use of unsubstantiated automated decision-making systems.
EPIC has also made it a priority to ensure an open and inclusive process for U.S. policymaking on automated decision-making systems.[7] EPIC successfully sued the National Security Commission on Artificial Intelligence, forcing the Commission to open its records and meetings to the public.[8] EPIC has also submitted comments to the National Institute of Standards and Technology, the National Artificial Intelligence Research Resource, the Colorado Attorney General, the California Privacy Protection Agency, the U.S. Department of Justice, and other agencies in the past year concerning the regulation of automated decision-making systems.[9]
In 2020, EPIC published a report about pre-trial risk assessments in the criminal justice system, Liberty at Risk.[10] The report lays out states’ use of risk assessment tools, defines key terms, offers recommendations, and surveys litigation around the tools.
Although EPIC’s work has touched on a wide range of problematic AI use cases, new tools are being developed and adopted every day. Automated decision-making tools are increasingly used to make decisions concerning housing, public benefits, healthcare, hiring, and criminal justice. D.C. has a chance to be a leader in the regulation of automated decision-making systems by enacting B24-0558, which will ensure meaningful transparency and individually enforceable rights.
Third-Party Contractors Use Automated Decision-Making Systems to Significant Legal Effect in D.C. With Minimal Transparency and Accountability
EPIC has researched automated decision-making systems in use across the D.C. government over the last 14 months, including third-party systems like RentGrow, used for public housing tenant screening reports,[11] and Pondera, used for investigations into potential benefits fraud.[12] These systems are deeply opaque and incentivize sensitive data collection. Similar systems are used in commercial settings for hiring, education, credit decisions, and housing, to name just a few.
These tools are frequently given deference and accorded an air of objectivity, but they often reinforce the bias and discrimination embedded in the data the algorithms “learn” from and in the contexts in which the tools are used.
At present, there is little accountability for the human impact of these tools or transparency around their use and development. The SDAA will mitigate the discriminatory impacts of these systems by explicitly bringing civil rights protections into the digital era. The SDAA will also help by establishing oversight in the form of audits, impact assessments, and reports to the Office of the Attorney General (“OAG”).
The SDAA Has a Strong Definition of Algorithm
The SDAA’s expansive definition of an algorithm is crucial. Tools should not be exempt from the bill merely because they are simple or widely used; the discriminatory potential of an algorithm does not correlate with its complexity.
The SDAA Correctly Includes a Private Right of Action and Overlapping Enforcement Authority for the Attorney General
The Pew Research Center recently found that most Americans are opposed to algorithms making decisions with consequences for humans.[13] Crucially, the SDAA gives individuals the right to sue when they suffer algorithmic discrimination with respect to important life opportunities. Given the difficulty of obtaining information about which automated decision-making systems are used and how they are used—something B24-0558 will improve—it is particularly important to give individuals the ability to sue when they experience discrimination in a housing, education, or similar context.
The layered nature of automated decision-making systems and the myriad aspects that are critical to accountability—the developer, the factors used, the data sources, the weight of the factors, data use and management policies, and more—require robust transparency. And beyond mere transparency, the SDAA introduces landmark safeguards around data privacy and cybersecurity.
The SDAA allows the OAG and individuals to obtain relief for harms that individuals suffer—in other words, it is regulation with desperately needed teeth. The private right of action is sensible and essential: when people suffer individual harm from algorithms, they deserve individual remedies. The OAG’s jurisdiction is essential because of the office’s expertise and capacity to address group harms—but it would be insufficient by itself.
The SDAA also allows claims to be based on impact, without proof of intent, which is the correct standard and does not impose an unreasonable or unwarranted burden on business. Disparate impact is a well-established basis for liability under the Civil Rights Act of 1964, the Fair Housing Act, and the Age Discrimination in Employment Act.[14] Especially in the digital sphere, it is nearly impossible to prove discriminatory intent when essential information is hidden behind trade secrets or commercial protections. D.C. does not need to reinvent the wheel, and businesses should not get special deference in the digital sphere.
The SDAA’s Accountability Mechanisms Are Strong and Necessary—But Can Be Improved
The SDAA’s accountability mechanisms are strong because the bill lays out what must be considered in yearly internal audits and what must be disclosed to the Attorney General. The bill requires notices, discrimination audits, annual impact assessments, and reports. It should be strengthened by requiring that audits and annual impact assessments be conducted by an independent third-party auditor with sufficient access. The Council should also clarify what an Annual Impact Assessment must include, and whether the Report to the Attorney General’s office and the Impact Assessment are the same document or largely overlapping ones.[15] The SDAA strikes a good balance by requiring certain baseline information without overburdening businesses of any size.
The SDAA Would Address Documented Algorithmic Harms that Have Been the Focus of EPIC’s Consumer Protection Complaints
Over the years, EPIC has highlighted and acted against many of the harmful business practices that this bill addresses. EPIC has targeted companies including Airbnb, HireVue, and online proctoring platforms for their unfair and deceptive uses of algorithms. EPIC supports B24-0558 because it protects consumers in the District from algorithmic discrimination.
B24-0558 will have a positive impact on housing determinations on platforms like Airbnb, which makes opaque algorithmic determinations about renters and relies on data that can act as proxies for protected characteristics like race.[16] EPIC filed a complaint[17] with the FTC against Airbnb for its use of such algorithms to determine a potential client’s “trustworthiness.” The SDAA will provide critical protection for renters by ensuring that protected characteristics are not used as proxies in algorithmic decision-making.
Similarly, EPIC filed a complaint[18] with the D.C. Attorney General’s Office in 2020 against the five largest test proctoring companies to protect students in the District who were subjected to unfair, unproven, and opaque algorithms that purported to determine a student’s likelihood of cheating. However, independent third-party testing of such systems has shown that they rarely, if ever, catch any cheating students.[19] Other reports indicate that students of color,[20] students with disabilities,[21] and gender-nonconforming students have been falsely flagged by proctoring algorithms at disproportionately high rates. EPIC urges the Council to pass the SDAA and protect students and other District residents from similar unfair and deceptive algorithms.
Finally, the SDAA imposes audit and disclosure requirements that are crucial to protect consumers in the District. In 2019, EPIC filed a complaint against HireVue with the FTC.[22] HireVue provides pre-employment screening services on behalf of employers, using biometric data and algorithms to score each candidate’s employability. Unsurprisingly, HireVue’s opaque software collects far more personal information than necessary and—like all algorithms based on biometric analysis—presents a significant risk of disparate impact. HireVue has claimed that an independent audit found its software free of bias. But HireVue significantly overstated the findings of that audit, in part because it did not need to report the audit’s findings to any regulatory body. The SDAA will help prevent such harmful commercial practices with its strong audit disclosure requirements.
The SDAA will ensure that companies that use algorithmic decision-making comply with notice requirements, perform actual audits, and disclose essential information to protect individuals from discriminatory and unfair algorithms. These audits may prevent certain characteristics—such as race, gender identity or expression, sexual orientation, religion, familial status, national origin, and disability—from being used to discriminate against individuals. If there is a discriminatory impact, this bill allows both the OAG and individuals to obtain remedies. Accordingly, EPIC urges the Council to enact B24-0558.
Conclusion
When entities use automated systems to make decisions about people, it raises fundamental questions about accountability, due process, and fairness. Algorithms can deny people educational opportunities, employment, housing, insurance, and credit.[23] Many of these decisions are entirely opaque, leaving individuals to wonder whether the decisions were accurate, fair, or even about them.
Passage of Council Bill B24-0558 will benefit District residents by ensuring testing, accountability, and transparency when algorithms are used to make decisions about them. Critically, it brings civil rights protections into the 21st century and will empower individuals to sue for harms they experience. This bill is clear, does not impose an unreasonable burden on businesses, and will protect consumers. The SDAA will make clear the District’s commitment to fairness, transparency, and nondiscrimination in the use of algorithmic decision-making tools and establish D.C. as the nation’s leader in this policy space.
If EPIC can be of any assistance to the Committee, please contact EPIC Counsel Ben Winters at [email protected] or EPIC Counsel Sara Geoghegan at [email protected].
Sincerely,
/s/ Sara Geoghegan
Sara Geoghegan
EPIC Counsel
Ward 1 Resident

/s/ Ben Winters
Ben Winters
EPIC Counsel
Ward 3 Resident
[1] EPIC, About EPIC, https://epic.org/epic/about.html.
[2] EPIC, AI & Human Rights, https://epic.org/issues/ai/.
[3] EPIC, EPIC v. CBP (Analytical Framework for Intelligence), https://epic.org/foia/dhs/cbp/afi.
[4] EPIC, EPIC v. DHS (FAST Program), https://epic.org/foia/dhs/fast.
[5] EPIC, EPIC v. DOJ (Criminal Justice Algorithms), https://epic.org/foia/doj/criminal-justice-algorithms.
[6] Federal Trade Commission, Trade Regulation Rule on Commercial Surveillance and Data Security, 87 Fed. Reg. 51273 (Aug. 22, 2022), https://www.ftc.gov/legal-library/browse/federal-register-notices/commercial-surveillance-data-security-rulemaking.
[7] See Letter from EPIC et al. to Michael Kratsios, Deputy U.S. Chief Technology Officer (July 4, 2018), https://epic.org/privacy/ai/OSTP-AI-Petition.pdf (“Unless the channels of public input are formally broadened and deepened substantially, the Select Committee will fail to understand and mitigate the risks of AI deployment.”).
[8] EPIC v. Nat’l Security Comm’n on Artificial Intelligence, No. 19-2906 (D.D.C. filed Sept. 27, 2019); EPIC, EPIC Challenges Closed Door Meetings of US AI Commission (Sept. 27, 2019), https://epic.org/2019/09/epic-challenges-closed-door-me.html.
[9] Comments of EPIC, Request for Comment on Study to Advance a More Productive Tech Economy by National Institute of Standards and Technology, 86 Fed. Reg. 66287 (Feb. 15, 2022), https://epic.org/documents/epic-comments-to-nist-on-advancing-a-more-productive-tech-economy/; Comments of EPIC, Request for Feedback on the Artificial Intelligence Risk Management Framework: Second Draft, National Institute of Standards and Technology (Sept. 28, 2022), https://epic.org/wp-content/uploads/2022/09/EPIC-Comments-NIST-RMF-09-28-22.pdf; Comments of EPIC, On Proposed Rulemaking Under the Colorado Privacy Act of 2021, Colorado Attorney General (Aug. 5, 2022), https://epic.org/documents/epic-comments-on-colorado-privacy-act-rulemaking/; Comments of EPIC and Three Organizations, On Proposed Rulemaking Under the California Privacy Rights Act of 2020, California Privacy Protection Agency (Nov. 8, 2021), https://epic.org/documents/comments-of-epic-and-three-organizations-on-regulations-under-the-california-privacy-rights-act-of-2020/.
[10] EPIC, Liberty at Risk (2020), https://epic.org/documents/liberty-at-risk/.
[11] District of Columbia Housing Authority, 2019 Oversight and Performance Hearing at 28, https://dcha.us/img/guest_uploads/temp_Uf9tOu36yq1550855713Q8mBF4DZk9upGMGzt6LI.pdf.
[12] EPIC, EPIC Spotlights Pondera’s Fraud Detection Algorithms for Public Benefits (July 5, 2022), https://epic.org/epic-spotlights-ponderas-fraud-detection-algorithms-for-public-benefits/.
[13] Pew Research Center, Public Attitudes Toward Computer Algorithms (Nov. 2018), https://www.pewresearch.org/internet/2018/11/16/public-attitudes-toward-computer-algorithms/.
[14] Title VI, 42 U.S.C. § 2000d et seq.; 42 U.S.C. § 3604; 29 U.S.C. §§ 621–634.
[15] For suggestions on what should be included in an Algorithmic Impact Assessment, EPIC recommends following the guidance in Assembling Accountability. Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Claire Elish, Jacob Metcalf, Assembling Accountability: Algorithmic Impact Assessment for the Public Interest, Data & Society (2021), https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf.
[16] Jacob Ladd, Discrimination by Proxy: How AI Uses Big Data to Discriminate, Michigan Technology Law Review (Apr. 12, 2022), http://mttlr.org/2022/04/discrimination-by-proxy-how-ai-uses-big-data-to-discriminate/.
[17] Complaint of EPIC, In re Airbnb (Feb. 26, 2020), https://epic.org/wp-content/uploads/privacy/ftc/airbnb/EPIC_FTC_Airbnb_Complaint_Feb2020.pdf.
[18] Complaint of EPIC, In re Online Test Proctoring Companies (Dec. 9, 2020), https://epic.org/wp-content/uploads/privacy/dccppa/online-test-proctoring/EPIC-complaint-in-re-online-test-proctoring-companies-12-09-20.pdf.
[19] Laura Bergmans, Nacir Bouali, Marloes Luttikhuis and Arend Rensink, On the Efficacy of Online Proctoring using Proctorio, 1 Proceedings of the 13th International Conference on Computer Supported Education (CSEDU 2021) 279-90 (2021), https://ris.utwente.nl/ws/portalfiles/portal/275927505/3e2a9e5b2fad237a3d35f36fa2c5f44552f2.pdf.
[20] Todd Feathers, Proctorio Is Using Racist Algorithms to Detect Faces, Vice (Apr. 8, 2021), https://www.vice.com/en/article/g5gxg3/proctorio-is-using-racist-algorithms-to-detect-faces.
[21] Lydia X. Z. Brown, How Automated Test Proctoring Software Discriminates Against Disabled Students (Nov. 16, 2020), https://cdt.org/insights/how-automated-test-proctoring-software-discriminates-against-disabled-students/.
[22] Complaint of EPIC, In re HireVue (Nov. 6, 2019), https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf.
[23] Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).