Testimony on MA H64/S33 re: establishing a commission on automated decision-making by government in the Commonwealth
July 21, 2023
Dear Chairs Farley-Bouvier and Moore,
EPIC writes in support of House Bill 64 and Senate Bill 33, An Act establishing a commission on automated decision-making by government in the Commonwealth. The core problem with AI today is that it is being used broadly across society to replace human decision-making, with little to no requirement that these systems be tested for accuracy, effectiveness, or bias. That gap causes real, tangible harms, such as the loss of life opportunities like jobs or housing. When the government uses AI to make decisions about individuals, those harms are magnified. This legislation is needed to catalogue the AI systems the state is using and to craft best practices for those uses based on expert advice.
The Electronic Privacy Information Center (EPIC) is a public interest research center established in 1994 to focus public attention on emerging privacy and civil liberties issues and secure the fundamental right to privacy in the digital age for all people.[1] EPIC has promoted algorithmic transparency for many years and has litigated several cases on the frontlines of AI in the federal government.[2] EPIC successfully sued U.S. Customs and Border Protection for documents relating to its use of secret analytic tools to assign “risk assessments” to U.S. travelers.[3] In EPIC v. DHS, EPIC sought to compel the Department of Homeland Security to produce documents related to a program that assesses “physiological and behavioral signals” to determine the probability that an individual might commit a crime.[4] EPIC also sued the Department of Justice to produce documents concerning the use of “evidence-based risk assessment tools,” algorithms that try to predict recidivism, in all stages of sentencing.[5]
The Need for Algorithmic Transparency
Artificial intelligence is currently used by states to determine bail and criminal sentences, evaluate public employees, and determine government benefit eligibility.[6] Bias and discrimination are often embedded in these systems, yet there is no accountability for their impact. Criminal justice algorithms—sometimes called “risk assessments” or “evidence-based methods”—are controversial tools that purport to predict future behavior by defendants and incarcerated persons.[7] These proprietary techniques are used to set bail, determine sentences, and even contribute to determinations about guilt or innocence. Yet the inner workings of these techniques are largely hidden from public view.
Many “risk assessment” algorithms consider personal characteristics such as age, sex, geography, family background, and employment status. As a result, two people accused of the same crime may receive sharply different bail or sentencing outcomes based on inputs beyond their control—but have no way of assessing or challenging the results.[8]

Criminal justice algorithms are used across the country, but the specific tools differ by state or even county. In addition, because such algorithms are proprietary, they are not subject to state or federal open government laws.
All individuals should have the right to know the basis of an automated decision that concerns them. And there must be independent accountability for automated decisions.
Without knowledge of the factors that provide the basis for decisions, it is impossible to know whether the government engages in practices that are deceptive, discriminatory, or unethical. The Pew Research Center found in a 2018 survey that most Americans oppose algorithms making decisions with consequences for humans, and that 58% think algorithms reflect human bias.[9] Without transparency about which systems are used throughout the Commonwealth and how they are used, transparency that the commission created by H64/S33 can provide, the problems of bias, fairness, and due process will remain intractable. This legislation is needed so the Commonwealth and its citizens can understand how state agencies are using artificial intelligence to make determinations about people.
EPIC’s Report on the DC Government’s Use of AI
Last fall, EPIC released a report on the use of AI by D.C. government agencies. EPIC spent 14 months investigating the D.C. government’s use of automated decision-making systems. Fourteen months just to find out what systems the government was using to screen and score its citizens. Through Freedom of Information Act requests, we discovered that D.C. agencies were outsourcing critical government decisions to a wide range of AI systems, including third-party systems like RentGrow, used for public housing tenant screening reports, and Pondera, used for investigations into potential benefits fraud. As we said in our report:
The public does not have sufficient access to these systems to understand whether they are producing high-quality, accurate, and fair decisions. What little transparency we have does not paint a pretty picture. Overburdened agencies turn to tech in the hope that it can make difficult political and administrative decisions for them. Agencies claim ADM systems are necessary to efficiently decide who gets access to limited resources. At the same time, agencies ignore and downplay political decisions that reduce the amount of resources or create scarcity in the first place, like tax breaks to big businesses or higher hurdles for benefits recipients. Most agencies do not have the time, expertise, or incentives to conduct meaningful oversight. Agencies and tech companies block audits of their ADM tools because companies claim that allowing the public to scrutinize the tools would hurt their competitive position or lead to harmful consequences. As a result, few people know how, when, or even whether they have been subjected to automated decision-making.[10]
These tools are frequently given deference and accorded an air of objectivity, but they often reinforce the bias and discrimination present in the data the algorithms “learn” from and in the contexts where the tools are used.
From a fiscal responsibility perspective, millions of taxpayer dollars are being spent on systems that are unproven, inaccurate, and often simply do not work. That is a waste of public funds.
Similar State Efforts
The dangers of artificial intelligence used by governments are a policy challenge that every state, county, and city is grappling with. Other states have created commissions similar to the one proposed in H64/S33.[11]
New York City
The New York City Council created a task force in 2017 to study how the city uses AI and to provide recommendations on specific questions. In November 2019, the task force released its report.[12] In conjunction with the report’s release, Mayor de Blasio issued an Executive Order creating an “Algorithms Management and Policy Officer.”[13] An unofficial “shadow report” on the Task Force was also released.[14]
Vermont
In 2018, Vermont created a task force to study artificial intelligence and “make recommendations on the responsible growth of Vermont’s emerging technology.” The Vermont Task Force included government representatives, designees from trade associations in the field, and the ACLU’s executive director.[15] The law required a report summarizing the current use and development of AI in Vermont, along with proposals for defining AI, for state regulation of AI, and for the responsible and ethical development of artificial intelligence in the state.[16] In 2022, the state followed up with legislation creating a Division of Artificial Intelligence within the Agency of Digital Services to review all aspects of artificial intelligence developed, employed, or procured by state government. The law requires the Agency of Digital Services to conduct an inventory of all automated decision systems developed, employed, or procured by state government in Vermont.[17]
Alabama
In 2019, Alabama created a commission to “review and advise the Governor and the Legislature on all aspects of the growth of artificial intelligence and associated technology in the state.”[18] The Alabama Commission includes appointees of the Governor and Lieutenant Governor, as well as designees of the Secretaries of Commerce and Information Technology and of legislative leadership.[19]
Conclusion
Democratic governance is built on principles of procedural fairness and transparency, and accountability is key to decision-making. We must know the basis of decisions made by government, whether right or wrong. But as decisions are automated, and as organizations increasingly delegate decision-making to techniques they do not fully understand, processes become more opaque and less accountable. It is therefore imperative that algorithmic processes be open, provable, and accountable.
When the government uses AI to make decisions about people, it raises fundamental questions about accountability, due process, and fairness. Algorithms deny people educational opportunities, employment, housing, insurance, and credit.[20] Many of these decisions are entirely opaque, leaving individuals to wonder whether the decisions were accurate, fair, or even about them.
We do recognize the value of AI techniques for a wide range of government programs. But government activities that involve the processing of personal data trigger specific legal obligations, and the use of new techniques will raise new challenges that the Commission established under H64/S33 should explore.
Passage of H64/S33 will allow the Legislature and the citizens of the Commonwealth to understand how state agencies are using automated decision-making. It is a crucial first step toward ensuring that accountability, transparency, public input, privacy, fairness, education, and due process remain at the forefront of the rapid adoption of new AI technologies.
Sincerely,
Caitriona Fitzgerald
EPIC Deputy Director
Attachment:
EPIC, Screened and Scored in the District of Columbia (Nov. 2022)
[1] EPIC, About EPIC, https://epic.org/epic/about/.
[2] EPIC, Algorithmic Transparency, https://epic.org/algorithmic-transparency.
[3] EPIC, EPIC v. CBP (Analytical Framework for Intelligence), https://epic.org/foia/dhs/cbp/afi.
[4] EPIC, EPIC v. DHS- FAST Program, https://epic.org/foia/dhs/fast.
[5] EPIC, EPIC v. DOJ (Criminal Justice Algorithms), https://epic.org/foia/doj/criminal-justice-algorithms.
[6] Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).
[7] Danielle Citron, (Un)Fairness of Risk Scores in Criminal Sentencing, Forbes (July 13, 2016), https://www.forbes.com/sites/daniellecitron/2016/07/13/unfairness-of-risk-scores-in-criminal-sentencing/.
[8] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[9] Pew Research Center, Public Attitudes Toward Computer Algorithms (Nov. 2018), http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/.
[10] EPIC, Screened & Scored in the District of Columbia 5 (Nov. 2022), https://epic.org/wp-content/uploads/2022/11/EPIC-Screened-in-DC-Report.pdf.
[11] See Caroline Kraczon, The State of State AI Policy (Aug. 2022), https://epic.org/the-state-of-ai/.
[12] N.Y. City, Automated Decision Systems Task Force Report (Nov. 2019), https://www.nyc.gov/assets/adstaskforce/downloads/pdf/ADS-Report-11192019.pdf.
[13] City of N.Y. Off. of the Mayor, Executive Order No. 50 Establishing an Algorithms Management and Policy Officer (Nov. 2019), https://www.nyc.gov/assets/home/downloads/pdf/executive-orders/2019/eo-50.pdf.
[14] AI Now, Confronting Black Boxes: A Shadow Report of the N.Y. City Automated Decision System Task Force (Dec. 2019), https://ainowinstitute.org/publication/confronting-black-boxes-a-shadow-report-of-the-new-york-city-automated.
[15] Vt. H.378 (May 21, 2018), https://legislature.vermont.gov/bill/status/2018/H.378.
[16] Vt. Artificial Intelligence Task Force, February 2019 Update Report (Feb. 15, 2019), https://legislature.vermont.gov/Documents/2020/WorkGroups/Senate%20Government%20Operations/Artificial%20Intelligence%20Task%20Force/W~Brian%20Breslend~Vermont%20Artificial%20Intelligence%20Task%20Force%20Feb%202019%20Update%20Report-1~2-8-2019.pdf.
[17] 2022 Vt. Act 132, available at https://legislature.vermont.gov/Documents/2022/Docs/ACTS/ACT132/ACT132%20As%20Enacted.pdf.
[18] Ala. SJR71 (May 15, 2019), http://alisondb.legislature.state.al.us/ALISON/SearchableInstruments/2019RS/PrintFiles/SJR71-int.pdf.
[19] Id. § b(1)-(6).
[20] Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).