EPIC Comments: DOJ Chatbot Market Survey
COMMENTS OF THE ELECTRONIC PRIVACY INFORMATION CENTER
to the Department of Justice
Criminal Justice Chatbot Market Survey
87 FR 9643 — April 8, 2022
____________________________________________________________
The Electronic Privacy Information Center (EPIC) submits these comments in response to the Department of Justice’s (“DOJ”) Request for Information regarding its Criminal Justice Chatbot Market Survey (“Chatbot Survey”).[1]
EPIC is a public interest research center in Washington, D.C. EPIC was established in 1994 to focus public attention on emerging privacy and related human rights issues, and to protect privacy, the First Amendment, and constitutional values.
EPIC has litigated cases against the DOJ and other federal agencies to compel production of documents regarding “evidence-based risk assessment tools.”[2] EPIC has also uncovered substantial information about risk assessment tools used by state and local agencies around the U.S.,[3] and urges government actors to act now to regulate these technologies, particularly when they are used in sensitive contexts.[4]
EPIC’s guidance is aligned with the Universal Guidelines for Artificial Intelligence, a framework for AI governance based on the protection of human rights, which were set out at the 2018 Public Voice meeting in Brussels, Belgium.[5] The Universal Guidelines for AI have been endorsed by more than 250 experts and 60 organizations in 40 countries.[6] The UGAI comprise twelve principles:
- Right to Transparency.
- Right to Human Determination.
- Identification Obligation.
- Fairness Obligation.
- Assessment and Accountability Obligation.
- Accuracy, Reliability, and Validity Obligations.
- Data Quality Obligation.
- Public Safety Obligation.
- Cybersecurity Obligation.
- Prohibition on Secret Profiling.
- Prohibition on Unitary Scoring.
- Termination Obligation.[7]
Among the key principles, the UGAI state: “All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome” (Right to Transparency); “Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions” (Fairness Obligation); “An AI system should be used only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system” (Assessment and Accountability Obligation); “Institutions must ensure the accuracy, reliability, and validity of decisions” (Accuracy, Reliability, and Validity Obligations); and “Institutions must establish data provenance and assure quality and relevance for the data input into algorithms” (Data Quality Obligation). Although some chatbots may not be considered “AI,” the DOJ can and should still consider these principles when evaluating, recommending, or adopting any chatbots.
The DOJ should carefully consider whether grant programs for chatbots throughout the criminal justice system are appropriate, particularly in the absence of robust data governance rules and transparency requirements.
Regardless of whether the DOJ ultimately facilitates grant programs for chatbot adoption, its market survey will reflect the values of the agency and communicate accepted standards. In these comments, EPIC recommends that the DOJ investigate and highlight chatbot products that uphold strict data collection, minimization, and use policies. EPIC also urges the DOJ to investigate and highlight the very limited utility of chatbots, the potential dangers of overreliance, and the collateral consequences of widespread adoption. Context is essential for this market survey: the DOJ should present any products it highlights alongside information about how they are used and the risks that use can create.[8]
- The Department of Justice Should Focus on and Establish Strict Requirements Regarding Data Collection, Minimization, and Use
The DOJ developed a strong framework for thinking about the use of chatbots in its National Institute of Justice report, Chatbots in the Criminal Justice System.[9] Implementing the findings of that report, however, requires more than a simple checklist of questions to ask.[10] The DOJ should develop and implement strict requirements to limit the use of chatbots in the criminal justice system, protect privacy, and ensure the safety of individuals using chatbots. A set of simple rules enforced by regular auditing could go a long way toward mitigating the potential harms to individuals interacting with automated messaging systems in the particularly sensitive context of the justice system.
The NIJ Chatbot Report helpfully identifies four main categories of chatbots in the justice system: Law Enforcement Recruitment and Investigations, Court System Awareness and Access, Corrections and Community Supervision, and Victim Services and Support.[11] As discussed below, these baseline rules are not enough to govern the use of investigative chatbots, which should not be developed or implemented at all. However, for chatbots that provide information to individuals interacting with the justice system, or those that provide basic “check in” monitoring services, the following rules should be implemented.
Baseline Rules for Chatbots
- Data Collection: chatbots should only collect the data necessary to provide the user with service;
- Data Retention: chatbots should retain personal information only for the span of time required to provide the service (see the illustrative sketch following this list);
- Use Limitation: information disclosed to chatbots or inferences drawn from that information should only be used for the express purpose of the chatbot;
- Auditing: chatbots should be regularly audited for 1) effectiveness and 2) compliance with privacy rules.
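To make the retention rule concrete, the following is a minimal sketch of how a chatbot back end might enforce it automatically. It assumes a hypothetical SQLite table named conversations with a created_at timestamp, and the thirty-day window is an illustrative placeholder rather than a recommended value; the returned row count exists only so each purge can be logged for the auditing rule above.

```python
# Illustrative sketch only: automatically enforcing a fixed retention window.
# The table, column, and 30-day window are hypothetical placeholders.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed policy value; set to the minimum the service needs

def purge_expired_conversations(db_path: str) -> int:
    """Delete conversation records older than the retention window.

    Returns the number of rows removed so each purge can be logged for audits.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            "DELETE FROM conversations WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cursor.rowcount
```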
These recommended rules dovetail with the Department of Homeland Security’s Fair Information Practice Principles (FIPPs), providing a good set of guidelines for evaluating chatbot systems.[12] The FIPPs include Transparency, Individual Participation, Purpose Specification, Data Minimization, Use Limitation, Data Quality and Integrity, Security, and Accountability and Auditing.
The National Institute of Justice recognized that “chatbots should collect as little personal information as possible.”[13] Restricting data collection is necessary to prevent abuses of chatbot systems and reduce the risk of mission creep for agencies administering chatbots. Individuals interacting with chatbots designed to provide information, support, or community supervision will often be in vulnerable positions. Strict data retention policies work in tandem with limiting data collection to prevent wrongful uses of personal data.
Limiting data collection and retention and designing chatbots to protect vulnerable individuals can prevent harm. For example, domestic violence experts recommend that websites providing resources to victims include “quick escape” buttons.[14] For criminal justice chatbots, designers should avoid using cookies and should not allow chatbots to recognize repeat visitors by IP address, so that victims of domestic violence are not accidentally outed by such systems. Design choices like forgoing cookies and not saving IP addresses comply with data minimization and data retention principles by avoiding the collection or retention of data that is not strictly necessary to provide the service. Even if such policies are slightly less convenient for users, the privacy benefits vastly outweigh the inconvenience.
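As an illustration of the design choices described above, the following minimal sketch uses only Python’s standard library; the endpoint, message format, and reply text are hypothetical and do not describe any existing product. The handler never sets a cookie, and it overrides the default request logging so that visitors’ IP addresses are never written to the server log.

```python
# Hypothetical sketch of a chatbot endpoint built around data minimization:
# no cookies are set and client IP addresses are never logged.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MinimalChatHandler(BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        # The default implementation prints the client address with every
        # request; suppressing it means repeat visitors cannot later be
        # identified from server logs.
        pass

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        user_message = payload.get("message", "")
        # A real system would route user_message through its published
        # decision logic; this fixed reply is only a stand-in.
        body = json.dumps(
            {"reply": f"Received {len(user_message)} characters; general court information would go here."}
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        # Deliberately absent: any Set-Cookie header, so the service cannot
        # recognize a returning visitor.
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind locally; TLS, rate limiting, and real dialogue logic are out of scope.
    HTTPServer(("127.0.0.1", 8080), MinimalChatHandler).serve_forever()
```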
Use limitation policies are equally necessary to minimize abuse and prevent either the government or third-party providers from extracting new benefits from personal information that individuals disclosed solely to obtain information or a service. Even when chatbot systems clearly disclose how individuals’ data will be used, those individuals lack the opportunity for meaningful consent. Court system and victim services chatbots may be the only expedient means for individuals to get information about court rules, the progress of their case, or their legal obligations. A choice between an hours-long wait to speak to a person and an instantaneous answer from a chatbot is no choice at all. And individuals using probation or parole chatbots lack any capacity to consent. Enforcing use limitations to prevent further investigations based on information disclosed to chatbots, and to prevent third-party companies from using or selling personal information, is necessary to preserve confidence in these systems and prevent harms to individuals.
The recent revelation that Crisis Text Line was exploiting user data to develop an algorithm for a for-profit company illustrates why use limitation is necessary to preserve confidence in chatbot systems and keep these systems accessible and useful. Crisis Text Line is a suicide hotline texting service that amassed a large database of users reaching out for support and responding to automated prompts in a chatbot-like feature.[15] The non-profit, used by millions every year, then licensed that data to a for-profit company building a service to coach customer service representatives on de-escalation techniques. By disclosing sensitive user data to a for-profit company, Crisis Text Line undermined users’ trust and created the risk that individuals contemplating suicide would decline to reach out for help.
The DOJ should carefully evaluate products in the criminal justice system for similar risks and, where the possibility of data abuse exists, minimize that risk by rigorously enforcing use limitation policies. The DOJ should also be highly skeptical of developing chatbot systems for use in the criminal justice system with companies that also provide commercial products to the private sector, as the risk of misuse of sensitive personal data will be much higher.
The Department should not endorse or provide funds to support the use of chatbots for investigative purposes. The NIJ Chatbot Report notes that in “New York, Los Angeles, Chicago, and Boston, law enforcement agencies have used chatbots in ‘stings’ in which the chatbots pose as minors offering commercial sex services as a campaign to identify buyers and combat sex trafficking.”[16] Unlike in the other contexts, individuals interacting with these chatbots have no opportunity to provide consent and are unaware that they are interacting with an algorithm. Chatbots may magnify the risks of entrapment inherent in undercover operations by optimizing conversations to induce illegal activities. These systems may also magnify the potential for biased policing by presenting police departments with a large selection of cases to pursue. Allowing officers to select among a large number of substantially identical cases with diverse perpetrators runs the risk that officers will selectively enforce laws against poor and minority communities. Given the risks to privacy and civil liberties, the DOJ should simply opt not to pursue or support the use of chatbots for investigative purposes.
- The Department of Justice Should Require Transparency and Robust Oversight Mechanisms for Chatbot Products
The DOJ should not recommend, highlight, or give grants allowing purchases from companies that shield vital information about contracted systems behind claims of trade secret or other commercial confidentiality protections.
Many government contractors, including in the criminal legal system, will only agree to contracts that require the agencies using their products to protect their commercial interests.[17] Many of these contracts are also not proactively disclosed in procurement registries.[18] The DOJ should make clear that commercial protections shielding from public records disclosure any documents related to the logic or data practices of a chatbot product used in the criminal justice system are inappropriate and unacceptable.
Chatbots used in the criminal justice system will handle extremely sensitive data and affect people’s livelihoods and liberty. These systems will be venues for the disclosure of sensitive information, and people will interact with them at some of their most vulnerable moments. As such, the DOJ must evaluate and hold these contractors to standards appropriate for the stakes, as EPIC details in part in Section I.
The DOJ should create a replicable regime of transparency, audits, impact assessments, approval, licensure, registration, and recertification to maximize accountability. Any agency deploying a chatbot should be required to trace data collection, use, and retention across the full data lifecycle.
Transparency about which system a given government agency is using, along with understandable decision-tree logic explaining how the system works and its data collection, use, sharing, and retention policies, should be an absolute bare minimum requirement for any chatbot product the DOJ highlights or analyzes.
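To illustrate what publishing “understandable decision-tree logic” could look like in practice, here is a minimal, hypothetical sketch of a court-information chatbot whose entire branching logic is a small data structure an agency could disclose verbatim; every prompt and branch is invented for illustration and does not describe any real product.

```python
# Hypothetical, fully disclosable decision tree for a court-information chatbot.
DECISION_TREE = {
    "prompt": "What do you need help with? (hearing / fines / records)",
    "branches": {
        "hearing": {"prompt": "Hearing dates are listed on the public docket.", "branches": {}},
        "fines": {"prompt": "Fine balances can be checked with the clerk's office.", "branches": {}},
        "records": {"prompt": "Record requests go through the records division.", "branches": {}},
    },
}

def respond(node: dict, user_input: str) -> tuple[dict, str]:
    """Follow one branch of the published tree; unrecognized input repeats the prompt."""
    next_node = node["branches"].get(user_input.strip().lower(), node)
    return next_node, next_node["prompt"]

# Because the full tree is public, anyone can predict the reply to "hearing".
node, reply = respond(DECISION_TREE, "Hearing")
print(reply)  # -> Hearing dates are listed on the public docket.
```

Because every possible response is visible in the published structure, an agency, an auditor, or an affected individual can verify exactly what the system will say without access to proprietary code.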
For any tool processing personal information, both audits and impact assessments can help improve accountability, with the critical caveat that weak mechanisms, or mechanisms without sufficient consequences or enforcement, can reduce oversight to a box-checking exercise. EPIC recommends the DOJ use the following resources to guide the development of an algorithmic impact assessment procedure:
- Kate Crawford, Dillon Reisman, Jason Schultz, & Meredith Whittaker, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute (2018).
- Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, & Madeleine Clare Elish, Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts, FAccT (Mar. 3, 2021).
- Ada Lovelace Institute, AI Now Institute, & Open Government Partnership, Algorithmic Accountability for the Public Sector (2021).
Independence, transparency, and regularity are key to any of these oversight mechanisms, and it is critical that incomplete compliance carry operational consequences.
Conclusion
EPIC recommends that the DOJ complete the market survey on chatbots throughout the criminal justice system with a focus on where they are appropriate, while recognizing the risks of increased data collection in sensitive contexts, of misleading users, and of exacerbating the digital divide.
Respectfully Submitted,
Ben Winters, EPIC Counsel
Jake Wiener, EPIC Law Fellow
[1] Criminal Justice Chatbot Market Survey, Justice Programs Office, 87 FR 9643 (Feb. 22, 2022), https://www.federalregister.gov/documents/2022/02/22/2022-03620/criminal-justice-chatbot-market-survey.
[2] EPIC, EPIC v. DOJ (Criminal Justice Algorithms), https://epic.org/foia/doj/criminal-justice-algorithms/.
[3] Add cite
[4] Add cite
[5] Universal Guidelines for Artificial Intelligence, The Public Voice (2018), https://thepublicvoice.org/AI-universal-guidelines/.
[6] Universal Guidelines for Artificial Intelligence: Endorsement, The Public Voice (2020), https://thepublicvoice.org/AI-universal-guidelines/endorsement/.
[7] Universal Guidelines, supra note 5.
[8] See generally Todd Feathers, Payday Lenders Are Big Winners in Utah’s Chatroom Justice Program, The Markup (Mar. 16, 2022), https://themarkup.org/remote-justice/2022/03/16/payday-lenders-are-big-winners-in-utahs-chatroom-justice-program.
[9] Steven Schuetz, Jeri D. Ropero-Miller, & Jim Redden, Chatbots in the Criminal Justice System, National Institute of Justice (Oct. 2021), https://cjtec.org/files/chatbots-criminal-justice (hereinafter NIJ Chatbot Report).
[10] Id. at 14. While a checklist may be helpful for identifying risks from a chatbot, simply requiring consideration of risks is not enough to prevent the use of harmful systems.
[11] Id. at 4.
[12] Hugo Teufel III, The Fair Information Practice Principles: Framework for Privacy Policy at the Department of Homeland Security, Mem. No. 2008-01, Dept. of Homeland Sec. (Dec. 29, 2008), https://www.dhs.gov/sites/default/files/publications/privacy-policy-guidance-memorandum-2008-01.pdf.
[13] NIJ Chatbot Report at 10.
[14] See Exit From This Website Quickly, Tech Safety (last accessed Apr. 7, 2022), https://www.techsafety.org/exit-from-this-website-quickly.
[15] Keith Porcaro, The Real Harm of Crisis Text Line’s Data Sharing, Wired (Feb. 1, 2022), https://www.wired.com/story/consumer-protections-data-services-care/; Jasmine Hicks and Richard Lawler, Crisis Text Line stops sharing conversation data with AI company, The Verge (Feb. 1, 2022), https://www.theverge.com/2022/1/31/22906979/crisis-text-line-loris-ai-epic-privacy-mental-health.
[16] NIJ Chatbot Report at 4.
[17] EPIC Amicus Brief, Citizens for Responsibility and Ethics in Washington (CREW) v. Department of Justice, No. 21-5276 (D.C. Cir.), available at https://epic.org/documents/citizens-for-responsibility-and-ethics-in-washington-crew-v-department-of-justice/.
[18] EPIC, Liberty at Risk, supra note 2.