APA Comments
Comments of EPIC to PCLOB AI in Counterterrorism July 2024
COMMENTS OF THE ELECTRONIC PRIVACY INFORMATION CENTER
to the
Privacy and Civil Liberties Oversight Board
Request for Public Comment on
The Role of Artificial Intelligence in Counterterrorism
July 1, 2024
The Electronic Privacy Information Center (“EPIC”) submits these comments in response to the 2024 Request for Public Comment on the Role of Artificial Intelligence in Counterterrorism released by the Privacy and Civil Liberties Oversight Board (“PCLOB”) on May 23, 2024.[1] EPIC firmly believes that certain technology—such as facial recognition technology—should not be used at all for surveillance. EPIC also firmly believes that any use of artificial intelligence technologies must be based on a robust framework of safeguards that are both present prior to use and effectively enforced. As the PCLOB engages in oversight of the Intelligence Community (“IC”), EPIC renews our call to protect privacy, civil rights, and civil liberties.
- Interest of EPIC
EPIC is a public interest research center in Washington, D.C., established in 1994 to focus public attention on emerging civil liberties issues and to secure the fundamental right to privacy in the digital age for all people through advocacy, research, and litigation.[2] EPIC has a particular interest in accountability, fairness, privacy, civil rights, and civil liberties in the context of surveillance, algorithmic technologies, and related law enforcement techniques.[3] Over the last decade, EPIC has also consistently advocated for the adoption of clear, commonsense, and actionable AI regulations and protections—including those relating to facial recognition technology.[4] EPIC has litigated cases against the U.S. Department of Justice to compel production of documents regarding “evidence-based risk assessment tools”[5] and against the U.S. Department of Homeland Security to produce documents about a program purported to assess the probability that an individual will commit a crime.[6] EPIC also has a long history of advocating for increased privacy protections for non-citizens and opposing the expansion of surveillance at the border.[7]
II. Summary of Argument
Artificial intelligence (“AI”) is a broad term that encompasses several classes of technologies. As a baseline measure, this comment defines AI as a system that “replac[es] or impact[s] human decision-making[,]” including both sophisticated models as well as simpler automated processes.[8] This includes specific technologies such as facial recognition systems, person based predictive policing tools, and emotional recognition systems, as well as automated systems like large language models (“LLMs”) that summarize broad swathes of data and automated translation tools.[9] The 2021 National Security Commission on Artificial Intelligence (“NSCAI”)’s final report only includes “technologies that solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action, and technologies that may learn and act autonomously, whether in the form of software agents or embodied robots” in its scope of AI. The NSCAI’s definition is too narrow and would leave out broad swathes of technology that carry many of the inherent risks of AI but do not necessarily imitate human behavior or require autonomous activity. For example, a police record screening tool that provides recommendations on next steps would not be covered under the NSCAI definition of AI because it requires an intelligence official to take [10] by but not required by the AI system.[11]
The technologies used by the IC greatly increase the risk of violating constitutional rights and Title VI of the Civil Rights Act, while seriously harming Americans. These harms must be taken seriously when balanced with the purported (and as of yet unsubstantiated) benefits of AI. However, the IC is already using these technologies and is a major source of funding for these projects. To ensure the strongest privacy, civil rights, and civil liberties protections are in place, the PCLOB must engage in strong harm reduction measures. Importantly, the PCLOB must engage with the IC regarding how it develops, procures, deploys, and operates AI technologies. Special attention must be paid not only to the efficacy of these systems, their oversight processes, and their technical safeguards, but also the training of personnel who will interact with these systems.
However, regardless of efficacy, oversight, and personnel training, AI systems come with structural flaws that undermine their accuracy, security, and reliability, such as data security vulnerabilities and gaps in output explainability. Furthermore, the counterterrorism and intelligence gathering context further exacerbate and accelerate these harms because of the history of oversight and accountability failures in the IC. The lack of transparency in AI systems will compound with the lack of transparency in the counterterrorism space to create a “double black box” of decision making. Finally, the current frameworks proposed by the IC are not sufficient to protect Americans’ interests and should be strengthened and aligned with the Federal Government’s other AI initiatives and regulations.[12]
III. AI Technologies are Overtly Harmful in their Own Right
Under the cover of silence, the IC was an early adopter—and funder—of several different types of AI, such as facial recognition technology and data screening, analytics, and surveillance technologies.[13] It is unclear, however, how these technologies are developed, what datasets they are trained on, how these technologies are deployed, and to what extent they are used. There are widely known development and training methods to create AI systems, but the IC routinely fails to disclose which methods it is using to create new technologies.[14] The IC notoriously withholds information about the technology it uses, leading even the Foreign Intelligence Surveillance Court (“FISC”) to reprimand it for failing to disclose information about noncompliance.[15] Unless oversight bodies know what methods are being used, the oversight bodies cannot meaningfully ensure that the IC practices apply sufficient risk management. Furthermore, the risks of a particular method cannot be known until it is researched in depth, so oversight bodies should have the technical capacity to review the development and training procedures used by the IC.
When there are so many questions about these technologies, the PCLOB must intervene and conduct a thorough review to bring transparency and clarity to the following issues. First, the reliability of AI systems relies heavily on the quality of their training data. The IC’s databases (which very likely are used as training data sources for IC AI systems) disproportionately target marginalized communities and contain flawed or biased data. AI systems typically work best when used in narrow, pre-defined contexts, but IC AI systems are often misused and misapplied to new, under-tested use contexts by AI deployers and IC operators, leading to additional, serious harms. Finally, regardless of precision, accuracy, and handling, the use of AI systems poses significant risks to rights and safety.[16] EPIC urges the PCLOB to thoroughly investigate (1) how the IC is designing, developing, procuring, and deploying AI; (2) what the AI systems are purporting to solve; (3) the efficacy of these systems; (4) how and where the IC is gathering the training data; (5) how the IC trains the operators of these systems; and (6) further questions enumerated below.
A. The PCLOB Must Engage in Thorough Oversight Regarding the Efficacy of New Technologies Used by the IC
AI systems trained on flawed datasets produce unreliable outputs. The NSCAI itself admits that AI “still has significant limitations”[17] and that it is “brittle when operating at the edges of [its] performance competencies[,]”[18] noting the significant issues generally found among AI systems with bias[19] and confidence thresholds.[20] Technologies like person-based predictive policing and emotion recognition systems additionally rest on shaky scientific ground, with few (if any) studies showing the efficacy of the system or providing any substantiation that the end goal (emotion recognition through face analysis) can even be achieved.[21]
AI systems are only as good as the data they are trained on, and facial recognition systems in particular are plagued by underrepresentation of marginalized communities leading to higher false positives rates for Black people and women in comparison to White men.[22] In particular, the NSC notes that when using biometric surveillance systems like facial recognition systems, “[the government] must exercise special caution in managing risks to bedrock constitutional principles including equal protection, due process, freedom from unreasonable searches and seizures, and freedoms of speech and assembly.”[23] Machine learning systems trained using reinforcement learning, a common subset of AI systems, are also unique in that the systems continually learn and evolve as new data points are introduced, which can inject further errors and bias into the system. Because of this, the NSC notes that AI systems require “more continuous testing and evaluation” than prior, non-AI technologies.[24]
The PCLOB must also ensure that the IC is complying with Title VI of the Civil Rights Act and Executive Order 12968 when developing, designing, and procuring AI systems. In 2021, the NSC spent almost a hundred pages in its AI report enumerating how the U.S. should invest heavily in AI research and technology, laying out a step-by-step plan for various sectors of the government.[25] It is imperative that this technology be provably non-discriminatory to ensure compliance with Title VI of the Civil Rights Act, which prohibits federal funding of programs and activities that discriminate based on “race, color, or national origin.”[26] Beyond Title VI, the Office of the Director of National Intelligence (“ODNI”) stressed the importance of non-discrimination in intelligence gathering pursuant to Executive Order 12968.[27] The Executive Order “prohibits an individual from being subject to greater scrutiny” based on race, color, religion, sex, national origin, disability, or sexual orientation.[28]
Intelligence databases that would likely be used to train predictive AI models are vastly skewed towards surveilling marginalized communities such as Muslim people or over-policing Black or low-income communities based on years of prejudiced arrest and sentencing data.[29] The PCLOB must ensure that the AI systems used by the IC are trained on datasets with accurate and representative information that is not skewed towards certain marginalized groups; however, this should not be read as encouraging further, extensive data collection. Instead, the IC should curate specific datasets from the existing data pools it has access to in order to mitigate the risk of discrimination, including directly countering historic biases that may be present in the data.
The PCLOB should investigate the following questions:
- How and to what extent is the IC engaging in the testing, evaluation, verification, and validation (“TEVV”)[30] of AI systems?
- When designing and developing AI systems (either in house or through a third-party vendor), how is TEVV implemented? Are there guidelines dictating how TEVV should be implemented?
- Does the IC engage in TEVV when procuring commercial AI systems? Are there guidelines dictating how TEVV should be implemented?
- Is TEVV done throughout the entire lifecycle of the AI system (including during design, development, procurement, and periodically during operational use)?
- What entity is doing the testing? Is it an independent, third-party auditor with appropriate security clearance and technical background to appropriately audit the AI system for the use case the IC intends to use it for?
- What transparency measures is the IC implementing for testing its AI systems? Is the IC complying with Privacy Impact Assessment (“PIA”) and other transparency obligations required under federal law?
- What safeguards are in place once the systems are in operational use?
- In funding new AI technology research and development, is the IC complying with Title VI of the Civil Rights Act?
- To what extent is the IC consulting with diverse groups to inform the development and implementation of new AI technology, including but not limited to technological experts, academics (both technical as well as ethical and legal), and civil society?
- To what extent is the IC prioritizing rushed implementation of new technologies over appropriate and necessary privacy and civil liberties safeguards?
- How does the IC decide to procure and/or develop AI systems?
- Does the IC start with a narrow, measurable problem that can only be solved by AI?
- Does the IC have written justification for using a new technology within its ecosystem?
- Is there a list of all AI systems currently in use, including a use case inventory?
- Is the IC complying with any and all applicable regulations and guidelines for procuring and using AI systems?
B. The PCLOB Should Ensure that the IC is Adequately Training All Personnel Who Operate, Deploy, Use, Interpret Outputs From, or Otherwise Interact With AI Systems in the IC Ecosystem
Intelligence officials must understand the limitations of the AI systems they are using to ensure that the data inputs and outputs of their systems are appropriate. When misused and abused, AI systems provide seemingly objective, but unreliable—and often harmful—outputs. High accuracy and precision rates when used as intended do not erase user error or risk. For example, facial recognition systems typically work by submitting a probe image and comparing that either to a known image (one-to-one) or to a database of images to find a match (one-to-many). The algorithm is trained on real human faces, and works best when using high quality, well-lit, front facing images of real human faces.[31] Even if the algorithm worked precisely and accurately and returned matching images 100% of the time in a training ecosystem, if an intelligence official submits a photo of a look-alike or a forensic sketch, the algorithm will return a match to the look-alike or the forensic sketch.[32] It will not give the intelligence official a match to the actual person they are looking for, [33] and intelligence officials should be trained to know exactly what an AI system output actually means.
Personnel must also be trained to understand what the outputs mean and what confidence levels can be attributed to the outputs. The NSCAI report recommends robust TEVV implementation to ensure that the intelligence officials can be confident in the outputs produced by the AI systems.[34] However, there will never be a predictive analysis AI system that is 100% accurate in its assessments. Intelligence officials need to understand in what ways the AI system is fallible—and in what use contexts the AI system is inappropriate—to ensure that IC personnel can independently verify information, correct for any skews in the data, and otherwise oversee the system’s deployment. Essentially, intelligence officials must be trained to treat AI system outputs as leads that then must be further investigated by humans—not definitive answers.
The PCLOB should investigate the following questions:
- How does the IC train personnel that operate, develop, deploy, use, interpret outputs from, or otherwise interact with AI systems in the IC ecosystem?
- Are personnel aware when they are interacting with an AI system, including but not limited to complex predictive policing tools, automated criminal risk assessments, and other simple AI systems such as a scoring or screening tools that analyze data for predictions or recommendations?
- Are the personnel trained to review and verify AI system outputs?
- What is the confidence level assigned to AI system outputs?
- How often are automated outputs overridden by human reviewers?
- Are humans meaningfully reviewing AI system outputs before a decision is made or are AI outputs the sole decision-maker in any circumstance?
- Are there audit logs for the use of AI systems?
- When developing AI systems, does the IC prioritize the technical feasibility and inclusion of audit logs?
- How often and/or in what circumstances are audit logs reviewed by supervisors for non-compliant use?
C. Artificial Intelligence Systems Pose Significant Risks Regardless of Accuracy and Handling
Even if the AI system is precise, accurate, and the operator used the AI system as intended without any errors, AI systems still pose inherent risks. For example, generative AI frequently produces hallucinations or confabulations—syntactically sophisticated but substantively inaccurate information—while carrying significant data security vulnerabilities.[35] And all AI systems create issues for liability apportionment. The PCLOB must ensure that, as these technologies are being developed and used, the IC is addressing AI risks and impacts by creating policies for liability, training personnel using the systems on the limitations of the technology, and crafting airtight security measures.
First, AI systems and their underlying datasets are uniquely vulnerable to cyber-attacks due to the way that AI systems function. Datasets can be poisoned with strategically unsound data, especially if the algorithm is trained on data scraped from the internet.[36] Furthermore, some commercial models are open-source or based on open-source code, and cybersecurity experts have noted the possibility of bad actors leveraging the publicly available information to engage in more informed attacks on secure systems.[37] Finally, bad actors can gain access to the training data or alter the function of the AI system through prompt injection and other forms of adversarial attacks.[38] The PCLOB should ensure that the IC is engaging in rigorous defensive strategies to protect against cyberattacks without undermining individuals’ rights or safety—in line with NIST’s AI Risk Management Framework.[39]
Second, generative AI systems like large-language models are not programmed to create truthful or accurate outputs; instead, they act like stochastic parrots, predicting common phrase combinations without ensuring their veracity.[40] Accordingly, generative AI often “hallucinates,” creating factually incorrect outputs that appear accurate upon first glance.[41] In the intelligence context, such false data could prove disastrous. The NSC itself stated that AI outputs could “increase the risk of military escalation” and implicate grave ethical questions when outputs are used in decisions to employ lethal force.[42] The PCLOB must ensure that the IC has policies in place to, as the NSC put it, know “how much confidence in the machine is enough confidence[.]”[43] The IC must train its personnel to have “an informed understanding of risks, opportunities, and tradeoffs” of AI technologies, as well as the possibility of AI hallucinations and other limitations.[44] The IC must also explicitly prohibit autonomous decision making human intervention, particularly in syof the decisions lead to criminal sentencing and/or lethal action.
Lastly, because of the lack of explainability as to how AI systems create outputs, it is difficult to apportion liability and hold all parties accountable for adverse determinations.[45] The NSC AI report stated that it intends to use AI to ”complement, not supplant the role of humans”[46] and that the IC should continue to ensure the ”centrality of human judgement.” Therefore, the IC should have clear policies on what natural person is liable for the decisions made based on the outputs of AI systems. In the event of adverse outcomes, the policy should include steps for conducting a general audit of the system to assess whether the system was working as intended as well as an audit of the liable natural person’s use of the AI system to assess whether the individual used the system appropriately and correctly.
The PCLOB should investigate the following questions:
- How is liability apportioned when an AI system is a part of a decision making process?
- Can it be determined whether an AI system was used as a part of a decision?
- What is the confidence threshold for AI system outputs?
- How are confidence thresholds calculated?
- Are natural persons assigned liability when AI systems are used in the process of making a decision?
- When developing AI systems, how is the IC measuring accuracy and “successful” outputs?
- What cybersecurity protections are in place to protect the vast datasets used to train these AI systems, both during the development and training process as well as during operational use?
- Is the IC complying with intra-government data sharing protocols?
IV. The Use of AI for National Security Requires Strict Standards, Accountability and Oversight
The IC has a history of oversight failures. These failures not only include moving too slowly to establish proper policies and oversight of the use of new technologies or information but also include lack of adequate oversight of established programs. This history suggests that current uses of AI within the IC probably lack adequate oversight and that future uses will too. AI can amplify risks to privacy, civil liberties, and civil rights while at the same time making it harder to mitigate these risks. Although the IC has released a set of AI principles and framework, these will not provide meaningful regulation of the IC’s use of AI in their current form.
- The IC Has a History of Oversight and Accountability Failures
The IC’s past failures of oversight and accountability necessitate strict standards and correspondingly meaningful and thorough oversight and accountability structure for the use of AI. Too often, the IC fails to comply with their own policies; interprets regulations, court opinions, and the Constitution in ways that undermine the protections provided by law; or simply fails to address the implications of new technology in a timely manner.
The IC’s purchasing of commercially available information (“CAI”) is instructive. Last year, after an EPIC Freedom of Information Act request, ODNI released a partially declassified report on the IC’s purchasing of CAI.[47] For years the IC has been purchasing CAI, and the report makes clear ODNI failed to properly oversee the growing use of CAI by the IC. The report found that there were no overarching rules in place for the purchase and use of CAI and ODNI lacked awareness of some of the most basic aspects of this practice, including how much CAI was being collected, what types of information were being purchased, or what was being done with that information.[48]
Particularly disturbing was the report’s finding that the IC lacked a common, IC-wide interpretation of Carpenter v. U.S., which requires a warrant for persistent location information.[49] Location information, among other data, is some of the sensitive information purchased by the IC. Predictably, some elements of the IC have taken a very narrow view of Carpenter, interpreting the case to not require a warrant for their purchasing or use of location data—or any other CAI for that matter.[50]
Even where specific oversight and rules are in place, the IC often fails to comply with these rules. Section 702 of the Foreign Intelligence Surveillance Act has become synonymous with abuse and lack of compliance. Despite Section 702 being an established program with established rules, the IC has consistently abused Section 702 authority and failed to comply with the rules.[51] The FBI has been particularly problematic with the Foreign Intelligence Surveillance Court stating that the “compliance problems with the FBI’s querying of Section 702 information have proven to be persistent and widespread.”[52] Based on these patterns, as the use of AI expands within the IC, the risks to privacy, civil liberties, and civil rights will increase.
- The IC is Already Using AI, and Its Continued Expansion Will Only Exacerbate Failures of Oversight and Accountability
The use of AI by the IC is not new. The SKYNET program uses a machine learning algorithm to analyze the cellular network metadata of millions of people using Pakistan’s mobile phone network to identify terrorists.[53] A data scientist that reviewed what was publicly available about SKYNET called the algorithm “ridiculously optimistic” and suggested that the way the NSA is training the algorithm “makes the results scientifically unsound.”[54] It is not clear whether SKYNET assisted in human reason and judgment or replaced it or whether innocent people were killed based on SKYNET’s analysis. Regardless, we do not have to speculate on whether an AI targeting system replacing human reason and judgment and making life and death decisions about who to target for killing exists —it does. The Israeli army has developed an AI targeting system to identify targets for assassination.[55]
Another program that likely uses AI is XKEYSCORE. XKEYSCORE is a processing and analysis tool used by the NSA.[56] The tool is used to analyze massive datasets of information collected by the NSA. Undoubtedly there are other programs within the IC that use AI. SKYNET and XKEYSCORE raise serious questions about the use of AI, not the least of which is whether AI should be used for the given application—particularly with SKYNET. In the case of XKEYSCORE, there is concern over the extent to which machine analysis of U.S. personal information triggers Fourth Amendment scrutiny, among other issues.
The use of AI by the IC will likely create a “double black box” of national security secrecy and opaque algorithms.[57] Such a scenario will only compound the problem of oversight and accountability. Opaque AI systems make oversight harder. Nonetheless, they often bring the veneer of legitimacy to decisions that are biased or problematic in other ways. Additionally, AI systems can create an accountability gap where it is not clear who is responsible for the failings of an AI system.[58] All of the above is particularly true when the AI system is a commercial product. A strong oversight structure could mitigate many of the risks of the IC’s use of AI—unfortunately, if the IC AI principles and framework are any indication, then oversight structure is lacking for AI.
- The Current IC AI Framework and Principles Lack the Specificity to Meaningfully Regulate the Use of IC for National Security Purposes
The Principles of AI Ethics for the IC (hereinafter “Principles”)[59] is only good policy in theory —practically, it lacks clear standards and enforcement mechanisms to ensure that the principles are followed in a meaningful way. The corresponding AI Ethics Framework (hereinafter “Framework“)[60] does little to resolve the issues with the Principles—it merely provides some guidance about how AI should be overseen and offers questions to consider regarding the use of AI. The Principles are, according to the document, something the IC is committed to implementing, but they are generalized and thus can too easily be bent to accommodate whatever the IC decides to do with AI. The Framework provides some more specifics, but lacks teeth, containing only guidance and considerations that elements of the IC can apply at their discretion. Both the Principles and Framework need to be more robust to be meaningful. To ensure that the Principles are followed even when there are conflicts between the Principles and furthering the IC’s missions, there must be meaningful enforcement mechanisms in place.
V. Conclusion
EPIC looks forward to engaging with the PCLOB further on these urgent issues, and we stand by to assist your agencies however we can.
Respectfully submitted,
Jeramie Scott
Jeramie Scott
EPIC Senior Counsel
Director of the EPIC’s Project on Surveillance Oversight
Maria Villegas Bravo
Maria Villegas Bravo
EPIC Law Fellow
APPENDIX A
For a more detailed analysis of these issues, see:
- Comments of EPIC, Solicitation of Written Comments by the National Security Commission on Artificial Intelligence, 85 Fed. Reg. 32,055, National Security Commission on Artificial Intelligence (Sept. 30, 2020), https://epic.org/apa/comments/EPIC-comments-to-NSCAI-093020.pdf.
- Comments of EPIC to DOJ and DHS on Law Enforcement’s Use of FRT, Biometric, and Predictive Algorithms (Jan. 19, 2023), https://epic.org/wp-content/uploads/2024/01/EPIC-DOJDHS-Comment-LE-Tech-011924.pdf; see also Maria Villegas Bravo, Overview of EPIC’s Comments to DOJ and DHS on the use of facial recognition, other technologies using biometric information, and predictive algorithms, EPIC(Mar. 8, 2024), https://epic.org/overview-of-epics-comments-to-doj-and-dhs-on-the-use-of-facial-recognition-other-technologies-using-biometric-information-and-predictive-algorithms/ (summarizing EPIC’s extensive comments).
- EPIC et al. Comments to the Office of Management and Budget on Responsible Procurement of Artificial Intelligence in Government, (Apr. 29, 2024), https://epic.org/wp-content/uploads/2024/04/Joint-Civil-Society-Comment-re-OMB-RFI-on-Responsible-Procurement-of-Artificial-Intelligence-in-Government.pdf.
- EPIC Comments to the Office of Management and Budget on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum, (Dec. 5, 2023), https://epic.org/wp-content/uploads/2023/12/EPIC-OMB-AI-Guidance-Comments-120523-1.pdf.
- EPIC comments to NIST on Artificial Intelligence Risk Management Framework: Second Draft, (Sep. 28, 2022), https://epic.org/documents/epic-comments-national-institute-of-standards-and-technology-ai-risk-management-framework/.
- EPIC Letter to Attorney General Garland Re: Title VI Compliance and Predictive Algorithms (Jul. 6, 2022), https://epic.org/documents/epic-letter-to-attorney-general-garland-re-title-vi-compliance-andpredictive-algorithms/.
- EPIC Letter to Attorney General Garland Re:ShotSpotter Title VI Compliance (Sept. 27, 2023), https://epic.org/documents/epic-letter-to-attorney-generalgarland-re-shotspotter-title-vi-compliance/.
- Generating Harms: Generative AI’s Impact & Paths Forward, EPIC (May 2023), epic.org/gai2 (last visited Jun. 28, 2024).
- Generating Harms II: Generative AI’s New & Continued Impacts, EPIC (Apr. 2024), epic.org/gai2 (last visited Jun. 28, 2024).
- Comments of EPIC to the Office of Management and Budget on Privacy Impact Assessments, (Apr. 1, 2024), https://epic.org/documents/comments-of-epic-to-omb-on-privacy-impact-assessments/ (discussing the role of transparency surrounding artificial intelligence and other advances in technology).
- Comments of EPIC et al. to FTC on Rule on Impersonation of Government and Business, (Mar. 1, 2024), https://epic.org/documents/epic-and-partner-organizations-comments-on-ftc-rule-on-impersonation-of-government-businesses-and-individuals-snprm/ (discussing the use of deepfakes to impersonate individuals).
[1] 89 Fed. Reg. 45,711.
[2] EPIC, About Us (2024), https://epic.org/about/.
[3] EPIC Letter to Attorney General Garland Re: ShotSpotter Title VI Compliance (Sept. 27, 2023), https://epic.org/documents/epic-letter-to-attorney-general-garland-re-shotspotter-title-vi-compliance/. See EPIC Comments to OSTP on Public and Private Sector Uses of Biometric Technologies (Jan. 15, 2022), https://epic.org/documents/epic-comments-to-ostp-on-public-and-private-sector-uses-of-biometric-technologies/; EPIC Comments to the U.S. Postal Investigative Service on Using U.S.P.S. Customer Data for Law Enforcement (Jan. 18, 2022), https://epic.org/documents/epic-comments-to-the-u-s-postal-investigative-service-on-using-u-s-p-s-customer-data-for-law-enforcement/;
[4] See, e.g., EPIC Comments to the Office of Management and Budget on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum, (Dec. 5, 2023), https://epic.org/wp-content/uploads/2023/12/EPIC-OMB-AI-Guidance-Comments-120523-1.pdf. Comments of EPIC, HUD’s Implementation of the Fair Housing Act’s Disparate Impact Standard, Department of Housing and Urban Development (Oct. 18, 2019), available at https://epic.org/apa/comments/EPIC-HUD-Oct2019.pdf; Testimony of EPIC, Massachusetts Joint Committee on the Judiciary (Oct. 22, 2019), transcription available at https://epic.org/testimony/congress/EPIC-FacialRecognitionMoratorium-MA-Oct2019.pdf; Comments of EPIC, Request for Information: Big Data and the Future of Privacy, Office of Science and Technology Policy (Apr. 4, 2014), available at https://epic.org/privacy/big-data/EPIC-OSTP-Big-Data.pdf.
[5] EPIC v. Department of Justice, 320 F.Supp.3d 110 (D.D.C. 2018), voluntarily dismissed, 2020 WL 1919646 (D.C. Cir. 2020), https://epic.org/foia/doj/criminal-justice-algorithms/.
[6] See EPIC, EPIC v. DHS (FAST Program), https://epic.org/foia/dhs/fast/.
[7] Dana Khabbaz, DHS’s Data Reservoir: ICE and CBP’s Capture and Circulation of Location Information (Aug. 2022), https://epic.org/documents/dhss-data-reservoir-ice-and-cbps-capture-and-circulation-of-location-information/; EPIC Comments to DHS: Advance Collection of Photos at the Border (Nov. 29, 2021), https://epic.org/documents/epic-comments-to-dhs-advance-collection-of-photos-at-the-border/; EPIC Comments to DHS on Collection of Biometric Data From Aliens Upon Entry to and Departure From the United States (Dec. 21, 2023), https://epic.org/documents/collection-of-biometric-data-from-aliens-upon-entry-to-and-departure-from-the-united-states/.
[8] Kara Williams, AI Legislation Scorecard: A Rubric for Evaluating AI Bills, EPIC (Jun. 2024), 1, https://epic.org/wp-content/uploads/2024/06/EPIC-AI-Legislation-Scorecard-June2024.pdf.
[9] Notice of a PCLOB Public Forum Examining the Role of Artificial Intelligence in Counterterrorism and Request for Public Comment, Priv. and Civ. Liberties Oversight Board, 89 Fed. Reg. 45711 (May 23, 2024), https://www.federalregister.gov/documents/2024/05/23/2024-11317/notice-of-a-pclob-public-forum-examining-the-role-of-artificial-intelligence-in-counterterrorism-and.
[11] Notice of a PCLOB Public Forum Examining the Role of Artificial Intelligence in Counterterrorism and Request for Public Comment, Priv. and Civ. Liberties Oversight Board, 89 Fed. Reg. 45711 (May 23, 2024), https://www.federalregister.gov/documents/2024/05/23/2024-11317/notice-of-a-pclob-public-forum-examining-the-role-of-artificial-intelligence-in-counterterrorism-and.
[12] See, e.g., Off. of Mgmt. & Bdget, Exec. Off. of the President, OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Nov. 1, 2023); [Executive Order 14110, OMB Memo M-24-10, NIST AI RMF].
[13] John Launchbury, A DARPA Perspective on Artificial Intelligence , DARPA, 4-7 (Feb. 2017), https://
www.darpa.mil/attachments/AIFull.pdf; see also Homeland Security Act of 2002, Pub.L. No. 107-296, 116 Stat. 2135, 2147 (2002), (authorizing DHS to make use of data mining to achieve its objectives); see 6 U.S.C. § 121(d) (13) (2006).
[14] See e.g. Comments of EPIC to the Office of Management and Budget on Privacy Impact Assessments, (Apr. 1, 2024), https://epic.org/documents/comments-of-epic-to-omb-on-privacy-impact-assessments/ (discussing the DHS, DOJ, FBI, and other agencies’ failure to timely (if at all) publish privacy impact assessments on the technologies being used and developed).
[15] In re [REDACTED], Mem. Opinion & Order, No. [REDACTED] (FISA Ct. Apr. 26, 2017), https://www.dni.gov/files/documents/icotr/51117/2016_Cert_FISC_Memo_Opin_Order_Apr_2017.pdf.
[16] See EPIC, Comments to OMB on its Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum (Dec. 5, 2023), https://epic.org/wp-content/uploads/2023/12/EPIC-OMB-AI-Guidance-Comments-120523-1.pdf.
[17] Eric Schmidt et al., Final Report, Nat’l Sec. Comm’n. on Artificial Intelligence 33 (2021), https://reports.nscai.gov/final-report/ [Hereinafter “NSCAI Report”].
[18] NSCAI Report at 135.
[19] Id.
[20] NSCAI Report at 131-140.
[21] See e.g. Comments of EPIC to DOJ and DHS on Law Enforcement’s Use of FRT, Biometric, and Predictive Algorithms 44-50 (Jan. 19, 2023), https://epic.org/wp-content/uploads/2024/01/EPIC-DOJDHS-Comment-LE-Tech-011924.pdf; Letter from Nancy R. Kingsbury, Managing Dir., Applied Rsch. & Methods, GAO, & Jennifer A. Grover, Dir., Homeland Sec. & Just. Issues, GAO, to Rep. Bennie G. Thompson, Ranking Member, Comm. On Homeland Sec., & Rep. Bonnie Watson Coleman, Ranking Member, Subcomm. on Transp. & Protective Sec. (July 20, 2017), https://www.gao.gov/assets/gao-17-608r.pdf (Letter to TSA titled “Aviation Security: TSA Does Not Have Valid Evidence Supporting Most of the Revised Behavioral Indicators Used in its Behavior Detection Activities); GAO, GAO-20-72, Aviation Security: TSA Coordinates with Stakeholders on Changes to Screening Rules but Could Clarify Its Review Processes and Better Measure Effectiveness, 11–14 (2019), https://www.gao.gov/assets/gao-20-72.pdf (finding that TSA still did not provide sufficient evidence supporting its behavioral indicators).
[22] Patrick Grother, et al., Face Recognition Vendor Test, Part 3: Demographic Effects, NIST (Dec. 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.
[23] NSCAI Report at 145 (citing Patrick Grother, et al., Face Recognition Vendor Test, Part 3: Demographic Effects, NIST (Dec. 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf; Gender Shades, http://gendershades.org/ (last visited Jan. 11, 2021)).
[24] NSCAI Report at 144.
[25] NSCAI Report at 155-253.
[26] 42 U.S.C §§ 2000d–2000d-7; see also EPIC Letter to Attorney General Garland Re: Title VI Compliance and Predictive Algorithms (Jul. 6, 2022), https://epic.org/documents/epic-letter-to-attorney-general-garland-re-title-vi-compliance-andpredictive-algorithms/; Comments of EPIC to DOJ and DHS on Law Enforcement’s Use of FRT, Biometric, and Predictive Algorithms 7-9 (Jan. 19, 2023), https://epic.org/wp-content/uploads/2024/01/EPIC-DOJDHS-Comment-LE-Tech-011924.pdf.
[27] Access to Classified Information, Exec. Order No. 12968, 60 Fed. Reg. 40245 (Aug. 07, 1995).
[28] Best Practices to Protect Privacy, Civil Liberties, and Civil Rights of Americans of Chinese Descent in the Conduct of Intelligence Activities, Off. of the Director of Nat’l Intelligence (May 2022), https://www.dni.gov/files/CLPT/documents/ODNI_Report_on_Best_Practices_to_Protect_Privacy_Civil_Liberties_and_Civil_Rights_of_Americans_of_Chinese_Descent_in_ConductOof_US_Intelligence_Activities_May_2022.pdf.
[29] See Council on Am.-Islamic Rels., Twenty Years Too Many: A Call to Stop the FBI’s Secret Watchlist , 1
(2023), https://www.cair.com/wp-content/uploads/2023/06/watchlistreport-1.pdf
[30] NSCAI Report at 136.
[31] Patrick Grother, et al., Face Recognition Vendor Test, Part 3: Demographic Effects, NIST, 58 (Dec. 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.
[32] Clare Garvie, Garbage In, Garbage Out: Face Recognition on Flawed Data (2019), https://www.flawedfacedata.com.
[33] Todd Feathers, Las Vegas Cops Used ‘Unsuitable’ Facial Recognition Photos To Make Arrests, Motherboard (Aug. 7, 2020), https://www.vice.com/en/article/pkyxwv/las-vegas-cops-used-unsuitable-facialrecognition-photos-to-make-arrests.
[34] NSCAI Report at 131-140.
[35] See, e.g., EPIC, Comments to the NTIA on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights (Mar. 27, 2024), https://epic.org/wp-content/uploads/2024/03/EPIC_Comment_NTIA_Dual_Use_Foundation_Models_with_Appendix.pdf.
[36]Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy and terminology of Attacks and Mitigation, NIST AI 100-2 E2023, (Jan. 2024), https://csrc.nist.gov/pubs/ai/100/2/e2023/final; Marcus Comiter, Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It, Harvard Kennedy School Belfer Center for Science and International Affairs (Aug. 2019), https://www.belfercenter.org/publication/AttackingAI.
[37] See, e.g., Marcus Comiter, Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It, Harvard Kennedy School Belfer Center for Science and International Affairs (Aug. 2019), https://www.belfercenter.org/publication/AttackingAI.
[38] Matthew Kosinski & Amber Forrest, What is a prompt injection attack?, IBM (Mar. 26, 2024), https://www.ibm.com/topics/prompt-injection.
[39] NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023),
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
[40] See Emily M. Bender et al., On the Dangerous of Stochastic Parrots: Can Language Models Be Too Big?, Proc. 2021 ACM Conf. on Fairness, Accountability, & Transp. 610 (2021), https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.
[41] Lisa Lacy, Hallucinations: Why AI Makes Stuff Up and What’s being Done About It, CNET (Apr. 1, 2024), https://www.cnet.com/tech/hallucinations-why-ai-makes-stuff-up-and-whats-being-done-about-it/.
[42] NSCAI Report at 23.
[43] NSCAI Report at 154.
[44] NSCAI Report at 154.
[45] See e.g. infra Pt. IV(b).
[46] NSCAI Report at 80.
[47] Office of the Director of National Intelligence Senior Advisory Group Panel on Commercially Available Information, Report to the Director of National Intelligence (Jan. 27, 2022), https://www.dni.gov/files/ODNI/documents/assessments/ODNI-Declassified-Report-on-CAI-January2022.pdf.
[48] Id. at 21.
[49] Id. at 19.
[50] Id.
[51] Brennan Center for Justice, PCLOB Report Reveals New Abuses of FISA Section 702 (Oct. 2023), https://www.brennancenter.org/our-work/research-reports/pclob-report-reveals-new-abuses-fisa-section-702; Elizabeth Goitein, The FISA Court’s 702 Opinions, Part I: A History of Non-Compliance Repeats itself (Oct. 15, 2019), https://www.justsecurity.org/66595/the-fisa-courts-702-opinions-part-i-a-history-of-non-compliance-repeats-itself/; Robyn Greene, A History of FISA Section 702 Compliance Violations (Sept. 28, 2017), https://www.newamerica.org/oti/blog/history-fisa-section-702-compliance-violations/.
[52] In re [REDACTED], Mem. Opinion & Order, No. [REDACTED] 49 (FISA Ct. Apr. 21, 2022), https://www.intelligence.gov/assets/documents/702%20Documents/declassified/21/2021_FISC_Certification_Opinion.pdf.
[53] Christian Grothoff and J.M. Porup, The NSA’s SKYNET program may be killing thousands of innocent people, Ars Technica (Feb. 16, 2016), https://arstechnica.com/information-technology/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/.
[54] Id.
[55] Yuval Abraham, ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza, +972 Magazine (Apr. 2, 2024), https://www.972mag.com/lavender-ai-israeli-army-gaza/.
[56] PCLOB, Report on Certain NSA Uses of XKEYSCORE for Counterterrorism Purposes (Dec. 2020), https://documents.pclob.gov/prod/Documents/OversightReport/ee4c139b-1674-4bfa-9b6a-5b591b648090/NSA%20XKEYSCORE%20REPORT.pdf.
[57] See Ashley S. Deeks, Predicting Enemies 104 Va. L. Rev. 1529 (Dec. 22, 2018), https://virginialawreview.org/articles/predicting-enemies/
[58] See Arthur Holland Michel, The Machine Got it Wrong? Uncertainties, Assumptions, and Biases in Military AI, Just Security (May 13, 2024), https://www.justsecurity.org/95630/biases-in-military-ai/.
[59] “Principles of Artificial Intelligence Ethics for the Intelligence Community,” Office of the Director of National Intelligence (last viewed July 1, 2024), available at https://www.intelligence.gov/principles-of-artificial-intelligence-ethics-for-the-intelligence-community.
[60] “Artificial Intelligence Ethics Framework for the Intelligence Community,” Office of the Director of National Intelligence (last visited July 1, 2024), available at https://www.intelligence.gov/artificial-intelligence-ethics-framework-for-the-intelligence-community.

Support Our Work
EPIC's work is funded by the support of individuals like you, who allow us to continue to protect privacy, open government, and democratic values in the information age.
Donate