EPIC v. AI Commission
Seeking public access to the records and meetings of the NSCAI
In EPIC v. AI Commission, EPIC has filed suit to enforce the transparency obligations of the National Security Commission on Artificial Intelligence. The AI Commission, established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, is charged with "review[ing] advances in artificial intelligence, related machine learning developments, and associated technologies" and making policy recommendations to Congress and the President. Yet since its launch in March of 2019, the Commission has conducted its decisionmaking in near-total secrecy. None of the Commission's plenary or working group meetings have been announced in advance or opened to the public, and no agendas, minutes, or materials from those meetings have been published.
Twice in the past seven months, EPIC submitted open records and open meetings requests to the AI Commission and the Department of Defense under the Freedom of Information Act (FOIA) and the Federal Advisory Committee Act (FACA). After both agencies failed to comply with EPIC's requests, EPIC brought suit on September 27, 2019.
- EPIC Seeks More Details on Secretive AI Commission Report: Following the release of a report by the National Security Commission on Artificial Intelligence, EPIC is seeking specific information about recommendations that could impact the privacy rights of Americans. EPIC previously sued the Commission to make public its records and meetings. Now EPIC wants to know why the Commission criticized the EU General Data Protection Regulation and why the Commission wants to amend U.S. privacy laws to allow "government access to data on Americans." EPIC also seeks to learn why the Commission selectively published the names of the organizations and businesses it consulted. The Commission, chaired by former Google CEO Eric Schmidt, has held more than 200 closed-door meetings. The case is EPIC v. AI Commission, No. 19-2906 (D.D.C.). (Nov. 6, 2019)
The Privacy Risks Posed by the Use of AI
Artificial intelligence presents unique threats to privacy, human rights, and democratic institutions. The deployment of AI systems tests long-standing privacy safeguards governing the collection and use of personal data. For example, privacy laws require data minimization—the requirement that only necessary data be retained. Yet “[i]n the search for new connections and more precise analyses, it is tempting to give [a] system access to as much data as possible.” China, for instance, uses sophisticated AI surveillance technology to profile and control Muslim minority populations. Automated decisionmaking and profiling with AI systems can produce biased and inaccurate decisions, with serious consequences for the persons improperly targeted. Similarly, unrepresentative data sets can produce flawed AI models.
There is a clear need for human rights protections for AI systems in the national security context, where public oversight is often limited. Yet there are already indications that the U.S. Intelligence Community has failed to invest in vital AI safeguards. In May 2019, the Inspector General for the Intelligence Community highlighted a lack of oversight for the use of AI, warning that “[i]nvestment asymmetry between mission performance and intelligence oversight in AI efforts could lead to an accountability deficit. . . . [T]here is little indication that investments in oversight of AI are currently a high priority.”
Privacy, security, and discrimination are not the only civil liberties and human rights issues raised by the use of AI systems. International AI policy frameworks—including the OECD AI Principles (to which the United States is a signatory) and the Universal Guidelines for Artificial Intelligence—set out explicit rights and responsibilities concerning the use of AI systems. These include transparency and identification requirements, testing requirements, fairness, data quality, public safety, contestability, reliability, termination, and more.
Public Participation in AI Policymaking
The vast majority of AI policymaking around the world is conducted transparently and relies on public participation. National governments and international organizations routinely seek public input on AI policy. Europe’s High-Level Expert Group on Artificial Intelligence—a group of academic, civil society, and industry representatives—held a public consultation on Europe’s draft Ethics Guidelines for AI. The Council of Europe invited public comment on a draft recommendation concerning the human rights impacts of algorithmic systems. The Organization for Economic Cooperation and Development (“OECD”) established the Artificial Intelligence Group of Experts, representing OECD member organizations, which held several meetings, sought comments from civil society, and produced the OECD Principles on Artificial Intelligence—since endorsed by 42 nations and the G20.
Governments around the world have conducted transparent consultations on AI policy. Japan conducted a public consultation and published draft AI research and development guidelines to prompt international debate over AI policymaking. The Australian government published a proposed AI ethics framework for public consultation. And Canada and France made a joint public proposal for an international panel on artificial intelligence. All told, Australia, Canada, China, Denmark, the European Commission, Finland, France, Germany, India, Italy, Japan, Kenya, Malaysia, Mexico, New Zealand, the Nordic-Baltic Region, Poland, Russia, Singapore, South Korea, Sweden, Taiwan, Tunisia, the UAE, the United Kingdom, and the U.S. have publicly released national AI strategies.
In the United States, EPIC—joined by leading scientific organizations and nearly 100 experts—filed a petition calling for public participation in federal efforts to develop AI policy. The coalition stated:
The reach of AI is so vast, so important, and encompasses so many issues, it is imperative that the Administration provide the American public the opportunity to comment on proposed policy initiatives impacting the American public. AI has the potential to improve our society, but only if proper policies are in place to provide the guidance needed to address the potential risks that accompany the potential benefits.

The National Science Foundation subsequently announced it would seek public comment on AI policy.
The National Institute of Standards and Technology published a plan for developing technical AI standards and sought public comments. The Office of Management and Budget solicited public comment about the use of federal data for AI research and development. And the President’s Executive Order on Maintaining American Leadership in Artificial Intelligence states that “[m]aintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.”
The Formation and Structure of the AI Commission
Congress created the National Security Commission on Artificial Intelligence through the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (“NDAA”), Pub. L. No. 115-232, § 1051, 132 Stat. 1636, 1962-65 (2018), signed into law on August 13, 2018. The NDAA directs the AI Commission “to review advances in artificial intelligence, related machine learning developments, and associated technologies.” The AI Commission is “an independent establishment of the Federal Government” that is “in the executive branch.” The AI Commission “shall be composed of 15 members” appointed “for the life of the Commission” by the Secretary of Defense, the Secretary of Commerce, and the chairs and ranking members of six congressional committees. The Commission “shall terminate on October 1, 2020.”
The Chairman of the Commission is Defendant Eric Schmidt, the former executive chairman of Alphabet Inc. and the former chairman and chief executive officer of Google Inc. The Vice Chairman of the Commission is Robert O. Work, former Deputy Secretary of Defense. The Commission also includes:
- Safra Catz, chief executive officer of Oracle
- Steve Chien, supervisor of the Artificial Intelligence Group at Caltech’s Jet Propulsion Lab
- Mignon Clyburn, Open Society Foundation fellow and former FCC commissioner
- Chris Darby, chief executive officer of In-Q-Tel
- Ken Ford, chief executive officer of the Florida Institute for Human and Machine Cognition
- Jose-Marie Griffiths, president of Dakota State University
- Eric Horvitz, director of Microsoft Research Labs
- Andy Jassy, chief executive officer of Amazon Web Services
- Gilman Louie, partner at Alsop Louie Partners
- William Mark, director of SRI International’s Information and Computing Sciences Division
- Jason Matheny, director of the Center for Security and Emerging Technology and former Assistant Director of National Intelligence
- Katharina McFarland, consultant at Cypress International and former Assistant Secretary of Defense for Acquisition
- Andrew Moore, head of Google Cloud AI
Under the NDAA, the AI Commission is to “consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.” Congress has designated nine AI-related subjects for the Commission to review, including “ethical considerations related to artificial intelligence and machine learning as it will be used for future applications related to national security and defense.” Since it launched, the Commission has organized itself into four specialized working groups and has “decided to pursue Special Projects on three cross-cutting issues[.]” One of the Commission’s special projects concerns “the responsible and ethical use of AI for national security[.]”
The AI Commission is “supported by a professional staff of about 20, including direct hires and detailees from the military services and government agencies. The staff is organized into three teams, focused on research and analysis, outreach and engagement, and operations.”
Since launching, the AI Commission has held at least thirteen plenary and working group meetings and has received more than 100 briefings. On four occasions—March 11, 2019; May 20, 2019; July 11, 2019; and September 2019—the Commission met as a whole. None of these meetings has been noticed in the Federal Register or opened to the public, nor has the AI Commission published any agendas, minutes, or materials from these meetings.
Accordingly, EPIC filed two separate Freedom of Information Act/Federal Advisory Committee Act requests with the Department of Defense and the AI Commission. On Feb. 22, 2019, EPIC requested from the DOD:
(1) Copies of all “records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents which were made available to or prepared for or by” the National Security Commission on Artificial Intelligence or any subcomponent thereof;
(2) A copy of the “initial report on the findings and . . . recommendations” of the National Security Commission on Artificial Intelligence required by section 1051(c)(1) of the National Defense Authorization Act for FY 2019; and
(3) Access to, and advance Federal Register notice of, all meetings of the National Security Commission on Artificial Intelligence and any subcomponent thereof.
On Sep. 11, 2019, EPIC requested from the AI Commission:
(1) Copies of all “records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents which were made available to or prepared for or by” the National Security Commission on Artificial Intelligence and/or any subcomponent thereof;
(2) Contemporaneous access to, and advance Federal Register notice of, all meetings of the National Security Commission on Artificial Intelligence and any subcomponent thereof, including but not limited to the Commission’s September 2019 and November 2019 plenary meetings.
On September 27, 2019—after both the AI Commission and the DOD failed to expedite EPIC's requests or provide responsive records—EPIC filed suit against the AI Commission, Commission Chair Eric Schmidt, and the DOD in the U.S. District Court for the District of Columbia.
EPIC is one of the leading organizations in the country with respect to the privacy and human rights implications of AI use. In 2015, EPIC led an international campaign for “algorithmic transparency,” a practice that reduces bias and helps ensure fairness in automated decisionmaking. In 2018, EPIC led the drafting of the Universal Guidelines for Artificial Intelligence, a framework for AI governance based on the protection of human rights. The Universal Guidelines have been endorsed by more than 250 experts and 60 organizations in 40 countries. In 2019, EPIC published the EPIC AI Policy Sourcebook, the first compendium of AI policy frameworks and related AI resources.
EPIC regularly shares AI expertise and policy recommendations with Congressional committees, federal agencies, and international organizations. EPIC is currently seeking the release of a 2014 Department of Justice report to the White House concerning the use of predictive analytics and risk assessment algorithms in the criminal justice system. The DOJ has warned that assessments based on sociological and personal information rather than prior bad acts are "dangerous" and constitutionally suspect, citing the disparate impacts of risk assessments and the erosion of consistent sentencing.
- EPIC Complaint (Sep. 27, 2019)
- EPIC Motion for a Preliminary Injunction (Sep. 27, 2019)
- Government Opposition to EPIC's Motion for a Preliminary Injunction (Oct. 9, 2019)
- EPIC Reply in Support of Motion for a Preliminary Injunction (Oct. 11, 2019)
- Transcript of Hearing on Motion for a Preliminary Injunction (Oct. 16, 2019)
- Order Denying Preliminary Injunction and Setting Expedited Briefing Schedule (Oct. 16, 2019)
- EPIC, EPIC AI Policy Sourcebook 2019 (2019)
- Comments of EPIC to the Council of Europe on Human Rights Impacts of Algorithmic Systems (Aug. 15, 2019)
- Comments of EPIC to OMB on Access to Federal Data for AI Research (Aug. 9, 2019)
- Comments of EPIC to NIST on AI Standards (May 31, 2019)
- Comments of EPIC to DOD on "Insider Threat Management and Analysis Center" (Apr. 22, 2019)
- Statement of EPIC to the Senate Comm. on the Judiciary on U.S. AI Policy (Nov. 30, 2018)
- The Public Voice, Universal Guidelines for Artificial Intelligence (Oct. 23, 2018)
- EPIC et al., Petition to OSTP for Request for Information on Artificial Intelligence Policy (July 4, 2018)
- EPIC v. DOJ, No. 18-5307 (seeking a DOJ report to the President and related records on the use of algorithms in the criminal justice system)
- EPIC, Algorithmic Transparency: End Secret Profiling (2015)
- EPIC et al., Petition for OSTP to Conduct Public Comment Process on Big Data and the Future of Privacy (Feb. 10, 2014)
- Alexandra S. Levine, New EPIC Lawsuit, POLITICO (Sep. 27, 2019)