Two key AI transparency measures from Executive Orders remain largely unfulfilled past deadlines
January 26, 2022 |
In 2019, then-President Trump signed an executive order that required agencies to publish information about how they planned to regulate AI in accordance with a set of government-wide principles. In 2020, he signed a second order intended to improve transparency around how the United States currently uses AI by creating registries of non-classified federal government uses of AI. The deadlines stemming from both executive orders have passed unmet.
AI – used here to describe everything from simple automated decision-making tools to large-scale biometric analysis systems – is used broadly by the federal government in many sensitive contexts. Known uses include facial recognition systems used by Customs and Border Protection, COVID-19 morbidity risk algorithms used by the Department of Veterans Affairs, and “predictive enforcement tools” used by the Centers for Medicare & Medicaid Services and the Internal Revenue Service. Many AI tools have known bias and accuracy problems, encoding and exacerbating existing biases, and the opacity around which agencies are using which tools erodes accountability.
Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence,” signed in 2019, instructed regulatory agencies to publish information about how they plan to regulate AI in compliance with 10 principles laid out by the Office of Science and Technology Policy and the Office of Management and Budget (“OMB”). According to final guidance sent by OMB to heads of agencies in November 2020, agencies should have completed their plans to regulate AI by May 17, 2021, and published those plans on their websites. So far, most if not all agencies have failed to comply with this instruction, and OMB has not produced the agency plans in response to a Freedom of Information Act request by EPIC. Only a few agencies have published these plans publicly: the United States Agency for International Development (whose posted plan does not include any relevant content); the Department of Energy (whose posted plan is empty); the Department of Health and Human Services; and the Department of Veterans Affairs.
The principles laid out in EO 13859 are: Public Trust; Public Participation; Scientific Integrity and Information Quality; Risk Assessment and Management; Benefits and Costs; Flexibility; Fairness and Non-discrimination; Disclosure and Transparency; Safety and Security; and Interagency Coordination. The executive order is highly imperfect, and it only applies to commercial use of AI. EPIC commented on a draft version of the guidance in January 2020. The Biden administration’s Office of Science and Technology Policy is working on a “Bill of Rights for an Automated Society,” which will presumably yield a replacement or complementary set of principles.
Executive Order 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” signed in 2020, ordered the Federal Chief Information Officers Council to “identify, provide guidance on, and make publicly available the criteria, format, and mechanisms for agency inventories of…use cases of AI by agencies” by February 2021, required agencies to prepare an inventory in compliance with that guidance by August 2021, and required agencies to make those inventories public by December 2021.
Those inventories have not been made public. Although substantial investment in AI adoption has not slowed, the identifiable efforts at transparency and accountability around the AI agencies already use have.
EPIC will continue to push for purposeful and transparent procurement of AI in government, including red lines for certain unacceptable uses of AI. At minimum, governments must be transparent about which AI tools they are using. Transparency is not a panacea, but it is a necessary starting point. Federal agencies should complete the transparency requirements they are obligated to follow under Executive Orders 13859 and 13960.