EPIC Urges NIST to Center AI Transparency and Data Minimization in AI Risk Management Following Biden Executive Order
February 2, 2024
Today, EPIC submitted comments to the National Institute of Standards and Technology (NIST) commending the agency's efforts to develop additional AI risk management resources pursuant to President Biden's AI Executive Order and recommending additional guidance on AI transparency, accountability, and data controls to mitigate the risks of both generative and non-generative AI.
Among the assignments included in President Biden’s Executive Order are requirements for NIST to expand on the agency’s AI Risk Management Framework by establishing additional guidelines, best practices, and consensus standards for the development and deployment of trustworthy AI systems. Of particular interest to the agency are resources and information concerning the risks of generative AI technologies and the synthetic content they produce.
EPIC's comment (1) clarified that many provisions within NIST's AI Risk Management Framework—including AI impact assessments, AI performance testing, and AI red-teaming efforts—can and should apply to generative AI technologies as well; (2) raised several risks of generative AI technologies that warrant additional safeguards, such as AI misinformation through "hallucinations"; malicious AI use to harass, impersonate, blackmail, or otherwise influence people; and model collapse caused by the use of AI-generated content to train other AI systems; and (3) urged NIST to pursue more robust and actionable standards for AI transparency, accountability, and data minimization as core features of an AI risk management framework.
EPIC's comment to NIST follows previous EPIC comments concerning OMB's draft AI guidance and shortcomings within NIST's AI Risk Management Framework. It also builds on EPIC's 2023 generative AI report, Generating Harms: Generative AI's Impact & Paths Forward.