Framing the Risk Management Framework: Actionable Instructions by NIST in their “Govern” Section
March 22, 2023
If you’ve had trouble following the different laws and frameworks proposed as ways to regulate A.I. systems, you’re not alone. From the White House’s Blueprint for an A.I. Bill of Rights and the National Institute of Standards and Technology’s (NIST’s) A.I. Risk Management Framework to the OECD’s Principles on Artificial Intelligence and state laws like New York’s Local Law 144, the last few years have seen numerous attempts to solidify a framework for regulating A.I. systems—often in ways that conflict. These frameworks leave both industry actors and government regulators uncertain about which A.I. practices to applaud and which to censure. Amidst the noise, these frameworks actually provide many harmonized recommendations that companies can incorporate and legislators should require. Over the next few months, EPIC will explore today’s leading A.I. frameworks, teasing out actionable recommendations and clear points of agreement across frameworks to clarify what companies and regulators alike should do—and what they should avoid.
We’re starting our exploration with one of the leading American A.I. frameworks: NIST’s A.I. Risk Management Framework. Formally released on January 26, 2023, the A.I. Risk Management Framework is a four-part, voluntary framework intended to guide the responsible development and use of A.I. systems. The core of the Framework is a set of recommendations divided into four overarching functions: (1) Govern, which covers overarching policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems. In addition to the core Framework, NIST hosts supplemental resources like a community Playbook to help organizations navigate the Framework. Over the next few weeks, EPIC will be distilling the A.I. Risk Management Framework’s recommendations into actionable, easy-to-understand instructions.
The high-level suggested actions can be divided into five overarching recommendations:
- (1) Align Efforts with Industry and Legal Standards
- (2) Engage a Diverse Set of Stakeholders and Those Impacted by A.I. Systems to Inform Risk Management Processes and Procedures
- (3) Set Clear and Consistent Standards for Defining, Designing, Training, Testing, and Deploying A.I. Systems
- (4) Update and Maintain Consistent and Comprehensive A.I. Risk Management Policies, Which Provide Clear Guidance to Staff
- (5) Ensure Staff Have the Skills, Training, Resources, and Experience Necessary to Uphold Policies
Below, EPIC organizes the 102 specific actions into these five categories.
Align efforts with industry and legal standards
- [1] Maintain awareness of the legal and regulatory considerations and requirements specific to industry, sector, and business purpose, as well as the application context of the deployed AI system.
- [2] Align risk management efforts with applicable legal standards.
- [6] Align to broader data governance policies and practices, particularly regarding the use of sensitive or otherwise risky data.
- [16] Verify that formal AI risk management policies align with existing legal standards and industry best practices and norms.
- [17] Establish AI risk management policies that broadly align to AI system trustworthy characteristics.
- [30] Examine the efficacy of different types of transparency tools and follow industry standards at the time a model is in use.
- [45] Establish policies that promote regular communication among AI actors participating in AI risk management efforts.
- [48] Establish policies that incentivize AI actors to collaborate with existing legal, oversight, compliance, or enterprise risk functions in their AI risk management activities.
- [70] Establish policies that incentivize AI actors to collaborate with existing nondiscrimination, accessibility and accommodation, and human resource functions, employee resource groups (ERGs), and diversity, equity, inclusion, and accessibility (DEIA) initiatives.
- [84] Align organizational impact assessment activities with relevant regulatory or legal requirements.
- [96] Define reasonable risk tolerances for AI systems informed by laws, regulation, best practices, or industry standards.
Engage a diverse set of stakeholders and those impacted by A.I. systems to inform risk management processes and procedures
- [13] Outline processes for internal and external stakeholder engagement.
- [28] Identify AI actors responsible for evaluating efficacy of risk management processes and approaches, and for course-correction based on results.
- [35] Establish mechanisms to enable the sharing of feedback from impacted individuals or communities about negative impacts from AI systems.
- [36] Establish mechanisms to provide recourse for impacted individuals or communities to contest problematic AI system outcomes.
- [63] Establish board committees for AI risk management and oversight functions and integrate those functions within the organization’s broader enterprise risk management approaches.
- [67] Facilitate the contribution of staff feedback and concerns without fear of reprisal.
- [69] Seek external expertise to supplement organizational diversity, equity, inclusion, and accessibility where internal expertise is lacking.
- [88] Establish organizational commitment to identifying AI system limitations and sharing of insights about limitations within appropriate AI actor groups.
- [90] Establish policies and processes regarding public disclosure of incidents and information sharing.
- [92] Establish AI risk management policies that explicitly address mechanisms for collecting, evaluating, and incorporating stakeholder and user feedback that could include: (1) recourse mechanisms for faulty AI system outputs; (2) bug bounties; (3) human-centered design; (4) user-interaction and experience research; and (5) participatory stakeholder engagement with individuals and communities that may experience negative impacts.
- [93] Verify that stakeholder feedback, including environmental concerns, is considered and addressed across the entire population of intended users, including historically excluded populations, people with disabilities, older people, and those with limited access to the internet and other basic technologies.
- [94] Clarify the organization’s principles as they apply to AI systems – considering those which have been proposed publicly – to inform external stakeholders of the organization’s values. Consider publishing or adopting AI principles.
- [98] Collaboratively establish policies that address third-party AI systems and data.
Set clear and consistent standards for defining, designing, training, testing, and deploying A.I. systems
- [5] Define key terms and concepts related to AI systems and the scope of their purposes and intended uses.
- [7] Detail standards for experimental design, data quality, and model training.
- [8] Outline and document risk mapping and measurement processes and standards.
- [9] Detail model testing and validation processes.
- [18] Verify that formal AI risk management policies include currently deployed and third-party AI systems.
- [19] Establish policies to define mechanisms for measuring or understanding an AI system’s potential impacts, e.g., via regular impact assessments at key stages in the AI lifecycle, connected to system impacts and frequency of system updates.
- [20] Establish policies to define mechanisms for measuring or understanding the likelihood of an AI system’s impacts and their magnitude at key stages in the AI lifecycle.
- [21] Establish policies that define assessment scales for measuring potential AI system impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches.
- [22] Establish policies for assigning an overall risk measurement approach for an AI system, or its important components, e.g., via multiplication or combination of a mapped risk’s impact and likelihood (risk = impact x likelihood); a minimal scoring sketch follows this list.
- [23] Establish policies to assign models to uniform risk scales that are valid across the organization’s AI portfolio (e.g., documentation templates), and acknowledge that risk tolerance and risk levels may change over the lifecycle of an AI system.
- [24] Establish and regularly review documentation policies that, among other things, address information related to: (1) AI actor contact information; (2) business justification; (3) scope and usages; (4) assumptions and limitations; (5) description and characterization of training data; (6) algorithmic methodology; (7) evaluated alternative approaches; (8) description of output data; (9) testing and validation results (including explanatory visualizations and information); (10) down- and upstream dependencies; (11) plans for deployment, monitoring, and change management; and (12) stakeholder engagement plans.
- [26] Establish policies for a model documentation inventory system and regularly review its completeness, usability, and efficacy.
- [37] Establish policies that define the creation and maintenance of AI system inventories.
- [38] Establish policies that define a specific individual or team that is responsible for maintaining the inventory.
- [39] Establish policies that define which models or systems are inventoried, with preference to inventorying all models or systems, or minimally, to high-risk models or systems, or systems deployed in high-stakes settings.
- [40] Establish policies that define model or system attributes to be inventoried, e.g., documentation, links to source code, incident response plans, data dictionaries, AI actor contact information (an illustrative inventory sketch also follows this list).
- [41] Establish policies for decommissioning AI systems. Such policies typically address: (1) user and community concerns, and reputational risks; (2) business continuity and financial risks; (3) up- and downstream system dependencies; (4) regulatory requirements (e.g., data retention); (5) potential future legal, regulatory, security, or forensic investigations; and (6) migration to the replacement system, if appropriate.
- [42] Establish policies that delineate where and for how long decommissioned systems, models, and related artifacts are stored.
- [43] Establish policies that address ancillary data or artifacts that must be preserved for complete understanding or execution of the decommissioned AI system, e.g., predictions, explanations, intermediate input feature representations, usernames and passwords, etc.
- [71] Establish policies and procedures that define and differentiate the various human roles and responsibilities when using, interacting with, or monitoring AI systems.
- [73] Establish policies for the development of proficiency standards for AI actors carrying out system operation tasks and system oversight tasks.
- [77] Establish policies to enhance the explanation, interpretation, and overall transparency of AI systems.
- [78] Establish policies for managing risks regarding known difficulties in human-AI configurations, human-AI teaming, and AI system user experience and user interactions (UI/UX).
- [87] Establish policies and procedures to facilitate and equip AI system testing.
- [99] Establish policies related to: (1) transparency into third-party system functions, including knowledge about training data, training and inference algorithms, and assumptions and limitations; (2) thorough testing of third-party AI systems; and (3) requirements for clear and complete instructions for third-party system usage.
- [100] Evaluate policies for third-party technology.
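Items [21] and [22] above describe combining qualitative ratings into an overall risk measurement. The snippet below is a minimal sketch of that approach, assuming a red-amber-green (RAG) scale encoded as ordinal values; the encodings, thresholds, and band names are illustrative assumptions, not values the Framework prescribes.

```python
# A minimal sketch of the scoring approach in items [21]-[22], assuming a
# red-amber-green (RAG) scale encoded as ordinal values. The encodings,
# thresholds, and band names below are illustrative assumptions, not values
# prescribed by the NIST Framework.

RAG = {"green": 1, "amber": 2, "red": 3}

def risk_score(impact: str, likelihood: str) -> int:
    """Combine RAG-rated impact and likelihood (risk = impact x likelihood)."""
    return RAG[impact] * RAG[likelihood]

def risk_band(score: int) -> str:
    """Map a combined score onto a uniform organizational scale (item [23]).

    Thresholds are hypothetical; an organization would calibrate them
    against its own declared risk tolerances.
    """
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: amber impact with red likelihood scores 2 * 3 = 6 ("high").
print(risk_band(risk_score("amber", "red")))
```

The combination rule itself comes from item [22]; everything else here is a placeholder an organization would replace with its own calibrated policy.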
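Similarly, items [37] through [40] describe an inventory of models or systems with defined attributes. Here is a hypothetical sketch of one inventory record, assuming a simple in-code schema; the field names and example values are invented for illustration and are not part of the Framework.

```python
# A hypothetical sketch of the AI system inventory called for in items
# [37]-[40]. The schema, field names, and example values are invented for
# illustration; the Framework does not specify an inventory format.

from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    system_name: str
    risk_level: str                 # e.g., the band from the scoring sketch
    documentation_url: str          # link to model documentation
    source_code_url: str            # link to source code
    incident_response_plan: str     # where the response plan lives
    data_dictionary_url: str        # description of input and output data
    ai_actor_contacts: list[str] = field(default_factory=list)

# Item [39] prefers inventorying all systems, or at minimum high-risk ones.
inventory: list[InventoryRecord] = [
    InventoryRecord(
        system_name="resume-screener",
        risk_level="high",
        documentation_url="https://docs.example.internal/resume-screener",
        source_code_url="https://git.example.internal/ml/resume-screener",
        incident_response_plan="runbooks/resume-screener-incidents.md",
        data_dictionary_url="https://docs.example.internal/rs-data-dict",
        ai_actor_contacts=["ml-oversight@example.internal"],
    ),
]
```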
Update and maintain consistent and comprehensive A.I. risk management policies, which provide clear guidance to staff
- [4] Establish and maintain formal AI risk management policies that address AI system trustworthy characteristics throughout the system’s lifecycle.
- [10] Detail review processes for legal and risk functions.
- [11] Establish the frequency of and detail for monitoring, auditing, and review processes.
- [12] Outline change management requirements.
- [14] Establish whistleblower policies to facilitate reporting of serious AI system concerns.
- [15] Detail and test incident response plans.
- [25] Verify that documentation policies for AI systems are standardized across the organization and kept up to date.
- [27] Establish mechanisms to regularly review the efficacy of risk management processes.
- [29] Establish policies and processes regarding public disclosure of risk management material such as impact assessments, audits, model documentation and validation and testing results.
- [32] Establish policies and procedures for monitoring and addressing AI system performance and trustworthiness, including bias and security problems, across the lifecycle of the system.
- [33] Establish policies for AI system incident response or confirm that existing incident response policies apply to AI systems.
- [34] Establish policies to define organizational functions and personnel responsible for AI system monitoring and incident response activities.
- [44] Establish policies that define the AI risk management roles and responsibilities for positions directly and indirectly related to AI systems, including, but not limited to: (1) boards of directors or advisory committees; (2) senior management; (3) AI audit functions; (4) product management; (5) project management; (6) AI design; (7) AI development; (8) human-AI interaction; (9) AI testing and evaluation; (10) AI acquisition and procurement; (11) impact assessment functions; (12) oversight functions.
- [46] Establish policies that separate management of AI system development functions from AI system testing functions, to enable independent course-correction of AI systems.
- [47] Establish policies to identify, increase the transparency of, and prevent conflicts of interest in AI risk management, and to counteract confirmation bias and market incentives that may hinder AI risk management efforts.
- [52] Verify that organizational AI policies include mechanisms for internal AI personnel to acknowledge and commit to their roles and responsibilities.
- [54] Define paths along internal and external chains of accountability to escalate risk concerns.
- [59] Declare risk tolerances for developing or using AI systems.
- [60] Ensure management supports AI risk management efforts and plays an active role in such efforts.
- [61] Ensure management supports competent risk management executives.
- [72] Establish procedures for capturing and tracking risk information related to human-AI configurations and associated outcomes.
- [75] Establish policies and procedures regarding AI actor roles and responsibilities for human oversight of deployed systems.
- [76] Establish policies and procedures defining human-AI configurations in relation to organizational risk tolerances, and associated documentation.
- [79] Establish policies that require inclusion of oversight functions (legal, compliance, risk management) from the outset of the system design process.
- [80] Establish policies that promote effective challenges of AI system design, implementation, and deployment decisions, via mechanisms such as the three lines of defense, model audits, or red-teaming – to ensure that workplace risks such as groupthink do not take hold.
- [81] Establish policies that incentivize a safety-first mindset and general critical thinking and review at an organizational and procedural level.
- [82] Establish whistleblower protections for insiders who report on perceived serious problems with AI systems.
- [83] Establish impact assessment policies and processes for AI systems used by the organization.
- [85] Verify that impact assessment activities are appropriate to evaluate the potential negative impact of a system and how quickly a system changes, and that assessments are applied on a regular basis.
- [86] Utilize impact assessments to inform broader evaluations of AI system risk.
- [89] Establish policies for reporting and documenting incident response.
- [91] Establish guidelines for incident handling related to AI system risks and performance.
- [95] Explicitly acknowledge that AI systems, and the use of AI, present inherent costs and risks along with potential benefits.
- [97] Establish policies that define how to assign AI systems to established risk tolerance levels by combining system impact assessments with the likelihood that an impact occurs. Such assessment often entails some combination of: (1) econometric evaluations of impacts and impact likelihoods to assess AI system risk; (2) red-amber-green (RAG) scales for impact severity and likelihood to assess AI system risk; (3) establishment of policies for allocating risk management resources along established risk tolerance levels, with higher-risk systems receiving more risk management resources and oversight; (4) establishment of policies for approval, conditional approval, and disapproval of the design, implementation, and deployment of AI systems; and (5) establishment of policies facilitating the early decommissioning of an AI system that is deemed risky beyond practical mitigation. (An illustrative tiering sketch follows this list.)
- [101] Establish policies for handling third-party system failures, including consideration of redundancy mechanisms for vital third-party AI systems.
- [102] Verify that incident response plans address third-party AI systems.
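Item [97] in the list above couples a combined impact-likelihood score with policy consequences such as resourcing, approval status, and early decommissioning. The sketch below illustrates one way that tiering could be expressed, assuming the same risk = impact x likelihood combination as item [22]; the tier names, thresholds, and consequences are hypothetical and would be calibrated to the organization’s declared risk tolerances (items [59] and [96]).

```python
# An illustrative sketch of item [97]: assign systems to established risk
# tolerance levels by combining impact severity with impact likelihood, then
# attach the policy consequences the item lists (resource allocation,
# approval status, early decommissioning). Tier names, thresholds, and
# consequences are hypothetical, not NIST-defined values.

from dataclasses import dataclass

@dataclass
class ToleranceTier:
    name: str
    min_score: int   # inclusive lower bound on the combined risk score
    oversight: str   # how much risk management resourcing the tier receives
    approval: str    # approval, conditional approval, or disapproval

# Higher-risk tiers receive more risk management resources and oversight.
TIERS = [
    ToleranceTier("beyond-tolerance", 9, "n/a", "disapprove; decommission early"),
    ToleranceTier("high", 6, "dedicated review board", "conditional approval"),
    ToleranceTier("moderate", 3, "periodic audits", "approval with monitoring"),
    ToleranceTier("low", 1, "standard controls", "approval"),
]

def assign_tier(impact: int, likelihood: int) -> ToleranceTier:
    """Place a system in the first tier whose threshold its score meets."""
    score = impact * likelihood  # the same combination rule as item [22]
    return next(t for t in TIERS if score >= t.min_score)

# Example: a severe (3) but unlikely (1) impact lands in the moderate tier.
tier = assign_tier(impact=3, likelihood=1)
print(tier.name, "->", tier.approval)  # moderate -> approval with monitoring
```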
Ensure staff have the skills, training, resources, and experience necessary to uphold policies
- [3] Maintain policies for training (and re-training) organizational staff about necessary legal or regulatory considerations that may impact AI-related design, development, and deployment activities.
- [31] Establish policies to allocate appropriate resources and capacity for assessing impacts of AI systems on individuals, communities, and society.
- [49] Establish policies for personnel addressing ongoing education about: (1) applicable laws and regulations for AI systems; (2) potential negative impacts that may arise from AI systems; (3) organizational AI policies; and (4) trustworthy AI characteristics.
- [50] Ensure that trainings are suitable across AI actor sub-groups – for AI actors carrying out technical tasks (e.g., developers, operators, etc.) as compared to AI actors in oversight roles (e.g., legal, compliance, audit, etc.).
- [51] Ensure that trainings comprehensively address technical and socio-technical aspects of AI risk management.
- [53] Verify that organizational policies address change management and include mechanisms to communicate and acknowledge substantial AI system changes.
- [55] Ensure that the relevant staff dealing with AI systems are properly trained to interpret AI model output and decisions as well as to detect and manage bias.
- [56] Implement and update policies used to determine the necessary skills and experience needed to design, develop, deploy, assess, and monitor AI systems.
- [57] Implement and update policies to assess whether personnel have the necessary skills, training, resources, and domain knowledge to fulfill their assigned responsibilities.
- [58] Recruit, develop, and retain a workforce with background, experience, and perspectives that reflect the community impacted by an AI system in development.
- [62] Delegate the power, resources, and authorization to perform risk management to each appropriate level throughout the management chain.
- [64] Define policies and hiring practices at the outset that promote interdisciplinary roles, competencies, skills, and capacity for AI efforts.
- [65] Define policies and hiring practices that lead to demographic and domain expertise diversity.
- [66] Empower staff with necessary resources and support.
- [68] Establish policies that facilitate inclusivity and the integration of new insights into existing practice.
- [74] Establish specified risk management training protocols for AI actors carrying out system operation tasks and system oversight tasks.