Framing the Risk Management Framework: Actionable Instructions by NIST in their “Manage” Section

March 30, 2023 | Ben Winters, Senior Counsel, and Grant Fergusson, Equal Justice Works Fellow

Note: If you missed the first piece in this series, focused on introducing the AI Risk Management Framework and breaking down the “Govern” Section, check it out here.

Formally released on January 26, 2023, the A.I. Risk Management Framework is a four-part, voluntary framework intended to guide the responsible development and use of A.I. systems. The core of the framework is a set of recommendations divided into four overarching functions: (1) Govern, which covers overarching policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems. In addition to the core Framework, NIST also hosts supplemental resources like a community Playbook to help organizations navigate the Framework. Over the next few weeks, EPIC will continue to distill the A.I. Risk Management Framework’s recommendations into more actionable instructions. In particular, each function section contains specific “Suggested Actions,” which make up the lists below.

The Manage section urges companies to devote resources to establishing a meaningful transparency and accountability structure, to documenting key activities, and, critically, to stopping the use of A.I. tools when evaluation shows they are not trustworthy. Another key set of recommendations urges companies to take responsibility for the third-party systems they integrate, not just the tools they develop themselves. EPIC has grouped NIST’s “suggested actions” into the following main recommendations:

  • Devote Resources to Establishing Robust TEVV (Test, Evaluation, Verification, Validation) Infrastructure, Including Staffing 
  • Maintain Documentation for TEVV Infrastructure, A.I. Risks, and Evaluation Procedures
  • Regularly Evaluate the A.I. Systems You Use, Including Third-Party A.I. Systems
  • Align Efforts with Industry Standards and Legal Requirements
  • Monitor Policies and Evaluation Protocols Surrounding A.I. Systems for Effectiveness on an Ongoing Basis
  • Communicate with Stakeholders Both Within and Outside of Your Organization
  • Decommission Systems that Exceed Risk Tolerance

Devote Resources to Establishing Robust TEVV (Test, Evaluation, Verification, Validation) Infrastructure, Including Staffing 

  • [1.2] Assign risk management resources relative to established risk tolerance. AI systems with lower risk tolerances receive greater oversight, mitigation, and management resources. (A sketch of this tiering appears after this list.)
  • [1.3] Identify risk response plans and resources and organizational teams for carrying out response functions.
  • [2.1] Verify risk management teams are resourced to carry out functions, including (1) establishing processes for considering methods that are not automated, semi-automated, or other procedural alternatives for AI functions; (2) enhancing A.I. system transparency mechanisms for A.I. teams; (3) enabling exploration of A.I. system limitations by A.I. teams; and (4) identifying, assessing, and cataloging past failed designs and negative impacts or outcomes to avoid known failure modes.
  • [2.1] Identify resource allocation approaches for managing risks in systems (1) deemed high risk, (2) that self-update (adaptive, online, reinforcement self-supervised learning or similar), (3) trained without access to ground truth (unsupervised, semi-supervised learning, or similar), (4) with high uncertainty or where risk management is insufficient.
  • [2.4] Apply protocols, resources, and metrics for decisions to supersede, bypass or deactivate AI systems or AI system components. 
  • [3.1] Monitor third-party AI systems for potential negative impacts and risks associated with trustworthiness characteristics. 
  • [3.1] Establish processes to identify beneficial use and risk indicators in third-party systems or components, such as inconsistent software release schedule, sparse documentation, and incomplete software change management (e.g., lack of forward or backward compatibility).
  • [4.3] Verify that relevant AI actors responsible for identifying complex or emergent risks are properly resourced and empowered. 
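
Several of the actions above, particularly [1.2] and [2.1], come down to tying oversight resources to documented risk tiers. The sketch below shows one way an organization might record that tolerance-to-resources mapping in code so allocation decisions stay explicit and auditable; the tier names, staffing counts, and review cadences are illustrative assumptions, not NIST prescriptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OversightPlan:
    """Resources attached to one risk tier (all values illustrative)."""
    reviewers: int            # staff assigned to ongoing review
    review_cadence_days: int  # how often the system is re-evaluated
    requires_fallback: bool   # must a non-A.I. procedural alternative exist?

# Hypothetical mapping from risk tier to oversight resources. Lower risk
# tolerance (i.e., a higher-risk system) means more oversight, per [1.2].
RISK_TIERS = {
    "high":   OversightPlan(reviewers=3, review_cadence_days=7,  requires_fallback=True),
    "medium": OversightPlan(reviewers=2, review_cadence_days=30, requires_fallback=True),
    "low":    OversightPlan(reviewers=1, review_cadence_days=90, requires_fallback=False),
}

def plan_for(system_name: str, tier: str) -> OversightPlan:
    if tier not in RISK_TIERS:
        raise ValueError(f"{system_name}: unknown risk tier {tier!r}")
    return RISK_TIERS[tier]

print(plan_for("resume-screener", "high"))
```

Keeping the mapping in one place also makes it easier to verify, per [2.1], that each team actually has the resources its tier calls for.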

Maintain Documentation for TEVV Infrastructure, A.I. Risks, and Evaluation Procedures

  • [1.2] Document AI risk tolerance determination practices and resource decisions.
  • [1.3] Store risk management and system documentation in an organized, secure repository that is accessible by relevant AI Actors and appropriate personnel.
  • [1.4] Document residual risks within risk response plans, denoting risks that have been accepted, transferred, or subject to minimal mitigation. 
  • [2.1] Prepare and document plans for continuous monitoring and feedback mechanisms.
  • [2.2] Document risk tolerance decisions and risk acceptance procedures.
  • [2.4] Preserve materials for forensic, regulatory, and legal review.
  • [3.1] Identify and maintain documentation for third-party AI systems and components. 
  • [3.2] Identify, document and remediate risks arising from AI system components and pre-trained models per organizational risk management procedures, and as part of third-party risk tracking. 
  • [4.1] Respond to and document detected or reported negative impacts or issues in AI system performance and trustworthiness. 
  • [4.2] Document the basis for decisions made relative to tradeoffs between trustworthy characteristics, system risks, and system opportunities.
  • [4.3] Maintain a database of reported errors, incidents and negative impacts including date reported, number of reports, assessment of impact and severity, and responses (a schema sketch follows this list). 
  • [4.3] Maintain a database of system changes, reason for change, and details of how the change was made, tested and deployed. 
  • [4.3] Maintain version history information and metadata to enable continuous improvement processes.
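
The [4.3] actions above describe, in effect, a small relational schema. The sketch below offers one hypothetical shape for it; the table and column names are ours, and only the tracked fields (date reported, number of reports, assessment of impact and severity, and responses) come from the suggested actions.

```python
import sqlite3

# Hypothetical incident log per [4.3]; schema names are illustrative.
conn = sqlite3.connect("ai_incidents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS incidents (
        id            INTEGER PRIMARY KEY,
        system_name   TEXT NOT NULL,
        date_reported TEXT NOT NULL,     -- ISO 8601 date
        report_count  INTEGER DEFAULT 1, -- number of reports received
        severity      TEXT CHECK (severity IN ('low', 'medium', 'high')),
        impact        TEXT,              -- assessment of impact
        response      TEXT               -- mitigation or remediation taken
    )
""")
conn.execute(
    "INSERT INTO incidents (system_name, date_reported, report_count,"
    " severity, impact, response) VALUES (?, ?, ?, ?, ?, ?)",
    ("resume-screener", "2023-03-01", 4, "high",
     "Qualified applicants screened out", "Model rolled back; audit opened"),
)
conn.commit()
conn.close()
```

A parallel table of system changes (reason for change, how it was tested and deployed) plus version metadata would cover the remaining [4.3] actions.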

Regularly Evaluate the A.I. Systems You Use, Including Third-Party A.I. Systems

  • [1.1] Consider trustworthiness characteristics when evaluating AI systems’ negative risks and benefits.
  • [1.1] Utilize TEVV (Test, Evaluation, Verification, Validation) outputs from map and measure functions when considering risk treatment.
  • [2.1] Plan and implement risk management practices in accordance with established organizational risk tolerances.
  • [2.2] Establish mechanisms to capture feedback from system end users and potentially impacted groups.
  • [2.2] Establish risk controls considering trustworthiness characteristics, including: (1) data management, quality, and privacy (e.g., minimization, rectification, or deletion requests) controls as part of organizational data governance policies; (2) machine learning and end-point security countermeasures (e.g., robust models, differential privacy, authentication, throttling); (3) business rules that augment, limit or restrict AI system outputs within certain contexts; (4) utilizing domain expertise related to deployment context for continuous improvement and TEVV across the AI lifecycle; (5) development and regular tracking of human-AI teaming configurations; (6) model assessment and test, evaluation, validation and verification (TEVV) protocols; (7) use of standardized documentation and transparency mechanisms; (8) software quality assurance practices across AI lifecycle; and (9) mechanisms to explore system limitations and avoid past failed designs or deployments.
  • [2.3] Establish and maintain procedures to regularly monitor system components for drift, decontextualization, or other AI system behavior factors (a drift-check sketch follows this list).
  • [2.3] Establish and maintain procedures for capturing feedback about negative impacts.
  • [2.4] Apply change management processes to understand the upstream and downstream consequences of bypassing or deactivating an AI system or AI system components.
  • [2.4] Conduct internal root cause analysis and process reviews of bypass or deactivation events. 
  • [2.4] Establish criteria for redeploying updated system components, in consideration of trustworthy characteristics. 
  • [3.1] Apply organizational risk tolerance to third-party AI systems. 
  • [3.1] Apply and document organizational risk management plans and practices to third-party AI technology, personnel, or other resources. 
  • [3.1] Establish testing, evaluation, validation, and verification processes for third-party AI systems which address the need for transparency without exposing proprietary algorithms.
  • [3.2] Identify pre-trained models within AI system inventory for risk tracking. 
  • [4.1] Establish and maintain procedures to monitor AI system performance for risks and negative and positive impacts associated with trustworthiness characteristics. 
  • [4.1] Perform post-deployment TEVV tasks to evaluate AI system validity and reliability, bias and fairness, privacy, and security and resilience. 
  • [4.1] Evaluate AI system trustworthiness in conditions similar to deployment context of use, and prior to deployment. 
  • [4.1] Establish and implement red-teaming exercises at a prescribed cadence and evaluate their efficacy. 
  • [4.1] Establish procedures for tracking dataset modifications such as data deletion or rectification requests. 
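
Action [2.3], flagged above, asks for routine drift checks. One common industry approach (a rule of thumb, not anything NIST specifies) is the Population Stability Index, which compares a score’s live distribution against a reference sample; values above roughly 0.2 are often treated as drift worth escalating. A minimal sketch, with toy data standing in for real monitoring feeds:

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference sample (e.g., scores
    at validation time) and a live sample. Higher means more drift."""
    lo = min(min(expected), min(observed))
    width = (max(max(expected), max(observed)) - lo) / bins or 1.0

    def frac(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, o = frac(expected), frac(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Toy data: live scores have shifted upward relative to the reference set.
reference = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]
live      = [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
if psi(reference, live) > 0.2:  # 0.2 is a conventional alert threshold
    print("Drift alert: escalate for review under the monitoring plan")
```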

Align Efforts with Industry Standards and Legal Requirements

  • [1.3] Observe regulatory and established organizational, sector, discipline, or professional standards and requirements for applying risk tolerances within the organization.
  • [1.3] Prioritize risks involving physical safety, legal liabilities, regulatory compliance, and negative impacts on individuals, groups, or society.
  • [2.2] Review insurance policies, warranties, or contracts for legal or oversight requirements for risk transfer procedures. 
  • [3.1] Ensure that legal requirements have been addressed. 

Monitor Policies and Evaluation Protocols Surrounding A.I. Systems for Effectiveness on an Ongoing Basis

  • [1.1] Regularly track and monitor negative risks and benefits throughout the AI system lifecycle including in post-deployment monitoring.
  • [1.1] Regularly assess and document system performance relative to trustworthiness characteristics and tradeoffs between negative risks and opportunities.
  • [1.1] Evaluate tradeoffs in connection with real-world use cases and impacts and as enumerated in Map function outcomes. 
  • [1.2] Regularly review risk tolerances and re-calibrate, as needed, in accordance with information from AI system monitoring and assessment.
  • [2.3] Ensure that protocols, resources, and metrics are in place for continual monitoring of AI systems’ performance, trustworthiness, and alignment with contextual norms and values. 
  • [2.3] Establish and regularly review treatment and response plans for incidents, negative impacts, or outcomes.
  • [2.3] Verify contingency processes to handle any negative impacts associated with mission-critical AI systems, and to deactivate systems.
  • [2.4] Regularly review established procedures for AI system bypass actions, including plans for redundant or backup systems to ensure continuity of operational and/or business functionality. 
  • [2.4] Identify and regularly review system incident thresholds for activating bypass or deactivation responses (a threshold sketch follows this list). 
  • [3.2] Establish processes to independently and continually monitor performance and trustworthiness of pre-trained models, and as part of third-party risk tracking. 
  • [4.2] Integrate trustworthiness characteristics into protocols and metrics used for continual improvement. 
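
Action [2.4] above calls for explicit incident thresholds that trigger bypass or deactivation responses. The hypothetical monitor below escalates its recommendation as incidents accumulate in a rolling window; the window length and counts are placeholders an organization would set from its own documented risk tolerance, not values NIST supplies.

```python
from collections import deque
from datetime import datetime, timedelta

class IncidentMonitor:
    """Recommend escalating responses as incidents accumulate in a window.
    Thresholds are illustrative; set them from documented risk tolerances."""

    def __init__(self, window_days: int = 30,
                 bypass_at: int = 3, deactivate_at: int = 10):
        self.window = timedelta(days=window_days)
        self.bypass_at = bypass_at
        self.deactivate_at = deactivate_at
        self.events: deque = deque()

    def record(self, when: datetime) -> str:
        self.events.append(when)
        while self.events and when - self.events[0] > self.window:
            self.events.popleft()  # drop incidents outside the window
        if len(self.events) >= self.deactivate_at:
            return "deactivate"    # take the system offline ([2.3])
        if len(self.events) >= self.bypass_at:
            return "bypass"        # route around the A.I. component ([2.4])
        return "monitor"

monitor = IncidentMonitor()
for day in range(3):
    action = monitor.record(datetime(2023, 3, 1) + timedelta(days=day))
print(action)  # "bypass": three incidents within the 30-day window
```

Whatever the exact mechanism, [2.4] also expects redundant or backup systems so a bypass does not interrupt operations.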

Communicate with Stakeholders Both Within and Outside of Your Organization

  • [1.4] Establish procedures for disclosing residual risks to relevant downstream AI actors. 
  • [1.4] Inform relevant downstream AI actors of requirements for safe operation, known limitations, and suggested warning labels.
  • [2.1] Regularly seek and integrate external expertise and perspectives to supplement organizational diversity (e.g., demographic, disciplinary), equity, inclusion, and accessibility where internal capacity is lacking.
  • [2.1] Enable and encourage regular, open communication and feedback among AI actors and internal or external stakeholders related to system design or deployment decisions. 
  • [2.3] Enable preventive and post-hoc exploration of AI system limitations by relevant AI actor groups.
  • [3.1] Establish processes for third parties to report known and potential vulnerabilities, risks, or biases in supplied resources.
  • [4.1] Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders to capture information about system performance, trustworthiness, and impact. 
  • [4.1] Share information about errors and attack patterns with incident databases, other organizations with similar systems, and system users and stakeholders (a minimal report format follows this list). 
  • [4.2] Establish processes for evaluating and integrating feedback into AI system improvements.
  • [4.2] Assess and evaluate alignment of proposed improvements with relevant regulatory and legal frameworks.
  • [4.2] Assess and evaluate alignment of proposed improvements connected to the values and norms within the context of use.
  • [4.3] Establish procedures to regularly share information about errors, incidents and negative impacts with relevant stakeholders, operators, practitioners and users, and impacted parties.
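
The [4.1] sharing action flagged above implies exporting incident details in a form outsiders can consume. External incident databases define their own submission formats, so the record below is purely illustrative; the point is to separate shareable fields from internal-only ones before anything leaves the organization.

```python
import json

# Hypothetical internal record; all field names are our own.
internal_record = {
    "system_name": "resume-screener",
    "date_reported": "2023-03-01",
    "severity": "high",
    "description": "Adversarial keyword stuffing inflates candidate scores",
    "internal_ticket": "RISK-1042",   # internal-only
    "assigned_engineer": "j.doe",     # internal-only
}

SHAREABLE_FIELDS = {"system_name", "date_reported", "severity", "description"}
report = {k: v for k, v in internal_record.items() if k in SHAREABLE_FIELDS}
print(json.dumps(report, indent=2))  # ready for a reporting pipeline
```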

Decommission Systems that Exceed Risk Tolerance

  • [2.3] Decommission systems that exceed risk tolerances (a simple tolerance gate is sketched after this list). 
  • [2.4] Decommission and preserve system components that cannot be updated to meet criteria for redeployment. 
  • [3.1] Decommission third-party systems that exceed risk tolerances.
  • [3.2] Decommission AI system components and pre-trained models which exceed risk tolerances, and as part of third-party risk tracking. 
  • [4.1] Decommission systems that exceed established risk tolerances.
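
Every action in this list reduces to the same gate referenced above at [2.3]: compare measured risk against documented tolerance and retire the system when the comparison fails. In the minimal sketch below, the risk score and tolerance are stand-ins for an organization’s own Measure-function outputs.

```python
def should_decommission(system: str, measured_risk: float,
                        tolerance: float) -> bool:
    """Flag a system for retirement when measured risk exceeds tolerance.
    Inputs are placeholders for an organization's own Measure outputs."""
    if measured_risk > tolerance:
        print(f"{system}: risk {measured_risk:.2f} exceeds "
              f"tolerance {tolerance:.2f}; decommission and preserve "
              "components for forensic review")  # preservation per [2.4]
        return True
    return False

should_decommission("resume-screener", measured_risk=0.8, tolerance=0.5)
```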
