Framing the Risk Management Framework: Actionable Instructions by NIST in the “Map” section of the RMF

April 6, 2023 | Grant Fergusson, Equal Justice Works Fellow

Note: This piece is part of a series examining NIST’s A.I. Risk Management Framework. If you missed our previous parts, click here for our introduction to the “Govern” function and click here for our introduction to the “Manage” function. 

Released on January 26, 2023, by the National Institute of Standards and Technology (NIST), the A.I. Risk Management Framework is a four-part, voluntary framework intended to guide the responsible development and use of A.I. systems. At the core of the framework are recommendations divided into four overarching functions: (1) Govern, which covers top-level policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems. In addition to the core Framework, NIST hosts supplemental resources like a community Playbook to help organizations navigate the Framework. Over the next few weeks, EPIC will continue to distill the A.I. Risk Management Framework’s recommendations into more actionable instructions.

The Map Function of the A.I. Risk Management Framework urges companies to document every step of the A.I. development lifecycle, from identifying use cases, benefits, and risks to building interdisciplinary teams and testing methods. However, it goes further: the A.I. Risk Management Framework also pushes companies to consider the broader contexts and impacts of their A.I. systems—and resolve conflicts that may arise between different documented methods, uses, and impacts. Notably, the Map Function recommends (1) pursuing non-A.I. and non-technological solutions when they are more trustworthy than an A.I. system would be and (2) decommissioning or stopping deployment of A.I. systems when they exceed an organization’s maximum risk tolerance. The Map Function also includes recommendations for instituting and clearly documenting procedures for engaging with internal and external stakeholders for feedback.

EPIC believes the main suggested actions under the Map Function can be encompassed by five recommendations:

  • Maintain awareness of and engagement with diverse stakeholders and A.I. standards.
  • Clearly document and explain all steps of the A.I. development lifecycle, including A.I. system goals, functionalities, limitations, dependencies, business uses, biases, risks, and benefits, as well as testing procedures, downstream impacts of A.I. uses, staffing composition, and stakeholder engagement processes.
  • Define and examine A.I. system design specifications, tasks, purposes, and requirements with diverse socio-technical and human contexts in mind.
  • Resolve conflicts between documented A.I. goals, uses, risks, benefits, and impacts—and shift to non-A.I. or non-technological solutions if they remain more trustworthy after resolving A.I. system conflicts.
  • Establish testing procedures, risk tolerance levels, or other A.I. risk criteria to guide oversight resource allocation and A.I. system deployment decisions.

A breakdown of where and how each suggested action within the Map Function maps onto these five recommendations is provided below.

Maintain Awareness of and Engagement with Diverse Stakeholders and A.I. Standards.

  • [MAP 1.1] Maintain awareness of industry, technical, and applicable legal standards.
  • [MAP 1.1] Gain and maintain awareness about evaluating scientific claims related to AI system performance and benefits before launching into system design.
  • [MAP 1.2] Establish interdisciplinary teams to reflect a wide range of skills, competencies, and capabilities for AI efforts.
  • [MAP 1.2] Verify that internal team membership includes demographic diversity, broad domain expertise, and lived experiences.
  • [MAP 1.2] Create and empower interdisciplinary expert teams to capture, learn, and engage the interdependencies of deployed AI systems and related terminologies and concepts from disciplines outside of AI practice such as law, sociology, psychology, anthropology, public policy, systems design, and engineering.
  • [MAP 1.5] Utilize existing regulations and guidelines for risk criteria, tolerance and response established by organizational, domain, discipline, sector, or professional requirements.
  • [MAP 1.6] Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to system design or deployment decisions.
  • [MAP 1.6] Develop and standardize practices to assess potential impacts at all stages of the AI lifecycle, and in collaboration with interdisciplinary experts, actors external to the team that developed or deployed the AI system, and potentially impacted communities.
  • [MAP 1.6] Include potentially impacted groups, communities, and external entities (e.g., civil society organizations, research institutes, local community groups, and trade associations) in the formulation of priorities, definitions and outcomes during impact assessment activities.
  • [MAP 1.6] Conduct qualitative interviews with end user(s) to regularly evaluate expectations and design plans related to Human-AI configurations and tasks.
  • [MAP 1.6] Follow responsible design techniques in tasks such as software engineering, product management, and participatory engagement. Some examples for eliciting and documenting stakeholder requirements include product requirement documents (PRDs), user stories, user interaction/user experience (UI/UX) research, systems engineering, ethnography and related field methods.
  • [MAP 1.6] Conduct user research to understand individuals, groups and communities that will be impacted by the AI, their values & context, and the role of systemic and historical biases. Integrate learnings into decisions about data selection and representation.
  • [MAP 2.2] Design for end user workflows and toolsets, concept of operations, and explainability and interpretability criteria in conjunction with end user(s) and associated qualitative feedback.
  • [MAP 2.2] Follow stakeholder feedback processes to determine whether a system achieved its documented purpose within a given use context, and whether end users can correctly comprehend system outputs or results.
  • [MAP 2.3] Establish mechanisms for regular communication and feedback among relevant AI actors and internal or external stakeholders related to the validity of design and deployment assumptions.
  • [MAP 2.3] Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to the development of TEVV approaches throughout the lifecycle to detect and assess potentially harmful impacts.
  • [MAP 2.3] Map adherence to policies that address data and construct validity, bias, privacy, and security for AI systems and verify documentation, oversight, and processes.
  • [MAP 2.3] Work with domain experts and other external AI actors to gain and maintain contextual awareness and knowledge about how human behavior, organizational factors and dynamics, and society influence, and are represented in, datasets, processes, models, and system output.
  • [MAP 2.3] Work with domain experts and other external AI actors to identify participatory approaches for responsible Human-AI configurations and oversight tasks, taking into account sources of cognitive bias.
  • [MAP 3.1] Utilize participatory approaches and engage with system end users to understand and document AI systems’ potential benefits, efficacy and interpretability of AI task output.
  • [MAP 3.1] Maintain awareness and documentation of the individuals, groups, or communities who make up the system’s internal and external stakeholders.
  • [MAP 3.1] Verify that appropriate skills and practices are available in-house for carrying out participatory activities such as eliciting, capturing, and synthesizing user, operator, and external feedback, and translating it for AI design and development functions.
  • [MAP 3.1] Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to system design or deployment decisions.
  • [MAP 3.1] Consider performance relative to human baseline metrics or other standard benchmarks.
  • [MAP 3.1] Incorporate feedback from end users, and potentially impacted individuals and communities about perceived system benefits.
  • [MAP 3.3] Engage AI actors from legal and procurement functions when specifying target application scope.
  • [MAP 3.4] Define and develop training materials for proposed end users, practitioners and operators about AI system use and known limitations.
  • [MAP 3.4] Include operators, practitioners and end users in AI system prototyping and testing activities to help inform operational boundaries and acceptable performance.
  • [MAP 3.5] Include relevant AI Actors in AI system prototyping and testing activities. Conduct testing activities under scenarios similar to deployment conditions.
  • [MAP 4.2] Supply resources such as model documentation templates and software safelists to assist in third-party technology inventory and approval activities (see the sketch after this list).
  • [MAP 5.2] Establish and document stakeholder engagement processes at the earliest stages of system formulation to identify potential impacts from the AI system on individuals, groups, communities, organizations, and society.
  • [MAP 5.2] Identify approaches to engage, capture, and incorporate input from system end users and other key stakeholders to assist with continuous monitoring for potential impacts and emergent risks.
  • [MAP 5.2] Identify a team (internal or external) that is independent of AI design and development functions to assess AI system benefits, positive and negative impacts and their likelihood.
  • [MAP 5.2] Evaluate and document stakeholder feedback to assess potential impacts for actionable insights regarding trustworthiness characteristics and changes in design approaches and principles.
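
One concrete way to operationalize the MAP 4.2 suggestion above about software safelists is to keep the list of approved third-party components in a simple machine-readable form and check declared dependencies against it during review. The Python sketch below is purely illustrative: the file names, manifest format, and fields are our own assumptions, not anything the Framework or Playbook prescribes.

    # Illustrative sketch only: check declared third-party dependencies against
    # an organization-maintained safelist. File names and formats are assumed,
    # not defined by the NIST AI RMF or Playbook.
    import json

    def load_safelist(path: str) -> set[str]:
        """Read an approved-components safelist: one package name per line."""
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f
                    if line.strip() and not line.startswith("#")}

    def unapproved_dependencies(manifest_path: str, safelist: set[str]) -> list[str]:
        """Return declared dependencies that are missing from the safelist."""
        with open(manifest_path, encoding="utf-8") as f:
            manifest = json.load(f)  # e.g., {"dependencies": ["numpy", "somepkg"]}
        return [d for d in manifest.get("dependencies", []) if d.lower() not in safelist]

    if __name__ == "__main__":
        approved = load_safelist("approved_components.txt")
        for dep in unapproved_dependencies("third_party_manifest.json", approved):
            print(f"Needs review before use: {dep}")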

Clearly Document and Explain All Steps of the A.I. Development Lifecycle, Including A.I. System Goals, Functionalities, Limitations, Dependencies, Business Uses, Biases, Risks, and Benefits, as well as Testing Procedures, Downstream Impacts of A.I. Uses, Staffing Composition, and Stakeholder Engagement Processes.

  • [MAP 1.1] Plan for risks related to human-AI configurations, and document requirements, roles, and responsibilities for human oversight of deployed systems.
  • [MAP 1.2] Document the composition of interdisciplinary teams you establish.
  • [MAP 1.3] Build transparent practices into AI system development processes.
  • [MAP 1.4] Document business value or context of business use.
  • [MAP 1.5] Document decisions, risk-related trade-offs, and system limitations.
  • [MAP 1.6] List potential impacts that may arise from not fully considering the importance of trustworthiness characteristics in any decision making.
  • [MAP 2.2] Document settings, environments and conditions that are outside the AI system’s intended use.
  • [MAP 2.2] Document dependencies on upstream data and other AI systems, including if the specified system is an upstream dependency for another AI system or other data.
  • [MAP 2.2] Document connections the AI system or data will have to external networks (including the internet), financial markets, and critical infrastructure that have potential for negative externalities.
  • [MAP 2.3] Identify and document experiment design and statistical techniques that are valid for testing complex socio-technical systems like AI, which involve human factors, emergent properties, and dynamic context(s) of use.
  • [MAP 2.3] Demonstrate and document that AI system performance and validation metrics are interpretable and unambiguous for downstream decision making tasks and take socio-technical factors such as context of use into consideration.
  • [MAP 2.3] Identify and document assumptions, techniques, and metrics used for testing and evaluation throughout the AI lifecycle including experimental design techniques for data collection, selection, and management practices in accordance with data governance policies established according to the Govern Function.
  • [MAP 2.3] Document assumptions made and techniques used in data selection, curation, preparation and analysis, including: (1) identification of constructs and proxy targets, and (2) development of indices, especially those operationalizing concepts that are inherently unobservable (e.g., “hireability,” “criminality,” “lendability”).
  • [MAP 2.3] Identify and document transparent methods (e.g. causal discovery methods) for inferring causal relationships between constructs being modeled and dataset attributes or proxies.
  • [MAP 2.3] Identify and document processes to understand and trace test and training data lineage and its metadata resources for mapping risks.
  • [MAP 2.3] Document known limitations, risk mitigation efforts associated with, and methods used for, training data collection, selection, labeling, cleaning, and analysis (e.g., treatment of missing, spurious, or outlier data; biased estimators).
  • [MAP 2.3] Identify techniques to manage and mitigate sources of bias (systemic, computational, human-cognitive) in computational models and systems, and the assumptions and decisions in their development.
  • [MAP 3.4] Identify and declare AI system features and capabilities that may affect downstream AI actors’ decision-making in deployment and operational settings (e.g., how system features and capabilities may activate known risks in various human-AI configurations, such as selective adherence).
  • [MAP 3.4] Identify skills and proficiency requirements for operators, practitioners and other domain experts that interact with AI systems.
  • [MAP 3.4] Develop AI system operational documentation for AI actors in deployed and operational environments, including information about known risks, mitigation criteria, and trustworthy characteristics (see the sketch after this list for one way such a record might be structured).
  • [MAP 3.4] Verify AI system output is interpretable and unambiguous for downstream decision making tasks.
  • [MAP 3.4] Design AI system explanation complexity to match the level of problem and context complexity.
  • [MAP 3.5] Identify and document AI systems’ features and capabilities that require human oversight, in relation to operational and societal contexts, trustworthy characteristics, and risks.
  • [MAP 3.5] Define and develop training materials for relevant AI Actors about AI system performance, context of use, known limitations and negative impacts, and suggested warning labels.
  • [MAP 3.5] Verify that model documents contain interpretable descriptions of system mechanisms, enabling oversight personnel to make informed, risk-based decisions about system risks.
  • [MAP 4.1] Inventory third-party material (hardware, open-source software, foundation models, open source data, proprietary software, proprietary data, etc.) required for system implementation and maintenance.
  • [MAP 5.1] Identify and document likelihood and magnitude of system benefits and negative impacts in relation to trustworthiness characteristics.
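
Many of the documentation actions above can be satisfied with narrative documents such as model cards, but capturing the same information as a structured, versionable record makes it easier to keep current alongside the system. The Python sketch below is one hypothetical way to structure such a record; the field names and example values are our own illustration, not a schema defined by NIST.

    # Hypothetical documentation record covering a subset of the items the Map
    # Function asks organizations to document. Field names are illustrative,
    # not a NIST-defined schema.
    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str                     # documented task and business use (MAP 1.1, 1.4)
        intended_context_of_use: str     # expected operational setting (MAP 1.1)
        out_of_scope_uses: list[str] = field(default_factory=list)       # outside intended use (MAP 2.2)
        known_limitations: list[str] = field(default_factory=list)       # documented limitations (MAP 1.5)
        upstream_dependencies: list[str] = field(default_factory=list)   # data/AI dependencies (MAP 2.2)
        third_party_components: list[str] = field(default_factory=list)  # inventoried material (MAP 4.1)
        known_risks: list[str] = field(default_factory=list)             # risks and mitigations (MAP 3.4)
        oversight_roles: list[str] = field(default_factory=list)         # human oversight duties (MAP 1.1)

    record = AISystemRecord(
        name="resume-screening-assistant",
        purpose="Rank incoming applications for recruiter review",
        intended_context_of_use="Internal recruiting team; a human reviews every recommendation",
        out_of_scope_uses=["Automated rejection without human review"],
        known_limitations=["Trained only on English-language resumes"],
        known_risks=["Potential disparate impact across demographic groups"],
    )
    print(record)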

Define and Examine A.I. System Design Specifications, Tasks, Purposes, and Requirements with Diverse Socio-Technical and Human Contexts in Mind.

  • [MAP 1.1] Consider intended AI system design tasks along with unanticipated purposes in collaboration with human factors and socio-technical domain experts.
  • [MAP 1.1] Define and document the task, purpose, minimum functionality, and benefits of the AI system to inform considerations about the utility of the project or its lack thereof.
  • [MAP 1.1] Examine how changes in system performance affect downstream events such as decision-making (e.g., changes in an AI model objective function create what types of impacts in how many candidates do/do not get a job interview).
  • [MAP 1.1] Determine the end user and organizational requirements, including business and technical requirements.
  • [MAP 1.1] Determine and delineate the expected and acceptable AI system context of use, including: (1) social norms, (2) impacted individuals, groups, and communities, (3) potential positive and negative impacts to individuals, groups, communities, organizations, and society, and (4) operational environment.
  • [MAP 1.1] Identify human-AI interaction and/or roles, such as whether the application will support or replace human decision making.
  • [MAP 1.3] Review the documented system purpose from a socio-technical perspective and in consideration of societal values.
  • [MAP 1.3] Evaluate AI system purpose in consideration of potential risks, societal values, and stated organizational principles.
  • [MAP 1.6] Proactively incorporate trustworthy characteristics into system requirements.
  • [MAP 2.1] Define and document AI system’s existing and potential learning task(s) along with known assumptions and limitations.
  • [MAP 2.2] Plan and test human-AI configurations under close to real-world conditions and document results.
  • [MAP 3.2] Perform context analysis to map potential negative impacts arising from not integrating trustworthiness characteristics. When negative impacts are not direct or obvious, AI actors can engage with stakeholders external to the team that developed or deployed the AI system, and potentially impacted communities, to examine and document: (1) who could be harmed, (2) what could be harmed, (3) when could harm arise, and (4) how could harm arise.
  • [MAP 3.3] Consider narrowing contexts for system deployment, including factors related to: (1) how outcomes may directly or indirectly affect users, groups, communities and the environment; (2) length of time the system is deployed in between re-trainings; (3) geographical regions in which the system operates; (4) dynamics related to community standards or likelihood of system misuse or abuses (either purposeful or unanticipated); and (5) how AI system features and capabilities can be utilized within other applications, or in place of other existing processes.
  • [MAP 3.4] Verify model output provided to AI system operators, practitioners, and end users is interactive and specified to context and user requirements.

Resolve Conflicts Between Documented A.I. Goals, Uses, Risks, Benefits, and Impacts—and Shift to Non-A.I. or Non-Technological Solutions if They Remain More Trustworthy After Resolving A.I. System Conflicts.

  • [MAP 1.1] Examine trustworthiness of AI system design and consider non-AI solutions.
  • [MAP 1.1] Identify whether there are non-AI or non-technology alternatives that will lead to more trustworthy outcomes.
  • [MAP 1.1] Perform context analysis related to time frame, safety concerns, geographic area, physical environment, ecosystems, social environment, and cultural norms within the intended setting (or conditions that closely approximate the intended setting).
  • [MAP 1.3] Determine possible misalignment between societal values and stated organizational principles and code of ethics.
  • [MAP 1.3] Flag latent incentives that may contribute to negative impacts.
  • [MAP 1.4] Reconcile documented concerns about the system’s purpose within the business context of use compared to the organization’s stated values, mission statements, social responsibility commitments, and AI principles.
  • [MAP 1.4] Reconsider the design, implementation strategy, or deployment of AI systems with potential impacts that do not reflect institutional values.
  • [MAP 1.5] Articulate and analyze tradeoffs across trustworthiness characteristics as relevant to proposed context of use. When tradeoffs arise, document them and plan for traceable actions (e.g., impact mitigation, removal of system from development or use) to inform management decisions.
  • [MAP 1.6] Analyze dependencies between contextual factors and system requirements.
  • [MAP 2.3] Establish and document practices to check for capabilities that are in excess of those that are planned for, such as emergent properties, and to revisit prior risk management steps in light of any new capabilities.
  • [MAP 2.3] Investigate and document potential negative impacts related to the full product lifecycle and associated processes that may conflict with organizational values and principles.
  • [MAP 3.4] Develop approaches to track human-AI configurations, operator, and practitioner outcomes for integration into continual improvement.
  • [MAP 4.1] Review third-party software release schedules and software change management plans (hotfixes, patches, updates, forward- and backward-compatibility guarantees) for irregularities that may contribute to AI system risks.
  • [MAP 4.1] Review redundancies related to third-party technology and personnel to assess potential risks due to lack of adequate support.
  • [MAP 5.2] Employ methods such as value sensitive design (VSD) to identify misalignments between organizational and societal values, and system implementation and impact.
  • [MAP 5.2] Incorporate quantitative, qualitative, and mixed methods in the assessment and documentation of potential impacts to individuals, groups, communities, organizations, and society.

Establish Testing Procedures, Risk Tolerance Levels, or Other A.I. Risk Criteria to Guide Oversight Resource Allocation and A.I. System Deployment Decisions.

  • [MAP 1.5] Establish risk tolerance levels for AI systems and allocate the appropriate oversight resources to each level.
  • [MAP 1.5] Establish risk criteria in consideration of different sources of risk (e.g., financial, operational, safety and wellbeing, business, reputational, and model risks) and different levels of risk (e.g., from negligible to critical).
  • [MAP 1.5] Identify maximum allowable risk tolerance above which the system will not be deployed, or will need to be prematurely decommissioned, within the contextual or application setting.
  • [MAP 1.5] Review uses of AI systems for “off-label” purposes, especially in settings that organizations have deemed as high-risk.
  • [MAP 2.2] Identify and document negative impacts as part of considering the broader risk thresholds and subsequent go/no-go deployment as well as post-deployment decommissioning decisions.
  • [MAP 2.3] Develop and apply TEVV protocols for models, the system and its subcomponents, deployment, and operation.
  • [MAP 2.3] Identify testing modules that can be incorporated throughout the AI lifecycle, and verify that processes enable corroboration by independent evaluators.
  • [MAP 2.3] Establish processes to test and verify that design assumptions about the set of deployment contexts continue to be accurate and sufficiently complete.
  • [MAP 3.2] Identify and implement procedures for regularly evaluating the qualitative and quantitative costs of internal and external AI system failures.
  • [MAP 3.2] Develop actions to prevent, detect, and/or correct potential risks and related impacts.
  • [MAP 3.2] Regularly evaluate failure costs to inform go/no-go deployment decisions throughout the AI system lifecycle.
  • [MAP 3.4] Define and develop certification procedures for operating AI systems within defined contexts of use, and information about what exceeds operational boundaries.
  • [MAP 3.4] Conduct testing activities under scenarios similar to deployment conditions.
  • [MAP 3.4] Verify that design principles are in place for safe operation by AI actors in decision-making environments.
  • [MAP 3.5] Establish practices for AI systems’ oversight in accordance with policies developed under the Govern Function.
  • [MAP 3.5] Evaluate AI system oversight practices for validity and reliability. When oversight practices undergo extensive updates or adaptations, retest, evaluate results, and course correct as necessary.
  • [MAP 4.1] Review audit reports, testing results, product roadmaps, warranties, terms of service, end user license agreements, contracts, and other documentation related to third-party entities to assist in value assessment and risk management activities.
  • [MAP 4.2] Track third parties preventing or hampering risk-mapping as indications of increased risk.
  • [MAP 4.2] Review third-party material (including data and models) for risks related to bias, data privacy, and security vulnerabilities.
  • [MAP 4.2] Apply traditional technology risk controls – such as procurement, security, and data privacy controls – to all acquired third-party technologies.
  • [MAP 5.1] Establish assessment scales for measuring AI systems’ impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. Document and apply scales uniformly across the organization’s AI portfolio. (A minimal sketch of one such scale appears after this list.)
  • [MAP 5.1] Apply TEVV regularly at key stages in the AI lifecycle, connected to system impacts and frequency of system updates.
  • [MAP 5.2] Develop TEVV procedures that incorporate socio-technical elements and methods and plan to normalize across organizational culture.
  • [MAP 5.2] Regularly review and refine TEVV processes.
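
Taken together, these actions amount to deciding in advance how much risk the organization will accept and what happens when a system exceeds that tolerance. As a rough, hypothetical illustration (the Framework does not mandate any particular scale or encoding), a red-amber-green assessment scale and a go/no-go deployment check against a documented maximum tolerance might look like the Python sketch below; the levels, thresholds, and decision rule are assumptions for illustration only.

    # Hypothetical sketch of a red-amber-green (RAG) risk scale and a go/no-go
    # check against a documented maximum tolerance. Levels, thresholds, and the
    # decision rule are illustrative assumptions, not NIST requirements.
    from enum import IntEnum

    class RiskLevel(IntEnum):
        GREEN = 1   # negligible or low risk: routine oversight
        AMBER = 2   # elevated risk: heightened oversight and mitigation required
        RED = 3     # critical risk

    # Example documented maximum tolerances per context of use (assumed values).
    MAX_TOLERANCE = {
        "internal-pilot": RiskLevel.AMBER,
        "public-deployment": RiskLevel.GREEN,
    }

    def deployment_decision(assessed: RiskLevel, context: str) -> str:
        """Compare an assessed risk level to the documented tolerance for a context."""
        limit = MAX_TOLERANCE[context]
        if assessed > limit:
            return f"NO-GO: {assessed.name} exceeds the {limit.name} tolerance for {context}"
        return f"GO: {assessed.name} is within the {limit.name} tolerance for {context}"

    print(deployment_decision(RiskLevel.AMBER, "public-deployment"))  # NO-GO
    print(deployment_decision(RiskLevel.AMBER, "internal-pilot"))     # GO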
