Assessing the Assessments: Comparing Risk Assessment Requirements Around the World

December 4, 2023 | Kara Williams, EPIC Law Fellow

As part of EPIC’s project Assessing the Assessments: Maximizing the Effectiveness of Algorithmic & Privacy Risk Assessments, we have taken stock of impact assessment requirements on the state, federal, and international levels. The chart below breaks down various state laws, executive orders, proposed laws, and international directives and laws that require impact assessments and compares their requirements.  

We’ve set out a number of provisions that are essential to ensuring impact assessments are effective, including cost-benefit analysis for using an AI system, plans to mitigate risks, performance auditing requirements, data quality assessments, testing for bias, and explanations of decisions from the system to the public. The chart is a visual representation of how different impact assessment requirements compare to each other and whether they include provisions that EPIC believes are key to meaningful impact assessment requirements.  

Federal

Algorithmic Accountability Act of 2022 (HR 6580) · American Data Privacy and Protection Act (ADPPA) · Biden-Harris Executive Order on AI and OMB guidance
Effective Date: HR 6580: ⚠️ Proposed Act from Feb. 2022 · ADPPA: ⚠️ Proposed Act, last activity 12/30/2022 · EO/OMB: October 30, 2023
Name of Assessment: Impact Assessments (HR 6580) · Algorithm Impact Assessment (ADPPA) · AI Impact Assessment (EO/OMB)
Threshold (Covered Entities): “Large data holder” (ADPPA) · Federal agencies (EO/OMB)
When is it triggered?
HR 6580: Would direct the FTC to require assessments covering automated decision systems and augmented critical decision processes
ADPPA: Entities that use a covered algorithm in a manner that poses a “consequential risk” of harm, and that use the algorithm to collect, process, or transfer covered data, must conduct one
EO/OMB: Before agencies use “safety-impacting” or “rights-impacting” AI and during use

Safety-impacting: 
(1) The functioning of dams, emergency services, electrical grids or the generation or movement of energy, fire safety systems, food safety mechanisms, integrity of elections and voting infrastructure, traffic control systems and other systems controlling physical transit, water and wastewater systems, and nuclear reactors, materials, and waste;
(2) Physical movements, including in human-robot teaming, such as the movements of a robotic appendage or body, within a workplace, school, housing, transportation, medical, or law enforcement setting;
(3) The application of kinetic force, delivery of biological or chemical agents, or delivery of potentially damaging electromagnetic impulses;
(4) The movements of vehicles, whether on land, underground, at sea, in the air, or in space;
(5) The transport, safety, design, or development of hazardous chemicals or biological entities or pathways;
(6) Industrial emissions and environmental impact control processes;
(7) The transportation or management of industrial waste or other controlled pollutants;
(8) The design, construction, or testing of industrial equipment, systems, or structures that, if they failed, would pose a meaningful risk to safety;
(9) Responses to insider threats;
(10) Access to or security of government facilities; or
(11) Enforcement actions pursuant to sanctions, trade restrictions, or other controls on exports, investments, or shipping.

Rights-impacting: 
(1) Decisions to block, remove, hide, or limit the reach of protected speech;
(2) Law enforcement or surveillance-related risk assessments about individuals, criminal recidivism prediction, offender prediction, predicting perpetrators’ identities, victim prediction, crime forecasting, license plate readers, iris matching, facial matching, facial sketching, genetic facial reconstruction, social media monitoring, prison monitoring, forensic analysis, forensic genetics, the conduct of cyber intrusions, physical location-monitoring devices, or decisions related to sentencing, parole, supervised release, probation, bail, pretrial release, or pretrial detention;
(3) Deciding immigration, asylum, or detention status; providing risk assessments about individuals who intend to travel to, or have already entered, the U.S. or its territories; determining border access or access to Federal immigration related services through biometrics (e.g., facial matching) or other means (e.g., monitoring of social media or protected online speech); translating official communication to an individual in an immigration, asylum, detention, or border context; or immigration, asylum, or detention-related physical location-monitoring devices;
(4) Detecting or measuring emotions, thought, or deception in humans;
(5) In education, detecting student cheating or plagiarism, influencing admissions processes, monitoring students online or in virtual-reality, projecting student progress or outcomes, recommending disciplinary interventions, determining access to educational resources or programs, determining eligibility for student aid, or facilitating surveillance (whether online or in-person);
(6) Tenant screening or controls, home valuation, mortgage underwriting, or determining access to or terms of home insurance;
(7) Determining the terms and conditions of employment, including pre-employment screening, pay or promotion, performance management, hiring or termination, time-on-task tracking, virtual or augmented reality workplace training programs, or electronic workplace surveillance and management systems;
(8) Decisions regarding medical devices, medical diagnostic tools, clinical diagnosis and determination of treatment, medical or insurance health-risk assessments, drug-addiction risk assessments and associated access systems, suicide or other violence risk assessment, mental-health status detection or prevention, systems that flag patients for interventions, public insurance care-allocation systems, or health-insurance cost and underwriting processes;
(9) Loan-allocation processes, financial-system access determinations, credit scoring, determining who is subject to a financial audit, insurance processes including risk assessments, interest rate determinations, or financial systems that apply penalties (e.g., that can garnish wages or withhold tax returns);
(10) Decisions regarding access to, eligibility for, or revocation of government benefits or services; allowing or denying access—through biometrics or other means (e.g., signature matching)—to IT systems for accessing services for benefits; detecting fraud; assigning penalties in the context of government benefits; or
(11) Recommendations or decisions about child welfare, child custody, or whether a parent or guardian is suitable to gain or retain custody of a child.
Description of Intended Purpose and Proposed Use
Necessity of the automated decision-system
Cost-benefit analysis
Effects on Civil, Constitutional, and Legal Rights: ✅ (for rights-impacting AI)
Disparate Impact Evaluation: ✅ (for rights-impacting AI)
Mitigation Plans
Data Quality Assessments: ✅ (must “evaluate inputs . . . including training data,” though the scope of that requirement is unclear)
Disclosure of whether system is human-in-the-loop
Performance/Benchmark Auditing Requirements
Personnel Training Requirements
Explanation of Decisions to Public: 🟡 (for rights-impacting AI; affected individuals must be notified of negative decisions, and agencies are strongly encouraged to provide explanations for these decisions)
Testing for Bias: 🟡 Might loosely be captured by disparate impact evaluations + evaluation of inputs · ✅ (for rights-impacting AI)
Third-Party Testing
Stakeholder Engagements: ✅ (for rights-impacting AI)
Public Disclosure of Impact Assessment: HR 6580: ❌ Summary reports to FTC · ADPPA: ❌ Optional to public, but must submit to FTC · EO/OMB: 🟡 Detailed and accessible documentation of the system’s functionality must be included in the AI use case inventory. If agencies’ use cases are excluded from the inventory, relevant information must be reported to OMB.
Retroactivity?
Recurring Production of Reports: ❌ Triggered on “development” and “use” of algorithm (does retraining count?) · EO/OMB: ✅ Report to OMB as a component of the annual AI use case inventory, periodic accountability reviews, or on request by OMB
Consequences for Insufficient Assessments: Enforcement by FTC
Misc.

State

California CCPA + CPRA · Virginia § 59.1-580 · Connecticut CDPA · Colorado Privacy Act · Montana (MTCDPA) · New Hampshire · Tennessee (TIPA) · Indiana SB5 (Consumer Data Protection Bill) · ⚠️ Massachusetts Proposed Rule · ⚠️ Washington S.B. 5116 (Proposed Rule) · ⚠️ Hawaii Proposed Rule SB 974 SD2
Effective Date: California: Draft regulations from September 2023 · Virginia: January 2023 · Connecticut: July 1, 2023 · Colorado: July 2023 · Montana: October 1, 2024 · New Hampshire: January 1, 2025 · Tennessee: July 1, 2025 · Indiana: January 1, 2026 · Massachusetts: ⚠️ Proposed Rule from Jan. 2023 · Washington: ⚠️ Proposed Rule from 2021 · Hawaii: ⚠️ Proposed Rule from 2023, proposed to take effect Dec. 31, 2025
Name of Assessment: Risk Assessment (CA) · Data Protection Assessment (VA, CT, CO, MT, NH, TN, IN, HI) · Covered algorithm impact and evaluation (MA) · Algorithmic Accountability Report (WA)
Threshold (Covered Entities)
When is it triggered?
California: For processing that presents “significant risk to consumers’ privacy”:
(1) selling or sharing personal information 
(2) processing sensitive personal information 
(3) using automated decisionmaking technology in furtherance of a sensitive decision
(4) processing the personal information of consumers that the business knows are younger than 16
(5) processing in ways that constitute workplace or school surveillance
(6) processing personal information of consumers in publicly accessible places using technology to monitor consumers’ behavior, location, movements, or actions 
(7) processing personal information to train AI or ADT 
Virginia: For processing of personal data involving:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
(5) any activity that presents a heightened risk of harm to consumers
(6) when requested by the Attorney General as relevant to an investigation in progress
Connecticut: For data processing involving heightened risk, which may involve:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
Colorado: Any time personal data is processed in such a way that it presents a “heightened risk of harm” to a consumer
Montana: For data processing involving heightened risk, which may involve:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
New Hampshire: For data processing involving heightened risk, which may involve:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
Tennessee: For processing of personal data involving:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
(5) any activity that presents a heightened risk of harm to consumers
(6) when requested by the Attorney General as relevant to an investigation in progress
Indiana: For processing of personal data involving:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
(5) any activity that presents a heightened risk of harm to consumers
(6) when requested by the Attorney General as relevant to an investigation in progress
Massachusetts: Upon the use of a covered algorithm in a manner that poses a “consequential risk of harm” when collecting, processing, or transferring covered data or publicly available data
Washington: For any automated decision system used by a public agency
Hawaii: For processing of personal data involving:
(1) targeted advertising, 
(2) sale of personal data,
(3) profiling that presents foreseeable risks of disparate treatment, injuries, or unreasonable privacy intrusions
(4) situations involving sensitive data
(5) any activity that presents a heightened risk of harm to consumers
(6) when requested by the Attorney General as relevant to an investigation in progress
Description of Intended Purpose and Proposed Use
Necessity of the automated decision-system
Cost-benefit analysis
Effects on Civil, Constitutional, and Legal Rights
Disparate Impact Evaluation
Mitigation Plans: ✅ in seven of the state frameworks (factored into cost-benefit if controller can employ mitigations)
Data Quality Assessments: California: ✅ (for businesses using ADT)
Disclosure of whether system is human-in-the-loop: California: ✅ (for businesses using ADT)
Performance/Benchmark Auditing Requirements
Personnel Training Requirements
Explanation of Decisions to Public: ✅ in eight of the state frameworks (descriptions of reasonable expectations of consumers)
Testing for Bias
Third-Party Testing
Stakeholder Engagements: ✅ in seven of the state frameworks (factored into cost-benefit if controller can employ mitigations)
Public Disclosure of Impact Assessment: California: ❌ (disclosure to CPPA and AG upon request) · ❌ in six other frameworks: also exempt from FOIA production; disclosure is only to AG · ❌ also exempt from FOIA production and public inspection and copying requirements; disclosure is only to AG
Retroactivity?
Recurring Production of Reports: California: ✅ if there is a “material change” in processing activity; draft proposals include two options the board is considering: (1) at least once every three years or (2) update as necessary, but ADT assessments must be updated either annually, biannually, or once every three years · ✅ “as often as appropriate” if risk is materially modified
Consequences for Insufficient Assessments: Reviews for compliance by the Attorney General (in seven of the state frameworks)
Misc.: Virginia Copycat · Connecticut Copycat · Connecticut Copycat · Virginia Copycat · Virginia Copycat

International

GDPR · Canada: Directive on Automated Decision-Making · Switzerland revFADP · UK
Effective Date: GDPR: May 2018 · Canada: April 1, 2019 · Switzerland: September 1, 2023
Name of Assessment: Data Protection Impact Assessments (GDPR, Switzerland, UK) · Algorithmic Impact Assessments (Canada)
Threshold (Covered Entities): GDPR: Records of Processing Activities (ROPA) requirements apply to businesses with more than 250 employees
When is it triggered?
GDPR: When data processing is likely to result in “high risk” to the rights and freedoms of natural persons, esp. w.r.t. personal data

Particularly for:

(a) systematic and extensive automated processing determinations (e.g., profiling) that may serve as the basis for legal effects
(b) large-scale processing of special categories of data under Article 9(1) or of personal data relating to criminal convictions/offenses
(c) systematic monitoring of publicly accessible areas on a large scale
Canada: Applies to any system, tool, or statistical model used to make an administrative decision or a related assessment about a client

Only for systems in production, not those in test environments
Switzerland: “High risk” to the personality or fundamental rights of data subjects

For the processing of personal data of natural Swiss persons (“sensitive data” also includes genetic and biometric data whose processing requires explicit consent)

Explicitly required for “profiling” (automated processing of personal data to assess personal aspects about a natural person)

Assessments must be conducted before data processing
UK: Processing likely to result in “high risk”

Includes unique call-outs for cases where entities are matching or combining data from different sources, processing that may endanger individuals if a breach occurs, and uses of “innovative technology”
Description of Intended Purpose and Proposed Use
Necessity of the automated decision-system: ?
Cost-benefit analysis: ?
Effects on Civil, Constitutional, and Legal Rights: ?
Disparate Impact Evaluation: ?
Mitigation Plans: ?
Data Quality Assessments: ?
Disclosure of whether system is human-in-the-loop: ?
Performance/Benchmark Auditing Requirements: ?
Personnel Training Requirements: GDPR: ✅ (kind of; must act in compliance with codes of conduct resulting from Art. 40 requirements) · ?
Explanation of Decisions to Public: ?
Testing for Bias: ?
Third-Party Testing: ?
Stakeholder Engagements: GDPR: ✅ “where appropriate” · ?
Public Disclosure of Impact Assessment: GDPR: ❌ disclosure not required, but covered controllers should be able to demonstrate compliance · Canada: 🟡 not required, but included in scope of FOIA · UK: ❌ disclosure required to ICO for high-risk processing if mitigations cannot be carried out; no public disclosure required
Retroactivity?: ?
Recurring Production of Reports: GDPR: ✅ when there is a “change of risk” represented by the processing · Canada: ✅ on a “scheduled basis” · ?
Consequences for Insufficient Assessments: Canada: ✅ As allowed by the Financial Administration Act and outlined in this framework · Switzerland: ✅ Fines for willful acts or omissions
Misc.: Canada: Also defines “impact levels” for how high of an impact a decision will likely have (e.g., Level III decisions will often lead to impacts that can be difficult to reverse and are ongoing; Level IV impacts are “irreversible and perpetual”) · Switzerland: Particularly high-risk activities require opinions from the FDPIC first · UK: GDPR no longer applies, so this is the only pillar that remains; the ICO’s explainer on this still heavily references the GDPR
Law currently in force · Dataset of Public PIAs · Register of Data files · ICO Guidelines
