Background

Risk assessments are a key accountability mechanism that helps ensure that entities process personal data or use automated decision-making tools safely, responsibly, and in ways that minimize the risk of harm to individuals.

The term “risk assessment” may describe an overarching assessment of the risks associated with a particular instance of personal data processing and the system(s) used to process personal data, or it may describe an analysis of an automated decision-making or artificial intelligence system that focuses on bias, data quality, model weights, training protocols, and second-order impacts. This page focuses on the first sense of the term.

Requiring entities to complete a robust risk assessment supports thoughtful adoption of new data practices and risk mitigation procedures instead of the hasty deployment of new technologies without considering potential harms.  

What is a risk assessment? 

A risk assessment is an analysis of how personal data will be collected, processed, stored, and transferred by an entity. When implemented properly, risk assessments force businesses to carefully evaluate and disclose the risks that planned data processing poses to consumers, to the environment, and to society as a whole, including risks associated with AI and automated decision-making. In turn, risk assessments can deter businesses from adopting harmful data practices in the first place. The term “risk assessment” is sometimes used to describe privacy impact assessments, but risk assessments like those required under the California Consumer Privacy Act go further, incorporating elements focused on automated decision-making systems.

One of the best ways to ensure risk assessments are effective is to require that either the assessments, or summaries of the assessments, be made available to the public. Without transparency requirements, risk assessments can become an ineffective box-checking exercise instead of a true evaluation of potential risks and benefits. Transparency mechanisms are not sufficient on their own to ensure the safe and responsible use of data for new technologies, but they are a necessary step.  

Risk assessment requirements in practice  

The use of risk assessments as a tool for increased transparency and accountability is not a new concept. Risk assessment requirements have existed at the state, federal, and international levels for many years.

At the federal level, former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the subsequent draft guidance from the Office of Management and Budget required federal agencies to conduct AI risk assessments, which were to cover both the quality and appropriateness of data usage as well as AI system performance, purpose, and risks.  

On the state level, the California Consumer Privacy Act (CCPA) mandates risk assessments covering the processing, use, benefits, and risks related to personal data. As of early 2025, there is an ongoing California Privacy Protection Agency rulemaking to clarify risk assessment requirements under the Act. Most other states that have passed comprehensive privacy laws also require some type of risk assessment.  

In addition to privacy legislation, many states are now considering bills regulating various types of AI. In 2024, Colorado passed a law regulating the use of high-risk AI systems in consequential decisions about people’s lives, which includes some risk assessment components. Many other states are now considering versions of bills based on the Colorado model.  

Internationally, risk assessment requirements have existed since at least 1988, when Australia enacted its Privacy Act. Risk assessments are also required in several other international jurisdictions, including in the European Union through the General Data Protection Regulation (GDPR) and AI Act, in Canada through the Directive on Automated Decision-Making, in Brazil through the General Data Protection Law (LGPD), and in China through the Personal Information Protection Law (PIPL).   

EPIC’s work on risk assessments  

Since 2018, EPIC has advocated for the widespread adoption of algorithmic risk assessments, which would force entities to evaluate the privacy, equity, and human rights implications of AI and automated decision-making systems both before and during deployment. EPIC was instrumental in the introduction of the Algorithmic Accountability Act, which would require companies to conduct impact assessments to determine if their algorithms are “inaccurate, unfair, biased, or discriminatory.”

EPIC, with the support of the Rose Foundation for Communities and the Environment, has undertaken a major project in this area, Assessing the Assessments: Maximizing the Effectiveness of Algorithmic and Privacy Risk Assessments. Through this project, EPIC has published (and will continue to publish) materials to educate consumers and promote best risk assessment practices for entities processing personal data. EPIC has also published on its website evolving resources that consumers, researchers, journalists, and lawmakers can use to track the development and implementation of risk assessment frameworks.  

EPIC has also undertaken substantial work on risk assessments in connection with the Advancing American AI Act and former President Biden’s Executive Order on artificial intelligence. EPIC submitted extensive comments to OMB on its draft guidance concerning implementation of the Executive Order, laying out in detail what AI impact assessments should require. EPIC has also tracked federal agencies’ compliance with AI accountability requirements and provided comments to NIST explaining the importance of impact assessments as a core component of AI risk management.
