AI Bill of Rights Provides Actionable Instructions for Companies, Agencies, and Legislators

October 11, 2022 | Ben Winters, EPIC Counsel

Last week, the White House Office of Science and Technology Policy released a “Blueprint” for an “AI Bill of Rights.” The five major principles are Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback. EPIC published an op-ed in Protocol outlining specifically how the White House can act to implement the principles from the Blueprint.

In its own words, “The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.”

However, the Office of Science and Technology Policy did outline several expectations for how people should be able to experience automated decision-making systems and for how entities should act when developing and using them.

EPIC will continue to push for laws to ensure these and many more protections are legally enshrined and protected. The specific actions are excerpted below (emphasis added by EPIC).


-[During development of a system] Consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. Concerns raised in this consultation should be documented, and the automated system that developers were proposing to create, use, or deploy should be reconsidered based on this feedback.

-Systems should undergo extensive testing before deployment. “Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.”

-Outcomes of these protective measures (pre-deployment testing, risk identification and mitigation, and ongoing monitoring) should include the possibility of not deploying the system or removing a system from use.

-[Automated systems] should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.

-Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

-An expansive set of classes should not face discrimination by algorithms, and systems should be used and designed in an equitable way: Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

-Protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.

-Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

-Human oversight should ensure that automated systems in sensitive domains are narrowly scoped to address a defined goal, justifying each included data item or attribute as relevant to the specific use case. Data included should be carefully limited to avoid algorithmic discrimination resulting from, e.g., use of community characteristics, social network analysis, or group-based inferences.

-Sensitive data and derived data should not be sold, shared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be used to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies are expected to keep sensitive data private.

-Civil liberties and civil rights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated system. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, privacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liberties.

-[Designers, developers, and deployers of automated systems should] seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible.


-Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed.


-Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first.

-Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.

-Surveillance or monitoring systems should be subject to heightened oversight that includes at a minimum assessment of potential harms during design (before deployment) and in an ongoing manner, to ensure that the American public’s rights, opportunities, and access are protected. This assessment should be done before deployment and then reaffirmed in an ongoing manner for as long as the system is in use.

-Sensitive data should only be used for functions strictly necessary for that domain or for functions that are required for administrative reasons (e.g., school attendance records), unless consent is acquired, if appropriate, and the additional expectations in this section are met.

-You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.

-Consent for non-necessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to receive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance company) in order to facilitate payment for such a need, that data should only be used for that purpose.

-In sensitive domains, entities should be especially careful to maintain the quality of data to avoid adverse consequences arising from decision-making based on flawed or inaccurate data. It should not be left solely to individuals to carry the burden of reviewing and correcting data. Entities should conduct regular, independent audits and take prompt corrective measures to maintain accurate, timely, and complete data.

-You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome.

-Automated systems should provide explanations that are technically valid, meaningful, and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context.

-Explanations as to how and why a decision was made or an action was taken by an automated system should be tailored to the purpose, tailored to the target of the explanation, and tailored to the level of risk.
