Background

AI policy must protect individuals, increase transparency and accountability, and center human rights.

States and municipalities are taking an increasing interest in Artificial Intelligence and are filling the gaps left by federal inaction on algorithmic harm. States and cities have taken different routes, from notification requirements and task forces to minimum privacy standards. Some recent efforts are highlighted below. EPIC does not include state and local laws primarily focused on investing resources in building more AI and general research, as they do not improve protections for individuals against algorithmic harm. At the federal level, EPIC includes only laws of this sort that have been enacted, and only those from the last several years.

U.S. FEDERAL STRATEGY AND PROPOSED LAWS

The United States signed on to the Organisation for Economic Co-operation and Development (OECD) AI Principles in 2019, along with 41 other countries. The principles are:

  • Inclusive growth, sustainable development and well-being. AI should benefit people and the planet.
  • Human-centered values and fairness. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention when necessary – to ensure a fair and just society.
  • Transparency and explainability. There should be transparency and a responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • Robustness, security and safety. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Accountability. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The U.S. created the Defense Innovation Board and the National Security Commission on Artificial Intelligence to spend several years studying the use of AI in particular contexts; both remain influential with policy-making bodies.

The Office of Management and Budget directed each agency to create a plan for regulating AI use in the industries it oversees. The plans were due in May 2021 but have not been made public. The Office explicitly did not task agencies with considering their own use of these tools, only their regulation of the commerce each agency covers. In the guidance, issued pursuant to Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, the Office lays out 10 “Principles for the Stewardship of AI Applications:”

  • Public Trust in AI
  • Public Participation
  • Scientific Integrity and Information Quality
  • Risk Assessment and Management
  • Benefits and Costs
  • Flexibility
  • Fairness and Non-Discrimination
  • Disclosure and Transparency
  • Safety and Security
  • Interagency Coordination

National AI Initiative Act of 2020 (Enacted)

  • Encourages interagency coordination and planning efforts for AI research, development, standards, and education
  • Creates the National AI Research Resource Task Force
  • Creates avenues for funding research “focused on an economic sector, social sector, or on a cross-cutting AI challenge”
  • Directs NIST to create an AI risk management framework, including data-sharing best practices

Algorithmic Accountability Act (2019) – Sen. Booker, Rep. Clarke, Sen. Wyden

  • Would require companies of a certain size to assess their automated decision systems for accuracy, bias, fairness, discrimination, privacy, and security, and to remedy any issues found
  • Would give the FTC explicit power to create regulations requiring impact assessments for high-risk ADS

Algorithmic Justice and Online Platform Transparency Act (2021) – Sen. Markey, Rep. Matsui

  • Would prohibit algorithms on online platforms that discriminate on protected characteristics
  • Would direct the creation of safety and effectiveness standards to act as a baseline for online platform algorithms
  • Would establish documentation and publication requirements for online platform companies for review by federal agencies
  • Would create a task force of the Federal Trade Commission, Department of Education, Department of Housing and Urban Development, Department of Commerce, and Department of Justice to investigate discriminatory algorithms across the sectors they cover

Facial Recognition and Biometric Technology Moratorium Act (2020) — Sens. Markey, Merkley, Sanders, Warren, Wyden

  • Would prohibit the use of facial recognition and other biometric technologies by federal agencies, including Customs and Border Protection.

No Biometric Barriers to Housing Act of 2021 (2021) — Reps. Clarke, Pressley, Tlaib

  • Would ban the use of facial recognition and other biometric technologies in public housing and any units funded under the Department of Housing and Urban Development (“HUD”)
  • Would direct HUD to study where biometrics are used in public housing, the impact of the technology, the purpose, demographic information about the residents of where it’s used, and potential future impacts of the use of that technology.

U.S. STATE AND LOCAL LAWS (ENACTED)

Several cities and states have banned or limited the use of biometric technologies, including Alameda, CA; Berkeley, CA; Boston, MA; Brookline, MA; Cambridge, MA; Jackson, MS; Northampton, MA; Oakland, CA; Portland, ME; Portland, OR; San Francisco, CA; Somerville, MA; and Springfield, MA. Learn more at EPIC’s Ban Face Surveillance webpage.

In 2019, Idaho enacted a law requiring that “all documents, data, records, and information used by the builder to build or validate the pretrial risk assessment tool and ongoing documents, data, records, and written policies outlining the usage and validation of the pretrial risk assessment tool” be publicly available; allowing a party in a criminal case to review the calculations and data underlying their risk score; and precluding trade secret or other intellectual property defenses against discovery requests regarding the development and testing of the tool. The law is an exemplar for states committed to using algorithms in pretrial decision-making while preserving fairness and due process.

In 2019, Illinois enacted the AI Video Interview Act, which requires companies using video interview systems during interviews to give applicants notice, obtain their consent, limit distribution of the data, and destroy it within 30 days of an applicant’s request.

AI Task Forces and Commissions have been created in several jurisdictions:

  • New York City (2017): The New York City Council created a task force to study how the city uses AI and to provide recommendations on specific prompts. In November 2019, the task force released its report. In conjunction with the report’s release, Mayor de Blasio announced an Executive Order creating an “Algorithms Management and Policy Officer.” An unofficial “shadow report” on the Task Force’s work was also released.
  • Vermont (2018): The Vermont Legislature created an AI Task Force to explore areas of responsible growth of the state’s technology markets, the use of AI by their government, and appropriate regulation in the field. The task force published an update report in February 2019.
  • Alabama (2019): Alabama created an AI Commission with a broad mandate to study “all aspects” of AI and associated technologies, along with the challenges and opportunities they present.
  • New York State (2019): New York State created a commission, set to begin work in 2020, with a broad mandate to study whether current law is sufficient to address AI, as well as AI’s effects on employment and public safety.

Frameworks

Universal Guidelines for Artificial Intelligence

In October 2018, over 250 experts and 60 organizations, representing more than 40 countries, endorsed the Universal Guidelines for Artificial Intelligence (“UGAI”). The guidelines were organized by the Public Voice. The guidelines in full are:

  1. Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.
  2. Right to Human Determination. All individuals have the right to a final determination made by a person.
  3. Identification Obligation. The institution responsible for an AI system must be made known to the public.
  4. Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
  5. Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.
  6. Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
  7. Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.
  8. Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
  9. Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
  10. Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
  11. Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
  12. Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.

Organisation for Economic Co-operation and Development AI Principles

The OECD AI Principles were adopted in 2019 and endorsed by 42 countries, including the United States, several European countries, and the G20 nations. The principles establish international standards for AI use:

  1. Inclusive growth, sustainable development and well-being. AI should benefit people and the planet.
  2. Human-centered values and fairness. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention when necessary – to ensure a fair and just society.
  3. Transparency and explainability. There should be transparency and a responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  4. Robustness, security and safety. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  5. Accountability. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

International Laws

Several other countries are further along than the U.S. in developing AI policy that protects people from algorithmic harm. For more information on AI laws and norms internationally, please see EPIC’s International Policy page.

Support Our Work

EPIC's work is funded by the support of individuals like you, who help us to continue to protect privacy, open government, and democratic values in the information age.

Donate