Analysis

What U.S. Regulators Can Learn from the EU AI Act

March 22, 2024 | Maria Villegas Bravo, EPIC Law Fellow

On March 13, the European Parliament passed the Artificial Intelligence Act (“AI Act” or “the Act”), taking the penultimate step in a years-long legislative process. Originally proposed in early 2021, the sweeping, harms-based Act sorts artificial intelligence systems into prohibited, high-risk, or low- or no-risk categories according to their anticipated risks to fundamental rights, public safety, and public health. Depending on the category, the Act either (1) prohibits the technology from being placed on the market and deployed, (2) establishes mandatory safeguards and legal liabilities across the AI system’s supply chain, or (3) recommends the adoption of a voluntary code of conduct. For an in-depth summary of the AI Act’s provisions, see our recent analysis summarizing the Act.
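
To make the Act’s tiered structure concrete, here is a minimal sketch in Python. It is purely illustrative: the tier names and one-line obligation summaries are our own paraphrases of the categories described above, not terms or text defined in the Act.

```python
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()      # unacceptable risk: banned outright
    HIGH_RISK = auto()       # permitted, but subject to mandatory safeguards
    LOW_OR_NO_RISK = auto()  # permitted; voluntary codes of conduct encouraged

# Paraphrased regulatory consequence for each tier (not statutory language).
CONSEQUENCES = {
    RiskTier.PROHIBITED: "May not be placed on the market or deployed.",
    RiskTier.HIGH_RISK: "Mandatory safeguards and liability across the supply chain.",
    RiskTier.LOW_OR_NO_RISK: "Voluntary code of conduct recommended.",
}

def consequence(tier: RiskTier) -> str:
    """Return the (paraphrased) regulatory consequence for a risk tier."""
    return CONSEQUENCES[tier]

if __name__ == "__main__":
    print(consequence(RiskTier.HIGH_RISK))
```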

So where does the AI Act fit within the European Union’s (“EU”) vast privacy and tech law ecosystem? This post (1) provides a surface-level explanation of EU data protection law to contextualize the AI Act, (2) analyzes the good, the bad, and the ugly of its provisions, and (3) offers key takeaways for legislators aiming to implement AI legislation in the United States.

EU Data Protection Law for the Uninitiated

The EU and its legislative system are notoriously bureaucratic and difficult to understand upon first contact. This section offers a glimpse into the basic mechanisms of the current EU data protection regime to better contextualize the AI Act.

The EU is a supranational union composed of 27 countries, referred to as member states, that aims chiefly to regulate commerce. It is based on several treaties as well as the EU Charter of Fundamental Rights and is similar to the United States federal government system. At a highly simplified level, the EU governmental system consists of an executive branch (the Commission), a legislative branch (the Parliament and Council), and a judicial branch (with the Court of Justice of the European Union (CJEU) sitting as the top court). EU law is similar to U.S. federal law in that, when a conflict arises between EU law and member state law, EU law will prevail and may preempt member state law.

In particular, the EU’s data protection regime serves as a comprehensive model for privacy legislation across the world. The EU passed the General Data Protection Regulation (“GDPR”) in 2016 to secure the fundamental rights to privacy and data protection enshrined in the EU Charter of Fundamental Rights, the EU’s counterpart to the Bill of Rights. The core premise of the GDPR is that processing personal data is unlawful unless there is a legal basis for doing so, usually consent from the data subject, contractual necessity, and/or one of the other explicitly enumerated bases for processing listed in Article 6.

The GDPR defines three key roles:

  • Data Controller: A natural or legal person (an entity) that makes decisions on how to process personal data.
  • Data Processor: A natural or legal person (an entity) that processes personal data on behalf of a controller (and may be the same entity as the controller). Processing includes any operation performed on personal data, such as collection, recording, structuring, use, disclosure, erasure, or destruction. The definition of processing is highly inclusive, and almost any interaction with the data may constitute processing.
  • Data Subject: A natural person who is or can be identified by the personal data being processed.

Data subjects have several rights with regard to their personal data, including the right to access data a controller or processor might hold, the right to correct such data, and the right to have it deleted in certain circumstances. These rights sit in addition to the data subject’s pre-existing fundamental rights under the EU Charter, including the rights to nondiscrimination, freedom of thought, freedom of expression, and freedom of assembly. In turn, controllers and processors have obligations and responsibilities to ensure data subject rights.

The GDPR, however, does not apply to law enforcement’s processing of personal data; instead, the Law Enforcement Directive controls. The Law Enforcement Directive closely mirrors the GDPR, as the two pieces of legislation were drafted in concert, but the two noticeably diverge on data subject rights and transparency requirements because of the EU’s limited authority to regulate national security measures. This limitation is a key feature of the AI Act, resulting in repeated carveouts for law enforcement and border control in otherwise highly regulated or banned AI systems.

The AI Act: A Case Study in AI Regulation

The AI Act is a mixed bag when it comes to practical protections, due in part to its harms-based structure and to the extensive debates prior to its passage. Both resulted in some weakened protections and a more limited scope. To credit the AI Act for its strengths, identify areas of improvement, and note serious problems, we have broken our analysis into The Good, The Bad, and The Ugly.

The Good

The AI Act’s categorization system draws attention to categories of dangerous AI systems that EPIC has long rallied against. In particular, EPIC applauds the total prohibition on social scoring systems, emotional recognition tools in the school and work context,[1] certain predictive policing tools,[2] and systems that scrape the internet to create or increase facial recognition databases.[3] The EU also included a broad swath of algorithms in its highly regulated high-risk category, recognizing that these systems deeply affect fundamental rights. In particular, EPIC applauds categorizing algorithms related to public benefits,[4] student surveillance in the education context,[5] and emotional recognition writ large as high-risk. The AI Act also recognizes that law enforcement and border control authorities are the entities with the most frequent opportunities to violate the fundamental rights of EU residents and ensures that the tools intended for their use are highly regulated. Both the prohibited and high-risk categories will be evaluated on a yearly basis to ensure that the Act will continue to protect fundamental rights, public health, and public safety.

Within the obligations for high-risk AI systems, the AI Act bakes civil rights protections into its provisions on training data. The Act requires that data and datasets used to train the AI systems be properly labeled, representative of the population the AI system purports to analyze, and as error-free as possible. This is to ensure that the models will be as accurate as possible when analyzing input data. Providers of general-purpose AI models (“GPAI” or “GPAI models”), such as large language models, must also publish a sufficiently detailed summary of the content the model is trained on so that people can enforce their rights. These enforceable rights include EU copyright protections, privacy rights, and civil rights, as well as other statutory rights under both member state law and EU law.

Importantly, there are several layers of evaluation systems in place for high-risk AI systems, including robust risk assessment requirements, which EPIC frequently points to as a key accountability mechanism. First, all high-risk AI systems must undergo a fundamental rights impact assessment (“FRIA”) before being placed on the market and/or deployed. The FRIA includes the following elements (sketched in code after this list):

  • A description of the intended use of the AI product;
  • The time period within which the AI product will be deployed;
  • The natural persons or groups likely to be affected by the product’s intended use and the specific risk of harm to those people;
  • A description of the risk mitigation procedures, including human oversight measures;
  • Instructions for deployers on how to use the system appropriately; and
  • Instructions on how to take corrective action if such risks materialize during the deployment of the product.
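
As a rough illustration, the dataclass below captures the FRIA elements listed above. This is a minimal sketch: the class and field names are our own, not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Minimal sketch of the elements a FRIA must document (field names are illustrative)."""
    intended_use: str                  # description of the AI product's intended use
    deployment_period: str             # time period within which it will be deployed
    affected_groups: list[str]         # natural persons or groups likely to be affected
    specific_risks_of_harm: list[str]  # specific risks of harm to those people
    mitigation_measures: list[str]     # risk mitigation procedures, incl. human oversight
    deployer_instructions: str         # how deployers should use the system appropriately
    corrective_action_plan: str        # how to take corrective action if risks materialize

    def is_complete(self) -> bool:
        """Crude completeness check before transmitting the FRIA to the regulator."""
        return all([
            self.intended_use, self.deployment_period, self.affected_groups,
            self.specific_risks_of_harm, self.mitigation_measures,
            self.deployer_instructions, self.corrective_action_plan,
        ])
```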

These FRIAs must be transmitted to the appropriate regulatory authority before the AI system is placed on the market or put into service. Second, providers of the AI system must also create and maintain a risk management system, which is a “continuous, iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring systematic review and updating.” This risk management system must identify known and reasonably foreseeable risks to fundamental rights, health, or safety, both when the system is used for its intended purpose and for any reasonably foreseeable misuses of the system. The provider must take targeted, corrective measures to mitigate the risks identified by the risk management system.
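
The “continuous, iterative” character of the risk management system can be sketched as a simple loop. This is illustrative only; the function names are hypothetical stand-ins, not anything defined by the Act.

```python
from typing import Callable, Iterable

def risk_management_cycle(
    identify_risks: Callable[[], Iterable[str]],    # known + reasonably foreseeable risks,
                                                    # for intended use and foreseeable misuse
    mitigate: Callable[[str], None],                # targeted, corrective measure per risk
    system_still_in_lifecycle: Callable[[], bool],  # runs for the system's entire lifecycle
) -> None:
    """Minimal sketch of a continuous, iterative risk management process."""
    while system_still_in_lifecycle():
        for risk in identify_risks():  # systematic review and updating
            mitigate(risk)             # targeted mitigation, repeated over the lifecycle
```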

Once the AI system is in use, the provider must, in addition to the risk management system, put into place a post-market surveillance system, as well as policies and procedures to ensure the continuing quality of the AI system. The post-market surveillance system monitors the AI system’s deployment for serious incidents, which include the death of a person or serious harm to a person’s health, a serious or irreversible disruption of critical infrastructure, the infringement of obligations under Union law intended to protect fundamental rights, and/or serious harm to property or the environment. The provider must have procedures in place to report serious incidents to the appropriate regulatory bodies. The provider must also govern all aspects of data processing, including acquisition, collection, analysis, labeling, storage, and retention, to ensure the algorithm is being fed high-quality data over its lifecycle.
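
A post-market surveillance hook might classify and escalate “serious incidents” along the lines sketched below. The incident categories paraphrase the ones described above; the classification and reporting functions are hypothetical placeholders.

```python
from enum import Enum, auto
from typing import Callable, Optional

class SeriousIncident(Enum):
    DEATH_OR_SERIOUS_HARM_TO_HEALTH = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()       # serious or irreversible disruption
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()          # breach of rights-protecting Union law
    SERIOUS_HARM_TO_PROPERTY_OR_ENVIRONMENT = auto()

def notify_regulator(incident: SeriousIncident, event: str) -> None:
    """Hypothetical reporting procedure to the appropriate regulatory body."""
    print(f"Reporting {incident.name} to the market surveillance authority: {event}")

def report_if_serious(
    event: str,
    classify: Callable[[str], Optional[SeriousIncident]],  # hypothetical classifier
) -> None:
    """Escalate an observed deployment event to the regulator if it is a serious incident."""
    incident = classify(event)
    if incident is not None:
        notify_regulator(incident, event)
```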

The AI Act also builds in several layers of transparency requirements for appropriate oversight. First, the Act imposes transparency requirements between providers and deployers of AI systems to confirm that the AI system is used in accordance with its intended purpose. Providers must train deployers when necessary and provide technical documentation of the AI system and of how its risk management framework operates. Second, providers must also include audit logs in high-risk AI systems to record the AI system’s operations, ensuring that the risk management system and all oversight measures track what is actually occurring at a technical level. Third, the regulatory bodies in charge of external oversight of these systems have the authority (and clearance, when dealing with law enforcement) to request the technical documentation and audit logs so they can meaningfully oversee and review the AI system. Fourth, the AI Act calls for a public database of all high-risk AI systems, including information on the intended purpose, a summary of the results of the FRIA conducted before deployment, the basic functioning and operating logic of the system, and information about the providers and deployers. This database will also include high-risk AI systems used by law enforcement and border control, but any sensitive law enforcement data will be registered in a non-public section of the database. It will allow EU residents to know how and when their rights are being affected and allow regulatory bodies to effectively monitor the market. This database is similar to a proposal by the White House Office of Management and Budget in its draft guidance regarding the Biden Administration’s recent Executive Order on AI, as well as to the U.S. Department of Justice’s publicly available inventory of AI Use Cases.
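
In practice, the audit-logging requirement amounts to recording what the system actually did so that regulators and auditors can reconstruct its behavior later. A minimal sketch follows; the record fields and file format are our own illustrative assumptions, not requirements spelled out in the Act.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditLogEntry:
    """Minimal sketch of an audit log record for a high-risk AI system (illustrative fields)."""
    timestamp: float      # when the operation occurred
    operation: str        # e.g. "inference", "model_update", "human_override"
    input_reference: str  # pointer to the input data involved (not the data itself)
    output_summary: str   # what the system produced or decided
    operator: str         # which component or person triggered the operation

def append_entry(path: str, entry: AuditLogEntry) -> None:
    """Append a JSON line so oversight bodies can later reconstruct system behavior."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example usage:
# append_entry("audit.log", AuditLogEntry(time.time(), "inference", "req-123",
#                                         "score=0.87", "scoring-service"))
```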

Most importantly, the AI Act specifically enshrines private rights of action. It allows anyone whose fundamental rights were affected by an AI system to submit a complaint to the appropriate market surveillance authority. The Act also allows anyone to request an explanation of why the deployer took an action when the decision was based on the output of an AI system and affected the person’s health, safety, legal rights, and/or fundamental rights. A private right of action is key because it allows for widespread enforcement by individuals, who are in the best position to vindicate their rights and can move faster than the large and overtaxed regulatory bodies. Indeed, in the United States, private rights of action have been the most effective method of regulating Big Tech. For example, the Illinois Biometric Information Privacy Act’s private right of action has been central to reining in the misuse and abuse of biometric information across the entire United States, in particular the face prints underlying facial recognition technology.

The Bad

The overall structure and approach of the AI Act make regulating GPAI models and open-source software difficult. The categorization system is based on the intended use of the AI system, but GPAI models are, by definition, unlikely to fall under the high-risk or prohibited categories because they can be used for many different kinds of tasks. Originally, GPAI models and open-source AI systems were not even within the scope of the AI Act. Instead, the Council and Parliament compromised at the last minute to regulate GPAI models with risk tiers based merely on size rather than purpose. All GPAI models face some basic transparency and evaluation requirements, but only sufficiently large ones face extensive regulatory obligations. GPAI, particularly generative AI, is dangerous, but the AI Act barely touches on this issue.

Likewise, open-source software fundamentally breaks the liability chain created by the AI Act’s provider and deployer system. Open-source software is placed on the market (for free or for value) expressly to allow other entities to modify the software and use it for purposes other than those intended by the original creator. The AI Act attempts to address this issue by mandating that anyone who substantially modifies an AI system becomes the new provider of the system, thereby assigning all the responsibilities and obligations that come with the title. However, this break in the chain can remove an AI system entirely from a prohibited or high-risk category (removing any obligations on the part of either provider), create an enforcement nightmare for the regulatory bodies, and relieve the original provider of the AI system of any liability for iterations of the system they created. 

On a practical level, it is unclear how companies that create AI systems will know where their systems fall within the categorization system and what triggers compliance. As written, the burden of deciding whether an AI system is prohibited or high-risk falls to the providers. When given the chance, companies will do whatever they can to avoid regulation, particularly since the AI Act comes with a high compliance burden. Companies will argue that the intended purpose of their AI system does not fall under a high-risk or prohibited category and stick to that argument until courts hold otherwise. This exposes EU residents to non-compliant AI systems until and unless the overtaxed EU regulatory system identifies and penalizes the providers or an EU resident files a formal complaint under the private right of action after their rights have already been infringed.

Next, while there are provisions requiring high-risk AI systems to be overseen by humans, there is no explicit prohibition against automating the risk management systems and other oversight mechanisms intended to serve as risk mitigation. As a time- and cost-cutting measure, companies may turn to automated software to run their risk management systems unless explicitly told otherwise. This will lead to underreported incidents and/or false positives, as seen with other automated oversight mechanisms like public benefit fraud detection software and social media content moderation algorithms. The use of automated systems also increases the risk that the oversight mechanism will fail or experience downtime, leaving periods where the system is not monitored for serious incidents. While humans need not manually review and monitor the whole AI system constantly, human review is crucial to ensuring effective and continuous risk mitigation within the AI system.

The Ugly

The AI Act botched its handling of algorithms related to biometric information, particularly biometric identification systems. While the ban on real-time, remote biometric identification in public spaces for law enforcement purposes is a major step in the right direction, the carveouts swallow the rule. By including a major carveout allowing law enforcement to use the technology in certain circumstances, not banning it for private actors, and relegating non-real-time, remote biometric identification in public spaces to the high-risk category, the AI Act waters down the spirit of the prohibition and allows law enforcement to create a mass surveillance ecosystem under the color of Union law. Finally, biometric verification processes the exact same sensitive data that biometric identification systems do, but it is explicitly exempt from the high-risk categorization.

Biometric identification is the use of biometric data to link a person to an identity, typically using a one-to-many matching algorithm. Condoning real-time, remote biometric identification in public places for any reason condones the creation of a mass surveillance state. By only banning real-time, remote biometric identification for law enforcement purposes, the AI Act allows private actors and even the government (for non-law-enforcement purposes) to construct a mass surveillance ecosystem in which privacy in public is eroded. Private actors may set up facial recognition systems that scan public, live CCTV streams to track individuals in real time. This would accelerate the grave harm companies like PimEyes have brought into the world, expanding capabilities for stalking and harassment. Private entities face fewer checks and obligations on their uses of new technologies beyond basic safety and fitness requirements, and they have less of a legal duty to uphold fundamental rights than government actors do. The ban as written covers law enforcement purposes, not just cases where law enforcement authorities themselves are the providers or deployers, so law enforcement theoretically could not buy this data from private actors. But the fact that this loophole exists means that it will be abused. Even where law enforcement is banned from using this technology, there are recent examples of law enforcement bodies lying to the public and using intelligence software anyway. Law enforcement officials could also use the software in violation of the law without oversight from their superiors.

Moreover, banning real-time, remote biometric identification but relegating non-real-time, remote biometric identification to the high-risk category still allows for a mass surveillance ecosystem, because the data will be collected whether or not law enforcement authorities access it in real time. Facial recognition technology is by far the most common form of remote biometric identification that can be effectively scaled for law enforcement purposes. The CCTV cameras used for this must already be monitoring public spaces before they can be used in conjunction with the regulated facial recognition systems, leading to constant monitoring whether AI systems are involved or not. Furthermore, the distinction between real-time and non-real-time remote identification is vague, giving law enforcement authorities leeway and discretion to use the technology to its fullest capabilities in ways inconsistent with the law. If the infrastructure is in place, law enforcement authorities, including individual officers, will misuse and abuse the systems.

The carveout in the ban on real-time, remote biometric identification in public spaces for law enforcement purposes is overbroad and directly contradicts the spirit of the prohibition. Law enforcement will use any excuse to overcharge individuals to reach the four-year sentence threshold required to use these identification systems. Even if the system is not misused or abused, mass surveillance fundamentally erodes civil rights. To properly engage in the right to assembly and the right to free speech, the government must protect an individual’s intellectual privacy and their right to privacy in public. It is unlikely that the judicial authorities who will be required to sign warrants allowing law enforcement to use this technology will protect these rights, because of the vast deference given to law enforcement officials. The EU is primarily a commercial body and does not have expansive authority to infringe on the member states’ national security powers. Because of this lack of authority, the “necessary and proportionate” standard that the CJEU uses to balance Union-level human rights and member state-level national security interests often gives broad deference to law enforcement authorities’ discretion to execute their duties. However, there has been a trend of CJEU cases in which the EU has encroached on national security interests, including a ruling that limited bulk metadata retention for law enforcement purposes. This uncertainty in the courts means that the only way to reliably guarantee human rights protections at the Union level is to explicitly ban harmful use cases like real-time, remote biometric identification in public spaces without any carveouts. There are other, less invasive means of tracking dangerous individuals than pointing hundreds of cameras at the public square.

Finally, biometric data such as fingerprints and face prints are highly sensitive pieces of data, whether they are processed for one-to-many or one-to-one matching algorithms. Despite this, the AI Act specifically exempts biometric verification, aka one-to-one matching algorithms, from the high-risk categorization. The underlying technology is the same, so the only difference between the two systems is the dataset that the algorithm compares the probe data to (a single data point typically provided by the individual rather than a large dataset controlled by the deployer).  Biometric verification should be regulated at the same level as biometric identification and be subject to at least high-risk obligations.
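
The technical point here is that identification and verification run the same underlying comparison; the only difference is the size of the reference set. The sketch below illustrates this under assumed inputs: the generic similarity function and the 0.8 threshold are illustrative placeholders, not anything specified by the Act.

```python
from typing import Callable, Mapping, Optional

# A similarity function scoring two biometric templates, e.g. an embedding distance in [0, 1].
Similarity = Callable[[bytes, bytes], float]

def verify(probe: bytes, enrolled_template: bytes,
           similarity: Similarity, threshold: float = 0.8) -> bool:
    """One-to-one matching: does the probe match the single template the individual provided?"""
    return similarity(probe, enrolled_template) >= threshold

def identify(probe: bytes, gallery: Mapping[str, bytes],
             similarity: Similarity, threshold: float = 0.8) -> Optional[str]:
    """One-to-many matching: search an entire gallery (e.g. a scraped face database) for the probe."""
    best_id, best_score = None, 0.0
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

Both functions call the same similarity routine on the same sensitive data; `identify` simply repeats it across a deployer-controlled dataset, which is the basis for the argument that verification deserves comparable safeguards.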

The Takeaways  

The AI Act will soon be formally adopted, but most of its provisions will not apply until two years after it enters into force. The intervening two years will give the various governing bodies created by the AI Act, such as the AI Office, time to get set up and will give companies time to comply with the various requirements. While the law is not retroactive and does not apply to AI systems placed on the market or into service before its passage, the Act will still have sweeping consequences for the industry, and several governments are likely to follow suit in passing similar laws.

However, the U.S. should take a different approach from the EU’s. Because the U.S. doesn’t have a strong human rights framework like the EU does, this country should not adopt a harms-based structure. The AI Act relies heavily on a comprehensive privacy law and explicit fundamental rights to privacy, data protection, assembly, speech, and others. The U.S. lacks comprehensive federal privacy legislation, which is vital to successfully imitating the AI Act. EPIC has advocated for comprehensive privacy laws at both the federal and state levels for the past 30 years. In particular, EPIC’s Deputy Director, Caitriona Fitzgerald, has testified in several states in support of strong, comprehensive privacy laws to ensure that U.S. citizens have some semblance of protection until Congress moves on privacy. Privacy legislation is the foundation of the AI Act, and it needs to be the foundation of any future U.S. AI legislation for it to be effective.

The United States needs to take a comprehensive and consistent approach to AI. The Biden Administration issued an Executive Order on AI, and the White House Office of Management and Budget released draft guidance outlining federal agencies’ obligations regarding responsible development, use, and procurement of AI technologies. Both the Executive Order and the draft guidance pull from the NIST AI Risk Management Framework, which echoes some of the oversight provisions enumerated in the AI Act. When assessing what to include in U.S. laws, legislators should consider the following provisions:

  • A private right of action allowing individuals to sue companies for violating the regulation;
  • Regulations and liability for the entity that develops the AI system, entities that deploy the AI system, and any other entity in the supply chain throughout the lifecycle of the AI system (third party contractors, infrastructure support, etc.);
  • A risk management system that analyzes risks to human rights and an explicit requirement for corrective action throughout the lifecycle of the AI system – this should include regular audits and impact assessments;
  • Several layers of oversight mechanisms, including an explicit requirement to design the AI system with the capability of audit logging, appropriate authority and clearance for regulatory bodies, and public dissemination of information about the AI systems (such as the EU database);
  • Data minimization by design;
  • Requirements for developers and deployers to disclose when content is AI generated;
  • Prohibitions on discriminatory or unfair uses of data;
  • Strong cybersecurity protections for sensitive information like biometric data; and
  • A prohibition on social scoring, one-to-many facial recognition systems, emotional recognition systems, non-consensual deepfakes, and other particularly harmful use cases.

[1] For a sampling of our work on emotional recognition, see: Letter to Zoom on Emotion Analysis Software; EPIC Comments to the DOJ/DHS on Law Enforcement’s Use of FRT, Biometric, and Predictive Algorithms; and Comments of EPIC to the Department of Education on Seedlings to Scale.

[2] For a sampling of our work on predictive policing tools, see: Comments to the DOJ/DHS on Law Enforcement’s Use of FRT; Ben Winters, Layered Opacity: Criminal Legal Technology Exacerbates Disparate Impact Cycles and Prevents Trust; EPIC Letter to Attorney General Garland Re: ShotSpotter Title VI Compliance; and EPIC’s ShotSpotter DOJ Petition.

[3] For a sampling of our work on facial recognition, see our page on Face Surveillance and Biometrics.

[4] For a sampling of our work on public benefits, see: Outsourced and Automated and EPIC’s Thomson Reuters’ Fraud Detection System FTC Complaint.

[5] For a sampling of our work on student surveillance, see our page on Student Privacy.
