Summary: What Does the European Union Artificial Intelligence Act Actually Say?
February 23, 2024
Covering everything from ChatGPT to the systems used to build freight truck cabins, the European Union’s proposed Artificial Intelligence Act (“AI Act” or “the Act”) is a piece of tech legislation rivaled in scope only by the GDPR. Its sweeping harms-based structure categorizes artificial intelligence systems by their anticipated risks to fundamental rights, public safety, and public health (prohibited, high-risk, and low- or no-risk systems). Depending on the category, the Act either (1) prohibits the technology from being placed on the market and deployed, (2) establishes mandatory safeguards and legal liabilities across the AI system’s supply chain, or (3) recommends the adoption of a voluntary code of conduct. The mandatory obligations and responsibilities for high-risk AI systems focus on risk management, proper evaluations, and transparency. The use case categorization system doesn’t regulate general purpose AI, generative AI, or open-source models as entirely separate entities from the three major categories. Instead, general purpose AI, generative AI, and open-source models may fit into one of the three original categories, but with some mandatory transparency obligations and slightly modified liability provisions. Furthermore, the Act establishes a regulatory sandbox to temporarily shield certain AI research and development from regulation to promote innovation. The AI Act is a major step in the right direction and will act as a regulatory guinea pig as AI technology continues to evolve.
Note that the AI Act does not yet have an official, public final draft of the text. The AI Act finished its lengthy negotiation phase between the EU Parliament and Council and is now in the final stages of adoption. On January 22, 2024, Luca Bertuzzi, an independent journalist, leaked a post-negotiation draft of the AI Act text, which is the most recent version of the proposal available to the public.[1] This blog post is based on that version of the AI Act.
Below is a detailed summary of the key components of the AI Act.
Who is Regulated?
The AI Act regulates all AI systems that are placed on the market in the European Union (“EU”) or put into service by providers (whether based in the EU or in a third country), as well as deployers of AI systems who have their place of establishment or are located within the EU. If a provider is based outside of the EU, it must establish an authorized representative in the EU to act as a contact point for the appropriate regulatory authorities, perform the tasks mandated under the regulation on the provider’s behalf, and establish liability. Some key terms used throughout the regulation are:
- Providers: A natural or legal person, public authority, agency, or other body that develops AI systems or general purpose AI models (or has one developed on its behalf) and places the system on the market or puts the system into service under its own name or trademark, whether for payment or free of charge. Any entity along the AI system’s value chain can become a provider if it makes a substantial modification to an AI system.
- Deployers: A natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI is used in the course of a personal non-professional activity.
- Importers: Any natural or legal person located or established in the EU that places on the market an AI system that bears the name or trademark of a natural person or legal person established outside the EU.
- Distributors: Any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
- Affected natural persons: Affected natural persons do not have specific obligations under the AI Act, but are discussed in relation to provider and deployer obligations. Affected natural persons may enforce their rights against all entities in the AI lifecycle. Affected natural persons are those who (1) are located in the EU and (2) have their personal data included in training datasets, are subject to an AI system (such as a job candidate being evaluated by a resume screener), and/or are otherwise affected by AI systems.
3(ish) Categories of Regulated Systems
Prohibited AI Systems
It is illegal to place the following discrete list of AI systems on the market, put them into service, or deploy them due to their unacceptable risk to fundamental rights, public safety, and/or public health. Since these technologies are prohibited wholesale, the burden will be on providers to not place the AI system on the market as well as on enforcement bodies to take action when a non-compliant system is placed on the market. This short, discrete list may be amended through procedures discussed below and will generally be reviewed on an annual basis.
- Subliminal Behavior Distortion: Systems that “deploy subliminal . . . or purposefully manipulative or deceptive techniques with the objective to or the effect of materially distorting a person’s . . . behavior by appreciably impairing the person’s ability to make an informed decision thereby causing the person to take a decision that that person would not have otherwise taken” in a manner that causes or is likely to cause significant harm.
- Exploitative Behavior Distortion: Systems that exploit the vulnerabilities of a person or group of persons due to their “age, disability, or a specific social or economic situation” with the objective or material effect of distorting their behavior in a manner that causes (or is likely to cause) significant harm.
- Biometric Categorization to Infer Specific Personal Traits: Systems that use biometric categorization systems to categorize natural persons based on their biometric data to deduce or infer their “race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.”
- Social Scoring: Systems that evaluate or classify natural persons or groups of persons over a certain time period “based on their social behavior or known, inferred or predicted personal or personality characteristics” leading to either detrimental or unfavorable treatment in an unrelated social context and/or detrimental or unfavorable treatment that is unjustified or disproportionate to that social behavior.
- Person Based Predictive Policing: Systems that evaluate a natural person to predict the risk that the person will commit a criminal offense based solely on profiling of their personality traits and characteristics.
- Creation or Expansion of Facial Recognition Databases: Systems that scrape facial images from the internet or CCTV footage in an untargeted manner to expand or create facial recognition databases.
- Emotional Recognition in Schools and the Workplace: Systems that infer the emotions of a natural person in the workplace and/or educational institutions, except for systems that are used for medical or safety reasons.
Finally, the following use case is listed under the prohibited category but includes a major carveout. Therefore, it is legal to put on the market, put into service, or deploy these systems, but only in very specific circumstances. If the following circumstances are not met, then the use of such a system is prohibited.
- Real Time, Remote Biometric Surveillance in Public Spaces by Law Enforcement: Systems that use biometric data to identify individuals in real time (for example, by applying facial recognition technology to a live CCTV feed) in publicly accessible spaces for law enforcement purposes. First, a law enforcement official who wishes to use such a system must engage in a fundamental rights impact assessment and seek authorization from a judicial authority before deploying the system. The system may then be deployed for a limited amount of time in a limited geographic area searching for an already targeted individual as long as it is “strictly necessary” for one of the following purposes:
- A targeted search for specific victims of abduction, human trafficking, and/or missing persons;
- Preventing a “specific, substantial, and imminent threat” to the life or physical safety of natural persons or a “genuine and present or genuine and foreseeable threat” of a terrorist attack;
- Localization or identification of a person suspected of having committed a criminal offense for a certain grouping of criminal offenses listed in Annex IIa of the AI Act that is punishable by at least four years in prison; [2] and/or
- For the purposes of conducting a criminal investigation, prosecution, or executing a criminal penalty for a certain grouping of criminal offenses listed in Annex IIa of the AI Act that is punishable by at least four years in prison.
High Risk AI Systems
High risk AI systems are those that pose significant risk to fundamental rights, public safety, and/or public health but whose risks can be mitigated with adequate safeguards. The AI Act imposes obligations for high risk systems on providers, deployers, importers, and distributors and includes protections for affected natural persons. If an AI system fits under one of the below high risk categories, then the provider of the system must self-certify and submit the appropriate documentation to regulatory bodies and any downstream deployers, importers, distributors, and affected natural persons. For providers, the obligations must be completed before the AI system is placed on the market and continue throughout the system’s lifecycle. For deployers, obligations begin once the technology has been deployed and continue for as long as the entity deploys the system. These use case categories refer only to the intended use of the product. Misuse, even reasonably foreseeable misuse, does not qualify an AI system for high risk obligations and responsibilities.
Categorization as a high risk AI system may be avoided if the product does not pose a significant risk to fundamental rights, public safety, or public health. If the provider believes the AI system is not high risk, then the system must still be registered with the EU and assessed before being placed on the market. The lack of significant risk can be proven by fulfilling one or more of the following criteria:
- The AI system is intended to perform a narrow, procedural task;
- The AI system merely improves the result of a previously completed human activity;
- The AI system analyzes decision making patterns and detects deviations but does not influence or replace human decision making; and/or
- The AI system performs a mere preparatory task in a product that may fulfill a high risk use case.
Exhaustive List of High Risk Use Cases
- Biometrics:
- AI systems that perform remote, non-real time biometric identification. This does not include biometric verification systems whose sole purpose is to confirm a specific natural person is the person they claim to be.
- AI systems used for biometric categorization “according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics.”
- AI systems used for emotional recognition.
- Critical Infrastructure Safety Components:
- AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
- Education and Vocational Training:
- AI systems that determine admission or access or that assign natural persons to educational and vocational training institutions at all levels.
- AI systems that evaluate learning outcomes, including when those outcomes are used to steer the learning processes of natural persons.
- AI systems used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, within the context of education and training institutions.
- AI systems that monitor and detect prohibited behavior of students during tests in the context of education and vocational training institutions.
- Workplace:
- AI systems used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates.
- AI systems used to make decisions affecting the terms of work-related relationships, including promotion and termination; to allocate tasks based on individual behavior or personal traits or characteristics; and to monitor and evaluate performance and behavior in such relationships.
- Public Benefits:
- AI systems used by public authorities or on behalf of public authorities to evaluate the eligibility of a natural person for essential public assistance benefits and services, including healthcare services, as well as systems that grant, reduce, revoke, or reclaim such benefits and services.
- AI systems that evaluate creditworthiness of natural persons or establish their credit score, except for those intended to detect financial fraud.
- AI systems that evaluate and classify emergency calls by natural persons, or that are used to dispatch, or to establish priority in dispatching, emergency first response services.
- Law Enforcement:
- AI systems used by law enforcement, on behalf of law enforcement, or by EU government bodies in support of law enforcement:
- including polygraphs and similar tools;
- to assess the risk of a natural person becoming a victim of a criminal offense;
- to evaluate the reliability of evidence in the course of an investigation or prosecution of a criminal offense;
- to assess or predict the risk of a natural person offending or reoffending when the assessment is not based solely on profiling, personality traits, characteristics, or past criminal behavior; and
- for profiling in the course of detection, investigation, or prosecution of criminal offenses.
- Migration, Asylum, and Border Management:
- AI systems used by competent public authorities as polygraphs or similar tools.
- AI systems used by competent authorities, on behalf of competent authorities, or by EU government bodies in support of competent authorities to assess a risk (security, irregular migration, or health risks) posed by a natural person who intends to enter or has entered the territory of a member state.
- AI systems used by competent authorities, on behalf of competent authorities, or by EU government bodies in support of competent authorities to detect, recognize, or identify natural persons in the context of migration, asylum, and border control. This high risk categorization does not include verification of travel documents.
- Administration of Justice and Democratic Processes:
- AI systems intended to be used by a judicial authority or alternative dispute resolution entities, or on their behalf, to research and interpret facts and the law and to apply the law to a concrete set of facts.
- AI systems intended to be used for influencing the outcome of an election or referendum or the voting behavior of natural persons. This high risk categorization does not include systems where the output is not seen by natural persons, such as tools used to organize, optimize, and structure political campaigns from an administrative point of view.
Obligations of Providers
- Risk Management System: Providers must engage in a continuous, iterative process that is planned and runs through the entire lifecycle of the AI system. The risk management system shall identify and analyze known and reasonably foreseeable risks that the high risk AI system can pose to fundamental rights, public health, and public safety when the system is used for its intended purpose, as well as under reasonably foreseeable misuse. After deployment, the risk management system shall take into account new data from post market surveillance monitoring to identify and mitigate risks. The risk management system will include testing, both before the AI system is placed on the market and routinely thereafter. The provider must adopt appropriate and targeted risk management measures to eliminate risks where feasible and mitigate risks that cannot be eliminated. Instructions and training (when necessary) shall be provided to deployers of the technology to ensure that the risks do not materialize during use.
- Data Governance: Datasets used to train high risk AI systems must be appropriate for the intended purpose of the AI system. The datasets must be sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose of the AI system and the natural persons the AI system will be deployed on. When choosing appropriate datasets, the environment where the AI system will be deployed shall also be taken into account.
- Technical Documentation: Providers must create and maintain sets of technical documentation on the AI systems’ functionality that allow the relevant regulatory authorities to assess the system’s compliance with its obligations.
- Record Keeping: When designing high risk AI systems, providers must allow for automatically recording events (‘logs’) over the duration of the lifetime of the system.
- Transparency from Providers to Deployers: Providers must design AI systems so that their operation is sufficiently transparent to enable deployers to interpret and use the system’s output appropriately. This includes providing instructions to the deployer that include the identity and contact information of the provider; the characteristics, capabilities, and limitations of performance of the system; required human oversight measures; computational and hardware resources needed over the lifetime of the system; and a description of the mechanism that allows users to collect, store, and interpret record keeping logs.
- Human Oversight by Design: Providers shall design and develop the AI system so it can be effectively overseen and stopped by natural persons during the deployment of the AI system.
- Accuracy, Robustness, Cybersecurity: Providers shall design and develop the AI systems in a way that achieves accuracy, robustness, and cybersecurity requirements. Robustness is defined as resilience against misuse. Cybersecurity requirements align with the EU Cyber Resilience Act.
- Quality Management System: During the lifecycle of the system, a provider shall put into place a quality management system that ensures compliance with the AI Act. The system shall include a strategy for regulatory compliance, techniques to be used for design and control of the system, technical specifications of the AI system, systems and procedures for data management (including acquisition, collection, analysis, labelling, storage, filtration, etc.), setting up post-market monitoring systems, handling communications with competent authorities, resource management, an accountability framework, and several other conformity requirements.
- Duty of Information and Corrective Action: When a significant incident occurs, providers shall bring the AI system into conformity, withdraw the system from the market, disable it, or recall the system as necessary. The provider shall inform deployers, authorized representatives, and importers accordingly.
- Nominate an Authorized Representative: If a provider is located outside the EU, then it needs to nominate an authorized representative who shall perform the tasks mandated under the AI Act to ensure the AI system is compliant. The representative will also be the point of contact for the relevant regulatory authorities.
- Fundamental Rights Impact Assessments: Providers of certain high risk AI systems (public authorities, AI systems involved in critical infrastructure, or AI systems involved in public benefits) shall engage in a fundamental rights impact assessment. The assessment must include the following:
- A description of the intended use of the AI product;
- The time period within which the AI product will be deployed;
- The natural persons or groups likely to be affected by the product’s intended use and the specific risk of harm to those people;
- A description of the risk mitigation procedures, including human oversight measures;
- Instructions for deployers on how to use the system appropriately; and
- Instructions on how to take corrective action if such risks materialize during the deployment of the product.
- Conformity Assessments: Providers shall engage in conformity assessments to verify and certify that the AI system is compliant with the AI Act before placing the system on the market or into service.
Obligations of Deployers
- Human Oversight: Deployers shall assign human oversight to natural persons who have the necessary competence, training, and authority to oversee the AI system.
- Review Input Data: To the extent it is under deployer control, deployers shall ensure that input data for the AI system is relevant and sufficiently representative of the intended purpose of the AI system.
- Post Market Monitoring: Deployers shall engage in post-market monitoring. When a significant incident occurs, deployers shall inform providers, importers, distributors, and authorized representatives accordingly.
- Remote, non-real time biometric surveillance: Deployers shall request judicial authorization prior to deployment. If the request is denied, the use shall be stopped with immediate effect. Law enforcement deployers can only use such a system in a way that is linked to a criminal offense, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offense, or the search for a specific missing person. No decision that renders an adverse legal effect on a natural person may be taken solely based on the output of such a system. The use of such a system by law enforcement shall be documented in the relevant police file and be made available to relevant market surveillance authorities. Deployers shall submit annual reports to the relevant market surveillance authorities on the use of non-real time biometric surveillance systems, excluding sensitive operational data.
Obligations of Importers and Distributors
Importers and distributors must generally verify that the AI system is in compliance with the AI Act by verifying conformity assessments, the existence of the appropriate technical documentation, and other obligations.
Low/No Risk AI Systems
If the AI system is not prohibited or classified as a high risk AI system or a general purpose AI system with systemic risk, then it is not subject to obligations under those categories. The AI Act recommends the use of voluntary codes of conduct that resemble the high risk AI system obligations.
General Purpose AI, Generative AI, and Open Source Software
General Purpose AI (GPAI) models
While general purpose AI (GPAI) models and systems are defined separately from AI systems, they are still subject to the same taxonomy of regulation. AI systems are defined broadly as systems that, “for explicit or implicit objectives, infer[], from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” GPAI, on the other hand, is defined more narrowly as a model or system that, “when trained with a large amount of data using self-supervision at scale, [] displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the [model or system] is placed on the market[.]” While this wording is unclear, it seems that GPAI systems are a subset of AI systems. Large generative AI algorithms are cited as a common example of GPAI due to the flexible nature of content creation. Even if the GPAI system does not constitute a prohibited AI system or a high risk AI system, it may still be deemed to pose a “systemic risk” and be subject to certain obligations. A GPAI system is categorized as having systemic risk if it has high impact capabilities that match or exceed “the capabilities recorded in the most advanced general purpose AI models.” The provider can identify its own model as a systemic risk GPAI model, or the EU Commission can designate a model as posing systemic risk.
GPAI systems with systemic risk are subject to obligations similar to those for high risk AI systems, including certain transparency requirements and technical documentation requirements. Notably, these GPAI system providers must ensure that natural persons interacting with, being exposed to, or being subject to a GPAI model are made aware that they are interacting with an AI system “unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of the use.” These GPAI system providers must also put into place a policy to respect EU copyright law, including creating and making publicly available a “sufficiently detailed” summary of the content used for training the model. The summary shall be detailed enough to allow interested parties to enforce their rights, including intellectual property rights as well as rights under the GDPR and other applicable EU law.
Generative AI
Generative AI is not regulated separately from other kinds of AI systems. It is noted as a common example of GPAI models but is only given one unique obligation under the AI Act. Under Article 52(1), providers of AI systems that “generate synthetic audio, image, video or text content” shall mark the outputs of the AI system in a “machine-readable format” to ensure that the outputs are “detectable as artificially generated or manipulated.” This technical solution—commonly described as AI watermarking—is required to be “effective, interoperable, robust and reliable” and should be up to par with the “generally acknowledged state-of-the-art” technical standards. This requirement does not apply when the AI system performs an assistive function for standard editing or does not substantially alter the input data (or semantics thereof) provided by the deployer of the AI system. This marking requirement also does not apply where authorized by law to “detect, prevent, investigate, and prosecute criminal offenses.”
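The Act does not prescribe a particular marking technique; it defers to “state-of-the-art” technical standards. Purely as an illustrative sketch, and not a compliant or robust implementation, the Python snippet below attaches a machine-readable “AI generated” flag to a PNG image’s metadata using the Pillow library. The key names, generator label, and file path are hypothetical; a plain metadata tag like this would not survive re-encoding or editing, so real-world compliance would instead rely on robust watermarking or cryptographic provenance schemes.

```python
# Illustrative sketch only: embeds a machine-readable provenance flag in PNG
# text metadata. This is NOT the AI Act's required mechanism and is not robust.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(image: Image.Image, dst_path: str, generator: str) -> None:
    """Attach a simple machine-readable provenance flag as PNG text metadata."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical key: flags the content as synthetic
    metadata.add_text("generator", generator)   # hypothetical key: records which system produced it
    image.save(dst_path, pnginfo=metadata)


# Usage with a placeholder image standing in for a model's output:
synthetic = Image.new("RGB", (256, 256), color="gray")
tag_as_ai_generated(synthetic, "synthetic_tagged.png", "example-model-v1")
```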
Open-Source Software
Open-source software is similarly not regulated separately from other kinds of AI systems. Since the AI Act is use case based, the obligations on open-source AI systems remain the same based on its categorization as high risk or otherwise, but the question of who the obligations apply to is slightly different. The person, legal or natural, that develops or places onto the market the original version of the AI system retains any and all obligations under the AI Act for that original version. As soon as another person, legal or otherwise, makes a substantial modification to the AI system, then the new person becomes the provider and takes over any and all obligations going forward. The original provider’s technical documentation and reporting duties extend only to the AI system before the modifications occur.
Governance
Amendment Processes
The AI Act builds in mandatory self-evaluation processes to ensure the legislation remains protective as the AI industry rapidly evolves. The following are the most relevant evaluation procedures.
- Once a year: evaluate the need to amend the list of prohibited and high risk AI systems.
- Two years after the AI Act’s application date and every four years thereafter: evaluate the need to extend the area headings, the list of AI systems that need additional transparency requirements, and the effectiveness of the supervision and governance systems.
- Three years after the AI Act’s application date and every four years thereafter: evaluate and review the enforcement structure and take corrective action following the evaluation.
Regulatory Sandbox
To support innovation and research, the AI Act provides temporary exemptions from certain obligations and liabilities for eligible entities during the design and development stages. The member states shall ensure that a competent authority establishes at least one AI regulatory sandbox that is operational within 24 months of the AI Act’s entry into force. These sandboxes will generally foster innovation and facilitate development, training, testing, and validation of AI systems before placement on the market, including testing in real world conditions. The creation, operation, and functioning of the sandboxes is left to the purview of the member states and is not described in detail in the Act.
Creation of New Offices
The AI Act provides for the creation of an AI Board and a scientific panel on GPAI at the EU level. The AI Board will be composed of representatives from each member state, with the European Data Protection Board (EDPB) and the AI Office participating as observers. The AI Board shall advise and assist the EU Commission and member states in order to facilitate the consistent and effective application of the regulation. This includes sharing expertise, harmonizing standards across market surveillance authorities, and cooperating on investigations between authorities. The scientific panel on GPAI shall consist of experts selected by the Commission on the basis of scientific or technical expertise in the field of AI. The panel will advise and support the EU AI Office in implementing the AI Act, alert the AI Office to possible systemic risks, and contribute to methodologies for evaluating GPAI, among other responsibilities.
Enforcement
The individual member states are responsible for creating at least one notifying body and one market surveillance authority, overseen by the EDPB, to enforce the AI Act’s provisions. In addition, the member states will ensure that the market surveillance authorities that oversee law enforcement’s use of prohibited or high risk AI systems have the authority and clearance to appropriately enforce the AI Act. The market surveillance authorities will be subject to the EU’s existing regulation on market surveillance and will have the power to bring AI systems into compliance, request documentation from providers and deployers, prevent a product from being made available on the market, withdraw a product from the market, destroy a product or otherwise render it inoperable, and set conditions for placing a product on the market, among other powers. The member states will lay down rules on penalties and other enforcement measures, including warnings and non-monetary measures. Non-compliance with the prohibition of certain AI systems shall be subject to fines of up to 35,000,000 euros or up to 7% of the offender’s total worldwide annual turnover, whichever is higher. Non-compliance with high risk AI system obligations shall be subject to fines of up to 15,000,000 euros or up to 3% of total worldwide annual turnover, whichever is higher. Supplying incorrect, incomplete, or misleading information to the appropriate regulatory authorities shall be subject to fines of up to 7,500,000 euros or up to 1% of total worldwide annual turnover, whichever is higher.
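To illustrate the “whichever is higher” structure of these penalty ceilings, the minimal Python sketch below computes the maximum possible fine for the prohibited-practices tier. The tier values come from the Act as summarized above; the company turnover figure is hypothetical.

```python
def max_fine(fixed_cap_eur: int, turnover_pct: float, worldwide_turnover_eur: int) -> float:
    """Fine ceiling under the Act's tiers: the greater of the fixed cap and
    the stated percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)


# Prohibited-practices tier for a hypothetical company with EUR 2 billion annual turnover:
# max(35,000,000, 7% of 2,000,000,000) = EUR 140,000,000 ceiling
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```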
[1] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (“Artificial Intelligence Act”), COM/2021/0106(COD) [hereinafter “AI Act”]. The document includes four columns; the first three columns are previous drafts, and the rightmost column contains the negotiated, most up-to-date text.
[2] Annex IIa lists the following applicable crimes: (1) terrorism, (2) trafficking in human beings, (3) sexual exploitation of children and child pornography, (4) illicit trafficking in narcotic drugs and psychotropic substances, (5) illicit trafficking in weapons, munitions and explosives, (6) murder or grievous bodily injury, (7) illicit trade in human organs and tissue, (8) illicit trafficking in nuclear or radioactive materials, (9) kidnapping, illegal restraint, and hostage-taking, (10) crimes within the jurisdiction of the International Criminal Court, (11) unlawful seizure of aircraft/ships, (12) rape, (13) environmental crime, (14) organized or armed robbery, (15) sabotage, and (16) participation in a criminal organization involved in one or more offenses listed above.