PAST EVENT

AI Symposium Recap

21 Sep. 12:00 PM EDT

In Fall 2021, EPIC hosted a symposium on how to make Artificial Intelligence regulation a reality that best protects individuals. The event featured two expert panels, a keynote by Jennifer Lee of the ACLU of Washington, and six flash talks. Videos of each symposium segment are available below along with EPIC-written summaries; related works published elsewhere are linked at the bottom of each summary. EPIC works to ensure regulation of emerging technologies and enforcement of existing civil rights and civil liberties protections. Follow EPIC’s work on AI at epic.org/ai.


The Flaws of Policies Requiring Human Oversight of Government Algorithms 
Ben Green  

Human oversight has become a common mechanism for regulating government automated decision-making. It’s often seen as a panacea that will mitigate the dangers of AI. In the European Union and Canada, it’s a required element for using AI tools and is seen as something that can facilitate fair use of Automated Decision-making Systems (ADS).  

Three approaches to human oversight: 

  • Restricting a narrowly defined set of “solely” automated decisions (General Data Protection Regulation)
    • Flaw: Very few systems are fully automated, making this approach ripe for exceptions
  • Emphasizing human discretion, or “humans in the loop”
    • Flaw: Automation bias – people defer to the outputs of an ADS
  • Requiring “meaningful” human input (EU AI Act)
    • Flaw: Explanations do not improve human oversight, and people often override algorithms in harmful ways

These approaches to human oversight of ADS are currently in use, but do they work? Or do they obscure the problem, precluding real solutions?

Human oversight policies have little empirical support and legitimize use of unnecessary or flawed algorithms. They provide a veneer of legitimacy but preclude real oversight, diminishing accountability for institutional decision makers. The current system allows government contractors to have it both ways: credit the algorithm for success, blame human oversight for harms.

An alternative: increase the burden on contractors and agencies to justify the value of a system, and require empirical support and evaluation.


Auditing Employment Algorithms for Discrimination 
Alex Engler   

Engler says the Federal Trade Commission (FTC) is a great starting point for corrective action on AI bias, but the agency has its hands full. His recommendation centers on the Equal Employment Opportunity Commission (EEOC) sharing some of the enforcement duties. Rather than a Food and Drug Administration (FDA)-style pre-market approval model, he suggests an ongoing market surveillance model similar to Environmental Protection Agency (EPA) inspections. Engler cited a study finding that 55 percent of HR leaders in the US use predictive algorithms in hiring. That percentage is higher for larger companies, which hire more workers. The systems are used at every stage of the hiring funnel: job recommendations, resume analysis, questions, automated interviews, etc.

When considering algorithmic discrimination, will fair and robust AI hiring systems prevail? Do market incentives reward fair AI? Engler’s answer is no, for a few reasons. It is time-consuming and expensive to develop fair AI models, and vendors tend to make the same claims regardless (100% unbiased, 99.6% accurate). The question becomes: how do we reward responsible vendors and punish less careful ones? Engler proposed algorithmic audits, but only if they are independent and adversarial. The market cannot audit itself; if the industry sets the rules for itself, that is not accountability. Nor do audits automatically mean accountability: in finance, for instance, accountability comes from auditors being held legally liable if they commit fraud and from direct government oversight.

Government audits could work: they can be genuinely independent, and discrimination in hiring is already against the law. The EEOC is permitted to conduct qualitative and quantitative interviews. It could take steps to act as an enforcer by (i) developing specific best practices and guidance by model or application type; (ii) implementing regulatory changes, such as removing predictive validity from the Uniform Guidelines; (iii) hiring more data scientists; (iv) encouraging complaints; and (v) using the Commissioner’s charge authority.
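
To make the quantitative side of such an audit concrete, here is a minimal sketch (not from Engler’s talk) of the kind of adverse-impact check an independent auditor or the EEOC might run on a hiring algorithm’s outcomes. The applicant counts are hypothetical, and the 80 percent threshold is the familiar four-fifths rule of thumb from the Uniform Guidelines.

# Illustrative adverse-impact check on a hiring algorithm's outcomes.
# Applicant counts are hypothetical; the 0.8 threshold is the
# four-fifths rule of thumb from the EEOC's Uniform Guidelines.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the algorithm advanced."""
    return selected / applicants

outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 350, "selected": 70},
}

rates = {group: selection_rate(o["selected"], o["applicants"])
         for group, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")

A real adversarial audit would go further, probing vendors’ validity claims and the data behind them, but even this simple disaggregation is the kind of evidence an enforcer could demand.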


Unlawful Until Proven Otherwise: The New Turn on AI Accountability
Gianclaudio Malgieri & Frank Pasquale

Regulators and governments are now stuck in a cycle of chasing down algorithmic harms that are already out of control. This should be flipped to a more ambitious model that requires licenses prior to algorithmic action, allowing governments to shape rather than merely respond to AI applications that impact people.

How can and should this be done?

Licenses are required for driving and for medicine, for example. In both those situations, license applicants are required to demonstrate a level of legitimacy, ability, and baseline acceptability. 

The current state is an ex-post AI regulatory model where, even in the best of cases, the framework is notice-consent-contest; it does not require any demonstration of sufficiency or legitimacy. This proposal would instead require AI providers to assess and justify the fairness of their algorithmic systems under an ex-ante model, in which AI systems posing high risks are prohibited until demonstrated to be safe, effective, and not likely to violate fundamental rights.

Benefits of justification: it goes beyond just considering the cause or purpose of a system, allocates the burden appropriately, and moves past the transparency fallacy without requiring a highly technical explanation.

Proposal: illegality by default for AI systems used by very large firms or on very large numbers of people, until the provider justifies the system by explaining why and how it respects core legal principles.


Regulating Face Recognition to Address Racial and Discriminatory Logics in Policing
Sam Andrey, Sonja Solomun, & Yuan Stevens

The Canadian government’s use of Clearview AI was possible because of a gap in the law. The federal police were taking advantage of under-regulation in federal privacy laws. As a result, the police used third-party technology regardless of whether the data was lawfully obtained. Stevens outlined three concerns: 1) inaccuracy and the exacerbation of discriminatory decisions by police, 2) the impact on constitutional rights, and 3) the legitimacy of erroneous biometric databases.

A system’s accuracy depends heavily on its training data. Work by Joy Buolamwini and Timnit Gebru shows that automated face recognition is far less accurate for Black and East Asian persons. Stevens also questioned whether the government should be able to build databases of our faces in the first place. In Canada, the Privacy Act applies only to certain federal government bodies and generally provides only access and correction rights. Furthermore, the Office of the Privacy Commissioner is an ombudsperson and cannot render enforceable decisions.
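
As a rough illustration of what that disparity looks like when a system is evaluated, the sketch below reports error rates separately for each demographic group rather than a single headline accuracy number, in the spirit of the Gender Shades audit. It is not drawn from the talk; the records and group labels are hypothetical.

# Disaggregated accuracy check: report error rates per demographic group
# instead of one overall accuracy figure. All records are hypothetical.
from collections import defaultdict

# Each record: (demographic_group, predicted_match, true_match)
results = [
    ("darker_skinned_female", False, True),
    ("darker_skinned_female", True, True),
    ("darker_skinned_male", False, True),
    ("darker_skinned_male", True, True),
    ("lighter_skinned_female", True, True),
    ("lighter_skinned_male", True, True),
]

counts = defaultdict(lambda: {"errors": 0, "total": 0})
for group, predicted, actual in results:
    counts[group]["total"] += 1
    counts[group]["errors"] += int(predicted != actual)

for group, c in sorted(counts.items()):
    print(f"{group}: error rate {c['errors'] / c['total']:.0%} over {c['total']} samples")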

There are three jurisdictions that have begun to address the harms related to automated face recognition.

First, the European Union’s approach is instructive. Two data protection oversight bodies in the EU issued a call in June 2021 for a ban on the use of automated recognition of human features. The GDPR has a general prohibition on the processing of biometric data, with certain limited exceptions. It also grants people the right not to be subject to automated profiling or decisions based solely on automated processing when those decisions produce legal or similarly significant effects. The Data Protection Law Enforcement Directive also generally provides that no police decisions based solely on automated decision-making (ADM) tools can rely on biometric data unless rights-protecting safeguards are in place. Data Protection Impact Assessments (DPIAs) are also a required form of monitored self-regulation and risk assessment for acceptable systems, requiring:

  • A description of the processing operations (e.g., algorithms in question) and the purpose of processing;
  • Assessment of the necessity of processing in relation to purpose;
  • Assessment of the risks to people’s rights and freedoms;
  • The measures an entity will use to address these risks and demonstrate GDPR compliance, including security measures.

Still, DPIAs under the GDPR are not required to be released to the public. Over 100 human rights groups in Europe are also demanding significant improvements to the proposed EU AI Act in order to make it compliant with human rights law.

Second, Illinois passed what is heralded as one of the strongest biometric laws for the private sector. The Biometric Information Privacy Act applies only to private actors. Companies that possess biometric identifiers or information must develop public policies for retention, must not profit from biometric information, must not disclose the information without consent or a legal requirement to do so, and must store, transmit, and protect the information with a reasonable standard of care. An example of the law’s effectiveness is Facebook’s $550 million (later raised to $650 million) class action settlement for violating disclosure requirements. Similar claims have been filed against Microsoft and Amazon.

Third, Massachusetts enacted a law in 2021 governing police use of face recognition and other remote biometric recognition systems. Police must obtain a warrant before conducting any face recognition search, except where there is a reasonable belief of an emergency involving a substantial risk of harm to an individual or group of people. Police are also prohibited from using automated face recognition provided by a third party: only the state police, the FBI, or the Registry of Motor Vehicles can perform a search. Police must also submit detailed documentation of each search to the public safety office, which shares aggregated information with the public.

Given these legal developments elsewhere in the world, Stevens, Solomun, and Andrey urged lawmakers in Canada to consider the following recommendations for more effective data protection regulation related to face recognition: 

Recommendation 1: Prohibit the collection, use, and disclosure of facial information for the purpose of uniquely identifying an individual through automated decision-making systems. At a minimum, prohibit uniquely identifying a person in real time. This may require a cohesive privacy approach that spans the public and private sectors. If regulation that allows exceptions to this general prohibition is ultimately enacted, permission must be sought before any face recognition search is conducted (and even then only in life-threatening situations), and human rights safeguards must be imposed. These safeguards may include a warrant requirement, a prohibition on using third-party services, and a requirement that searches be performed only with written permission and through government systems (for example, through the registry of motor vehicles).

Recommendation 2: Provide recourse for violations of privacy and related rights. These should include: the right to meaningful explanation, the right to contest decisions, the right to meaningful human intervention, and the right to freedom from discrimination. 

Recommendation 3: All automated decision-making systems in use should generally undergo impact assessments on a continual basis. This would also apply to third-party systems, if such systems are permitted at all, which should be evaluated against baseline standards (e.g., the Gender Shades study by Buolamwini and Gebru).

Recommendation 4: Maintain a public register for all automated decision-making systems deployed by law enforcement in Canada. The register should include publicly shared results of automated decision-making impact assessments, including transparency and accountability requirements in certain cases.  

Andrey closed by echoing the words of the Citizen Lab authors of To Surveil and Predict (Robertson, Khoo, and Song): there is reason to remain deeply skeptical that algorithmic policing technologies, including automated face recognition systems, can be used in a manner that does not discriminate against or otherwise unjustly impact individuals under equality law.


A NEPA for AI
S. Scott Graham

The United States should establish an FDA for algorithms. Just as the FDA must approve a drug for safety before it can reach the market, there should be a pre-market audit for algorithms. Graham quotes Andrew Tutt: “Given the close analog between complex pharmaceuticals and sophisticated algorithms, leaving algorithms unregulated could lead to the same pattern of crisis and response.”

Algorithms, like pharmaceutical drugs, can create risks and inflict harms in ways that are difficult to anticipate. However, companies like Google don’t believe that FDA-type regulation is necessary for algorithms. These companies cite self- and co-regulatory approaches and current laws as ways to curb inappropriate AI use. But Graham argues that Google’s position shows the company is not paying attention to the impact of its products and that there needs to be a regulatory body empowered to address the risks of AI and algorithmic decision-making.

Graham pointed to the National Environmental Policy Act (NEPA) and the government’s Council on Environmental Quality as a good example of a potentially effective approach to AI regulation. He highlighted several sources during his presentation that informed his work and distilled them into a few core recommendations for AI governance: 1) develop new regulatory frameworks attuned to a more expansive sense of possible harms; 2) center community and stakeholder groups in governance and regulatory decision-making; and 3) increase regulatory scrutiny at the earliest stages of product development.

Recommendation 1: A more capacious sense of harms  

Recommendation 2: Community involvement in governance 
For instance, much of the discussion about the governance of AI comes from the most privileged in society. There needs to be robust engagement and community inclusion.  

Recommendation 3: Regulation earlier in the product lifecycle  
We cannot rely only on retroactive enforcement or tort law; there needs to be product review and an opportunity for denial. For instance, in the U.S. you cannot market a drug without FDA approval: the drug must be evaluated for safety and efficacy, and approval is required even before investigational trials begin. NEPA is built on an expansive list of potential harms and is connected to several initiatives. NEPA is unique because it embeds environmental review into already existing, coordinated frameworks. For example, the FDA, U.S. Fish & Wildlife Service, FCC, and USDA all work alongside the Council on Environmental Quality. Similar strategies could be used for regulating algorithms. A NEPA for AI could include regulator-implemented assessments, more robust community initiatives, safeguards against regulatory balkanization, and an approach with real teeth: in short, a “Toothy NEPA for AI.”


Unfair AI
Andrew D. Selbst & Solon Barocas

The FTC has signaled that it would like to address algorithmic discrimination, as suggested by a 2021 FTC blog post. According to Selbst and Barocas, there are good reasons for the FTC to do this, and under its Section 5 unfairness authority the Commission has the power to do so. There are five ways the FTC can make a difference.

  1. Activities: Discrimination law is sector-specific and is generally limited to employment (Title VII), credit (Equal Credit Opportunity Act), housing (Fair Housing Act), and a few other areas. Section 5 unfairness authority, however, is not so limited. Take, for example, a consumer product that uses facial recognition: if the product does not work for a consumer with darker skin, this could be addressed by Section 5 even though it falls outside the scope of existing discrimination law.
  2. Actors: Discrimination law covers the actors that make the ultimate decisions that may be discriminatory; the upstream vendors of technology upon which decision makers rely are not necessarily covered. For example, employers purchasing hiring algorithms from a vendor or a landlord purchasing a tenant screening tool from a vendor are the liable actors, not the vendors themselves. The FTC, by contrast, can go after software vendors who enable discrimination.
  3. The FTC’s advantages as a litigant: Plaintiffs have trouble bringing and winning discrimination claims. The FTC has a few advantages: it can bring claims based on likely harm rather than realized harm, it has the authority and resources to conduct investigations and gather evidence, and it is not bound by contractual constraints such as arbitration clauses.
  4. Alternative notions of fairness: Discrimination law is limited to disparate treatment and disparate impact doctrine. The FTC can rely on other notions of fairness. For example, the FTC can pursue cases of “differential validity,” in which the performance of a product or the accuracy of a decision-making process differs between groups even if it does not produce a disparate impact (see the sketch after this list). The agency has much more flexibility in interpreting the meaning of unfairness.
  5. Evolving standards of unfairness: The ways in which unfairness can manifest in technology are not always clear. Over time, the public will learn more about how these technologies operate and discriminate. Taking inspiration from its approach to data security, the FTC can update its standards over time according to emerging consensus.
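
As a rough illustration of differential validity (a hypothetical example, not from Selbst and Barocas), the sketch below shows two groups that receive identical selection rates, so a conventional disparate-impact comparison looks clean, while the model’s predictions are far less accurate for one of the groups.

# Hypothetical illustration of differential validity: equal selection rates
# across groups (no disparate impact on its face), but much lower accuracy
# for one group. All numbers are made up.

# Each entry: (selected_by_model, actually_qualified)
group_a = ([(True, True)] * 45 + [(True, False)] * 5 +
           [(False, False)] * 45 + [(False, True)] * 5)
group_b = ([(True, True)] * 30 + [(True, False)] * 20 +
           [(False, False)] * 30 + [(False, True)] * 20)

def summarize(name, records):
    selected = sum(1 for chosen, _ in records if chosen)
    correct = sum(1 for chosen, qualified in records if chosen == qualified)
    print(f"{name}: selection rate {selected / len(records):.0%}, "
          f"accuracy {correct / len(records):.0%}")

summarize("group_a", group_a)  # 50% selected, 90% accurate
summarize("group_b", group_b)  # 50% selected, 60% accurate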

Section 5 defines unfair acts and practices as those that: 

Impose a substantial injury on consumers

  • Discrimination is a substantial harm (i.e., it is not trivial or speculative) 
    • Denial of important opportunities clearly meets this bar (e.g., jobs, housing, credit)
  • Discriminatory consumer harms might as well (e.g., not getting what you paid for)
    • Does it depend on how much a consumer spends or how important the feature is?

That cannot be reasonably avoided…

  • Technology harms might be avoidable with proper disclosure and choice, but:
    • Assessments are rarely done, and when they are, the results are a closely guarded secret
    • Consumers lack the knowledge to reasonably avoid these harms
    • Algorithmic monoculture: competitors might be doing the same thing

The costs of which outweigh any benefits to consumers.

  • Details of cost-benefit analysis are not well specified
  • Whose costs? What benefits count? 
    • As far as we can tell, no case law or settlement has spelled it out—settlements just stipulate the test is satisfied as a whole
    • Does not say anything about differential costs and benefits between groups
    • Should countervailing benefits go to the same consumers who are harmed? Or can a minority bear all harm and gain nothing? 

The FTC is well positioned to overcome the limitations of discrimination law and it has the legal authority to do so.


Regulating to Address Racial Logics

Moderated by: Ngozi Okidegbe, Assistant Professor of Law, Benjamin N. Cardozo School of Law

Chaz Arnett, Associate Professor of Law, University of Maryland 
Jessica Eaglin, Professor of Law, Indiana University Maurer School of Law 
Amba Kak, White House Office of Science and Technology Policy
Vincent Southerland, Co-Faculty Director, Center on Race, Inequality, and the Law, NYU Law 


KEYNOTE

by Jennifer Lee, Technology & Liberty Project Manager, ACLU of Washington

Moderated by:  Ben Winters, Counsel, EPIC


Creating Actionable Rights for those Subjected to AI

Moderated by: Alan Butler, Executive Director, EPIC

PANEL:

Margot Kaminski, Associate Professor of Law, Univ. of Colorado Law School
Dr. Rumman Chowdhury, Director of ML Ethics, Transparency & Accountability, Twitter
Rachel Levinson-Waldman, Deputy Director, Liberty & National Security Program, Brennan Center for Justice
Jacob Metcalf, Program Director, AI on the Ground Initiative, Data & Society