COMMENTS OF THE ELECTRONIC PRIVACY INFORMATION CENTER
to the
OFFICE OF THE PRIVACY COMMISSIONER OF CANADA
Regarding the
UPDATE TO GUIDANCE ON HANDLING BIOMETRIC INFORMATION
January 12, 2024
By notices published October 11, 2023, the Office of the Privacy Commissioner of Canada (“OPC”) solicited input on its guidance for both public1 and private2 sector organizations handling biometric information, with the comment period closing on January 12, 2024.3 These guidance documents are intended to update the biometrics guidance first published in 2011, to address evolutions in biometric technology and its use, and to explain how both the Personal Information Protection and Electronic Documents Act (“PIPEDA”) and the Privacy Act intersect with biometric information processing. Pursuant to the request for input on the updated guidance documents, the Electronic Privacy Information Center (“EPIC”) submits the following comments.
EPIC is a public interest research center based in Washington, D.C., established in 1994 to focus public and regulatory attention on emerging privacy and human rights issues and to protect privacy, freedom of expression, and democratic values in the information age.4 EPIC has an extensive history of promoting individual and societal privacy and civil rights interests relating to biometric information processing, both nationally and internationally.5 EPIC has submitted comments on proposed regulations, guidelines, and practices at the state, federal, and international levels; filed amicus curiae briefs in cases addressing biometric information use; and called for a ban on face surveillance.6
EPIC welcomes this opportunity to support the OPC’s extensive work in putting forth clear guidance on the use of biometric information and technologies. We note the particular importance of this effort in light of the broader rise in the development and use of biometric technologies, and we appreciate the OPC’s efforts to counter the serious risks they pose to individuals. The current guidance drafts are well constructed, including clear case examples, incorporating data protection principles, and setting forth what organizations must and should do when processing biometric information. However, we believe some gaps exist. Filling these gaps and clarifying the points listed below would not only aid organizations by clarifying standards for biometric information use, but would also promote public confidence that the OPC is actively protecting individuals’ rights in the face of rapidly shifting technology and industry claims of shifting norms. Broadly, we recommend that the OPC:
- Update both guidelines to specifically address the high risks of bias and discrimination when using biometric information, include measures to factor these risks into required assessments, and mitigate these risks where possible
- Set forth clear requirements on when and how the government can request access to biometric information held or processed by private organizations
- Ban all use of “soft biometrics”
- Address the heightened risks present where AI is used to analyze biometric information or is incorporated into biometric systems
EPIC recommends that both guidelines be updated to include specific mention of potential bias and discrimination when using biometric information
While the draft guidelines currently make passing mention of potential bias problems, particularly when discussing accuracy, the pervasive and serious bias issues present in biometric information use warrant more explicit treatment. Facial recognition technology alone has been demonstrated to misidentify people of color and transgender or non-binary individuals at vastly higher rates than white and cisgender individuals.7 A federal study in the United States concluded that “Asian and African American people were up to 100 times more likely to be misidentified than white men.”8 Independent analysis commissioned by the United Kingdom’s Metropolitan Police found that matches made using the Metropolitan Police’s facial recognition systems were inaccurate 81 percent of the time.9 While the best commercially available algorithms today display little bias in controlled testing conditions, the range in facial recognition products is broad, and even the best testing cannot account for all real-world deployments.10 As input photo quality declines and novel types of images are submitted for facial recognition searches, bias becomes more likely. Further, even where facial recognition is formally unbiased, the systems built around facial recognition can unintentionally enshrine and amplify bias, resulting in heavily biased outcomes.11
The risk of misidentification is highest for population groups that already face outsized societal bias and discrimination. This concern must be addressed when assessing whether biometric information processing is appropriate, either by acknowledging where the risk of discrimination and bias is so high that other methods should be used instead, or by putting mitigating measures in place that substantially reduce the risk of harm, such as mandatory human review of an algorithm’s output before any decision is made based on those results. We recommend that notice of the risk of bias and discrimination in biometric information use be explicitly included in both sets of guidance in the Overview section, in Identifying an Appropriate Purpose, in Limiting Use, Disclosure, and Retention, in Safeguards, in Accuracy, and in Accountability. The risk of bias affects each of these sections substantially and should be specifically noted.
Finally, the “biometric categorization” section in the Overview should note that certain categorizations are extremely high-risk and should either not be performed or should only be performed in exceptional circumstances. For example, sorting individuals by race or gender can lead to additional discrimination unless there is a strong and legitimate justification for doing so. Even where these categorizations are performed accurately, few uses do not carry elements of discrimination and a high potential for misuse that would violate civil liberties.12
EPIC recommends that the OPC set forth clear guidelines addressing the pervasive law enforcement ties to private companies’ biometric systems and establish clear limits and standards
The line between public and private data collection and use is increasingly blurry. It is common for law enforcement to access private sector biometric recognition systems, both through third-party contracts and through voluntary information sharing requests. The draft guidance for public organizations mentions requirements for privacy review before contracting with third-party service providers, which is an excellent step toward addressing this overlap. However, law enforcement and government bodies regularly make informal requests for biometric information or for access to biometric systems held by private companies, evading legal standards that require a legitimate basis and a formal process such as a warrant. The OPC has an opportunity to establish protections against this practice and to make clear under what conditions private companies and law enforcement may appropriately interact.
Canadian law enforcement has expanded its use of biometric technology, such as facial recognition, in recent years. The OPC’s investigation (in partnership with provincial authorities in Quebec, Alberta, and British Columbia) into the RCMP’s use of Clearview AI concluded that Clearview AI’s biometric information collection practices were legally non-compliant and that the RCMP failed to properly assess whether the practice complied with the Privacy Act before implementation.13 Despite this finding and the subsequent suspension of Clearview AI use, the RCMP has continued to use at least two additional facial recognition tools and has failed to make public all of its biometric information processing practices.14 The Minister of Citizenship and Immigration used Clearview AI’s facial recognition software to strip two women of their refugee status – a decision that a federal court ordered reviewed because it may have rested on a misidentification.15
This rise in law enforcement use of biometric information and the lack of safeguards on law enforcement access to privately held biometric information add up to a volatile and dangerous social precedent. Privacy and civil liberties advocates have repeatedly noted that expanded law enforcement use of biometric information could effectively create surveillance states.16 The technology’s embedded problems with accuracy and discrimination add to concerns over improper application and potentially harmful results for individuals.
The OPC can help to directly address this problem by incorporating mandates in both sets of guidelines. First, the guidelines for private companies should explicitly state in the “keep a tight circle” portion that private organizations must require a warrant from law enforcement before sharing biometric information or allowing law enforcement to access biometric information that the private organization holds. Requiring warrants rather than allowing the organizations to comply with mere requests for access will provide better protections for individual information, provide more clarity for private companies regarding how they are legally permitted and required to limit access to biometric information, and encourage law enforcement to establish a reasonable basis for accessing biometric information.
Second, the guidelines for public institutions should include explicit language stating that they must obtain a warrant before requesting biometric information from private organizations. This makes expectations clear for public institutions, provides limitations that keep law enforcement from improperly using its authority to intimidate private organizations into sharing biometric information without a reasonable basis, and may encourage law enforcement to seek out less invasive means of exercising its authority.
EPIC recommends the OPC take note of the scale and scope of harms present where AI is integrated with biometric systems and ban use of biometrics to evaluate behavioral attributes
Artificial intelligence (“AI”) systems, which are becoming endemic to mass data analysis, exacerbate the existing high risks common in biometric information processing. Biometric information may be subjected to algorithmic systems for analysis, resulting in expanded reach, impact, and risk potential. Biometric systems using AI not only face bias and discrimination risks, but also risks due to the speed and scale of AI scanning and the types of processing biometric information may be used for.
AI systems are able to scan biometric information at a speed and scale far beyond human review, which poses serious risks to privacy rights and carries a high potential for misuse. For example, the voice recognition technology used in systems like Alexa or Siri has also been used for years by the NSA to automatically identify speakers through voiceprints and monitor individual speakers across millions of recordings.17 While the NSA claims this capability is used to search for criminals or terrorists, it can be (and has been) easily misused to track individuals such as politicians, whistleblowers, protest leaders, journalists, their sources, and more.18 With billions of biometric data points available for analysis by AI systems, the potential for surveillance and misuse is nearly limitless.
The applications of biometric information processing also pose high risks. Incorporating AI often gives organizations and individuals false confidence that a system has some hidden insight into matters that either require expertise to determine or cannot be accurately determined through biometric evaluation at all. This frequently means evaluating “behavioral attributes,” such as emotional state, mental state, personality traits, moral characteristics, and other generalizable qualities, sometimes referred to as “soft biometrics.”19 For example, systems incorporating AI have already claimed the ability to scan biometrics to determine a person’s emotions,20 evaluate employability,21 identify mental disorders,22 or determine when individuals are drowsy or distracted.23 There have even been claims that biometrics can be used to assess “criminality” through facial analysis – essentially amounting to digital phrenology.24
Many of these claims are simply not possible. There is no facial characteristic that correlates with criminality, and systems claiming to identify such a connection are trained on and extrapolate from datasets created by racist criminal justice systems that disproportionately punish people of color, perpetuating historic injustice and an ongoing racially discriminatory system.25 Evaluations of emotion or employability will misread cultural and neurological variation in expression. As noted in academic rebuttals to the idea of emotion recognition, people regularly outwardly project emotions they are not feeling in order to appear more professional, pleasant, or nonconfrontational.26 Emotion recognition technology assumes the existence of universal emotional expression and a strong correlation between physical expression and actual emotional state – neither assumption is reliable. Further, emotion recognition systems have been shown to hold significant racial bias, often assigning more threatening emotions to Black faces than to white faces, regardless of expression.27
The impacts of combining AI with biometric information and using the results to make serious decisions can be catastrophic. Use of soft biometrics could subject individuals to unjust law enforcement surveillance and harassment, affect employment and housing prospects, impact financial and educational opportunities, and further marginalize communities already facing discrimination.
We recommend that the OPC explicitly ban or strictly limit the use of biometric information for emotion, characteristic, criminality, mental health, or other soft biometrics purposes – including adjusting the language in the “biometric categorization” section of the overview of both guidance documents to remove any mention of emotion recognition as though it were an uncontroversial technology. We further recommend that the guidance address the elevated risks of incorporating AI into biometric information analysis and mandate heightened review, assessment, limitations, and legal liabilities for systems that use AI.
Conclusion
The OPC has clearly put substantial effort into carefully considering how best to guide both private and public organizations in their use of biometric information. We believe that the recommendations listed above would further improve the guidance, providing more substantial privacy and civil liberties protections for consumers. We also believe these additions will benefit the organizations using biometric information by providing clarity and insight into what the OPC expects. These updates would reflect current discussions in biometric ethics and privacy rights and further establish Canada as a leader in human rights protections in emerging technology. To that end, EPIC urges the OPC to (i) update the guidelines to specifically address the high risks of bias and discrimination tied to biometric information use and include recommendations to address these risks, (ii) set forth clear and enforceable requirements on when and how law enforcement can request access to biometric information held or processed by private organizations, (iii) ban all use of “soft biometrics,” and (iv) address the heightened risks present where AI intersects with biometric information. We believe that these actions will strengthen privacy protections, guard against harmful surveillance practices, and aid in mitigating several major harms of invasive biometric systems.
Respectfully submitted,
Calli Schroeder
EPIC Senior Counsel and Global Privacy Counsel