The Rise of Chinese Surveillance Technology in Africa (Part 1 of 6)

May 31, 2022 | Bulelani Jili, EPIC Scholar-in-Residence

The Use of Black Faces in Facial Recognition Systems

Many Chinese tech companies continue to export their facial recognition technologies into African markets while supporting domestic surveillance practices that include the detection of Uyghurs and other ethnic minorities. Facial recognition technology, once framed as a potential remedy for social challenges like crime, is now widely criticized for racial bias and the risks it poses to privacy and civil liberties. These risks and more are clearly present in China's current testing and expansion of facial recognition systems in Africa. The technology's arrival in African countries like Zimbabwe is a mark of China's growing geopolitical footprint and Chinese corporate expansion.

What is facial recognition technology?

Facial recognition is a digitally automated process of comparing images of human faces to determine whether they represent the same individual. The process depends on an algorithm that first detects a face and then rotates, scales, and aligns the image so that every face it is compared against is in the same position. The algorithm also captures qualities like skin pigmentation and eye color. It then compares the detected face against the faces in a biometric dataset, issuing a numerical score that reflects the degree of similarity between the detected face and each candidate.

Crucially, this approach is probabilistic: it aims to identify likely matches rather than certain ones.
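To make the pipeline concrete, the following is a minimal Python sketch of the comparison step. The embed() function here is a hypothetical stand-in for a trained face-embedding model, and the 0.9 decision threshold and toy 8x8 images are invented for illustration; real systems use deep networks and calibrated thresholds.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding model.
    A real system would map an aligned face image to a feature vector
    with a deep network; this stub just flattens and normalizes pixels
    so the example runs end to end."""
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def similarity_score(probe_img: np.ndarray, candidate_img: np.ndarray) -> float:
    """Cosine similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(embed(probe_img), embed(candidate_img)))

def rank_matches(probe_img, gallery, threshold=0.9):
    """Compare a probe face against a biometric gallery and return
    (identity, score) pairs above the threshold, best match first."""
    scores = [(name, similarity_score(probe_img, img)) for name, img in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Toy usage: random 8x8 grayscale arrays stand in for aligned face images.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.random((8, 8)) for i in range(5)}
probe = gallery["person_3"] + rng.normal(0, 0.01, (8, 8))  # noisy re-capture
print(rank_matches(probe, gallery))  # person_3 scores near 1.0; others fall below
```

Because the output is a similarity score rather than a yes/no answer, everything downstream hinges on where the decision threshold is set.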

Identifying Black faces

These systems do not operate perfectly and are, in fact, plagued by inaccuracies and biases, producing false matches that can undermine civil liberties, and failures to match that can lead to denial of access to services or functions. The substantial disparities in accuracy when identifying dark-skinned people have inspired much research and urgent attention from commercial companies. Recent studies show that algorithms trained on biased data produce algorithmic discrimination. For instance, Buolamwini and Gebru have produced extensive work demonstrating race and gender bias in automated facial analysis algorithms and datasets. Purportedly in an attempt to improve accuracy in these areas, companies like CloudWalk, a Guangzhou-based start-up, have entered developing markets like Zimbabwe in part to improve their facial recognition systems. By gaining access to data from a Black population, the company expects its algorithm to become better trained at identifying darker-skinned people.

More to the point, computer vision systems that perform better at identifying dark-skinned people give Chinese companies a comparative advantage over Western competitors.
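As a concrete illustration of how such disparities are measured, here is a small hypothetical sketch that tabulates false match and false non-match rates per demographic group, in the spirit of audits like Buolamwini and Gebru's; the records, group labels, and counts below are invented for demonstration.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted_match, true_match).
# In a real audit these would come from running the recognizer over a
# demographically labeled benchmark dataset.
records = [
    ("darker-skinned",  False, True),   # false non-match: missed a true match
    ("darker-skinned",  True,  True),
    ("darker-skinned",  True,  False),  # false match: accepted an impostor
    ("lighter-skinned", True,  True),
    ("lighter-skinned", True,  True),
    ("lighter-skinned", False, False),
]

def error_rates_by_group(records):
    """Compute false match rate (FMR) and false non-match rate (FNMR)
    separately for each demographic group."""
    t = defaultdict(lambda: {"fm": 0, "impostor": 0, "fnm": 0, "genuine": 0})
    for group, predicted, actual in records:
        if actual:  # genuine pair: same person in both images
            t[group]["genuine"] += 1
            t[group]["fnm"] += int(not predicted)
        else:       # impostor pair: different people
            t[group]["impostor"] += 1
            t[group]["fm"] += int(predicted)
    return {g: {"FMR": c["fm"] / max(c["impostor"], 1),
                "FNMR": c["fnm"] / max(c["genuine"], 1)}
            for g, c in t.items()}

print(error_rates_by_group(records))
```

A large gap between the groups' rates is exactly the kind of disparity that biased training data produces.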

CloudWalk and the Zimbabwean state

The Zimbabwean government, working with CloudWalk, aims to establish a mass facial recognition program, an initiative supplemented by a grant the Guangzhou municipality awarded to CloudWalk. Its stated purpose is to improve administrative and security capacity. The Zimbabwean state has insisted that these technologies would empower it to fight crime and advance its law enforcement ambitions. Yet digital rights advocates have expressed trepidation over the country's poor human rights record and the unwarranted surveillance and collection of citizens' biometric data. Examining how these improvements in the phenotypic and demographic accuracy of facial recognition could be used or abused requires urgent attention to ensure these developments are accountable to the public. Accordingly, studies are also needed to explore how facial recognition systems are employed – and whether any harms these systems pose to citizens' rights can be mitigated.

Huawei and the Ugandan state

Likewise, in Uganda, the Kampala police procured AI-enabled closed-circuit television (CCTV) cameras from Huawei, purportedly to help address the city's growing incidence of crime. Facial recognition systems are part of a wider state-led initiative to conscript technologies to resolve social challenges. State actors tend to justify procuring facial recognition systems in African countries as a vehicle to deliver development and establish security. However, facial recognition technologies also inspire worry among citizens over the atrophy of civil liberties under the guise of national security and development. This tension between promised security and eroded liberties raises critical concerns.

Scholars such as Feldstein and Hoffman posit that China is driving the proliferation of AI surveillance technology and thereby the rise of authoritarianism in Africa. This argument implies a coordinated effort between Chinese private actors and the state to export surveillance technology. While the argument currently lacks robust empirical evidence to establish Chinese intention and coordination, it critically points to how the supply of these technologies exacerbates unwarranted surveillance practices with adverse effects, particularly in political and legal environments with weak checks and balances.

Legislative gaps

This gap between the adoption of novel facial recognition tools and robust legal measures that prevent abuses – along with citizens' inability to provide input on how this technology should be used – allows for rampant exploitation by private companies and state actors in the facial recognition space. When these developments in Africa are not accompanied by robust legal protections, they have the potential to exacerbate problems at the intersection of inequality, race, gender, and policing. False positives and unwarranted searches by facial recognition systems threaten civil liberties.

Crucially, facial recognition technologies do not operate simply as smooth-functioning, well-rationalized systems that deliver socially equitable, highly competent policing and efficient public administration; rather, they are complex systems entangled in broader social challenges. When their deployment is not accompanied by preventative measures, their promise comes at the expense of civil liberties.

Author

Bulelani Jili is a Ph.D. candidate at Harvard University and a Meta Research Ph.D. Fellow. He is also a Visiting Fellow at Yale Law School, a Cybersecurity Fellow at the Harvard Kennedy School, a Fellow at the Atlantic Council, a Research Associate at Oxford University, and a Scholar-in-Residence at the Electronic Privacy Information Center, where he will be writing a series of blog posts on Chinese surveillance products in Africa.
