FTC Targets Rite Aid’s Discriminatory Use of Facial Recognition, Imposes 5-Year Moratorium
December 19, 2023
The Federal Trade Commission announced a settlement with Rite Aid today over the pharmacy’s discriminatory use of facial recognition technology in its stores. Between 2012 and 2020, Rite Aid deployed facial recognition surveillance systems to identify individuals suspected of shoplifting—yet did so without assessing the accuracy or bias of the technology. Rite Aid also deployed facial recognition disproportionately in stores located in plurality non-white neighborhoods.
While the use of facial recognition surveillance can be harmful in any context, Rite Aid failed to implement even the most basic safeguards, validation studies, or trainings for employees required to “enforce” the match alerts issued by the system. As a result, “Rite Aid employees recorded thousands of false positive match alerts,” the FTC explained.
In addition to placing a 5-year ban on Rite Aid’s use of facial recognition, the settlement requires the company to delete any images of consumers collected with the technology and any algorithms developed using such images. Rite Aid must notify consumers when their biometric information is processed by a surveillance system in the future or any action is taken affecting them because of such a system. The company is also required to implement strong data security and provenance practices.
“This is a groundbreaking case, a major stride for privacy and civil rights, and hopefully just the beginning of a trend,” EPIC Director of Litigation John Davison said. “Rite Aid engaged in an appalling program of surveillance, deploying an untested and discriminatory facial recognition system against its own customers. The result was sadly predictable: thousands of misidentifications that disproportionately affected Black, Asian, and Latino customers, some of which led to humiliating searches and store ejections. But it’s important to note that Rite Aid isn’t alone. Businesses routinely use unproven algorithms and snake oil surveillance tools to screen consumers, often in secret. The FTC is right to crack down on these practices, and businesses would be wise to take note. Algorithmic lawlessness is not an option anymore.”
“Companies should not be able to collect data, break the law, and continue to profit from it. Deletion of both the data collected as part of this illegal operation and any algorithms created using that data is the right decision by the Commission and should serve as a warning to companies considering irresponsible use. This enforcement order institutes essential practices such as meaningful notice, independent third-party assessments, and commonsense data deletion practices,” EPIC Senior Counsel Ben Winters said.
Facial recognition systems have been shown to produce biased and inaccurate results that disproportionately affect non-white populations, particularly Black people. Although the FTC did not disclose the particular vendors Rite Aid used, a 2019 National Institute of Standards and Technology study analyzing a majority of industry models found that false positive rates were highest for Black women.
Due to this disparate impact and the inherent threats the technology poses to privacy and autonomy, EPIC has consistently advocated for a ban on facial recognition.
Senior Counsel and Director of EPIC’s Project on Surveillance Oversight Jeramie Scott said, “This case illustrates how the use of facial recognition technology by businesses contributes to the over-policing of communities of color. Rite Aid’s discriminatory face surveillance system led to thousands of misidentifications of people of color and women, and directly led to Rite Aid employees reporting to police that innocent customers had engaged in criminal activity. Undoubtedly, there are other businesses with similarly problematic face surveillance systems. One hopes this case is a wakeup call for Congress because it’s past time for legislators to address the dangers of facial recognition technology.”