Analysis
FTC’s Strong Rite Aid Enforcement Order Is a Warning to Companies Using Biometric AI Systems
February 15, 2024
In December, the Federal Trade Commission announced a settlement with Rite Aid over the pharmacy’s discriminatory use of facial recognition technology in its stores. Between 2012 and 2020, Rite Aid deployed facial recognition surveillance systems to identify individuals it suspected of shoplifting, yet did so without assessing the accuracy or bias of the technology. Rite Aid also used facial recognition technology disproportionately in stores in plurality non-white neighborhoods.
While the use of facial recognition surveillance can be harmful in any context, Rite Aid failed to implement even the most basic safeguards, validation studies, or training for the employees required to “enforce” the match alerts issued by the system. As a result, “Rite Aid employees recorded thousands of false positive match alerts,” the FTC explained. This led to the profiling, harassment, and embarrassment of people simply trying to shop.
This example, while particularly well documented and egregious, is hardly an outlier. AI systems, particularly those used to conduct biometric surveillance, have a long track record of causing racially disparate harms.
In addition to placing a five-year ban on Rite Aid’s use of facial recognition, the settlement requires the company to delete any images of consumers collected with the technology and any algorithms developed using such images. Going forward, Rite Aid must notify consumers when their biometric information is processed by a surveillance system or when any action affecting them is taken because of such a system. The company is also required to implement strong data security and provenance practices.
This enforcement order requires Rite Aid to institute essential measures such as meaningful notice to consumers, independent third-party assessments, and commonsense data deletion practices. As the FTC has made clear through its enforcement actions, as well as through its blog posts and public statements, entities using AI systems that implicate sensitive types of personal data or operate in sensitive contexts must:
- Institute basic data minimization requirements, collecting, keeping, and using only the data necessary for a specific, legitimate purpose;
- Institute data security practices that reflect the risks of unnecessary and invasive data collection and use practices;
- Proactively notify people who are being surveilled or whose data is being processed by an automated system;
- Require specific training and limitations on access; and
- Be consistent, fair, and forthcoming in privacy policies.
The Rite Aid order is the latest in a line of FTC orders that require model or algorithmic deletion, also known as disgorgement. This remedy rests on the principle that a business should not be able to continue profiting and innovating off of models and algorithms created or enhanced with illegally obtained data.
Deletion of products developed from ill-gotten personal data is also required in other recent FTC orders. This includes the Commission’s recent X-Mode consent decree, which requires the data broker to “[d]elete or destroy all the location data it previously collected and any products produced from this data unless it obtains consumer consent or ensures the data has been deidentified or rendered non-sensitive[.]”
Though the path toward a comprehensive AI regulatory framework is a long and slow one, companies using AI should be aware that regulators are not waiting for new laws to pass to exercise their long-standing authority to enforce consumer protection laws.