Age Assurance
Background
To protect children online, various statutory and regulatory proposals recommend estimating or verifying users' ages. This can provide safety benefits but also, depending on the details of implementation, trigger constitutional or policy concerns. EPIC's work focuses on determining when age assurance is beneficial and when it is harmful.
Age assurance is an umbrella term for a wide range of methods used to determine, estimate, or communicate the age of an online user. Age verification is a type of age assurance that evaluates age with a high level of certainty, while age estimation is the process of categorizing an individual within an age range, or over or under a certain age.
While existing laws like the Children’s Online Privacy Protection Act (COPPA) have required certain forms of age assurance for many years, a recent wave of proposed and enacted state and federal legislation aims to protect the privacy and safety of minors online through age assurance provisions. Across these laws, the terms age verification, age estimation, and age assurance are interpreted inconsistently, and the age assurance methods that can be used to comply vary widely.
Many recently enacted platform regulation laws use age assurance as a mechanism to restrict minors’ access to certain platforms, like social media, or to salacious content, like pornography. But age gating is not the only possible use of age assurance. Age assurance can also be used to ensure that platforms provide heightened privacy protections to minors, as in the California Age-Appropriate Design Code (CAADC). Depending on the context in which age assurance methods like age verification are required, tech companies have brought First Amendment challenges to these laws.
EPIC believes that age assurance can be required and performed in a way that is privacy protective and does not violate the First Amendment. While some uses and methods may be less privacy protective or more likely to draw First Amendment scrutiny, laws like the CAADC approach age assurance dynamically. EPIC’s amicus brief in NetChoice v. Bonta devotes a section to demystifying the age assurance provision of the CAADC and clarifying how age estimation functions within the law. First, the CAADC requires covered companies to perform a Data Protection Impact Assessment (DPIA) evaluating how they use kids’ personal data and how those data practices create risks for children. Then, companies have two options: give all users heightened privacy protections, or “estimate the age of users to a level of certainty proportionate to the risk level identified in the DPIA and then apply the heightened privacy protections to the users the company estimates to be children.” While there are many age estimation methods a company could choose to comply with the CAADC, like self-attestation, parental involvement, and use of existing company data, the law does not require companies to verify the age of a user.
There has been increasing demand for technologically advanced, privacy-protective age assurance methods in the U.S. and internationally. From facial age estimation technology to third-party age checks using tokens or zero-knowledge proofs, EPIC will continue to encourage policymakers and companies alike to prioritize the development and use of privacy-protective age assurance methods.
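As a rough illustration of the data-minimization idea behind token-based third-party age checks, the sketch below (Python, using the widely available cryptography package; every class and function name is hypothetical, not any vendor's or statute's actual design) shows an attester issuing a signed "over 18" claim that a platform can verify without ever receiving the user's identity documents or birthdate. Zero-knowledge-proof approaches can prove such a claim while revealing even less, but even this simpler token model conveys the core point: the platform learns only the yes/no answer it needs.

```python
# Hypothetical sketch of a token-based age check built around data minimization.
# Names (AgeAttester, issue_over_18_token, platform_verifies) are illustrative only.
# Requires the third-party "cryptography" package (pip install cryptography).

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


class AgeAttester:
    """A third-party age-check service: it inspects identity evidence once,
    then issues only a minimal signed claim ("over 18: yes/no") to the user."""

    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()

    @property
    def public_key(self) -> Ed25519PublicKey:
        return self._key.public_key()

    def issue_over_18_token(self, is_over_18: bool, nonce: str) -> tuple[bytes, bytes]:
        # The claim carries no name, birthdate, or document data --
        # only the yes/no answer plus a nonce to prevent replay.
        claim = json.dumps({"over_18": is_over_18, "nonce": nonce}).encode()
        return claim, self._key.sign(claim)


def platform_verifies(claim: bytes, signature: bytes, attester_key: Ed25519PublicKey) -> bool:
    """The platform checks the attester's signature; it never sees
    the user's identity documents or exact age."""
    try:
        attester_key.verify(signature, claim)
    except InvalidSignature:
        return False
    return json.loads(claim)["over_18"] is True


if __name__ == "__main__":
    attester = AgeAttester()
    claim, sig = attester.issue_over_18_token(is_over_18=True, nonce="session-1234")
    print(platform_verifies(claim, sig, attester.public_key))  # True
```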
For more on this issue, read EPIC’s amicus brief in NetChoice v. Bonta and EPIC’s comments to the Federal Trade Commission on facial age estimation as a COPPA verifiable parental consent method.
Recent Documents on Age Assurance
Amicus Briefs
- NetChoice v. Bonta (Northern District of California): Whether a legislature may regulate the provision of addictive algorithmic feeds to minors.