There is growing recognition that children can be harmed online. Children regularly have their data extracted and used to target them with ads, to serve them inappropriate content such as pro-eating-disorder or pro-suicide messages, and to expose them to other harms. Children are especially vulnerable because they lack the experience and critical thinking skills needed to recognize and mitigate these risks.
To address the issue, many jurisdictions around the world, including several U.S. states, have begun drafting and passing child-specific platform design laws. These laws take very different forms and fall along a spectrum of restrictiveness. At the more restrictive end is Utah’s Social Media Regulation Act, which requires social media platforms to obtain parental consent before any child under 18 may use the service and to proactively verify the age of every user.
California recently passed a child safety law modeled on a United Kingdom law and less restrictive than the Utah version. Starting in July 2024, California’s Age-Appropriate Design Code (AADC) will require tech companies to assess how their services collect and use children’s data and to refrain from certain harmful uses of that data.
The law uses several tools to achieve this goal. For example, it requires covered businesses to conduct a Data Protection Impact Assessment (DPIA) before offering to the public a new service, product, or feature that is likely to be used by children. The DPIA must identify the purpose of the online service, product, or feature; how it uses children’s personal information; and whether and how the business’s data management practices affect the risk of material detriment to children. Businesses must then develop a strategy to mitigate any risks they identify.
NetChoice, whose members include Google, Meta, Amazon, Twitter, and TikTok, has sued California to block the AADC’s implementation. NetChoice argues that the law restricts what content a company can show on its website and that it therefore violates the First Amendment and is inconsistent with Section 230 of the Communications Decency Act, which prevents tech companies from being treated as the publishers of third-party content.
EPIC’s brief, joined by a coalition of civil society groups, lawmakers, and tech experts, argues that the AADC should be understood as a common-sense regulation of business conduct, not a restriction on speech or an unfair imposition on tech companies. The AADC, like many business regulations across economic sectors, seeks to address market incentives that the legislature believes currently encourage businesses to design and produce harmful products. The AADC “does not require companies to remove or even demote any specific content—as long as they do not use children’s data in a way that violates the law, companies can show users whatever information they like.” The coalition pointed out that courts have “increasingly rejected” tech companies’ attempts to use Section 230 to immunize themselves from their own harmful conduct, such as when they use personal information about users to deny them access to information in violation of anti-discrimination laws or when the design of an app feature causes foreseeable harm. Because California’s new law regulates similar platform conduct, Section 230 does not apply.

The coalition also argued that “the impact assessments required by the AADC are common in regulatory frameworks across the United States and the world.” The coalition wrote that NetChoice’s and its supporters’ “argument that generally applicable privacy regulations, targeted specifically at mitigating harms to children’s privacy, are unconstitutional would undermine numerous federal and state laws and undermine the state’s compelling interest in protecting the privacy of children.”