AI Harm Report: Iowa’s Book Ban Implementation Illustrates How New Tech Enables Bad Policy 

August 31, 2023 | Calli Schroeder and Ben Winters

The rapid spread of generative AI has recently opened a new front: censorship. An Iowa school administrator turned to ChatGPT to help comply with the state’s recent legislative mandate to ban books containing descriptions of sex acts. The administrator had ChatGPT assess 50 commonly assigned books to determine whether each described a sex act. Based on the results ChatGPT generated, 19 of those books were removed from student access.

This is just the latest example of how generative AI and its boosters can facilitate dangerous public policy, reinforcing the faulty belief that complex work requiring personal judgment—assessing context, propriety, artistic merit, and more—can be accurately performed by an algorithm. In this context, ChatGPT provides a handy list of books to ban while shifting the workload and accountability away from humans. And if those lists are wrong, what are the harmed communities and authors going to do—argue with a computer?

Book banning, of course, is a problem that predates and extends well beyond bans based on automated or artificial intelligence input. In the Counter-Reformation era, the Roman Catholic Church attempted to halt unauthorized reading so that people would receive information solely from priests and be less likely to interpret texts themselves. In the 1930s, the Nazi Party burned countless works by Jewish, communist, and other authors. Historically, books were banned to keep the public uninformed so they would not question existing religious or political structures.

Nowadays, books are banned for… well, pretty much the same reason. Under the guise of protecting children from “harmful” or “offensive” material, schools and libraries are choosing or being forced to remove books with material deemed dangerous. The criteria for what makes information “dangerous” vary. Iowa is far from the only state banning books recently—school districts in Texas, Florida, Utah, Missouri, South Carolina, and many more have introduced bans this year as well. Often these policies effectively remove minority voices or block access to challenging ideas. Frequent targets of bans include content addressing sexuality (often including the mere existence of gay, lesbian, transgender, or non-binary characters), violence (often including racial violence), and profanity.

The forces behind these bans seem to believe that children (and sometimes adults) should be prevented from interacting with uncomfortable or new ideas entirely rather than having the opportunity to engage with or question the material. The unfortunate result will be fewer adults who have encountered difficult issues and unfamiliar perspectives in a safe setting where they are able to discuss and explore—and therefore fewer adults equipped to engage with these issues and perspectives in the real world. Book banning is censorship. It is antithetical to the concept of free speech, it infantilizes the people it claims to protect, and it denies them the opportunity to think critically about the banned content. It does not keep weighty issues or differing perspectives from existing in the real world; it merely silences those attempting to engage with and educate others. It violates individual privacy and autonomy, extending state control over education and individual choice. Freedom of speech under the First Amendment includes the right to pursue information and ideas without government interference. Privacy rights extend to what one reads on one’s own devices and in one’s own home. Book bans infringe on those rights.

Generative AI takes the already-bad practice of book banning and makes it worse. 

Book banning is bad enough when humans decide which content should be censored. Leaving that determination to generative AI opens the door for a program—one that cannot accurately read context—to arbitrarily limit what we are able to access and engage with. As previous reporting has pointed out, these systems are often inaccurate. To see how common these inaccurate results are, we tested ChatGPT ourselves by asking it whether various works of classic literature included a description of a sex act. We asked about only ten commonly assigned books. Even in that small sample, ChatGPT claimed that To Kill a Mockingbird (a classic novel that largely centers on a sexual assault case in a small town) did not include a sex act, and it invented a fake author for an existing book.
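
For readers who want to run a similar spot check, the test amounts to little more than the loop below. This is a minimal sketch, not our exact script: it assumes the pre-1.0 `openai` Python client and an API key in the environment, and the prompt wording and book list shown are our own illustration.

```python
# Minimal sketch of the spot check described above: for each book, ask the
# model whether it contains a description of a sex act and print the answer.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY in the
# environment; the titles listed are illustrative, not our full test set.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

BOOKS = [
    "To Kill a Mockingbird by Harper Lee",
    "The Handmaid's Tale by Margaret Atwood",
    # ...the rest of the ten commonly assigned titles
]

for book in BOOKS:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": (
                    f"Does {book} contain a description of a sex act? "
                    "Answer yes or no, then briefly explain."
                ),
            }
        ],
    )
    print(book, "->", response.choices[0].message["content"])
```

Because the model samples its output, running the same prompts twice can return different verdicts for the same book, which is one more reason a chatbot’s answers are a poor basis for pulling titles off shelves.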

By its nature, ChatGPT will provide some answer in response to an open-ended question like “what YA books contain sexual content?” But whether that response is factually accurate is a separate question, and whether the titles ChatGPT identifies should be removed from school libraries is another matter still. ChatGPT is a chatbot built on a large language model, not a fact engine or a system capable of making human value judgments. Indeed, the tendency of ChatGPT-style tools to produce false outputs has already been at issue in several court proceedings, where generative AI wholly invented court cases when attorneys asked it for material supporting their arguments.

Public perception seems to be that because an AI system generates a response to a question, that response must be accurate—indeed, a recent report on generative AI users found that the most common use of ChatGPT-style systems is as an information search engine, outstripping even text generation. The purveyors of these tools have done little to combat this misperception.

While generative AI may be new, its harms are not. AI scholars have warned for years about the problems that large AI models can cause. These longstanding problems are exacerbated by the lack of limits on the industry’s pursuit of profit, its opacity, and its concentration of power. The widespread availability of (and hype surrounding) generative AI systems has led to increased use in nearly every area and industry, often without consideration of the damage these systems can do. As they become more widely adopted, their harmful impacts—whether caused by bias, a tendency to fabricate information, or other inherent flaws and limitations—will only grow.

So where do we go from here? Regulations limiting where and how these systems can be applied and mandating audits of training data and outputs from these systems could counter some of these harms. However, industry, policymakers, and the public must also understand that applying an AI system to a bad policy, like book banning, won’t make it good—only automate it.  
