
Generative AI and Elections: The Approaching Train Wreck

September 18, 2023

By Calli Schroeder and Ben Winters

If you already find it difficult to sift through election ads and campaign speeches to get to the truth, just wait for a 2024 election season filled with AI-generated ads. Misinformation and disinformation already exist in our elections, but generative AI will exacerbate the problem, leaving voters flooded with information and unable to sort out what is real. Generative AI will make it much easier for campaigns, PACs, and individuals to create content that is false, misleading, biased, inflammatory, or dangerous. As generative AI tools grow more sophisticated, producing this content will become quicker, cheaper, and easier. But this is not inevitable – the technology is evolving quickly, but technology, election, and civil rights experts have proposed several possible approaches to counter these risks.

Misinformation (sharing false information unknowingly) and disinformation (sharing false information knowingly) are common methods of election interference. During the 2016 election, a voter shared a tweet about a “rigged” voting machine. It was later revealed that the voter had made a mistake while following the machine’s instructions, and that misunderstanding then spread as though it were fact – a textbook case of misinformation. Contrast that with the case of Jacob Wohl and Jack Burkman, who targeted minority voters before the 2020 election with robocalls falsely claiming that voting by mail would make them targets of police and credit card companies – a knowingly false claim intended to stop people from voting.

This is a global issue. From false articles in France claiming Emmanuel Macron was funded by Saudi Arabia, to ads in the UK claiming that voting for Brexit would free up “350 million pounds a week” for the NHS, to repeated lies spread through social media and news outlets in the U.S. that the 2020 election was illegitimate, misinformation and disinformation have already affected votes and elections worldwide. With nearly 50 elections set to take place around the world in 2024, the problem extends far beyond the borders of the U.S.

Generative AI makes this problem exponentially bigger, faster, and more convincing. 

Generative AI is built to create new content, and it works across media – systems can produce audio, image, video, or text outputs. Generative AI might draft a realistic-seeming news article that cites specific (but made-up) examples of bad or illegal behavior by a candidate, or a text message that reads like it came from a respected colleague, telling you that a candidate’s policies will result in a huge windfall for your workplace when that isn’t the case. Audio, image, and video generators can create extremely realistic-looking and -sounding clips of candidates doing or saying horribly offensive or out-of-character things (many far less benign than the recent viral example of Biden, Obama, and Trump as some very intense gamers).

Further, a high volume of AI-generated content may provide cover when candidates actually do or say offensive things – if we can’t tell what is real and what is generated, how can we tell when a candidate is actually in a scandal? Finally, tools like robocallers and robotexters can spread this content at an absurdly high volume and can draw on information from data brokers and services like Cambridge Analytica to target the people most likely to be influenced.

Because generated content is created at high volume, it flows back into the information ecosystem, spreading even wider and convincing more people that the false information is true. Many generative AI systems continuously scrape online content, including content that has been generated by other AIs. As false content proliferates, it becomes part of the training data that future models ingest and treat as fact, and those models then push out more content that incorporates and seemingly validates the falsehood. Corrections generally cannot be published at a high enough volume and speed to adequately counter it. The simplified simulation below illustrates the dynamic.
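As a rough illustration only – every number here is an assumption chosen for the sketch, not a measurement – consider a toy model of the feedback loop: bad actors seed false documents each cycle, models trained on the scraped corpus echo falsehoods in proportion to their share of the training data, and corrections arrive far too slowly to offset either source.

```python
# Toy simulation of the AI-content feedback loop described above.
# All constants are illustrative assumptions, not real-world measurements.

total_docs = 1_000_000         # documents in the scraped training corpus
false_docs = 10_000            # false documents circulating at the start
INJECTED_PER_CYCLE = 20_000    # new falsehoods seeded by bad actors each cycle
SYNTH_PER_CYCLE = 200_000      # model-generated documents scraped back each cycle
CORRECTIONS_PER_CYCLE = 2_000  # false documents debunked or removed each cycle

for cycle in range(1, 9):
    false_share = false_docs / total_docs
    # Models repeat falsehoods roughly in proportion to their share of
    # the data they were trained on (a simplifying assumption).
    echoed = SYNTH_PER_CYCLE * false_share
    false_docs += INJECTED_PER_CYCLE + echoed - CORRECTIONS_PER_CYCLE
    total_docs += SYNTH_PER_CYCLE + INJECTED_PER_CYCLE - CORRECTIONS_PER_CYCLE
    print(f"cycle {cycle}: {false_docs:,.0f} false docs "
          f"({false_docs / total_docs:.1%} of corpus)")
```

Even with fact-checks removing documents every cycle, the false share of the corpus climbs steadily, because each generation of models re-amplifies what the last one emitted.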

What Might Generative AI Do

It’s impossible to predict all of the ways that AI could be used to spread misinformation and disinformation during an election cycle, but here are some current and likely future scenarios that lawmakers and companies must act to mitigate now:

  • Earlier this year, a politician in India cast doubt on real audio clips of himself by claiming they were AI-generated. Because AI can convincingly fabricate audio, he created enough uncertainty about whether the clips were authentic to avoid accountability for what he said.
  • A tool like ChatGPT can create unique, natural-sounding text for text messages, scripts, and emails. This can easily be used to write (and then spread) messages that support scams, provide false information, perpetuate conspiracies, or otherwise create misinformation and disinformation. Because these AI systems cannot accurately determine whether the information in their training sets or in the prompts they receive is true, the risk increases. For example, a user may ask an AI system to draft an email requesting donor money to “fight election fraud.” The AI does not know whether election fraud is actually occurring or is a viable concern – it simply generates the requested script, and may invent “examples” of fraud if doing so supports the requested content.
  • AI systems and the content they generate can be combined with targeted lists of people and their contact information purchased from data brokers. This would enable bad actors to target financially or otherwise vulnerable groups – such as low-income people, the elderly, and minority communities – with content specifically tailored to manipulate them based on fears, stereotypes, or other individual characteristics.
  • Bad actors can then use robotext and robocall programs to blast out these messages at high volume and with broad reach, creating and tweaking them with a tool like ChatGPT. AI systems that write text indistinguishable from a native English speaker’s may also help foreign agents interfere in elections more effectively, since the content they use to target voters will no longer obviously appear to come from foreign actors.
  • Generative AI tools can create videos, complete with audio, of an authoritative voice – Barack Obama, Donald Trump, or someone the target knows personally – appearing to relay a message containing misinformation or disinformation.
  • Bad actors can tailor the content of each message using generative AI, both to more effectively manipulate different groups and to evade spam filters that would otherwise identify widely repeated messages and block some spam.
  • Even where AI providers adopt use policies prohibiting these actions, those policies may go unenforced, and the actual operation of the service may not reflect them.

What Can We Do

Though AI interference in elections is a high risk, both locally and globally, it is not inevitable. The technology is evolving quickly, but technology, election, and civil rights experts have proposed several possible approaches to counter these risks. Here’s how different actors can start to address the problem right now: 

Voters can support and participate in third-party efforts to check, expose, and track misinformation and disinformation. Researchers should prioritize developing and implementing effective technical measures that identify and capture AI-generated disinformation, both to illustrate the breadth of the problem and to alert voters and stop the spread. Anyone who comes across election-related misinformation or disinformation should report it to their state Attorney General, the Federal Trade Commission, the Federal Election Commission, or the Consumer Financial Protection Bureau. Finally, Public Citizen has put out a petition to update Federal Election Commission rules to require disclosure of AI-generated campaign materials and to prohibit several high-risk types of false statements. You can submit a comment to the Federal Election Commission prior to October 15 to support the petition.

Legislators and regulators proposing bills to regulate AI must include specific measures that address the risks generative AI poses to elections. For example, any proposed regulation should ban deceptive acts and practices beyond those of candidates themselves, so that AI systems are prevented from spreading information intended to impede or prevent someone from exercising the right to vote, and should include a private right of action against the systems and individuals that do so.

Finally, companies developing and selling these AI systems must limit API access, be transparent about who has access to the systems and the training data sets, establish regular review and quality-testing requirements to ensure that training data is accurate and properly collected, and integrate use policies into the systems themselves so that using them to generate misinformation and disinformation is specifically barred. A minimal sketch of what such an integrated policy gate might look like follows.
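The sketch below is hypothetical: the flag names, the `violates_policy` classifier, and the `model_generate` callable are placeholders standing in for a real provider's policy engine and model API, not any actual product's interface. The point it illustrates is architectural – the policy check runs inside the generation pipeline, on both the request and the output, rather than sitting in an unenforced terms-of-service document.

```python
# Hypothetical sketch of a use policy enforced inside the generation
# pipeline itself. Flag names and detection logic are placeholders.
from typing import Callable

def violates_policy(text: str) -> set[str]:
    """Placeholder for a real policy classifier; returns triggered flags."""
    flags = set()
    lowered = text.lower()
    # Toy rule standing in for a trained classifier: flag content echoing
    # the known voter-suppression robocall narrative described earlier.
    if "voting by mail" in lowered and "police" in lowered:
        flags.add("voter_suppression")
    return flags

def generate_with_policy_gate(prompt: str,
                              model_generate: Callable[[str], str]) -> str:
    # Refuse disallowed requests before spending compute on them.
    if violates_policy(prompt):
        return "Request declined: prohibited by election-integrity use policy."
    output = model_generate(prompt)
    # Screen the output too: a benign-looking prompt can still yield
    # disinformation, so the gate must run on both sides.
    triggered = violates_policy(output)
    if triggered:
        return f"Output withheld: policy flags {sorted(triggered)}."
    return output

if __name__ == "__main__":
    fake_model = lambda p: f"Generated reply to: {p}"  # stand-in model
    print(generate_with_policy_gate("Draft a volunteer thank-you email",
                                    fake_model))
```

Pairing output-side screening with request-side refusal is the design choice that distinguishes an integrated policy from a paper one: it leaves an auditable record of what was blocked and why.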

Voters and politicians alike are worried about AI election interference. A recent poll revealed that over half of Americans expect misinformation spread by AI to impact who wins the 2024 presidential election. Last week alone, there were four congressional hearings on AI. It is time for Congress to move past hearings and enact protections that safeguard election integrity before it is too late.
