By Jordi Calvet-Bademunt

As the US and EU shape their AI frameworks, they should consider lessons from recent experiences. The fear-driven narrative surrounding AI and the most recent elections, where AI-created content had limited impact, should caution policymakers against rushing ahead on laws that may unintentionally undermine democratic values. Policymakers crafting the forthcoming US Action Plan, state legislatures, and the authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring researchers have access to data.

The AI Disinformation Narrative

Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, The Washington Post warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” The Associated Press shared similar concerns, warning that “AI could supercharge disinformation and disrupt EU elections.” Many other reputable organizations echoed these warnings, which have been circulating for years. Researchers have found that news consumption appeared linked to voters’ heightened concerns about AI’s impact on elections.

Public concern matched the media warnings. In the United States, a Pew survey last September found that 57% of adults across political divides were very concerned about AI-driven misinformation about elections. Similarly, 40% of European voters feared AI misuse during elections. European Commission Vice President Věra Jourová vividly described AI deepfakes of politicians as “an atomic bomb [that could] change the course of voter preferences.”

Several AI-generated incidents did emerge. Up to 20,000 voters in New Hampshire received robocalls with an AI-generated voice mimicking President Biden, falsely discouraging voter participation. Former President Donald Trump circulated an AI-generated image of pop star Taylor Swift endorsing him, prompting Swift to respond on social media to correct the misinformation.

Yet, research suggests the fear-driven narrative about AI in 2024 was not backed up by evidence. The Alan Turing Institute found no significant evidence that AI altered results in elections in the UK, France, the rest of Europe, or the US. Similarly, Sayash Kapoor and Arvind Narayanan of Princeton concluded, through their analysis of all 78 cases from the WIRED AI Elections Project, that the feared “wave” of AI-driven disinformation was far less extensive and impactful than anticipated. Half of the analyzed AI-generated content was non-deceptive, while deceptive content mostly reached audiences already predisposed to believe it.


Jordi Calvet-Bademunt is a Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.