By Jacob Mchangama and Jordi Calvet-Bademunt

The Trump administration is waging a public crusade against so-called “woke AI,” emphasizing the need for “neutral models” that engage in “truth seeking” instead of promoting certain (left-leaning) biases. We recently warned that such actions pose significant risks to free expression.

Many Europeans are likely to roll their eyes at such a policy, rightly pointing out how easily this approach could be weaponized to promote particular viewpoints. In other words, they reject Trump’s premise that AI platforms must be reined in to keep them from promoting a particular political agenda.

That is, until it comes to EU elections, apparently.

In October, the Dutch Data Protection Authority warned that AI chatbots are “unreliable and clearly biased” when offering voting advice. According to the regulator, several systems — including those developed by OpenAI, xAI, and Mistral — produced skewed recommendations favoring certain parties ahead of national elections.

These warnings about political bias in AI-generated content might be aimed at different outcomes, but they echo many of the Trump administration’s underlying complaints — that AI platforms are not producing the type of political content that government officials think they should.

Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).

Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.