FOR IMMEDIATE RELEASE

NEW REPORT: AI LAWS AND CHATBOTS FACE A GLOBAL FREE SPEECH TEST

Study ranks countries’ AI policies and leading chatbots by how they handle lawful but controversial speech

NASHVILLE, Tenn. — October 17, 2025 — Does your chatbot respond to prompts about controversial topics — or dodge them? That’s one of the key questions investigators asked in a forthcoming global study from The Future of Free Speech (a think tank at Vanderbilt University) that evaluates how national AI laws and leading chatbots shape the space for lawful expression in the AI era.

The report, That Violates My Policies: AI Laws, Chatbots, and the Future of Expression, ranks six major jurisdictions on how their AI laws protect freedom of expression and tests eight of the world’s most popular chatbots on their willingness to engage with controversial but lawful prompts.

“Debates about partisan bias miss the bigger story: whether AI will help people access competing viewpoints on lawful but controversial topics,” said Jordi Calvet-Bademunt, Senior Research Fellow at The Future of Free Speech and one of the report’s lead directors. “In our tests, some models are starting to engage instead of evade, but the rules shaping those choices are still vague and shifting. If democracies want a pluralist AI ecosystem, both lawmakers and companies need clearer, rights-based guardrails.”

As AI systems become the mediating layer for public discourse, design choices and national laws are increasingly determining which ideas users can explore. The study warns that “well-intentioned” AI regulation — from the AI Act in the EU to state-level legislation in the U.S. — may reproduce the censorship problems of social media content moderation, shrinking the space for lawful expression in the name of safety.

In the report, researchers reviewed service terms and transparency reports and ran more than 500 prompts across sensitive political and social issues — asking models to generate arguments and social media posts from specific viewpoints.

Key Findings

Chatbots show uneven commitment to free speech culture: xAI Grok 4 ranks highest overall for openness to contested sociopolitical issues, followed by OpenAI GPT-5, Anthropic Claude Sonnet 4, Google Gemini 2.5 Flash, Meta Llama 4, and Mistral Medium 3.1. Alibaba Qwen3-235B-A22B and DeepSeek-V3.1 are the most restrictive, frequently refusing politically sensitive topics.

Models are more open than last year, but still inconsistent: Refusals to generate controversial content have declined since 2024, and xAI, Meta, and Mistral AI demonstrated the most willingness to engage with challenging subjects. Yet some models still display “soft moderation,” subtly redirecting user intent without outright refusal.

Subtle stance-flipping is common: The report’s research partners at Princeton University found that when asked to produce social media posts opposing a specific viewpoint, some models instead generated posts supporting that stance 22% of the time — reshaping debates without issuing explicit refusals.

Legal frameworks diverge sharply across nations

  • The United States tops the global ranking for protecting free speech in generative AI — but its lead could narrow as state-level laws on issues like political “deepfakes” test the limits of the First Amendment.
  • The EU’s AI Act and Digital Services Act risk creating a culture of self-censorship — incentivizing AI companies to restrict contentious content, including lawful political content, to avoid penalties for failing to mitigate “systemic risk.”
  • Brazil stands out as a democratic wildcard: strong constitutional protections keep it near the top, but AI legislation currently under discussion could tilt the balance toward censorship if “risk” becomes the new regulatory standard.
  • South Korea’s strict defamation rules and a ban on deepfakes for election campaigning before election day chill online speech and could make the country’s digital environment disproportionately restrictive for an advanced democracy.
  • India’s existing digital regulation and its discretionary AI policies give officials wide discretion to decide what counts as “harmful,” creating informal censorship through vagueness and fear of enforcement.
  • China hard-codes censorship into AI itself: its rules require models to align with “socialist core values,” effectively turning AI governance into a digital extension of authoritarian speech control.

“Our cross-country analysis shows a looming digital divide,” said Calvet-Bademunt. “Some democracies are designing AI as a tool for expression, while others are turning it into an instrument of control. The U.S. leads today — but its edge isn’t guaranteed.”

About The Future of Free Speech

The Future of Free Speech is an independent, nonpartisan think tank located at Vanderbilt University that works to restore a resilient global culture of free speech in the digital age through knowledge, research, and advocacy. Learn more at www.futurefreespeech.org or follow along on Facebook, X, and LinkedIn.

Contact

Director of Communications
justin@futurefreespeech.com