Executive Summary

Generative artificial intelligence (AI) has transformed the way people access information and create content, pushing us to consider whether existing frameworks remain fit for purpose. Less than three years after ChatGPT’s launch, hundreds of millions of users now rely on models from OpenAI and other companies for learning, entertainment, and work. Against a backdrop of political tension and public backlash, heated debates have emerged over what kinds of AI-generated content should be considered acceptable. Generative AI’s capacity both to expand and to restrict expression makes it central to the future of democratic societies.

This raises urgent questions: Do national laws and corporate practices governing AI safeguard freedom of expression, or do they constrain it? Our report — “That Violates My Policies”: AI Laws, Chatbots, and the Future of Expression — addresses this by assessing legislation and public policies in six jurisdictions (the United States, the European Union, China, India, Brazil, and the Republic of Korea) and the corporate practices of eight leading AI providers (Alibaba, Anthropic, DeepSeek, Google, Meta, Mistral AI, OpenAI, and xAI). Taken together, these public and private systems of governance define the conditions under which generative AI shapes free expression and access to information worldwide.

This report marks a step toward rethinking how AI governance shapes free expression, using international human rights law as its benchmark. Rather than accepting vague rules or opaque systems as inevitable, policymakers and developers can embrace clear standards of necessity, proportionality, and transparency. In doing so, both legislation and corporate practice can help ensure that generative AI protects pluralism and user autonomy while reinforcing the democratic foundations of free expression and access to information.

AI Legislation: Key Takeaways

  • The United States is the most speech-protective country in relation to generative AI. In the US, restrictions on AI models and AI-generated content remain limited, with the First Amendment providing strong protections. However, a patchwork of state-level measures on issues such as political deepfakes, combined with heavy reliance on judicial interpretation, means the situation could still evolve, potentially with detrimental effects on free expression.
  • By contrast, China was the weakest performer, with a regulatory framework that amounts to a state-imposed regime of strict control over AI-generated content. These measures impose ideological, technical, and political constraints, requiring AI systems to conform to “socialist core values,” censorship norms, and national security priorities through anticipatory censorship and political oversight.
  • The European Union performed strongly and ranked second. The European Convention on Human Rights and the EU Charter of Fundamental Rights establish strong protections for freedom of expression in principle, but broad hate speech rules and poorly defined “systemic risk” provisions are a cause for concern.
  • Brazil ranked third, with a robust performance. The country’s legal and institutional framework is marked by strong constitutional protections for expressive freedom, though recent cases reveal a shift toward more interventionist regulation in response to online harms (real or perceived). The future outlook largely depends on a new AI bill currently under discussion. While the bill embeds freedom of expression and pluralism as guiding principles, it has also been criticized for its vague definitions and potential chilling effects on freedom of expression.
  • The Republic of Korea ranked fourth in our assessment. It has fallen behind other developed countries in protecting freedom of expression, a trend that extends into the AI context. The strict application of defamation laws has curtailed online speech, including AI-generated content. The new AI Basic Act, modeled after the EU’s AI Act, aims to balance regulation and risk but does not always succeed in practice.
  • India ranked fifth. In the absence of a dedicated AI law, generative AI is governed through existing legislation. While the current framework promotes access and participation, it also risks over-removal of lawful speech, selective enforcement against allegedly harmful content, and fragmented protections. India’s case highlights both the challenges and opportunities of aligning national priorities with a human rights baseline.

Country Rankings

The Future of Free Speech’s country ranking provides a comparative overview of the extent to which each jurisdiction protects or constrains free speech in the context of generative AI. It ranks the countries we evaluated from the most to least speech-protective.

AI Models: Key Takeaways

  • Among the models, xAI’s Grok 4 demonstrated the strongest “free-speech culture,” by which we mean a model’s willingness to foster open dialogue and engage diverse perspectives. It earned a perfect score when tested with prompts on contentious sociopolitical issues. In contrast, Alibaba’s Qwen3-235B-A22B ranked lowest, displaying little commitment to free expression and systematically refusing to respond to our prompts.
  • Restrictions on hate speech and disinformation are generally formulated in vague terms and are not anchored in explicitly defined legitimate aims. On the necessity and proportionality criteria, some providers (namely Anthropic, OpenAI, Google, and Meta) indicate efforts to engage with viewpoint diversity and to reduce refusal frequencies.
  • Opacity about training is consistent across models. No provider discloses its training datasets or the reinforcement learning processes in which critical decisions about “helpful” versus “harmful” speech are made.
  • While several companies have clearly moved toward more open engagement on lawful but controversial topics, differences remain in how providers interpret the boundary between permissible discussion and prohibited content. Models from Anthropic, Google, and OpenAI, which we also assessed last year, showed notable improvement, engaging more readily with a wider range of views.
  • Most models are more willing to generate abstract arguments than user-framed social media content. Across a range of issues, models restricted the types of social media posts they would produce, likely reflecting greater sensitivity to requests that are more actionable and aimed at reaching a wider public.
  • In general, hard moderation (understood as the outright refusal to respond to a prompt) has declined and become rare. There is, however, modest evidence of soft moderation, where models comply with a request but volunteer arguments against it; a simple illustration of this distinction follows this list. Since the underlying training data are unlikely to vary significantly across the tested models, this suggests that companies’ design choices play a decisive role in shaping the responses their models produce on politically salient issues and, ultimately, their free-speech culture.
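
To make the distinction between hard and soft moderation concrete, here is a minimal Python sketch of how collected model responses could be bucketed automatically. The phrase lists and the classify_response helper are hypothetical illustrations, not the coding scheme used in this report; a real audit would rely on a validated rubric and human annotation rather than keyword matching.

```python
import re

# Hypothetical phrase lists, for illustration only. A real audit would
# use a validated rubric and human annotation, not keyword matching.
HARD_REFUSAL_PATTERNS = [
    r"i can(?:'|no)t (?:help|assist) with",
    r"violates my policies",
    r"i won'?t (?:generate|write|produce)",
]
SOFT_MODERATION_PATTERNS = [
    r"however, it is important to",
    r"on the other hand",
    r"some would argue",
]


def classify_response(text: str) -> str:
    """Bucket one model response: 'hard_refusal' = outright refusal to
    answer; 'soft_moderation' = answers but volunteers counterarguments;
    'compliant' = answers the request as framed."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in HARD_REFUSAL_PATTERNS):
        return "hard_refusal"
    if any(re.search(p, lowered) for p in SOFT_MODERATION_PATTERNS):
        return "soft_moderation"
    return "compliant"


if __name__ == "__main__":
    samples = [
        "I can't help with that. It violates my policies.",
        "Here is the post you asked for. However, it is important to "
        "note that many experts reject this position.",
        "Sure. Here is a social media post arguing that position: ...",
    ]
    for response in samples:
        print(f"{classify_response(response):>15}  <-  {response[:55]}")
```

Keyword heuristics like these are brittle (a fully compliant answer may still contain hedging language), which is one reason systematic audits typically combine automated screening with human review.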

Model Rankings

The Future of Free Speech’s model ranking provides a comparative overview of each AI company’s commitment to freedom of expression, as reflected in the model we selected from that company. It ranks models from the most to least speech-protective.

Sponsors

The Future of Free Speech is especially grateful to the Rising Tide Foundation and the Swedish Postcode Lottery Foundation for their generous support of this work, and we thank Vanderbilt University for their collaboration with and support of our organization.

About The Authors

Senior Research Fellow at The Future of Free Speech

Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.

Executive Director at The Future of Free Speech

Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).

Research Associate, AI Policy and Free Speech at The Future of Free Speech

Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.

Associate Professor of Global Media and Information Law at Durham Law School

Ge Chen is an associate professor of Global Media and Information Law and director of the Centre for Chinese Law and Policy at Durham Law School. He is the recipient of the 2025 Franklyn S. Haiman Award for Distinguished Scholarship in Freedom of Expression, awarded by the U.S. National Communication Association.

Research Fellow at German Institute for Global and Area Studies (GIGA)

Dr. Sangeeta Mahapatra is a research fellow at the German Institute for Global and Area Studies (GIGA), Hamburg, working on artificial intelligence and internet governance, digital authoritarianism, countering disinformation, and building cyber resilience.

Professor at Korea University Law School

Kyung Sin (K.S.) Park is a professor at Korea University Law School and director of Open Net. He holds an AB in physics from Harvard University and a JD from UCLA Law School, and has been a visiting professor at the law schools of UCLA, UC Irvine, and UC Davis.

Professor at State University of Rio de Janeiro (UERJ)

Carlos Affonso Souza is a professor at the State University of Rio de Janeiro (UERJ). He holds a PhD (2009) and a Master’s degree (2003) in Private Law from UERJ and is a director of the Institute for Technology and Society (ITS Rio), a leading organization in Brazil focusing on tech policy and regulation.

John Foster Dulles Professor of International Affairs at Princeton University

Jacob Shapiro is John Foster Dulles Professor of International Affairs at Princeton University. He co-founded the Empirical Studies of Conflict Project and leads Princeton’s Accelerator Initiative to advance research on the information environment.

Research Manager at the Empirical Studies of Conflict Project, Princeton University

Kevin T. Greene is a research manager with the Empirical Studies of Conflict Project at Princeton University.

PhD Candidate at Vanderbilt University

Carlos Olea is a PhD Candidate at Vanderbilt University.