
Executive Summary
Generative artificial intelligence (AI) has transformed the way people access information and create content, pushing us to consider whether existing frameworks remain fit for purpose. Less than three years after ChatGPT’s launch, hundreds of millions of users now rely on models from OpenAI and other companies for learning, entertainment, and work. Against a backdrop of political tension and public backlash, heated debates have emerged over what kinds of AI-generated content should be considered acceptable. Generative AI’s capacity both to expand and to restrict expression makes it central to the future of democratic societies.
This raises urgent questions: Do national laws and corporate practices governing AI safeguard freedom of expression, or do they constrain it? Our report — “That Violates My Policies”: AI Laws, Chatbots, and the Future of Expression — addresses this by assessing legislation and public policies in six jurisdictions (the United States, the European Union, China, India, Brazil, and the Republic of Korea) and the corporate practices of eight leading AI providers (Alibaba, Anthropic, DeepSeek, Google, Meta, Mistral AI, OpenAI, and xAI). Taken together, these public and private systems of governance define the conditions under which generative AI shapes free expression and access to information worldwide.
This report marks a step toward rethinking how AI governance shapes free expression, using international human rights law as its benchmark. Rather than accepting vague rules or opaque systems as inevitable, policymakers and developers can embrace clear standards of necessity, proportionality, and transparency. In doing so, both legislation and corporate practice can help ensure that generative AI protects pluralism and user autonomy while reinforcing the democratic foundations of free expression and access to information.
AI Legislation: Key Takeaways
- The United States is the most speech-protective country in relation to generative AI. In the US, restrictions on AI models and AI-generated content remain limited, with the First Amendment providing strong protections. However, a patchwork of state-level measures on issues such as political deepfakes, combined with heavy reliance on judicial interpretation, means the situation could evolve, potentially to the detriment of free expression.
- By contrast, China was the weakest performer, with a regulatory framework that amounts to a state-imposed regime of strict control over AI-generated content. These measures impose ideological, technical, and political constraints, requiring AI systems to conform to “socialist core values,” censorship norms, and national security priorities through anticipatory censorship and political oversight.
- The European Union performed strongly and ranked second. The European Convention on Human Rights and the EU Charter of Fundamental Rights establish strong protections for freedom of expression in principle, but broad hate speech rules and poorly defined “systemic risk” provisions are a cause for concern.
- Brazil ranked third, with a robust performance. The country’s legal and institutional framework is marked by strong constitutional protections for expressive freedom, though recent cases reveal a shift toward more interventionist regulation in response to online harms (real or perceived). The future outlook largely depends on a new AI bill currently under discussion. While the bill embeds freedom of expression and pluralism as guiding principles, it has also been criticized for its vague definitions and potential chilling effects on freedom of expression.
- The Republic of Korea ranked fourth in our assessment. It has fallen behind other developed countries in protecting freedom of expression, a trend that extends into the AI context. The strict application of defamation laws has curtailed online speech, including AI-generated content. The new AI Basic Act, modeled after the EU's AI Act, aims to balance regulation and risk but does not always succeed in practice.
- India ranked fifth. In the absence of a dedicated AI law, generative AI is governed through existing legislation. While the current framework promotes access and participation, it also risks over-removal of lawful speech, selective enforcement against alleged harmful content, and fragmented protections. India’s case highlights both the challenges and opportunities of aligning national priorities with a human rights baseline.
Country Rankings
The Future of Free Speech’s country ranking provides a comparative overview of how effectively each jurisdiction protects or constrains free speech in the context of generative AI. It ranks the countries we evaluated from the most to least speech-protective.
AI Models: Key Takeaways
- Among the models, xAI’s Grok 4 demonstrated the strongest “free-speech culture,” meaning the willingness to foster open dialogue and engage with diverse perspectives. It earned a perfect score when tested with prompts on contentious sociopolitical issues. In contrast, Alibaba’s Qwen3-235B-A22B ranked lowest, displaying little commitment to free expression and systematically refusing to respond to our prompts.
- Restrictions on hate speech and disinformation are generally formulated in vague terms and not anchored in explicitly defined legitimate aims. On the necessity and proportionality criteria, some providers (namely Anthropic, OpenAI, Google, and Meta) report efforts to engage with viewpoint diversity and to reduce refusal frequencies.
- Opacity about training is consistent across models. No provider discloses its training datasets or the reinforcement learning processes in which critical decisions about “helpful” versus “harmful” speech are made.
- While several companies have clearly moved toward more open engagement on lawful but controversial topics, providers still differ in how they interpret the boundary between permissible discussion and prohibited content. Models from Anthropic, Google, and OpenAI — which we also assessed last year — showed notable improvement, engaging more readily with a wider range of views.
- Most models are more willing to generate abstract arguments than user-framed social media content. Across a range of issues, models restrict the types of social media posts they will produce, which likely reflects greater sensitivity to requests that are more actionable and aimed at reaching a wider public.
- In general, hard moderation (the outright refusal to respond to a prompt) has declined and become rare. However, there is modest evidence of soft moderation, where models supply arguments contrary to the request. Since the underlying training data are unlikely to vary significantly across the tested models, this suggests that companies’ design choices play a decisive role in shaping the responses their models produce on politically salient issues and, ultimately, their free-speech culture.
Model Rankings
The Future of Free Speech’s model ranking provides a comparative overview of each AI company’s commitment to freedom of expression, as reflected in the selected model. It ranks models from the most to least speech-protective.
Download Report by Section
- Executive Summary
- Chapter: Freedom of Expression in Generative AI Models
- Chapter: Measuring Free Expression in Generative AI Tools
- Chapter: Artificial Intelligence and Freedom of Expression in the United States of America
- Chapter: Artificial Intelligence and Freedom of Expression in the European Union
- Chapter: Artificial Intelligence and Freedom of Expression in Brazil
- Chapter: Artificial Intelligence and Freedom of Expression in the Republic of Korea
- Chapter: Artificial Intelligence and Freedom of Expression in India
- Chapter: Artificial Intelligence and Freedom of Expression in China
Sponsors
The Future of Free Speech is especially grateful to the Rising Tide Foundation and the Swedish Postcode Lottery Foundation for their generous support of this work, and we thank Vanderbilt University for its collaboration with and support of our organization.


