
How to fight back against government censorship of chatbots.
By Jacob Mchangama
“Well, then you need to shut it down.”
That was Republican Senator Marsha Blackburn’s reaction when a Google executive explained during a recent Senate hearing that large language models (LLMs) sometimes “hallucinate” and generate false information. The Tennessee senator was outraged that Google’s open-weight model Gemma had fabricated defamatory stories about her and a conservative activist. The next day, Google disabled consumer access to Gemma.
Blackburn’s demand captures a dangerous moment for AI and free expression. As generative AI is embedded into search engines, email, and word processors, it will mediate ever-larger parts of the information ecosystem that people rely on. Governments are discovering that they can pressure companies to censor what may be the most consequential communications technology since the printing press.
Recent efforts go well beyond combating clearly illegal content such as child sexual abuse material. From Brussels to New Delhi, Warsaw to Washington, officials are wielding regulations, threats, and public shaming to shape what information, ideas, and perspectives billions of people can access through AI.
In October, the Dutch Data Protection Authority warned that AI chatbots made by OpenAI, xAI, and Mistral are “unreliable and clearly biased,” after finding that the voter recommendations they produced tilted toward far-left and far-right parties ahead of national elections.
Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).
