In the Friday, March 1, 2024 POLITICO Pro Morning Tech Newsletter (Subscriber Only), Mallory Culhane and John Hendel cite The Future of Free Speech's latest report, "Freedom of Expression in Generative AI – A Snapshot of Content Policies":


ANALYZING CHATBOT CONTENT POLICIES — Some of the most popular chatbots’ policies on hate speech and disinformation are too broad and vague, posing a threat to users’ freedom of expression when using the platforms, researchers from The Future of Free Speech argue in a report out this morning.

Researchers at The Future of Free Speech — a collaboration between global think tank Justitia [ . . . ] and Vanderbilt University that advocates for free speech in the digital era — analyzed the content policies of six of the most popular chatbots and found that many lack precise definitions of what constitutes mis- or disinformation or hate speech. Without a clear line, the researchers argue, companies may be excessively restricting content generated by their AI tools and potentially limiting users' freedom of expression. The paper recommends that any restrictions on output "should be narrow and well-defined to protect our ability to express ourselves, seek information, effectively search for the truth, and protect our democracies."

Makers of five of the six chatbots in the study — Google, OpenAI, AI21 Labs, Cohere and Inflection — did not respond to requests for comment. Anthropic declined to comment.