By Tim Keary
The risk of AI systems being used to enable censorship is often overlooked. On the vendor side, so much focus is placed on safety and content moderation that the output of these tools frequently displays ideological biases, with legitimate outputs sometimes blocked outright.
How AI Vendors’ Content Moderation Policies Are Shutting Out Free Speech
While generative AI vendors have done a good job of creating models that can generate natural human language, they haven’t been able to create moderation policies that balance safety and free speech.
Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech, told Techopedia:
“Much of the focus on generative AI has revolved around safety, with little attention paid to its impact on freedom of expression and censorship.
“Research has revealed how the usage policies and guardrails for popular generative AI models prevent them from generating certain legal content and privilege certain viewpoints over others.
Calvet-Bademunt notes that, in many cases, AI safety itself can be used as an excuse to enact censorship.
“Countries like China have also used these safety concerns to justify censorship, including flagging and banning content that undermines ‘the core socialist values’.
“Meanwhile, in democracies like the European Union, the recently adopted AI Act requires AI platforms to assess and mitigate ‘systemic risks’ which could impact content generated about the conflicts in Israel-Palestine or Russia-Ukraine, for example,” Calvet-Bademunt said.
The Future of Free Speech examined this issue in a report released earlier this year.
The study analyzed the usage policies of six major AI chatbots, including Gemini and ChatGPT, and found that the companies’ misinformation and hate speech policies were so vague and expansive that the chatbots refused to generate content for 40 percent of the 140 prompts tested. The report also suggested the chatbots were biased on specific topics.
Jordi Calvet-Bademunt is a Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.