That Violates My Policies: AI Laws, Chatbots, and The Future of Expression

Executive Summary

Generative artificial intelligence (AI) has transformed the way people access information and create content, pushing us to consider whether existing frameworks remain fit for purpose. Less than three years after ChatGPT’s launch, hundreds of millions of users now rely on models from OpenAI and other companies for learning, entertainment, and work. Against a […]

CELE: Moderating Hate or Moderating Rights? The Paradox of the European Approach to Online Hate Speech and Platform Liability

Abstract

This paper critically assesses the European approach to regulating online hate speech through platform liability frameworks, focusing on Germany’s Network Enforcement Act (NetzDG) and the European Union’s Digital Services Act. It argues that these laws, while aiming to curb online harm, risk infringing on the rights to freedom of expression and non-discrimination by delegating […]

Report: Freedom of Expression in Generative AI – A Snapshot of Content Policies

By Jacob Mchangama and Jordi Calvet-Bademunt

Summary

Anyone who has tested generative AI with slightly controversial issues is now familiar with expressions such as “I’m unable to help you with that” (Google’s Gemini) or “I’m not able to generate content that takes a stand on controversial historical or political issues” (Inflection’s […]

International Journal of Human Rights: Artificial intelligence and online hate speech moderation

Justitia’s Natalie Alkiviadou in the International Journal of Human Rights: “Whilst automated mechanisms can assist human moderators by picking up on potentially hateful speech, they should not be solely responsible for removing hate speech. Biased training data sets, the lack of relevant data and the lack of conceptualization of context and nuance can lead to wrong […]