
By Arunita Das and Cole Hennig | Global Network on Extremism & Technology
How can we incentivise social media companies to implement AI tools for content moderation?
With routine auditing and updating, AI-led tools can become precise enough to complement human content moderation targeting extremism. The harder challenge is encouraging social media companies to adopt this moderation infrastructure at all. Meta CEO Mark Zuckerberg, for example, announced in early 2025 that the company was stepping back from automated systems because they were generating “too many mistakes and too much censorship.” While these changes currently apply only in the United States, Meta has suggested they will eventually be rolled out internationally.
As legal scholar Dr Natalie Alkiviadou (2025) explains, social media platforms vehemently emphasise free speech rights. Users who invoke those rights often regard any suspension or other penalty imposed by a platform as censorship and, particularly in the US, as a violation of their constitutional right to freedom of speech. Because platforms thrive on viral engagement of any kind, they maintain only limited content regulation.
Social media platforms largely take a reactive approach to removing violent extremist content, enforcing measures only after material has been flagged by authorities in an ongoing investigation or surfaced through public pressure.
The goal is not to deplatform or censor users. Rather, AI moderation with human monitoring, clearer frameworks around extremist conduct, and increased accountability can help address the root problems associated with the spread of violent extremism.
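To make the idea of AI moderation with human monitoring concrete, the minimal sketch below shows one common pattern: a classifier score routes only near-certain violations to automatic action, while the ambiguous middle band goes to a human review queue. The function names, thresholds, and scores here are hypothetical illustrations, not any platform's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    LEAVE_UP = "leave_up"
    HUMAN_REVIEW = "human_review"
    REMOVE = "remove"

@dataclass
class ModerationDecision:
    post_id: str
    score: float
    action: Action

# Hypothetical thresholds; in practice these would be tuned against
# audited precision/recall targets and revisited regularly.
REMOVE_THRESHOLD = 0.95   # near-certain violations are actioned automatically
REVIEW_THRESHOLD = 0.60   # ambiguous content is routed to human reviewers

def triage(post_id: str, score: float) -> ModerationDecision:
    """Route a post based on a model's extremism score (0.0 to 1.0).

    Only high-confidence cases are auto-removed; the ambiguous middle
    band goes to a human queue, keeping people in the loop for the
    judgment calls that drive over-removal complaints.
    """
    if score >= REMOVE_THRESHOLD:
        action = Action.REMOVE
    elif score >= REVIEW_THRESHOLD:
        action = Action.HUMAN_REVIEW
    else:
        action = Action.LEAVE_UP
    return ModerationDecision(post_id, score, action)

if __name__ == "__main__":
    # Example scores; in production these would come from a real classifier.
    for pid, s in [("a1", 0.98), ("b2", 0.72), ("c3", 0.10)]:
        print(triage(pid, s))
```

A design like this keeps automation for the clear-cut cases while reserving human judgment for the contested ones, which is where most accusations of over-censorship arise.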
Natalie Alkiviadou is a Senior Research Fellow at The Future of Free Speech. Her research interests lie in freedom of expression, the far-right, hate speech, hate crime, and non-discrimination.
