By Natalie Alkiviadou
Introduction
The rise of social media has fundamentally altered the landscape of information dissemination, bypassing traditional editorial and governmental controls. This has enabled rapid global information sharing but has also raised concerns about the influence of social media platforms, even in democratic societies. Legislative responses, such as Germany’s Network Enforcement Act (NetzDG) of 2017, mandated the swift removal of illegal content such as incitement to hatred, defamation of religions and insults, and influenced over 20 other nations to enact similar laws. Such forms of regulation often target hate speech but risk suppressing political opposition, particularly in authoritarian regimes. The European Union’s Digital Services Act (DSA) became fully applicable in 2024, imposing stringent removal obligations on platforms. While these legislative developments aim to mitigate the adverse impacts of unmoderated online content, they also reveal the delicate balance between preserving freedom of expression and addressing the challenges posed by the dissemination of harmful digital content. A 2024 report published by the Future of Free Speech found that a substantial majority of deleted comments on Facebook and YouTube in France, Germany, and Sweden were legally permissible, suggesting that platforms may be over-removing content to avoid regulatory penalties. The report focused on comments falling within the ambit of hate speech. Against this backdrop, this short piece examines some key issues arising from current strategies for moderating online ‘hate speech.’
Hate Speech on Social Media Platforms: Semantics and Context
Hate speech sits at a complex nexus between the right to freedom of expression and the right to non-discrimination, alongside concepts of dignity, liberty, and equality. There is no universally accepted definition of hate speech, which may result from varying interpretations of free speech and harm across different countries and regions. Recommendation CM/Rec(2022)16 of the Council of Europe’s Committee of Ministers provides some definitional framework for hate speech, describing it as:
“all types of expression that incite, promote, spread or justify violence, hatred or discrimination against a person or group of persons, or that denigrates them, by reason of their real or attributed personal characteristics or status such as ‘race’, colour, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity and sexual orientation.”
The DSA refers to ‘illegal hate speech’ but does not define it. Meta offers a broad definition, describing hate speech as ‘direct attacks against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.’ Additional characteristics, such as age and occupation, are treated as protected under the hate speech policy when they are targeted in combination with one or more other protected characteristics. Meta extends this definitional framework by referring to a ‘hate speech attack’, which includes, among other things, dehumanizing speech, cursing, and calls for exclusion or segregation. A 2023 report assessing the hate speech policies of eight social media platforms found a significant expansion in their scope, covering both the types of content prohibited and the range of protected characteristics.
Natalie Alkiviadou is a Senior Research Fellow at The Future of Free Speech. Her research interests lie in the freedom of expression, the far-right, hate speech, hate crime, and non-discrimination.