Rushing to Judgment: Are Short Mandatory Takedown Limits for Online Hate Speech Compatible with Freedom of Expression?

For the first time in human history, ordinary people have been given the ability to publicly share and access information instantly and globally through social media without the mediation of traditional gatekeepers such as newspaper editors or government censors. Yet, the growth of social media has made even democracies wary of the resulting impact on the global ecosystem of news, opinion, and information. Unmediated and instant access to the global digital sphere has gone hand in hand with the amplification and global dissemination of harms, including online extremism and disinformation.

With the entry into force of the Network Enforcement Act (NetzDG) in 2017, Germany became the first country in the world to require online platforms with more than 2 million national users to remove “manifestly illegal” content within 24 hours. Since the adoption of the NetzDG, more than 20 States around the world have adopted similar laws imposing “intermediary liability” on social media platforms. Another example of this trend towards the privatization of stringent content moderation is the EU's Proposal for a Regulation on preventing the dissemination of terrorist content online, on which political agreement was reached in December 2020 and which aims to introduce a legally binding one-hour deadline for content removal. The proposal was criticized by the then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, who expressed his concern about the short timeframe given to “comply with the sub-contracted human rights responsibilities that fall on platforms by virtue of State mandates on takedown”.

Such developments increase the pressure on social media platforms to remove speech quickly. Indicative of this is the trend of rising removals of alleged hate speech. For example, between January and March 2020, YouTube removed 107,174 videos globally that it considered to be hate speech; it removed 80,033 between April and June 2020 and 85,134 between July and September 2020. This was significantly higher than the period from September to December 2018, when YouTube took down only 25,145 videos. Similarly, Facebook removed 9.6 million pieces of ‘hateful’ content in Q1 of 2020, a figure that rose to 22.5 million in Q2 of 2020. This is markedly higher than the 11.3 million removals for the whole of 2018, the year in which the NetzDG became operational.

Further, online platforms are increasingly relying on automated content moderation. These algorithms pose a risk to free speech because they are poor at understanding context: they are prone to false positives, flagging and removing sensitive speech that does not necessarily fall foul of legal limits or terms of service.

Concerned about the ramifications for free speech and due process of the short time periods imposed by developments such as those referred to above, Justitia's Future of Free Speech project has issued a report which maps the duration of national legal proceedings in hate speech cases in Denmark, Germany, the United Kingdom, France, and Austria. These durations are then compared with the timeframes within which some governments require platforms to assess and take down hate speech under laws such as the NetzDG.

Jacob Mchangama, Executive Director of Justitia and the Future of Free Speech project, says:

“The current status quo in terms of how online content is viewed and moderated means that we are expecting a myriad of complex cases involving speech to be processed within just a few hours by private companies, with scant regard for due process and the free speech of the affected users. How can this be managed whilst simultaneously upholding the fundamental value of freedom of expression? Moreover, how can platforms conduct proper analysis of the cases when under the risk of huge fines?”

Mchangama further notes that:

“The report’s findings, and specifically the vast differentiation in the duration of court proceedings as compared with State-mandated takedown periods of several hours, demonstrate the latter’s incompatibility with free speech, a point which was underlined by the French Constitutional Council when striking down central provisions of the Avia Law, deeming them unconstitutional restrictions of freedom of expression.”

On the Findings of the Report

The nature of the available data does not allow direct and exact comparisons between the different jurisdictions studied in the report. Still, even allowing for this shortcoming, all the surveyed domestic legal authorities took significantly longer than the time mandated for social media platforms to answer the question of whether the relevant content was lawful. Compared with the short time frames granted to platforms, which range from hours to a week, the average figures for the authorities are as follows:

Austria: 1273.5 days (ECtHR)

Denmark: 601 days (national authorities); 1341 days (ECtHR)

France: 420.91 days (ECtHR)

Germany: 678.8 days (ECtHR)

The United Kingdom: 35.01 days (national authorities); 393 days (ECtHR)

Overall, data extracted from all European Court of Human Rights’ hate speech cases reveals that domestic legal authorities took 778.47 days, on average, from the date of the alleged offending speech until the conclusion of the trial at first instance.

In light of the findings, Mchangama says:

“While there are crucial differences between criminal proceedings and private content moderation, we urge States, international organizations, social media platforms and civil society, more broadly, to move towards procedures and processes which include adequate time frames for review, respect the principles laid out by Article 19 of the ICCPR, and are mindful of the harmful impact of over-removal on the global ecosystem of freedom of expression and information.”

Download the full report.

Read the blog post by Executive Director Jacob Mchangama in Lawfare covering the report.


For more information, please contact:

Jacob Mchangama, Executive Director of Justitia and Executive Director of the Future of Free Speech project, at jacob@justitia-int.org or by phone: +45 24664220

About The Future of Free Speech 


Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).


Natalie Alkiviadou is a Senior Research Fellow at The Future of Free Speech. Her research interests lie in the freedom of expression, the far-right, hate speech, hate crime, and non-discrimination.


Raghav is a Case and Policy Officer at Meta's Oversight Board, where he works on making Facebook and Instagram's policies on misinformation, electoral integrity, hate speech, gender, and nudity fairer and more compliant with international human rights standards.