A Framework of First Reference – Decoding a Human Rights Approach to Content Moderation on Social Media

SUMMARY

An estimated 4.66 billion people have Internet access, and 4.20 billion are active social media users. Despite the unprecedented scale and ease with which information and opinions are now shared globally, Internet freedom is increasingly seen as both a blessing and a curse. On one hand, social media use has empowered previously silenced groups to mobilize and find ways around traditional forms of censorship. On the other, such platforms have become vehicles for phenomena such as hate speech and disinformation.

Authoritarian regimes and liberal democracies alike are placing increasing pressure on social media platforms to deal with allegedly harmful content. For example, the German Network Enforcement Act (NetzDG) imposes a legal obligation on social media companies with more than 2 million registered users in Germany to remove manifestly illegal content, including insult, incitement, and religious defamation, within 24 hours or risk fines of up to 50 million Euros. The NetzDG blueprint for “intermediary liability” has been followed by over 20 countries around the world, including Belarus, Turkey, Venezuela, and Russia.

Such legislative measures and the pressure they bring to bear have contributed to a regulatory race to the bottom, and social media platforms have become the ultimate arbiters of harm, truth, and the practical limits of the fundamental right to freedom of expression. This is demonstrated by the drastic increase in content removal over the last few years. For example, Facebook removed 2.5 million pieces of content for violating its Community Standards on Hate Speech in Q1 of 2018, a figure that has risen sharply in subsequent reporting periods. With respect to disinformation, Twitter announced that, between March and July 2020 alone, it took down 14,900 Tweets and “challenged” 4.5 million accounts that regularly posted COVID-19 misinformation. Yet the standards and practical methods used to moderate content are often vague, conflicting, and non-transparent, which has serious negative consequences for the practical exercise and protection of freedom of expression for users around the world.

As the “great bulwark of liberty”, freedom of expression must be respected and upheld. International Human Rights Law (IHRL) provides for freedom of expression in both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR). As private entities, social media platforms are not signatories to, or bound by, these instruments, but, as the former Special Rapporteur on Freedom of Opinion and Expression David Kaye has argued, IHRL offers a means of facilitating a more rights-compliant and transparent model of content moderation. The global nature of IHRL may also prove useful in bridging the differences in national perception and legislation that characterize the global ecosystem of online expression. Yet applying IHRL to private companies is a difficult task involving a plethora of challenges and dilemmas.

In this report, Justitia sets out IHRL as a “framework of first reference” for moderating online hate speech and disinformation. It decodes relevant IHRL principles, applies them to hate speech and disinformation through real-life examples, and offers recommendations on their adoption by social media platforms. The report explains how a human rights approach may be implemented by such platforms to bring about a rights-protective and transparent moderation of online content.

We argue that, to comply with IHRL, a platform’s content moderation practices must be legitimate, necessary, and proportionate within the framework of Article 19(3) ICCPR, which sets out the grounds on which freedom of expression may be restricted. For hate speech, platforms should frame their terms and conditions around the threshold established by the Rabat Plan of Action and strictly apply its six-part test (context, speaker, intent, content and form, extent of dissemination, and likelihood of imminent harm) before taking any enforcement action. For disinformation, a platform’s terms and conditions should be tailored to protect the grounds in Article 19(3) ICCPR and Article 25 ICCPR (the right to participate in voting and elections). In addition, platforms must refrain from adopting vague blanket removal policies. Only disinformation that poses a real and imminent risk of harm should be subject to the most intrusive restrictive measures, such as content removal. In determining the limits of disinformation, platforms should focus on the post’s content, its context, its impact, its likelihood of causing imminent harm, and the speaker’s intent.

Justitia recommends that major platforms formally commit to adopting an IHRL approach to content moderation by signing a non-binding Free Speech Framework Agreement (FSFA) administered by the Office of the UN High Commissioner for Human Rights (OHCHR) under the specific auspices of the Special Rapporteur on Freedom of Opinion and Expression.

Report Materials

Download report PDF

Jacob Mchangama, Executive Director at The Future of Free Speech

Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).

Natalie Alkiviadou, Senior Research Fellow at The Future of Free Speech

Natalie Alkiviadou is a Senior Research Fellow at The Future of Free Speech. Her research interests lie in freedom of expression, the far right, hate speech, hate crime, and non-discrimination.

Raghav, Case and Policy Officer at Meta’s Oversight Board

Raghav is a Case and Policy Officer at Meta’s Oversight Board, where he works on making Facebook and Instagram’s policies on misinformation, electoral integrity, hate speech, gender, and nudity fairer and more compliant with international human rights standards.