The Disinformation Toolkit

What is disinformation counter-speech?

Counter-speech, in the context of disinformation, can be defined as proactive and reactive communication that aims to correct false information, highlight trustworthy information, and build resilience against deceptive narratives.

Counter-speech against disinformation happens at two levels.

First, communities, platforms, organizations, and authorities can engage in active counter-speech. They can debunk concrete false news stories in circulation by fact-checking information and sources. Such entities can also facilitate pre-bunking, i.e., share actionable advice on how users can spot disinformation themselves, turning users into their own fact-checkers even before they are exposed to concrete disinformation.

Second, individual users can engage in passive forms of counter-speech. By paying attention to and developing awareness of disinformation, users can passively counter its spread by refraining from buying into and passing on disinformation. Individual users can also engage in active counter-speech by sharing only accurate and reliable news stories on unfolding events. Sometimes well-meaning individual users even actively share disinformation with the deliberate goal of ridiculing it. However, such large-scale active counter-speech among users may backfire: other users may later remember the false information itself but forget that it was wrong.

What is disinformation?

Disinformation is false information that is deliberately created and spread with the intention to deceive or mislead. It is characterized by the purposeful production of false information or the manipulation of existing information to create a false narrative. The intent behind disinformation is often to cause harm, sow confusion, incite conflict, or influence public opinion or behavior in a certain way.

There are many examples of disinformation. Ethnic riots have often been found to be preceded by false information about other ethnic groups, including false claims of violence against women and children. Prominent political examples include the allegation that the 2020 US presidential election was won by electoral fraud. Many examples of disinformation were also seen during the COVID-19 pandemic, including claims about side-effects from vaccines and about alternative treatments for COVID-19.

Case studies of counter-speech against disinformation

A prominent approach to counter-speech against disinformation is debunking, or fact-checking. This occurs in collaboration between fact-checkers and social media companies: journalists or other fact-checkers identify that a circulating piece of information is false or misleading, and social media companies subsequently flag or label it as such. Fact-checks work in the sense that they reduce belief in the false information, although the effects have been found to be somewhat small and ephemeral. Furthermore, a practical problem is that there is more potential disinformation in circulation than it is possible to fact-check. At the same time, research suggests that the threat of being fact-checked by media institutions plays an important role in keeping politicians accountable.

Given the substantial resources required to perform and broadcast fact-checks, increasing attention is being devoted to supplementing the debunking of disinformation with pre-bunking interventions that aim to empower online audiences.

One form of pre-bunking is nudging. The premise of a nudging strategy is that people are already motivated to believe in and share only accurate news; the nudge simply reminds them of this motivation as they browse social media. Examples of such “accuracy nudges” are pop-up questions that ask people whether they believe a news story is true or whether it is important to share only accurate information. Several studies have demonstrated that accuracy nudges work across multiple cultural settings, but also that the effects are small. At the same time, these nudges are fast to complete and easy to implement on actual social media platforms, which is a significant advantage.

Another form of pre-bunking is inoculation via gamified interventions. Whereas the accuracy nudge focuses on the motivation of users, these games focus on empowerment through competence-building. The underlying idea is that people can learn the typical strategies of disinformation producers by playing games that put them in the role of such a producer. Several different games have been developed, including games that focus on particular topics such as climate change and COVID-19. The research shows that the competences developed in the game can help people identify false information both in laboratory settings and on actual social media platforms, and that the learned competences continue to empower users over a span of several months. At the same time, these gamified interventions take much longer to complete and are therefore more difficult to administer in online environments outside of explicit training sessions (e.g., in schools or workplaces).

A final form of pre-bunking, digital media literacy interventions, is also oriented towards competence-building. Whereas gamified interventions build competences indirectly as the user plays a game, digital media literacy interventions provide more direct instructions on how to spot false news online, essentially turning the user into their own fact-checker. Below is a prominent example of actionable advice provided by Facebook to its users, both directly on the platform and in ads in newspapers across the United States, the United Kingdom, France, Germany, Mexico, and India.

  • Be skeptical of headlines. False news stories often have catchy headlines in all caps with exclamation points. If shocking claims in the headline sound unbelievable, they probably are.
  • Look closely at the URL. A phony or look-alike URL may be a warning sign of false news. Many false news sites mimic authentic news sources by making small changes to the URL. You can go to the site to compare the URL to established sources.
  • Investigate the source. Ensure that the story is written by a source that you trust with a reputation for accuracy. If the story comes from an unfamiliar organization, check their “About” section to learn more.
  • Watch for unusual formatting. Many false news sites have misspellings or awkward layouts. Read carefully if you see these signs.
  • Consider the photos. False news stories often contain manipulated images or videos. Sometimes the photo may be authentic, but taken out of context. You can search for the photo or image to verify where it came from.
  • Inspect the dates. False news stories may contain timelines that make no sense, or event dates that have been altered.
  • Check the evidence. Check the author’s sources to confirm that they are accurate. Lack of evidence or reliance on unnamed experts may indicate a false news story.
  • Look at other reports. If no other news source is reporting the same story, it may indicate that the story is false. If the story is reported by multiple sources you trust, it’s more likely to be true.
  • Is the story a joke? Sometimes false news stories can be hard to distinguish from humor or satire. Check whether the source is known for parody, and whether the story’s details and tone suggest it may be just for fun.
  • Some stories are intentionally false. Think critically about the stories you read, and only share news that you know to be credible.

Research has found that exposure to these ads – and interventions like them – does indeed make people more likely to identify false information, and the size of these effects is considered relatively large. Digital media literacy interventions seem to work, in part, because they boost feelings of self-efficacy and, hence, create a sense of empowerment. However, there is some evidence that some pre-existing digital media literacy is required to make use of tips like these. Furthermore, many digital media literacy interventions are relatively elaborate and, hence, require a context of explicit training.

Things to consider

When engaging in counter-speech against disinformation, for example in the form of designing interventions to empower users, there are several important aspects to consider.

First, several studies indicate that many interventions that help people identify false information also trigger more general skepticism, such that people begin to mistrust accurate information too, even if the effect is stronger for false than for accurate information. A core focus of counter-speech against disinformation should therefore be to foster intellectual humility rather than skepticism. Whereas general skepticism and mistrust have even been found to correlate positively with the sharing of disinformation (e.g., conspiracy-related content), intellectual humility has consistently been found to decrease motivations to share disinformation. One way to build intellectual humility is to show users examples of their own cognitive fallibility.

Second, some forms of counter-speech focus on stopping the spread of disinformation, whereas other forms focus on empowering people to develop resistance against believing it. A focus on cognitive resistance may be particularly important: research shows that most people only very rarely share information on social media, whether accurate or false. The most common problem with regard to disinformation is the confusion and distraction it creates for those incidentally exposed to it.

Third, when designing interventions to empower users, it is often important to consider their scalability and repeatability. The most effective interventions often require a context of explicit training or instruction, spanning at least 5-10 minutes. Most likely, the most viable and effective strategy is therefore one that combines explicit training to build competences (e.g., combinations of explicit digital media literacy instructions and gamified rehearsal), frequent online reminders to keep motivation high (e.g., via accuracy nudges), and direct fact-checks when possible.

Fourth, disinformation is detectable because it often bears particular signatures. As artificial intelligence becomes more refined, disinformation is likely to become harder to detect, because false information can come to mimic accurate information more closely (e.g., through the use of artificially created but highly realistic videos). In these circumstances, cost-intensive fact-checking by media institutions becomes more important. Yet, to be effective, these media institutions need to be trusted by the public. A key focus for anyone concerned about disinformation is therefore also to build and sustain free, independent, and resourceful media institutions.


Further resources

Council of Europe – Toolkit on combating hate speech during electoral processes (2022)


UK Government – RESIST 2 Counter Disinformation Toolkit

European Union – Disinformation toolkit

UK Department of Culture – Vaccine Disinformation Toolkit

US Cybersecurity & Infrastructure Security Agency – Election Disinformation Toolkit

The Alan Turing Institute – Counterspeech: a better way of tackling online hate?

Pen America – Guidelines for Safely Practicing Counterspeech

Final Remarks

The FFS thanks the institutions below for all their support in the creation of this output.


For more information on the FFS please visit:

For media inquiries please contact the FFS’ Executive Director Jacob Mchangama at