The shooting in Buffalo last month was but the latest example of terrorist attacks becoming a lethal outlet for white supremacist grievances. What connects much of this violent right-wing extremism is that the perpetrators are radicalized or share their ideas—sometimes even stream their terrorist attacks—online. In fact, several terrorists have explicitly cited other terrorists as inspiration for their deadly deeds in online messages or manifestos.
Understandably, the toxic combination of hate-infused violence and online virality has prompted many politicians and experts to call for tougher regulation of social media and hate speech. After the Buffalo attack, New York governor Kathy Hochul took aim at social media platforms and called for the imposition of “a legal responsibility to ensure that such hate cannot populate these sites.” Similarly, Bloomberg technology columnist Parmy Olson decried the First Amendment’s protection of hate speech and argued that “the world’s best hope for weeding out extremism on mainstream social media is coming from Europe, and specifically from two new laws—the Online Safety Act from the United Kingdom and Digital Services Act from the European Union.” These regulatory efforts follow in the footsteps of the German Network Enforcement Act of 2017, obliging online platforms to remove illegal content—including categories such as hate speech and the glorification of terrorism—or risk huge fines. However, in liberal democracies committed to both equality and free expression, this approach raises a number of difficult questions and dilemmas.
Read the full piece by Jacob Mchangama
Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).