The Effects of Censoring the Far-Right Online

By Ofra Klein

Since 2016, censorship of far-right groups and individuals on social media platforms has been the subject of much public discussion. With the implementation of measures to counter hate speech, such as the German Network Enforcement Act (NetzDG) and the EU Code of Conduct on Countering Illegal Hate Speech Online, social media companies now bear much more responsibility for regulating the content that users post online.

Censorship of such hateful content can take different forms. Entire pages or accounts – or, in less extreme circumstances, individual posts – can be removed, a practice known as de-platforming or no-platforming. Examples of this were the deletion of the Facebook pages of several British organizations and individuals, including the British National Party, Britain First, and the English Defence League (EDL). This happened in April 2019, two months after the EDL’s former leader, Tommy Robinson, was banned from the platform. More recently, in September 2019, the pages of the Italian parties Forza Nuova and CasaPound were made inactive. Besides entirely removing pages or posts, content can also be ranked lower algorithmically, so that users see it appear less prominently in their Facebook or Twitter feeds. Former UKIP leader Nigel Farage argued in 2018 that this practice led to much lower engagement from his followers on Facebook.

Anecdotal evidence suggests that censorship can make it more difficult for far-right individuals or groups to reach a mainstream audience. Perhaps the best example of this is Milo Yiannopoulos. The former alt-right influencer allegedly went bankrupt after being removed from Twitter. Yiannopoulos has had a hard time selling his merchandise via alternative platforms such as Telegram and Gab, which have a much smaller and more specific user base. It has also been argued that the removal of conspiracy theorist Alex Jones from YouTube and of Yiannopoulos from Twitter led to drop-offs in their audiences. At the same time, fears exist that censorship forces far-right actors to migrate to more fringe, unmonitored online environments, where more extremist content can continue to circulate. This fear is legitimate, as forums such as Gab and 8kun have been linked to shootings.

Removing pages might also strengthen the far-right’s perception of injustice. As CARR’s William Allchorn shows, censorship has become a key focus in the far-right’s victimization discourse. The removal of Robinson and Yiannopoulos from online platforms, as well as the court case against Geert Wilders, leader of the Dutch Party for Freedom (PVV), have fueled this discourse. The far-right uses these examples to portray itself as silenced and restricted in its freedom of speech. When censorship seems to remove not just content that is extremist in nature, but also content with which we merely disagree, this rightly spurs feelings of injustice and increases opposition. For example, in a recent case in Italy the court agreed that Facebook’s censorship of CasaPound was unjust and that it excluded the party from the Italian political debate. Consequently, Facebook had to reactivate CasaPound’s page and pay the group 800 euros for each day that it had been shut down.

Far-right actors can also adapt their strategies online in order to avoid being blocked. Prashanth Bhat and I show in a forthcoming book chapter (in the edited volume #TalkingPoints: Twitter, the public sphere, and the nature of online deliberation) how far-right users apply coded verbal or visual language online to steer clear of censorship. These so-called dog whistles are used to express more covert forms of hate speech online. A well-known example was Operation Google in 2016, during which users adopted code words in revenge against Google for implementing automated, AI-based hate speech detection. Far-right users started to refer to black people as “Googles” and to Jews as “Skypes”. These coded forms of hate speech turned brand names into racist slurs, forcing companies to censor their own names.

Similarly, CARR Fellow Bharath Ganesh addresses how terms such as “white genocide” or “rapefugees” were invented by Twitter users because such words are not immediately recognizable as extreme under platforms’ community guidelines. This strategy of dog whistling is also used for content that is not necessarily hateful but harmful in other ways. Recently, for example, online audiences have started to employ hashtags such as #maga2020 on Instagram to avoid having their content blocked when spreading anti-vaccine views. Far-right users, as early adopters of the internet, have developed creative strategies to get around censorship.

Shifting to more extreme platforms and using coded forms of hateful speech are tactics that underline the difficulties platforms have in dealing with extreme content online. Debates on the effectiveness of online censorship often look at the number of takedowns, but in doing so tend to neglect how censorship might in fact also spur support for the far-right.


This article was originally published on the Centre for Analysis of the Radical Right (CARR) website. Republished here with permission. You can follow CARR on Twitter: @C4ARR.

Ofra Klein is a Doctoral Fellow at CARR and a Doctoral Candidate in the Department of Political and Social Sciences, European University Institute. See her profile here and on Twitter: @ofra_okn.

