Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, from where you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we therefore welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Recontextualising the News: How antisemitic discourses are constructed in extreme far-right alternative media
2021 Haanshuus, B.P. and Ihlebæk, K.A. Article
This study explores how an extreme far-right alternative media site uses content from professional media to convey uncivil news with an antisemitic message. Analytically, it rests on a critical discourse analysis of 231 news items, originating from established national and international news sources, published on Frihetskamp from 2011–2018. In the study, we explore how news items are recontextualised to portray both overt and covert antisemitic discourses, and we identify four antisemitic representations that are reinforced through the selection and adjustment of news: Jews as powerful, as intolerant and anti-liberal, as exploiters of victimhood, and as inferior. These conspiratorial and exclusionary ideas, also known from historical Nazi propaganda, are thus reproduced by linking them to contemporary societal and political contexts and the current news agenda. We argue that this kind of recontextualised, uncivil news can be difficult to detect in a digital public sphere.
Hate, Obscenity, and Insults: Measuring the Exposure of Children to Inappropriate Comments in YouTube
2021 Alshamrani, S., Abusnaina, A., Abuhamad, M., Nyang, D. and Mohaisen, D. Article
Social media has become an essential part of the daily routines of children and adolescents. Moreover, enormous efforts have been made to ensure the psychological and emotional well-being of young users as well as their safety when interacting with various social media platforms. In this paper, we investigate the exposure of those users to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records, and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children’s videos to be toxic, highlighting the importance of monitoring comments, particularly on children’s platforms.
Role of Public WhatsApp Groups Within the Hindutva Ecosystem of Hate and Narratives of “CoronaJihad”
2021 Nizaruddin, F. Article
This article uses the context of the widespread circulation of accounts about “CoronaJihad” in India during the COVID-19 pandemic to examine how public WhatsApp groups that participate in disseminating such accounts function within the ecosystem of hate around Hindutva majoritarianism in the country. The manner in which the WhatsApp platform operates within this ecosystem is mapped through a granular study of three public Hindutva WhatsApp groups; the messages within these groups during the first phase of the COVID-19 lockdown in India were examined during the course of this study. The patterns of messaging within the three groups that contribute to the narrative of “CoronaJihad,” which blames the minority Muslim community for the spread of the virus in India, were analyzed. The article focuses on factors including company policies and the specific sociopolitical situation in the country to understand the circumstances that make WhatsApp’s deep entanglement with the divisive politics of Hindutva majoritarianism in India possible.
Redirect Method: Canada
2021 Moonshot CVE Report
In February 2019, Moonshot launched The Redirect Method with funding from the Community Resilience Fund and in collaboration with the Canada Centre for Community Engagement and Violence Prevention at Public Safety Canada. Canada Redirect was first deployed across all 13 Canadian provinces and territories, and in June 2019 our campaigns were subdivided to incorporate 353 postcodes in Canada’s six largest cities. These localized campaigns enabled Moonshot to collect granular data on extremist search appetite, test experimental messaging, and explore the viability of providing at-risk users with content and services in their community.

Using the updated Redirect Method, Canada Redirect aimed to reach at-risk users with content that aligned as closely as possible with what they were searching for. Moonshot used subject-matter expertise and in-house knowledge to match relevant counter-narratives to their respective target audiences. This approach taps into a range of content ecosystems, such as music, gaming, and literature, to deliver alternative messaging that contrasts with the extremist content a user may be searching for, such as neo-Nazi manifestos or an ISIS nasheed. Moonshot adapted the Redirect Method to include content specific to the Canadian context in order to increase the relevance and impact of counter-narratives to the Canadian at-risk audience.

This report details Canada Redirect’s project phases, achievements, and findings from our digital campaigns, which were deployed across Canada for over a year, from 21 February 2019 to 23 March 2020.
Operating with impunity: legal review
2021 Commission for Countering Extremism Policy
The independent Commission for Countering Extremism has published a legal review, to examine whether existing legislation adequately deals with hateful extremism.
Countering terrorism or criminalizing curiosity? The troubled history of UK responses to right-wing and other extremism
2021 Zedner, L. Article
The growth of right-wing extremism, especially where it segues into hate crime and terrorism, poses new challenges for governments, not least because its perpetrators are typically lone actors, often radicalized online. The United Kingdom has struggled to define, tackle or legislate against extremism, though it already has an extensive array of terrorism-related offences that target expression, encouragement, publication and possession of terrorist material. In 2019, the United Kingdom went further to make viewing terrorist-related material online on a single occasion a crime carrying a 15-year maximum sentence. This article considers whether UK responses to extremism, particularly those that target non-violent extremism, are necessary, proportionate, effective and compliant with fundamental rights. It explores whether criminalizing the curiosity of those who explore radical political ideas constitutes legitimate criminalization or overextends state power and risks chilling effects on freedom of speech, association, academic freedom, journalistic enquiry and informed public debate—all of which are the lifeblood of a liberal democracy.
An Ensemble Method for Radicalization and Hate Speech Detection Online Empowered by Sentic Computing
2021 Araque, O. and Iglesias, C. A. Article
The dramatic growth of the Web has motivated researchers to extract knowledge from enormous repositories and to exploit the knowledge in myriad applications. In this study, we focus on natural language processing (NLP) and, more concretely, the emerging field of affective computing to explore the automation of understanding human emotions from texts. This paper continues previous efforts to utilize and adapt affective techniques into different areas to gain new insights. This paper proposes two novel feature extraction methods that use the previous sentic computing resources AffectiveSpace and SenticNet. These methods are efficient approaches for extracting affect-aware representations from text. In addition, this paper presents a machine learning framework using an ensemble of different features to improve the overall classification performance. Following the description of this approach, we also study the effects of known feature extraction methods such as TF-IDF and SIMilarity-based sentiment projectiON (SIMON). We perform a thorough evaluation of the proposed features across five different datasets that cover radicalization and hate speech detection tasks. To compare the different approaches fairly, we conducted a statistical test that ranks the studied methods. The obtained results indicate that combining affect-aware features with the studied textual representations effectively improves performance. We also propose a criterion considering both classification performance and computational complexity to select among the different methods.
"Short is the Road that Leads from Fear to Hate": Fear Speech in Indian WhatsApp Groups
2021 Saha, P., Mathew, B., Garimella, K. and Mukherjee, A. Article
WhatsApp is the most popular messaging app in the world. Due to its popularity, WhatsApp has become a powerful and cheap tool for political campaigning and was widely used during the 2019 Indian general election, where it served to connect with voters on a large scale. Alongside the campaigning, there have been reports that WhatsApp has also become a breeding ground for harmful speech against various protected groups and religious minorities. Many such messages attempt to instil fear among the population about a specific (minority) community. According to research on inter-group conflict, such “fear speech” messages could have a lasting impact and might lead to real offline violence. In this paper, we perform the first large-scale study on fear speech across thousands of public WhatsApp groups discussing politics in India. We curate a new dataset and try to characterize fear speech from this dataset. We observe that users writing fear speech messages use various events and symbols to create an illusion of fear among readers about a target community. We build models to classify fear speech and observe that current state-of-the-art NLP models do not perform well at this task. Fear speech messages tend to spread faster and could potentially go undetected by classifiers built to detect traditional toxic speech, due to their low toxicity. Finally, using a novel methodology to target users with Facebook ads, we conduct a survey among the users of these WhatsApp groups to understand the types of users who consume and share fear speech. We believe that this work opens up new research questions that are very different from tackling hate speech, which the research community has traditionally been involved in.
Variations on a Theme? Comparing 4chan, 8kun, and Other chans’ Far-Right “/pol” Boards
2021 Baele, S.J., Brace, L. and Coan, T.G. Article
Online forums such as 4chan and 8chan have grown in notoriety following a number of high-profile attacks conducted in 2019 by right-wing extremists who used their “/pol” boards (dedicated to “politically incorrect” discussions). Despite growing academic interest in these online spaces, little is still known about them; in particular, their similarities and differences remain to be teased out, and their respective roles in fostering a certain far-right subculture need to be specified. This article therefore directly compares the content and discussion pace of six different /pol boards of “chan” forums, including some that exist solely on the dark web. We find that while these boards together constitute a particular subculture, differences in terms of both rate of traffic and content demonstrate the fragmentation of this subculture. Specifically, we show that the different /pol boards can be grouped into a three-tiered architecture based on both their popularity and the extremity of their content.
Understanding online hate: VSP Regulation and the broader context
2021 Vidgen, B., Burden, E. and Margetts, H. Report
This report aims to contribute to our understanding of online hate in the context of the requirements of the revised Audiovisual Media Services Directive (AVMSD) for Video Sharing Platforms (VSPs) to protect the general public from incitement to hatred or violence. However, online hate is complex and it can only be fully understood by considering issues beyond the very specific focus of these regulations. Hence, we draw on recent social and computational research to consider a range of points outside VSP regulations, such as the impact, nature and dynamics of online hate. For similar reasons, we have considered expressions of hate across a range of online spaces, including VSPs as well as other online platforms. In particular, we have closely examined how online hate is currently addressed by industry, identifying key and emerging issues in content moderation practices. Our analyses will be relevant to a range of experts and stakeholders working to address online hate, including researchers, platforms, regulators and civil society organisations.
Discourse patterns used by extremist Salafists on Facebook: identifying potential triggers to cognitive biases in radicalized content
2021 Bouko, C., Naderer, B., Rieger, D., Van Ostaeyen, P. and Voué, P. Article
Understanding how extremist Salafists communicate, and not only what, is key to gaining insights into the ways they construct their social order and use psychological forces to radicalize potential sympathizers on social media. With a view to contributing to the existing body of research which mainly focuses on terrorist organizations, we analyzed accounts that advocate violent jihad without supporting (at least publicly) any terrorist group and hence might be able to reach a large and not yet radicalized audience. We constructed a critical multimodal and multidisciplinary framework of discourse patterns that may work as potential triggers to a selection of key cognitive biases and we applied it to a corpus of Facebook posts published by seven extremist Salafists. Results reveal how these posts are either based on an intense crisis construct (through negative outgroup nomination, intensification and emotion) or on simplistic solutions composed of taken-for-granted statements. Devoid of any grey zone, these posts do not seek to convince the reader; polarization is framed as a presuppositional established reality. These observations reveal that extremist Salafist communication is constructed in a way that may trigger specific cognitive biases, which are discussed in the paper.
Content personalisation and the online dissemination of terrorist and violent extremist content
2021 Tech Against Terrorism Policy
We welcome the increased focus amongst policymakers on the role played by content personalisation and other algorithmic recommendation systems on online platforms. Such scrutiny is warranted. Terrorist groups exploit platforms that make use of recommendation algorithms, and there are examples of individuals coming into contact with terrorist and violent extremist content via platforms using content personalisation. However, we are concerned that the current debate is, on a policy level, based on an incomplete understanding of terrorist use of the internet, and that a focus on content personalisation is a distraction from more important steps that should be taken to tackle terrorist use of the internet.
On Frogs, Monkeys, and Execution Memes: Exploring the Humor-Hate Nexus at the Intersection of Neo-Nazi and Alt-Right Movements in Sweden
2021 Askanius, T. Article
This article is based on a case study of the online media practices of the militant neo-Nazi organization the Nordic Resistance Movement, currently the biggest and most active extreme-right actor in Scandinavia. I trace a recent turn to humor, irony, and ambiguity in their online communication and the increasing adaptation of stylistic strategies and visual aesthetics of the Alt-Right inspired by online communities such as 4chan, 8chan, Reddit, and Imgur. Drawing on a visual content analysis of memes (N = 634) created and circulated by the organization, the analysis explores the place of humor, irony, and ambiguity across these cultural expressions of neo-Nazism and how ideas, symbols, and layers of meaning travel back and forth between neo-Nazi and Alt-right groups within Sweden today.
Governing Hate: Facebook and Digital Racism
2021 Siapera, E. and Viejo-Otero, P. Article
This article is concerned with identifying the ideological and techno-material parameters that inform Facebook’s approach to racism and racist contents. The analysis aims to contribute to studies of digital racism by showing Facebook’s ideological position on racism and identifying its implications. To understand Facebook’s approach to racism, the article deconstructs its governance structures, locating racism as a sub-category of hate speech. The key findings show that Facebook adopts a post-racial, race-blind approach that does not consider history and material differences, while its main focus is on enforcement, data, and efficiency. In making sense of these findings, we argue that Facebook’s content governance turns hate speech from a question of ethics, politics, and justice into a technical and logistical problem. Secondly, it socializes users into developing behaviors/contents that adapt to race-blindness, leading to the circulation of a kind of flexible racism. Finally, it spreads this approach from Silicon Valley to the rest of the world.
Affective Practice of Soldiering: How Sharing Images Is Used to Spread Extremist and Racist Ethos on Soldiers of Odin Facebook Site
2021 Nikunen, K., Hokka, J. and Nelimarkka, M. Article
The paper explores how visual affective practice is used to spread and bolster a nationalist, extremist and racist ethos on the public Facebook page of the anti-immigrant group, Soldiers of Odin. Affective practice refers to a particular sensibility of political discourse, shaped by social formations and digital technologies—the contexts in which political groups or communities gather, discuss and act. The study shows how visual affective practice and sharing and responding to images fortify moral claims, sense exclusionary solidarity and promote white nationalist masculinity which legitimizes racist practices of “soldiering.” By examining both the representations and their reactions (emoticons), the study demonstrates how ideas and values are collectively strengthened through affective sharing and are supported by platform infrastructures. Most importantly, it demonstrates that instead of considering the affect of protecting the nation as a natural result of “authentic” gut feeling, we should understand the ways it is purposefully and collectively produced and circulated.
Racism, Hate Speech, and Social Media: A Systematic Review and Critique
2021 Matamoros-Fernández, A. and Farkas, J. Article
Departing from Jessie Daniels’s 2013 review of scholarship on race and racism online, this article maps and discusses recent developments in the study of racism and hate speech in the subfield of social media research. Systematically examining 104 articles, we address three research questions: Which geographical contexts, platforms, and methods do researchers engage with in studies of racism and hate speech on social media? To what extent does scholarship draw on critical race perspectives to interrogate how systemic racism is (re)produced on social media? What are the primary methodological and ethical challenges of the field? The article finds a lack of geographical and platform diversity, an absence of researchers’ reflexive dialogue with their object of study, and little engagement with critical race perspectives to unpack racism on social media. There is a need for more thorough interrogations of how user practices and platform politics co-shape contemporary racisms.
A Snapshot of the Syrian Jihadi Online Ecology: Differential Disruption, Community Strength, and Preferred Other Platforms
2021 Conway, M., Khawaja, M., Lakhani, S. and Reffin, J. Article
This article contributes to the growing literature on extremist and terrorist online ecologies and approaches to snapshotting these. It opens by measuring Twitter’s differential disruption of so-called “Islamic State” versus other jihadi parties to the Syria conflict, showing that while Twitter became increasingly inhospitable to IS in 2017 and 2018, Hay’at Tahrir al-Sham and Ahrar al-Sham retained strong communities on the platform during the same period. An analysis of the same groups’ Twitter out-linking activity has the twofold purpose of determining the reach of groups’ content by quantifying the number of platforms it was available on and analyzing the nature and functionalities of the online spaces out-linked to.
The online behaviors of Islamic state terrorists in the United States
2021 Whittaker, J. Article
This study offers an empirical insight into terrorists’ use of the Internet. Although criminology has previously been quiet on this topic, behavior‐based studies can aid in understanding the interactions between terrorists and their environments. Using a database of 231 US‐based Islamic State terrorists, four important findings are offered: (1) This cohort utilized the Internet heavily for the purposes of both networking with co‐ideologues and learning about their intended activity. (2) There is little reason to believe that these online interactions are replacing offline ones, as has previously been suggested. Rather, terrorists tend to operate in both domains. (3) Online activity seems to be similar across the sample, regardless of the number of co‐offenders or the sophistication of attack. (4) There is reason to believe that using the Internet may be an impediment to terrorists’ success.
Rushing to Judgement: Are Short Mandatory Takedown Limits for Online Hate Speech Compatible with The Freedom of Expression?
2021 Mchangama, J., Alkiviadou, N. and Mendiratta, R. Report
Uncovering the Far-Right Online Ecosystem: An Analytical Framework and Research Agenda
2020 Baele, S.J., Brace, L. and Coan, T.G. Article
Recent years have seen a substantial increase in far-right inspired attacks. In this context, the present article offers an analytical framework for the study of right-wing extremists’ multifaceted and fast-growing activity on the Internet. Specifically, we conceptualize the far-right online presence as a dynamic ecosystem, teasing out four major components that correspond to the different levels of analysis available to future research. We illustrate the benefits of this framework with key illustrative examples from the English-, French-, and German-speaking far-right, revealing the worrying size and breadth—but also heterogeneity—of today’s far-right online ecosystem.