Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add more material as it becomes available, with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library, whether your own or another author’s, please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Understanding online hate: VSP Regulation and the broader context
2021 Vidgen, B., Burden, E. and Margetts, H. Report
This report aims to contribute to our understanding of online hate in the context of the requirements of the revised Audiovisual Media Services Directive (AVMSD) for Video Sharing Platforms (VSPs) to protect the general public from incitement to hatred or violence. However, online hate is complex and it can only be fully understood by considering issues beyond the very specific focus of these regulations. Hence, we draw on recent social and computational research to consider a range of points outside VSP regulations, such as the impact, nature and dynamics of online hate. For similar reasons, we have considered expressions of hate across a range of online spaces, including VSPs as well as other online platforms. In particular, we have closely examined how online hate is currently addressed by industry, identifying key and emerging issues in content moderation practices. Our analyses will be relevant to a range of experts and stakeholders working to address online hate, including researchers, platforms, regulators and civil society organisations.
Discourse patterns used by extremist Salafists on Facebook: identifying potential triggers to cognitive biases in radicalized content
2021 Bouko, C., Naderer, B., Rieger, D., Van Ostaeyen, P. and Voué, P. Article
Understanding how extremist Salafists communicate, and not only what, is key to gaining insights into the ways they construct their social order and use psychological forces to radicalize potential sympathizers on social media. With a view to contributing to the existing body of research which mainly focuses on terrorist organizations, we analyzed accounts that advocate violent jihad without supporting (at least publicly) any terrorist group and hence might be able to reach a large and not yet radicalized audience. We constructed a critical multimodal and multidisciplinary framework of discourse patterns that may work as potential triggers to a selection of key cognitive biases and we applied it to a corpus of Facebook posts published by seven extremist Salafists. Results reveal how these posts are either based on an intense crisis construct (through negative outgroup nomination, intensification and emotion) or on simplistic solutions composed of taken-for-granted statements. Devoid of any grey zone, these posts do not seek to convince the reader; polarization is framed as a presuppositional established reality. These observations reveal that extremist Salafist communication is constructed in a way that may trigger specific cognitive biases, which are discussed in the paper.
Content personalisation and the online dissemination of terrorist and violent extremist content
2021 Tech Against Terrorism Policy
We welcome the increased focus amongst policymakers on the role played by content personalisation and other algorithmic recommendation systems on online platforms. Such scrutiny is warranted. Terrorist groups exploit platforms that make use of recommendation algorithms, and there are examples of individuals coming into contact with terrorist and violent extremist content via platforms using content personalisation. However, we are concerned that the current debate is, on a policy level, based on an incomplete understanding of terrorist use of the internet, and that a focus on content personalisation is a distraction from more important steps that should be taken to tackle terrorist use of the internet.
On Frogs, Monkeys, and Execution Memes: Exploring the Humor-Hate Nexus at the Intersection of Neo-Nazi and Alt-Right Movements in Sweden
2021 Askanius, T. Article
This article is based on a case study of the online media practices of the militant neo-Nazi organization the Nordic Resistance Movement, currently the biggest and most active extreme-right actor in Scandinavia. I trace a recent turn to humor, irony, and ambiguity in their online communication and the increasing adaptation of stylistic strategies and visual aesthetics of the Alt-Right, inspired by online communities such as 4chan, 8chan, Reddit, and Imgur. Drawing on a visual content analysis of memes (N = 634) created and circulated by the organization, the analysis explores the place of humor, irony, and ambiguity across these cultural expressions of neo-Nazism and how ideas, symbols, and layers of meaning travel back and forth between neo-Nazi and Alt-Right groups within Sweden today.
Governing Hate: Facebook and Digital Racism
2021 Siapera, E. and Viejo-Otero, P. Article
This article is concerned with identifying the ideological and techno-material parameters that inform Facebook’s approach to racism and racist contents. The analysis aims to contribute to studies of digital racism by showing Facebook’s ideological position on racism and identifying its implications. To understand Facebook’s approach to racism, the article deconstructs its governance structures, locating racism as a sub-category of hate speech. The key findings show that Facebook adopts a post-racial, race-blind approach that does not consider history and material differences, while its main focus is on enforcement, data, and efficiency. In making sense of these findings, we argue that Facebook’s content governance turns hate speech from a question of ethics, politics, and justice into a technical and logistical problem. Secondly, it socializes users into developing behaviors/contents that adapt to race-blindness, leading to the circulation of a kind of flexible racism. Finally, it spreads this approach from Silicon Valley to the rest of the world.
Affective Practice of Soldiering: How Sharing Images Is Used to Spread Extremist and Racist Ethos on Soldiers of Odin Facebook Site
2021 Nikunen, K., Hokka, J. and Nelimarkka, M. Article
The paper explores how visual affective practice is used to spread and bolster a nationalist, extremist and racist ethos on the public Facebook page of the anti-immigrant group Soldiers of Odin. Affective practice refers to a particular sensibility of political discourse, shaped by social formations and digital technologies—the contexts in which political groups or communities gather, discuss and act. The study shows how visual affective practice and the sharing of and responding to images fortify moral claims, foster a sense of exclusionary solidarity and promote a white nationalist masculinity which legitimizes racist practices of “soldiering.” By examining both the representations and the reactions to them (emoticons), the study demonstrates how ideas and values are collectively strengthened through affective sharing and are supported by platform infrastructures. Most importantly, it demonstrates that instead of considering the affect of protecting the nation as a natural result of “authentic” gut feeling, we should understand the ways it is purposefully and collectively produced and circulated.
Racism, Hate Speech, and Social Media: A Systematic Review and Critique
2021 Matamoros-Fernández, A. and Farkas, J. Article
Departing from Jessie Daniels’s 2013 review of scholarship on race and racism online, this article maps and discusses recent developments in the study of racism and hate speech in the subfield of social media research. Systematically examining 104 articles, we address three research questions: Which geographical contexts, platforms, and methods do researchers engage with in studies of racism and hate speech on social media? To what extent does scholarship draw on critical race perspectives to interrogate how systemic racism is (re)produced on social media? What are the primary methodological and ethical challenges of the field? The article finds a lack of geographical and platform diversity, an absence of researchers’ reflexive dialogue with their object of study, and little engagement with critical race perspectives to unpack racism on social media. There is a need for more thorough interrogations of how user practices and platform politics co-shape contemporary racisms.
A Snapshot of the Syrian Jihadi Online Ecology: Differential Disruption, Community Strength, and Preferred Other Platforms
2021 Conway, M., Khawaja, M., Lakhani, S. and Reffin, J. Article
This article contributes to the growing literature on extremist and terrorist online ecologies and approaches to snapshotting these. It opens by measuring Twitter’s differential disruption of so-called “Islamic State” versus other jihadi parties to the Syria conflict, showing that while Twitter became increasingly inhospitable to IS in 2017 and 2018, Hay’at Tahrir al-Sham and Ahrar al-Sham retained strong communities on the platform during the same period. An analysis of the same groups’ Twitter out-linking activity has the twofold purpose of determining the reach of groups’ content by quantifying the number of platforms it was available on and analyzing the nature and functionalities of the online spaces out-linked to.
The online behaviors of Islamic state terrorists in the United States
2021 Whittaker, J. Article
This study offers an empirical insight into terrorists’ use of the Internet. Although criminology has previously been quiet on this topic, behavior‐based studies can aid in understanding the interactions between terrorists and their environments. Using a database of 231 US‐based Islamic State terrorists, four important findings are offered: (1) This cohort utilized the Internet heavily for the purposes of both networking with co‐ideologues and learning about their intended activity. (2) There is little reason to believe that these online interactions are replacing offline ones, as has previously been suggested. Rather, terrorists tend to operate in both domains. (3) Online activity seems to be similar across the sample, regardless of the number of co‐offenders or the sophistication of attack. (4) There is reason to believe that using the Internet may be an impediment to terrorists’ success.
Rushing to Judgement: Are Short Mandatory Takedown Limits for Online Hate Speech Compatible with the Freedom of Expression?
2021 Mchangama, J., Alkiviadou, N. and Mendiratta, R. Report
Uncovering the Far-Right Online Ecosystem: An Analytical Framework and Research Agenda
2020 Baele, S.J., Brace, L. and Coan, T.G. Article
Recent years have seen a substantial increase in far-right inspired attacks. In this context, the present article offers an analytical framework for the study of right-wing extremists’ multifaceted and fast-growing activity on the Internet. Specifically, we conceptualize the far-right online presence as a dynamic ecosystem, teasing out four major components that correspond to the different levels of analysis available to future research. We illustrate the benefits of this framework with key illustrative examples from the English-, French-, and German-speaking far-right, revealing the worrying size and breadth – but also heterogeneity – of today’s far-right online ecosystem.
Unity Starts with U: A Case Study of a Counter-Hate Campaign Through the Use of Social Media Platforms
2020 Leung, C. and Frank, R. Article
Hate has been a growing concern, with hate groups and individuals using the Internet, or more specifically social media platforms, to globalize hate. Since these social media platforms can connect users around the world, hate organizations are using these connections as opportunities to recruit candidates and spread their propaganda. Without opposing views, these extreme viewpoints can establish themselves as legitimate and then be used to incite hate in individuals. Thus, these extreme viewpoints must be countered by similar messages to discourage this online hate, and one such way is to use the same platforms through grassroots movements. This paper presents a case study conducted on a class of Criminology students who implemented a grassroots community-based campaign called Unity Starts with U (USwithU) to counter hate in a community by using social media platforms to spread messages of inclusion and share experiences. The results from the campaign showed improvements in people’s attitudes towards hate at the local community level. Based on the literature and this campaign, policy recommendations are suggested for policymakers to consider when creating or improving counter-narrative programs.
Jihadist, Far-right And Far-left Terrorism In Cyberspace – Same Threat And Same Countermeasures?
2020 Ingelevič-Citak, M. and Przyszlak, Z. Article
This paper investigates whether the counter-terrorism measures developed and implemented within the European Union have a universal character and are equally effective against various types of terrorism. The authors focus on strategies applicable to terrorist activities online, since information and communication technology is perceived as the fastest growing and continually changing field of the terrorist threat. So far, most counteractions and security strategies have been subordinated to combating jihadism. However, in recent years, significant growth in threats from far-right and far-left terrorist activities has been observed. This raises questions about the capability of these instruments to prevent and combat other types of terrorism as well as jihadism. The research was conducted, in particular, on the basis of international organizations’ reports, the authors’ observations, and practitioners’ remarks. As the results show, there are significant differences in the phenomenon, current trends, and modus operandi of the perpetrators of jihadi, far-right, and far-left terrorism. Consequently, it is possible to conclude that the effectiveness of the chosen countermeasures, subordinated as a rule to fighting jihadi extremists, is doubtful in preventing and combating far-right and far-left terrorism.
The ‘tarrant effect’: what impact did far-right attacks have on the 8chan forum?
2020 Baele, S.J., Brace, L. and Coan, T.G. Article
This paper analyses the impact of a series of mass shootings committed in 2018–2019 by right-wing extremists on 8chan/pol/, a prominent far-right online forum. Using computational methods, it offers a detailed examination of how attacks trigger shifts in both forum activity and content. We find that while each shooting is discussed by forum participants, their respective impact varies considerably. We highlight, in particular, a ‘Tarrant effect’: the considerable effect Brenton Tarrant’s attack on two mosques in Christchurch, New Zealand, had on the forum. Considering the rise in far-right terrorism and the growing and diversifying online far-right ecosystem, such interactive offline-online effects warrant the attention of scholars and security professionals.
Global Jihad and International Media Use
2020 Sirgy, M.J., Estes, R.J. and Rahtz, D.R. Chapter
Globalization and international media are potent contributors to the rise of the Islamist global jihad. Widespread digital communication technologies that connect people all over the world are a substantial component of globalization. Over the past three decades, “virtual jihad” has emerged as a potent disseminator of radical religious-political ideologies, instilling fear and fostering instability worldwide. Western and global media, while often misrepresenting Islam and Muslims, have played a significant role in disseminating jihadist ideologies. The involvement of global jihadists (mujāhidīn) across myriad media outlets and platforms has allowed them to promote their agenda around the world. Using the Internet and media outlets, global jihadists are able to attract and recruit people to their ranks in an accelerated manner. Jihadists have engaged in media activities that have empowered and expanded the global jihad movement, even in the face of increased mitigation efforts.
Crisis and Loss of Control: German-Language Digital Extremism in the Context of the COVID-19 Pandemic
2020 Guhl, J. and Gerster, L. Report
This report analyses the networks and narratives of German-speaking far-right, far-left and Islamist extremist actors on mainstream and alternative social media platforms and extremist websites in the context of the COVID-19 pandemic. Our results show that extremists from Germany, Austria and Switzerland have been able to increase their reach since the introduction of the lockdown measures.
The Anti-Hate Brigade: How a Group of Thousands Responds Collectively to Online Vitriol
2020 Buerger, C. Report
#jagärhär is by far the largest and best-organized collective effort to respond directly to hatred online, anywhere in the world, as far as we know. It is also one of only two civil society efforts against hatred online to have been replicated in numerous other countries. In this detailed account of its efforts, the first qualitative study of such a group, Cathy Buerger shares her findings on how and why #jagärhär members do what they do, how working collectively influences members’ ability and willingness to respond to hatred, and how the group’s strategy is carefully designed to take advantage of Facebook’s algorithms and influence ideas and discourse norms among the general public – not necessarily the ones writing the hateful comments.
Indonesia: Social Grievances and Violent Extremism
2020 Moonshot CVE Report
Extremist groups’ online recruitment mechanisms frequently exploit the wide range of grievances and vulnerabilities experienced by individuals at risk of radicalisation. While it is widely accepted that mental health and wellbeing play a vital role in resilience to violent extremism, most approaches tend to focus on preventing violent extremism through purely ideological means and are not sufficiently tailored to the individual at risk.

In an effort to understand this audience further and, more importantly, the most effective means of preventing violent extremism, Moonshot conducted an experiment to assess the propensity among at-risk users in Indonesia to engage with ideological counter-content compared to psychosocial support content. The data gathered during this pilot indicate that psychosocial support is an area of unmet need among some of the individuals most vulnerable to violent jihadist recruitment online in Indonesia, and that this population is open to engaging with online support.
Facebook Redirect Programme: Moonshot Evaluation
2020 Moonshot CVE Report
The Facebook Redirect Programme (FRP) is designed to combat violent extremism and dangerous organisations by redirecting users who have entered hate or violence-related search queries towards educational resources and outreach groups. A pilot of the programme was launched with delivery partners Life After Hate in May 2019 and Exit Australia in September 2019. It was specifically designed to ensure that individuals searching for white supremacist and/or neo-Nazi communities on Facebook would be offered authentic, meaningful and impactful support off-platform. The purpose of this pilot was to test the programme design and inform future deployments targeting both new geographies and other hate-based communities. Moonshot was contracted by Facebook to evaluate the pilot period of programme performance and make recommendations for future deployments. This report evaluates the pilot programme by examining:
- Facebook’s use of keywords and the safety module as a method of redirecting people off-platform;
- The full user journey from Facebook to delivery partner landing pages;
- The extent to which the pilot can be considered a proof of concept for future deployments.
Shades of hatred online: 4chan duplicate circulation surge during hybrid media events
2020 Zelenkauskaite, A., Toivanen, P., Huhtamäki, J. and Valaskivi, K. Article
The 4chan /pol/ platform is a controversial online space on which a surge in hate speech has been observed. While recent research indicates that events may lead to more hate speech, empirical evidence on the phenomenon remains limited. This study analyzes 4chan /pol/ user activity during the mass shootings in Christchurch and Pittsburgh and compares it with the frequency and nature of user activity prior to these events. We find not only a surge in the use of hate speech and anti-Semitism but also increased circulation of duplicate messages, links, and images, and an overall increase in messages from users who self-identify as “white supremacist” or “fascist,” primarily voiced from English-speaking IP-based locations: the U.S., Canada, Australia, and Great Britain. Finally, we show how these hybrid media events share the arena with other prominent events involving different agendas, such as the U.S. midterm elections. The significant increase in duplicates during the hybrid media events in this study is interpreted beyond their memetic logic; it can be understood through what we refer to as activism of hate. Our findings indicate that there is either a group of dedicated users who are compelled to support the causes for which the shootings took place and/or that users employ automated means to achieve duplication.