Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

From Bombs to Books, and Back Again? Mapping Strategies of Right-Wing Revolutionary Resistance
2021 Ravndal, J.A. Article
This article begins by outlining four post-WWII strategies of right-wing revolutionary resistance: vanguardism; the cell system; leaderless resistance; and metapolitics. Next, the article argues that metapolitics became a preferred strategy for many right-wing revolutionaries during the 2000s and early 2010s, and proposes three conditions that may help explain this metapolitical turn: limited opportunities for armed resistance; a subcultural style shift; and new opportunities for promoting alternative worldviews online. Finally, the article theorizes about the types of threats that may emerge in the wake of this metapolitical turn, and speculates about the likelihood of a new and more violent turn in the near future.
Understanding online hate: VSP Regulation and the broader context
2021 Vidgen, B., Burden, E. and Margetts, H. Report
This report aims to contribute to our understanding of online hate in the context of the requirements of the revised Audiovisual Media Services Directive (AVMSD) for Video Sharing Platforms (VSPs) to protect the general public from incitement to hatred or violence. However, online hate is complex and it can only be fully understood by considering issues beyond the very specific focus of these regulations. Hence, we draw on recent social and computational research to consider a range of points outside VSP regulations, such as the impact, nature and dynamics of online hate. For similar reasons, we have considered expressions of hate across a range of online spaces, including VSPs as well as other online platforms. In particular, we have closely examined how online hate is currently addressed by industry, identifying key and emerging issues in content moderation practices. Our analyses will be relevant to a range of experts and stakeholders working to address online hate, including researchers, platforms, regulators and civil society organisations.
The Domestic Extremist Next Door: How Digital Platforms Enable the War Against American Government
2021 Digital Citizens Alliance Report
Digital platforms have enabled the disturbing rise of domestic extremism, culminating in the January 6 attack on the U.S. Capitol. Militia groups use social media networks to plan operations, recruit new members, and spread anti-democracy propaganda, a new Digital Citizens Alliance (Digital Citizens) and Coalition for a Safer Web (CSW) investigation has found.
GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right
2021 Mirrlees, T. Article
Purpose – This chapter demonstrates the power that Google, Apple, Facebook, Amazon and Microsoft (or the “GAFAM”) exercise over platforms within society, highlights the alt-right’s use of GAFAM sites and services as a platform for hate, and examines GAFAM’s establishment and use of hate content moderation apparatuses to de-platform alt-right users and delete hate content. Approach – Drawing upon a political economy of communications approach, this chapter demonstrates GAFAM’s power in society. It also undertakes a reading of GAFAM “terms of service agreements” and “community guidelines” documents to identify GAFAM hate content moderation apparatuses. Findings – GAFAM are among the most powerful platforms in the world, and their content moderation apparatuses are empowered by the US government’s cyber-libertarian approach to Internet law and regulation. GAFAM are defining hate speech, deciding what’s to be done about it, and censoring it. Value – This chapter probes GAFAM’s hate content moderation apparatuses for Internet platforms, and shows how GAFAM enable and constrain the alt-right’s hate speech on their platforms. It also reflexively assesses the politics of empowering GAFAM to de-platform the alt-right.
Layers of Lies: A First Look at Irish Far-Right Activity on Telegram
2021 Gallagher, A. and O’Connor, C. Report
This report aims to provide a first look at Irish far-right activity on the messaging app Telegram, where the movement operates both through identifiable groups and influencers and through anonymously run channels and groups.

The report looks at the activity across 34 such Telegram channels through the lens of a series of case studies where content posted on these channels resulted in real life consequences. Using both quantitative and qualitative methods, the report examines the tactics, language and trends within these channels, providing much-needed detail on the activity of the Irish far-right online.
Variations on a Theme? Comparing 4chan, 8kun, and Other chans’ Far-Right “/pol” Boards
2021 Baele, S.J., Brace, L. and Coan, T.G. Article
Online forums such as 4chan and 8chan have grown in notoriety following a number of high-profile attacks conducted in 2019 by right-wing extremists who used their “/pol” boards (dedicated to “politically incorrect” discussions). Despite growing academic interest in these online spaces, little is still known about them; in particular, their similarities and differences remain to be teased out, and their respective roles in fostering a certain far-right subculture need to be specified. This article therefore directly compares the content and discussion pace of six different /pol boards of “chan” forums, including some that exist solely on the dark web. We find that while these boards together constitute a particular subculture, differences in both rate of traffic and content demonstrate the fragmentation of this subculture. Specifically, we show that the different /pol boards can be grouped into a three-tiered architecture based on both how popular they are and how extreme their content is.
Leaking in terrorist attacks: A review
2021 Dudenhoefer, A.L., Niesse, C., Görgen, T., Tampe, L., Megler, M., Gröpler, C. and Bondü, R. Article
In the recent past, the numbers of religiously- and politically-motivated terrorist attacks have increased, inevitably raising the question of effective measures to prevent further terrorist attacks. Empirical studies related to school shootings have shown that school shooters reliably (directly and indirectly) disclosed their intentions or plans prior to the attack, a phenomenon termed leaking or leakage. Leaking has been used for preventive purposes in this area of research. Recent research has indicated that leaking was also present prior to politically and religiously motivated terrorist attacks. In order to determine the current state of knowledge about leaking related to these offenses, we conducted a review of the international literature on religiously and politically motivated terrorist attacks. Up to 90% of the offenders showed some type of leaking prior to the attacks. A range of different forms of leaking could be observed. Leaking often occurred in the form of verbal communication with family and friends and/or via communication over the Internet. Terrorist offenders apparently tend to show leaking more often than other groups of mass murderers. Findings regarding similarities and dissimilarities in leaking between religiously motivated, jihadist and politically-motivated, far-right terrorist attacks were contradictory. We discuss the implications of these findings for practice and research as well as the strengths and possible weaknesses of the leaking concept.
Recontextualising the News: How antisemitic discourses are constructed in extreme far-right alternative media
2021 Haanshuus, B.P. and Ihlebæk, K.A. Article
This study explores how an extreme far-right alternative media site uses content from professional media to convey uncivil news with an antisemitic message. Analytically, it rests on a critical discourse analysis of 231 news items, originating from established national and international news sources, published on Frihetskamp from 2011–2018. In the study, we explore how news items are recontextualised to portray both overt and covert antisemitic discourses, and we identify four antisemitic representations that are reinforced through the selection and adjustment of news: Jews as powerful, as intolerant and anti-liberal, as exploiters of victimhood, and as inferior. These conspiratorial and exclusionary ideas, also known from historical Nazi propaganda, are thus reproduced by linking them to contemporary societal and political contexts and the current news agenda. We argue that this kind of recontextualised, uncivil news can be difficult to detect in a digital public sphere.
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
2021 Thakur, D. and Llansó, E. Report
The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content.
The hidden hierarchy of far-right digital guerrilla warfare
2021 Cesarino, L. and Nardelli, P.H.J. Article
The polarizing tendency of politically leaning social media is usually claimed to be spontaneous, or a by-product of underlying platform algorithms. This contribution revisits both claims by articulating the digital world of social media with rules derived from capitalist accumulation in the post-Fordist age, from a transdisciplinary perspective bridging the human and exact sciences. Behind claims of individual freedom, there is a rigid pyramidal hierarchy of power making heavy use of military techniques developed in the late years of the Cold War, namely Russian Reflexive Control and Boyd’s decision cycle in the USA. This hierarchy is not the old-style “command-and-control” of Fordist times, but an “emergent” one, whereby individual agents respond to informational stimuli, coordinated to move as a swarm. Such a post-Fordist organizational structure resembles guerrilla warfare. In this new world, it is the far right who play the revolutionaries by deploying avant-garde guerrilla methods, while the so-called left paradoxically appears as conservatives defending the existing structure of exploitation. Although the tactical goal is unclear, the strategic objective of far-right guerrillas is to hold on to power and benefit particular groups so as to accumulate more capital. We draw examples from the Brazilian far right to support our claims.
Identifying Key Players in Violent Extremist Networks: Using Socio-Semantic Network Analysis as Part of a Program of Content Moderation
2021 Bérubé, M., Beaulieu, L.A., Mongeau, P. and Saint-Charles, J. Article
Some strategies for moderating online content have targeted the individuals believed to be the most influential in the diffusion of such material, while others have focused on censorship of the content itself. Few approaches consider these two aspects simultaneously. The present study addresses this gap by showing how a socio-semantic network analysis can help identify individuals and subgroups who are strategically positioned in radical networks and whose comments encourage the use of violence. It also makes it possible to identify the individuals and subgroups who act as intermediaries and whose statements are often the most violent.
Hidden order across online extremist movements can be disrupted by nudging collective chemistry
2021 Velásquez, N., Manrique, P., Sear, R., Leahy, R., Restrepo, N.J., Illari, L., Lupu, Y. and Johnson, N.F. Article
Disrupting the emergence and evolution of potentially violent online extremist movements is a crucial challenge. Extremism research has analyzed such movements in detail, focusing on individual- and movement-level characteristics. But are there system-level commonalities in the ways these movements emerge and grow? Here we compare the growth of the Boogaloos, a new and increasingly prominent U.S. extremist movement, to the growth of online support for ISIS, a militant, terrorist organization based in the Middle East that follows a radical version of Islam. We show that the early dynamics of these two online movements follow the same mathematical order despite their stark ideological, geographical, and cultural differences. The evolution of both movements, across scales, follows a single shockwave equation that accounts for heterogeneity in online interactions. These scientific properties suggest specific policies to address online extremism and radicalization. We show how actions by social media platforms could disrupt the onset and ‘flatten the curve’ of such online extremism by nudging its collective chemistry. Our results provide a system-level understanding of the emergence of extremist movements that yields fresh insight into their evolution and possible interventions to limit their growth.
Operating with impunity: legal review
2021 Commission for Countering Extremism Policy
The independent Commission for Countering Extremism has published a legal review, to examine whether existing legislation adequately deals with hateful extremism.
Online Hate and Zeitgeist of Fear: A Five-Country Longitudinal Analysis of Hate Exposure and Fear of Terrorism After the Paris Terrorist Attacks in 2015
2021 Kaakinen, M., Oksanen, A., Gadarian, S.K., Solheim, Ø.B., Herreros, F., Winsvold, M.S., Enjolras, B. and Steen‐Johnsen, K. Article
Acts of terror lead both to a rise of an extended sense of fear that goes beyond the physical location of the attacks and to increased expressions of online hate. In this longitudinal study, we analyzed the dynamics between exposure to online hate and fear of terrorism after the Paris attacks on November 13, 2015. We hypothesized that exposure to online hate is connected to a perceived Zeitgeist of fear (i.e., collective fear). In turn, the perceived Zeitgeist of fear is related to higher personal fear of terrorism both immediately after the attacks and a year later. Hypotheses were tested using path modeling and panel data (N = 2325) from Norway, Finland, Spain, France, and the United States a few weeks after the Paris attacks in November 2015 and again a year later in January 2017. With the exception of Norway, exposure to online hate had a positive association with the perceived Zeitgeist of fear in all our samples. The Zeitgeist of fear was correlated with higher personal fear of terrorism immediately after the attacks and one year later. We conclude that online hate content can contribute to the extended sense of fear after terrorist attacks by skewing perceptions of the social climate.
Digital Dog Whistles: The New Online Language of Extremism
The International Journal of Security Studies Weimann, G. Article
Terrorist and extremist groups communicate sometimes openly but very often in concealed formats. Recently, far-right extremists, including white supremacists, anti-Semitic groups, racists, and neo-Nazis, have started using a coded “New Language”. Alarmed by police and security forces’ attempts to find them online and by social platforms’ attempts to remove their content, they apply this new language of codes and doublespeak. This study explores the emergence of this new language, the system of code words developed by far-right extremists. What are the characteristics of this new language? How is it transmitted? How is it used? Our survey of online far-right content reveals the use of visual and textual codes by extremists. These hidden languages enable extremists to hide in plain sight while allowing like-minded individuals to identify one another easily. There is no doubt that the “new language” used online by far-right groups comprises all the known attributes of a language: it is highly creative, productive and instinctive, and relies on exchanges of verbal or symbolic utterances shared by certain individuals and groups. These findings should serve both law enforcement and private-sector bodies interested in preventing hate speech online.