Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author's—please contact us at and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we thus welcome contributions in all languages.


Full Listing

An Exploration of the Involuntary Celibate (Incel) Subculture Online
2020 Liggett O’Malley, R. and Holt, K.M. Article
Incels, a portmanteau of the term involuntary celibates, operate in online communities to discuss difficulties in attaining sexual relationships. Past reports have found that multiple elements of the incel culture are misogynistic and favorable towards violence. Further, several violent incidents have been linked to this community, which suggests that incel communities may resemble other ideologically motivated extremist groups. The current study employed an inductive qualitative analysis of over 8,000 posts made in two online incel communities to identify the norms, values, and beliefs of these groups from a subcultural perspective. Analyses found that the incel community was structured around five interrelated normative orders: the sexual market, women as naturally evil, legitimizing masculinity, male oppression, and violence. The implications of this analysis for our understanding of extremism and the role of the internet in radicalization to violence are considered in depth.
Research Note: More Bucks, Still No Bangs? Why a Cost-Benefit Analysis of Cyberterrorism Still Holds True
2020 Giacomello, G. Article
Taking as its reference a cost-benefit analysis of cyberterrorism published in 2004 in Studies in Conflict and Terrorism, this article briefly reviews what has happened in cyberterrorism research over the last 15 years: what was correctly forecast, what was wrong, and what may happen in the future. Some of the analyses published during this period have been accurate, indicating that terrorists would use the Web and then social media to support operations in financing, recruiting, and especially "propaganda" (information operations), instead of wasting resources on ineffectual cyberattacks against critical infrastructures. The same analyses, however, did not fully anticipate how successful terrorist groups' information operations would be. Overall, the approach, research methods, findings, and forecasting have been quite valid and fruitful, and can thus represent a solid foundation on which scholars of (cyber)terrorism may base their future research.
Hiding hate speech: political moderation on Facebook
2020 Kalsnes, B. and Ihlebæk, K.A. Article
Facebook facilitates more extensive dialogue between citizens and politicians. However, communicating via Facebook has also put pressure on political actors to administer and moderate online debates in order to deal with uncivil comments. Based on a platform analysis of Facebook's comment moderation functions and interviews with eight political parties' communication advisors, this study explored how political actors conduct comment moderation. The findings indicate that these actors acknowledge being responsible for moderating debates. Since turning off the comment section is impossible on Facebook, moderators can choose to delete or hide comments, and these arbiters tend to use the latter in order to avoid an escalation of conflicts. The hide function makes comments invisible to participants in the comment section, but the hidden texts remain visible to those who made them and their networks. Thus, users are unaware of being moderated. In this paper, we argue that hiding problematic speech without users' awareness has serious ramifications for public debates, and we examine the ethical challenges associated with the lack of transparency in comment sections and the way moderation is conducted on Facebook.
Online influence, offline violence: language use on YouTube surrounding the ‘Unite the Right’ rally
2020 van der Vegt, I., Mozes, M., Gill, P. and Kleinberg, B. Article
The media frequently describes the 2017 Charlottesville ‘Unite the Right’ rally as a turning point for the alt-right and white supremacist movements. Social movement theory suggests that the media attention and public discourse concerning the rally may have engendered changes in social identity performance and visibility of the alt-right, but this has yet to be empirically tested. The presence of the movement on YouTube is of particular interest, as this platform has been referred to as a breeding ground for the alt-right. The current study investigates whether there are differences in language use between 7142 alt-right and progressive YouTube channels, in addition to measuring possible changes as a result of the rally. To do so, we create structural topic models and measure bigram proportions in video transcripts, spanning approximately 2 months before and after the rally. We observe differences in topics between the two groups, with the ‘alternative influencers’, for example, discussing topics related to race and free speech to a larger extent than progressive channels. We also observe structural breakpoints in the use of bigrams at the time of the rally, suggesting there are changes in language use within the two groups as a result of the rally. While most changes relate to mentions of the rally itself, the alternative group also shows an increase in promotion of their YouTube channels. In light of social movement theory, we argue that language use on YouTube shows that the Charlottesville rally indeed triggered changes in social identity performance and visibility of the alt-right.
Network-Enabled Anarchy: How Militant Anarcho-Socialist Networks Use Social Media to Instigate Widespread Violence Against Political Opponents and Law Enforcement
2020 Finkelstein, J., Goldenberg, A., Stevens, S., Jussim, L., Farmer, J., Donohue, J.K. and Paresky, P. Report
Our primary research question was whether memes and codewords, private or fringe online forums, and hybrid real-world/online militia—the three characteristic tactics that support outbreaks of extremist violence for both Jihadi and Boogaloo extremism—are also prevalent in anti-fascist and anarcho-socialist groups. To analyze the use and prevalence of memes and other coded language and activity, we performed analyses on over ten million social media comments ranging from mainstream platforms (such as Twitter) to fringe online forums on Reddit. Throughout the research, we examined whether the same extremist themes and actions that are characteristic of both Jihadi and Boogaloo extremists—themes such as violent revolution, martyrdom, and having a utopian narrative, and actions such as terror attacks—are also found in extremist groups espousing anti-fascism and anarcho-socialism.
The Radicalization Risks Of GPT-3 And Advanced Neural Language Models
2020 McGuffie, K. and Newhouse, A. Report
In 2020, OpenAI developed GPT-3, a neural language model that is capable of sophisticated natural language generation and completion of tasks like classification, question-answering, and summarization. While OpenAI has not open-sourced the model's code or pre-trained weights at the time of writing, it has built an API to experiment with the model's capacity. The Center on Terrorism, Extremism, and Counterterrorism (CTEC) evaluated the GPT-3 API for the risk of weaponization by extremists who may attempt to use GPT-3 or hypothetical unregulated models to amplify their ideologies and recruit to their communities.
Internet, the Great Radicalizer? Exploring Relationships Between Seeking for Online Extremist Materials and Cognitive Radicalization in Young Adults
2020 Frissen, T. Article
Anecdotal evidence asserts that extremist materials on the internet play a decisive role in radicalization processes. However, due to a structural absence of empirical data in the current literature, it remains uncertain if—and to what extent—online extremist materials radicalize. Therefore, the approach of the study was two-fold. First, we explored what types of online jihadist media are pro-actively sought and consumed by young adults. Second, we investigated if and how active exposure to online jihadist media is related to cognitive radicalization, whilst taking into account one's moral disengagement, prior involvement in petty crime, and socio-demographics. Cross-sectional data analyses within a sample of Belgian young adults (n = 1872) show that beheading videos—the most violent and radical form of any of the jihadist materials under scrutiny—were most sought online (36%), but were, paradoxically, the least predictive for radicalization. On the contrary, the rather static jihadist magazines were sought by a small minority (10–11%) but were most strongly associated with radicalization. A stepwise linear regression analysis and Structural Equation Model support our hypothesis that the process of cognitive radicalization is a complex, phasic trajectory from actively seeking out extremist materials to sympathies for violent political behaviors.
Is IS Online Chatter Just Noise?: An Analysis of the Islamic State Strategic Communications
2020 Royo-Vela, M. and McBee, K.A. Article
The objective of this research is to analyze the Islamic State's (IS) potential use of strategic communication—specifically, strategic brand management and online communications directed at a foreign target audience. For this purpose, a review of official IS online media releases was carried out to determine whether they reflect characteristics, components, or tools of strategic communication. A content analysis of a purposive sample of 381 items—photographs, infographics, English-language written content, and videos—was performed by two independent judges. Statistically significant results substantiate claims that the IS uses communication strategies and tactics. Descriptive and inferential statistics indicate that Delegitimize Opponents, Utopia-Religion, and Military are the three most commonly used themes, in line with the reported IS identity. The IS also deploys a breadth of content, targets, media outlets, and formats in its online and social media communications. As far as the authors are aware, this is the first content analysis to interpret potential branding by the IS from a strategic communication perspective. The study is timely and important: it lends credibility to the use of communication and marketing terminology by terrorism experts, and it bridges brand management, strategic communication, and terrorism research by identifying potential targets, messages, channels, and tools for counter-arguing and positioning.
Decoding Hate: Using Experimental Text Analysis to Classify Terrorist Content
2020 Alrhmoun, A., Maher, S. and Winter, C. Report
This paper uses automated text analysis – the process by which unstructured text is extracted, organised and processed into a meaningful format – to develop tools capable of analysing Islamic State (IS) propaganda at scale. Although we have used a static archive of IS material, the underlying principle is that these techniques can be deployed against content produced by any number of violent extremist movements in real‑time. This study therefore aims to complement work that looks at technology‑driven strategies employed by social media, video‑hosting and file‑sharing platforms to tackle violent extremist content disseminators.
The EU Digital Services Act (DSA): Recommendations For An Effective Regulation Against Terrorist Content Online
2020 Ritzmann, A., Farid, H. and Schindler, H-J. Policy
In 2020, the Counter Extremism Project (CEP) Berlin carried out a sample analysis to test the extent to which YouTube, Facebook and Instagram block “manifestly illegal” terrorist content upon notification. Our findings raise doubts that the currently applied compliance systems of these companies achieve the objectives of making their services safe for users. As a result, CEP proposed requirements for effective and transparent compliance regimes with a focus on automated decision-making tools and lessons learned from the regulation of the financial industry. This Policy Paper combines all the relevant lessons learned and gives concrete recommendations on how to make the DSA also an effective regulation against terrorist content online.
Regulating Social Media: The Fight Over Section 230 — and Beyond
2020 Barrett, P.M. Report
There are increasing calls to curtail or revoke Section 230, a 1996 law that protects internet companies from most lawsuits related to user-generated content. Critics of Section 230 say it discourages vigilant self-regulation. Proponents counter that the law has fostered free expression and innovation. Our report concludes that Section 230 ought to be preserved but significantly amended.
What they do in the shadows: examining the far-right networks on Telegram
2020 Urman, A. and Katz, S. Article
The present paper contributes to research on the activities of far-right actors on social media by examining the interconnections between far-right actors and groups on the Telegram platform using network analysis. The far-right network observed on Telegram is highly decentralized, similar to the far-right networks found on other social media platforms. The network is divided mostly along ideological and national lines, with the communities related to the 4chan imageboard and Donald Trump's supporters being the most influential. The analysis of the network's evolution shows that the start of its explosive growth coincides with the mass bans of far-right actors on mainstream social media platforms. The observed patterns of network evolution suggest that the simultaneous migration of these actors to Telegram allowed them to swiftly recreate their connections and gain prominence in the network, casting doubt on the effectiveness of deplatforming for curbing the influence of far-right and other extremist actors.
The German Far-right on YouTube: An Analysis of User Overlap and User Comments
2020 Rauchfleisch, A. and Kaiser, J. Article
This study focuses on the formation of far-right online communities on YouTube and whether the rise of three new actors (Pegida, the Identitarian movement, and the AfD) can also be observed in user behavior on YouTube. We map the network of far-right, conspiracy, and alternative media channels in the German-language YouTube sphere, trace how this network evolves over time, and identify the topics that users discuss. Our analysis shows that the overall common denominator within the German far-right YouTube sphere is the refugee crisis and the problems associated with it. Furthermore, we show that the community is getting denser and more centralized over time.
A “Europe des Nations”: far right imaginative geographies and the politicization of cultural crisis on Twitter in Western Europe
2020 Ganesh, B. and Froio, C. Article
Contestation over European integration has been widely studied in the rhetoric of parties, leaders, and movements on the far right in a variety of media. Focusing on Twitter use by far right actors in Western Europe, we apply corpus-aided discourse analysis to explore how imaginative geographies are used to politicize Europe among their digital publics. We find that the idea of a crisis of cultural identity pervades imaginaries of Europe amongst far right digital publics. While Europe is presented as facing a crisis of cultural identity, we find that the far right articulates an aspirational imaginary of Europe, the ‘Europe des Nations’ that rejects liberal-democratic pluralism in the EU and the ‘establishment’. We find that the contestation of Europe in far right digital publics relies on a crisis of cultural identity, representing a translation of Nouvelle Droite imaginaries of Europe into the social media space.
Triggered by Defeat or Victory? Assessing the Impact of Presidential Election Results on Extreme Right-Wing Mobilization Online
2020 Scrivens, R., Burruss, G.W., Holt, T.J., Chermak, S.M., Freilich, J.D. and Frank, R. Article
The theoretical literature from criminology, social movements, and political sociology, among others, includes diverging views about how political outcomes could affect movements. Many theories argue that political defeats motivate the losing side to increase their mobilization while other established models claim the winning side may feel encouraged and thus increase their mobilization. We examine these diverging perspectives in the context of the extreme right online and recent presidential elections by measuring the effect of the 2008 and 2016 election victories of Obama and Trump on the volume of postings on the largest white supremacy web-forum. ARIMA time series using intervention modeling showed a significant and sizable increase in the total number of posts and right-wing extremist posts but no significant change for firearm posts in either election year. However, the volume of postings for all impact measures was highest for the 2008 election.
One year since the Christchurch Call to Action: A Review
2020 Pandey, P. Report
This brief analyses the impact of the Christchurch Call to Action, issued to bring together countries and technology companies to stop the use of the internet for disseminating violent extremist content. The Call was the result of a summit organised shortly after the terrorist attack in Christchurch, New Zealand, in March 2019. This brief finds that the Call lacks clear conceptual definitions and is singularly focused on social media platforms. It also raises questions about how such efforts can balance safeguarding digital platforms against terrorist and violent content with preserving essential freedoms, including freedom of speech and expression.
Proposals for Improved Regulation of Harmful Online Content
2020 Benesch, S. Report
This paper offers a set of specific proposals for better describing harmful content online and for reducing the damage it causes, while protecting freedom of expression. The ideas are mainly meant for online service providers (OSPs), since they regulate the vast majority of online content; taken together, they operate the largest system of censorship the world has ever known, controlling more human communication than any government. Governments, for their part, have tried to berate or force the companies into changing their policies, with limited and often repressive results. For these reasons, this paper focuses on what OSPs should do to diminish harmful content online. The proposals concern the rules that form the basis of each regulation system, as well as other crucial steps in the regulatory process, such as communicating rules to platform users, giving multiple stakeholders a role in regulation, and enforcing the rules.
Angry by design: toxic communication and technical architectures
2020 Munn, L. Article
Hate speech and toxic communication online are on the rise. Responses to this issue tend to offer technical (automated) or non-technical (human content moderation) solutions, or to see hate speech as a natural product of hateful people. In contrast, this article begins by recognizing platforms as designed environments that support particular practices while discouraging others. In what ways might these design architectures contribute to polarizing, impulsive, or antagonistic behaviors? Two platforms are examined: Facebook and YouTube. Based on engagement, Facebook's Feed drives views but also privileges incendiary content, setting up a stimulus–response loop that promotes outrage expression. YouTube's recommendation system is a key interface for content consumption, yet this same design has been criticized for leading users towards more extreme content. Across both platforms, design is central and influential, proving to be a productive lens for understanding toxic communication.
Repeated and Extensive Exposure to Online Terrorist Content: Counter-Terrorism Internet Referral Unit Perceived Stresses and Strategies
2020 Reeve, Z. Article
U.K. Metropolitan Police Counter-Terrorism Internet Referral Unit (CTIRU) Case Officers (COs) are tasked with identifying, and facilitating the removal of, material that breaches the Terrorism Act 2006. COs are extensively and repeatedly exposed to material deemed illegal and harmful (including, but not restricted to, graphic terrorist and non-terrorist material). However, there is little research on the impact of this work, or on how COs manage and mitigate the risks of their roles. Semi-structured interviews reveal the adaptive coping mechanisms that promote good perceived health and wellbeing in the CTIRU, as well as areas of concern and improvement.
Far-Right Extremism: Is it Legitimate Freedom of Expression, Hate Crime, or Terrorism?
2020 Lowe, D. Article
Following the rise in far-right inspired terrorist attacks globally, social media and electronic communications companies have been criticized, mainly by politicians, for allowing far-right extremist content to remain available. This article is a comparative legal study of Australia, Canada, New Zealand, the U.K., and the U.S., focusing on their legal provisions regarding the right to freedom of expression, hate crime, and the proscription of terrorist organizations. The study found a disparity in the forms of expression protected under this right. This disparity widens further in relation to hate crime and the proscribing of groups as terrorist organizations. As such, social media and communications companies have difficulty setting a global baseline for determining whether content is legitimate commentary or extremism that promotes or incites hatred and violence. The article concludes with a recommendation for how states can provide comparable legislation on hate crime, as they have done in relation to Islamist-inspired extremism. This would assist social media and communications companies in removing content and suspending accounts. These companies are not the guardians of freedom of expression; that is the role of states' legislatures and judiciaries.