Library

Welcome to VOX-Pol’s online Library, a research and teaching resource that collects in one place a large volume of publications on various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add more material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library, whether your own or another author’s, please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we thus welcome contributions in all languages.

Full Listing

Title | Year | Author | Type | Links
Digital Extremisms: Readings in Violence, Radicalisation and Extremism in the Online Space
2020 Littler, M. and Lee, B. (Eds.) Book
This book explores the use of the internet by (non-Islamic) extremist groups, drawing together research by scholars across the social sciences and humanities. It offers a broad overview of the best research in this area, including contributions that address far-right, (non-Islamic) religious, animal rights, and nationalist violence online, as well as a discussion of the policy and research challenges posed by these unique and disparate groups. It offers an academically rigorous, introductory text on extremism online, making it a valuable resource for students, practitioners and academics seeking to understand the unique characteristics of the risks these groups present.
Examining the Online Expression of Ideology among Far-Right Extremist Forum Users
2020 Holt, T. J., Freilich, J.D. and Chermak, S. M. Article
Over the last decade, there has been an increased focus among researchers on the role of the Internet among actors and groups across the political and ideological spectrum. There has been particular emphasis on the ways that far-right extremists utilize forums and social media to express ideological beliefs through sites affiliated with real-world extremist groups and unaffiliated websites. The majority of research has used qualitative assessments or quantitative analyses of keywords to assess the extent of specific messages. Few have considered the breadth of extremist ideologies expressed among participants so as to quantify the proportion of posts espousing such beliefs. This study addressed this gap in the literature through a content analysis of over 18,000 posts from eight far-right extremist forums operating online. The findings demonstrated that the most prevalent ideological sentiments expressed in users’ posts involved anti-minority comments, though these represent a small proportion of all posts made in the sample. Additionally, users expressed associations with far-right extremist ideologies through their usernames, signatures, and the images associated with their accounts. The implications of this analysis for policy and practice to disrupt extremist movements are discussed in detail.
Extreme Digital Speech: Contexts, Responses and Solutions
2020 Ganesh, B. and Bright, J. (Eds.) VOX-Pol Publication
Extreme digital speech (EDS) is an emerging challenge that requires co-ordination between governments, civil society and the private sector. In this report, a range of experts on countering extremism consider the challenges that EDS presents to these stakeholders, the impact that EDS has and the responses taken by these actors to counter it. By focusing on EDS, consideration of the topic is limited to the forms of extreme speech that take place online, often on social media platforms and multimedia messaging applications such as WhatsApp and Telegram. Furthermore, by focusing on EDS rather than explicitly violent forms of extreme speech online, the report departs from a focus on violence and incorporates a broader range of issues such as hateful and dehumanising speech and the complex cultures and politics that have formed around EDS.
From Inspire to Rumiyah: does instructional content in online jihadist magazines lead to attacks?
2020 Zekulin, M. Article
Considerable time has been spent examining how groups like AQAP and ISIS used their online magazines to reach and radicalize individuals in Western democratic states. This paper continues this investigation but shifts its analysis to the ‘how-to’ or instructional content of these publications, an understudied part of the literature. One of the stated goals of these magazines was to provide tactical know-how and assist supporters conducting terror plots in their home states. The question: did the tactics outlined in the magazines materialize in actual plots or attacks, and how quickly were they put into practice? The paper examines this question by creating an overview of the tactics which appear in these publications and cross-referencing them with a dataset of 166 Islamist-inspired homegrown terror plots/attacks in 14 Western democratic states to determine if, and when, they first appeared in relation to their publication date. It concluded that while some of the suggested tactics did appear following their publication, this often occurred only after considerable time had elapsed. This suggests the instructional content did not resonate with readers in real time.
Many Faced Hate: A Cross Platform Study of Content Framing and Information Sharing by Online Hate Groups
2020 Phadke, S. and Mitra, T. Article
Hate groups are increasingly using multiple social media platforms to promote extremist ideologies. Yet we know little about their communication practices across platforms. How do hate groups (or “in-groups”) frame their hateful agenda against the targeted group, or “out-group”? How do they share information? Utilizing “framing” theory from social movement research and analyzing domains in the shared links, we juxtapose the Facebook and Twitter communication of 72 Southern Poverty Law Center (SPLC)-designated hate groups spanning five hate ideologies. Our findings show that hate groups use Twitter for educating the audience about problems with the out-group, maintaining a positive self-image by emphasizing the in-group’s high social status, and demanding policy changes to negatively affect the out-group. On Facebook, they use fear appeals and call for active participation in group events (membership requests), all while portraying themselves as being oppressed by the out-group and failed by the system. Our study unravels the ecosystem of cross-platform communication by hate groups, suggesting that they use Facebook for group radicalization and recruitment, and Twitter for reaching a diverse follower base.
Reviewing the Role of the Internet in Radicalization Processes
2019 Odağ, Ö., Leiser, A. and Boehnke, K. Article
This review presents the existing research on the role of the Internet in radicalization processes. Using a systematic literature search strategy, our paper yields 88 studies on the role of the Internet in a) right-wing extremism and b) radical jihadism. Available studies display a predominant interest in the characteristics of radical websites and a remarkable absence of a user-centred perspective. They show that extremist groups make use of the Internet to spread right wing or jihadist ideologies, connect like-minded others in echo chambers and cloaked websites, and address particularly marginalized individuals of a society, with specific strategies for recruitment. Existing studies have thus far not sufficiently examined the users of available sites, nor have they studied the causal mechanisms that unfold at the intersection between the Internet and its users. The present review suggests avenues for future research, drawing on media and violence research and research on social identity and deindividuation effects in computer-mediated communication.
A Critical Analysis of the Jihadi Discourse through Online Magazines with Special Reference to ‘Wyeth’ Magazine
2019 Neelamalar, M. and Mangala Vadivu, V. Article
‘Jihadism’ (also known as the jihadi movement) is a popular term that signifies the Islamic terror movement which thrives on extremist ideologies and violence. In addition to conventional practices, the online medium is currently employed to disseminate these extremist ideologies across the globe. Radicalisation and recruitment of geographically dispersed individuals as ‘jihadists’ for supporting Islamic terror activities tends to be the primary intent behind the use of digital platforms as the medium of communication in this context. One such initiative by the Lashkar-e-Taiba of Jammu and Kashmir was the release of ‘Wyeth: The Resistance in Flow’, an e-magazine launched in April 2018. The first issue, which was posted with open access, was primarily designed to influence the Indian youth population through radical interpretations of Islam. Hence, it is crucial to analyse and understand the jihadi discourse of the Wyeth magazine in order to curb and counter such initiatives in their initial phase. For this purpose, the present study examines the content of the Wyeth magazine and analyses the basic traits of its jihadi propaganda and its potential to aid the self-radicalisation process.
Islamic State's Online Activity And Responses
2019 Conway, M. and Macdonald, S. (Eds.) Book
'Islamic State’s Online Activity and Responses' provides a unique examination of Islamic State’s online activity at the peak of its "golden age" between 2014 and 2017 and evaluates some of the principal responses to this phenomenon.

Featuring contributions from experts across a range of disciplines, the volume examines a variety of aspects of IS’s online activity, including their strategic objectives, the content and nature of their magazines and videos, and their online targeting of females and depiction of children. It also details and analyses responses to IS’s online activity – from content moderation and account suspensions to informal counter-messaging and disrupting terrorist financing – and explores the possible impact of technological developments, such as decentralised and peer-to-peer networks, going forward. Platforms discussed include dedicated jihadi forums, major social media sites such as Facebook, Twitter, and YouTube, and newer services, including Twister.

'Islamic State’s Online Activity and Responses' is essential reading for researchers, students, policymakers, and all those interested in the contemporary challenges posed by online terrorist propaganda and radicalisation. The chapters were originally published as a special issue of Studies in Conflict & Terrorism.
The Topic of Terrorism on Yahoo! Answers: Questions, Answers and Users’ Anonymity
2019 Chua, A. and Banerjee, S. Article
The purpose of this paper is to explore the use of community question answering sites (CQAs) on the topic of terrorism. Three research questions are investigated: what are the dominant themes reflected in terrorism-related questions? How do answer characteristics vary with question themes? How does users’ anonymity relate to question themes and answer characteristics?

Data include 300 questions that attracted 2,194 answers on the community question answering site Yahoo! Answers. Content analysis was employed.

The questions reflected the community’s information needs ranging from the life of extremists to counter-terrorism policies. Answers were laden with negative emotions reflecting hate speech and Islamophobia, making claims that were rarely verifiable. Users who posted sensitive content generally remained anonymous.

This paper raises awareness of how CQAs are used to exchange information about sensitive topics such as terrorism. It calls for governments and law enforcement agencies to collaborate with major social media companies to develop a process for cross-platform blacklisting of users and content, as well as identifying those who are vulnerable.

Theoretically, it contributes to the academic discourse on terrorism in CQAs by exploring the type of questions asked, and the sort of answers they attract. Methodologically, the paper serves to enrich the literature around terrorism and social media that has hitherto mostly drawn data from Facebook and Twitter.
The Roles of ‘Old’ and ‘New’ Media Tools and Technologies in the Facilitation of Violent Extremism and Terrorism
2019 Scrivens, R. and Conway, M. Chapter
The media and communication strategies of two particular ideologies are focused on herein: right-wing extremists and violent jihadis – albeit an array of others is referred to also (e.g. nationalist-separatists such as the Irish Republican Army (IRA) and violent Islamists such as Hezbollah). Violent jihadists are inspired by Sunni Islamist-Salafism and seek to establish an Islamist society governed by their version of Islamic or Sharia law imposed by violence (Moghadam, 2008). Right-wing extremists may also subscribe to some radical interpretation of religion, but unlike those inspired by radical Islam, many extreme right adherents are not inspired by religious beliefs per se. Instead, what binds these actors is a racially, ethnically, and sexually defined nationalism, which is typically framed in terms of white power and grounded in xenophobic and exclusionary understandings of the perceived threats posed by such groups as non-whites, Jews, Muslims, immigrants, homosexuals, and feminists. Here the state is perceived as an illegitimate power serving the interests of all but the white man and, as such, right-wing extremists are willing to assume both an offensive and defensive stance in the interests of “preserving” their heritage and their “homeland” (Perry & Scrivens, 2016). With regard to the chapter’s structuring, the following sections are ordered chronologically, treating, in turn, early low-tech communication methods or what we term ‘pre-media,’ followed by other relatively low-tech tools, such as print and photocopying. The high-tech tools reviewed are film, radio, and television, followed by the Internet, especially social media.
Detecting Weak and Strong Islamophobic Hate Speech on Social Media
2019 Vidgen, B. Article
Islamophobic hate speech on social media is a growing concern in contemporary Western politics and society. It can inflict considerable harm on any victims who are targeted, create a sense of fear and exclusion amongst their communities, toxify public discourse and motivate other forms of extremist and hateful behavior. Accordingly, there is a pressing need for automated tools to detect and classify Islamophobic hate speech robustly and at scale, thereby enabling quantitative analyses of large textual datasets, such as those collected from social media. Previous research has mostly approached the automated detection of hate speech as a binary task. However, the varied nature of Islamophobia means that this is often inappropriate for both theoretically informed social science and effective monitoring of social media platforms. Drawing on in-depth conceptual work we build an automated software tool which distinguishes between non-Islamophobic, weak Islamophobic and strong Islamophobic content. Accuracy is 77.6% and balanced accuracy is 83%. Our tool enables future quantitative research into the drivers, spread, prevalence and effects of Islamophobic hate speech on social media.
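As the abstract names only the classification scheme, the sketch below shows in outline what a three-way (non-Islamophobic / weak Islamophobic / strong Islamophobic) text classifier might look like, assuming a simple TF-IDF-plus-logistic-regression pipeline rather than the authors’ actual tool; the example texts and labels are placeholders.

```python
# Illustrative three-class text classifier (a sketch, not the authors' actual tool).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: posts paired with labels
# 0 = non-Islamophobic, 1 = weak Islamophobic, 2 = strong Islamophobic.
texts = ["example post one", "example post two", "example post three"]
labels = [0, 1, 2]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(max_iter=1000),    # handles the three classes natively
)
classifier.fit(texts, labels)
print(classifier.predict(["another example post"]))  # -> one of the three labels
```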
Too Dark To See? Explaining Adolescents’ Contact With Online Extremism And Their Ability To Recognize It
2019 Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. and Neumann, K. Article
Adolescents are considered especially vulnerable to extremists’ online activities because they are ‘always online’ and because they are still in the process of identity formation. However, so far, we know little about (a) how often adolescents encounter extremist content in different online media and (b) how well they are able to recognize extremist messages. In addition, we do not know (c) how individual-level factors derived from radicalization research and (d) media and civic literacy affect extremist encounters and recognition abilities. We address these questions based on a representative face-to-face survey among German adolescents (n = 1,061) and qualitative interviews using a think-aloud method (n = 68). Results show that a large proportion of adolescents encounter extremist messages frequently, but that many others have trouble even identifying extremist content. In addition, factors known from radicalization research (e.g., deprivation, discrimination, specific attitudes) as well as extremism-related media and civic literacy influence the frequency of extremist encounters and recognition abilities.
“Yes, I can”: what is the role of perceived self-efficacy in violent online-radicalisation processes of “homegrown” terrorists?
2019 Schlegel, L. Article
Radicalisation is influenced by a multitude of factors, including situational, social and psychological factors and social-cognitive processes. This article explores how homegrown extremists are influenced by their perceived agency and how beliefs about their own ability to change their situation are directly shaped by the online propaganda they consume, using ISIS propaganda as a case study. The article serves as an exploratory analysis of the potential explanatory qualities of Bandura’s theory of self-efficacy. This preliminary theoretical work explores how online propaganda seeks to increase perceived personal self-efficacy to inspire action. The findings indicate that an increased focus on agency beliefs may facilitate a more holistic understanding of the psycho-social processes influencing radicalisation and the factors that drive certain individuals, but not others, to perpetrate violence. More research needs to be conducted, but this work is a first exploratory step in advancing our understanding of self-efficacy beliefs in the radicalisation of homegrown extremists.
Deep Context-Aware Embedding for Abusive and Hate Speech Detection on Twitter
2019 Naseem, U., Razzak, I. and Hameed, I. A. Article
Violence now spreads online, as it has spread in the past. With the increasing use of social media, the violence attributed to online hate speech has increased worldwide, resulting in a rise in the number of attacks on immigrants and other minorities. Analysis of such short text posts (e.g. tweets) is valuable for the identification of abusive language and hate speech. In this paper, we present Deep Context-Aware Embedding for the detection of hate speech and abusive language on Twitter. To improve classification performance, we enhance the quality of the tweets by considering polysemy, syntax, semantics, OOV words and sentiment knowledge, which are concatenated to form the input vector. We use a BiLSTM with attention modelling to identify tweets containing hate speech. Experimental results show a significant improvement in the classification of tweets.
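A rough sketch of the kind of architecture the abstract names, a BiLSTM with attention pooling over tweet embeddings, is given below in PyTorch. The layer sizes, vocabulary size and random batch are assumptions for illustration, and the sketch omits the authors’ polysemy-, syntax-, sentiment- and OOV-aware enrichment of the input vectors.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Minimal BiLSTM-with-attention classifier for short texts (illustrative only)."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # scores each time step
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)                 # (batch, seq, embed)
        outputs, _ = self.bilstm(embedded)                   # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(outputs), dim=1)   # attention over time steps
        context = (weights * outputs).sum(dim=1)             # weighted sum of states
        return self.fc(context)                              # class logits

# Example: score a batch of two already-tokenised tweets (placeholder vocabulary).
model = BiLSTMAttention(vocab_size=20000)
dummy_batch = torch.randint(1, 20000, (2, 30))               # 2 tweets, 30 tokens each
logits = model(dummy_batch)
```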
Intersections of ISIS media leader loss and media campaign strategy: A visual framing analysis
2019 Winkler, C., El-Damanhoury, K., Saleh, Z., Hendry, J. and El-Karhili, N. Article
The decision to target leaders of groups like ISIS to hamper their effectiveness has served as a longstanding principle of counterterrorism efforts. Yet, previous research suggests that any results may simply be temporary. Using insights from confiscated ISIS documents from Afghanistan to define the media leader roles that qualified for each level of the cascade, CTC (Combating Terrorism Center) records to identify media leaders who died, and a content analysis of all ISIS images displayed in the group’s Arabic weekly newsletter to identify the group’s visual framing strategies, this study assesses whether and how leader loss helps explain changes in the level and nature of the group’s visual output over time. ISIS’s quantity of output and visual framing strategies displayed significant changes before, during, and after media leader losses. The level of the killed leader within the group’s organizational hierarchy also corresponded to different changes in ISIS’s media framing.
Antisemitism on Twitter: Collective efficacy and the role of community organisations in challenging online hate speech
2019 Ozalp, A.S., Williams, M.L., Burnap, P., Liu, H. and Mostafa, M. Article
In this paper, we conduct a comprehensive study of online antagonistic content related to Jewish identity posted on Twitter between October 2015 and October 2016 by UK-based users. We trained a scalable supervised machine learning classifier to identify antisemitic content to reveal patterns of online antisemitism perpetration at the source. We built statistical models to analyse the inhibiting and enabling factors of the size (number of retweets) and survival (duration of retweets) of information flows, in addition to the production of online antagonistic content. Despite observing high temporal variability, we found that only a small proportion (0.7%) of the content was antagonistic. We also found that antagonistic content was less likely to spread widely or survive for a longer period. Information flows from antisemitic agents on Twitter gained less traction, while information flows emanating from capable and willing counter-speech actors, i.e. Jewish organisations, had significantly higher size and survival rates. This study is the first to demonstrate that Sampson’s classic sociological concept of collective efficacy can be observed on social media (SM). Our findings suggest that when organisations aiming to counter harmful narratives become active on SM platforms, their messages propagate further and achieve greater longevity than antagonistic messages. On SM, counter-speech posted by credible, capable and willing actors can be an effective measure to prevent harmful narratives. Based on our findings, we underline the value of the work by community organisations in reducing the propagation of cyberhate and increasing trust in SM platforms.
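The survival component of this analysis, i.e. how long antisemitic versus other information flows persist, could be explored along the lines of the sketch below, which uses the lifelines library and a small hypothetical data frame; the column names and values are placeholders, not the study’s data or code.

```python
# Illustrative retweet-survival comparison on hypothetical data (not the study's own).
import pandas as pd
from lifelines import KaplanMeierFitter

# Each row is one information flow (an original tweet plus its retweets):
#   duration_hours - time from the original tweet to the last observed retweet
#   ended          - 1 if the flow had died out within the observation window
#   antagonistic   - 1 if the source tweet was classified as antisemitic
flows = pd.DataFrame({
    "duration_hours": [2.0, 15.5, 48.0, 1.0, 30.0, 72.0],
    "ended":          [1,   1,    0,    1,   1,    0],
    "antagonistic":   [1,   0,    0,    1,   1,    0],
})

# Fit a Kaplan-Meier survival curve separately for antagonistic and other flows.
kmf = KaplanMeierFitter()
for label, group in flows.groupby("antagonistic"):
    kmf.fit(group["duration_hours"], event_observed=group["ended"],
            label=f"antagonistic={label}")
    print(label, kmf.median_survival_time_)  # compare typical flow lifetimes
```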
Digital Jihad: Online Communication and Violent Extremism
2019 Marone, F. (Ed.) Report
The internet offers tremendous opportunities for violent extremists across the ideological spectrum and at a global level. In addition to propaganda, digital technologies have transformed the dynamics of radical mobilisation, recruitment and participation. Even though the jihadist threat has seemingly declined in the West, the danger exists of the internet being an environment where radical messages can survive and even prosper. Against this background, this ISPI report investigates the current landscape of jihadist online communication, including original empirical analysis. Specific attention is also placed on potential measures and initiatives to address the threat of online violent extremism. The volume aims to present important points for reflection on the phenomenon in the West (including Italy) and beyond.
The battle for truth: How online newspaper commenters defend their censored expressions
2019 Fangen, K. and Holter, C. R. Article
The presence of hate speech in the commentary fields of online newspapers is a pressing challenge for free speech policy. We conducted interviews with 15 people whose comments were censored for being of a racist, discriminatory or hateful nature. What characterizes their self-understanding and enemy images? We found that central to their motivation for writing such comments was an understanding of themselves as particularly knowledgeable people. They see themselves as people who fight to reveal the truth, in contrast to the lies spread by politicians and the media. Furthermore, they regard politicians and the media as corrupt elites that are leading our society into destruction through their naïve support of liberal migration policies. By linking up to alternative news media, these individuals support various forms of racialized conspiracy theories, but also a form of radical right-wing populism in their concern that politics should be acted out by the people themselves. As such, our study adds to the literature on conspiracy theories in general and racialized conspiracy theories in particular, as well as to the literature on online far-right activists. Our contribution lies both in our focus on self-perceptions and in opening up for a modification of the existing literature on the far right.
Hatred Behind the Screens - A Report on the Rise of Online Hate Speech
2019 Williams, M. and Mishcon de Reya Report
— The reporting, recording and incidence of online hate speech has increased over the past two years.
— While the number of people personally targeted remains relatively low, large numbers of people are being exposed to online hate speech, potentially causing decreased life satisfaction. In particular, an increasingly large number of UK children (aged 12-15) report that they are exposed to hateful content online.
— Online hate speech tends to spike for 24-48 hours after key national or international events such as a terror attack, and then rapidly fall, although the baseline of online hate can remain elevated for several months. Where it reaches a certain level, online hate speech can translate into offline hate crime on the streets.
— Hate crime, including hate speech, is both hard to define and hard to prosecute. A patchwork of hate crime laws has developed over the last two decades, but there is concern the laws are not as effective as they could be, and may need to be streamlined and/or extended, for example to cover gender- and age-related hate crime. The Law Commission is currently reviewing hate crime legislation, and has separately completed a preliminary review of the criminal law in relation to offensive and abusive online communications, concluding there was "considerable scope for reform".
— According to a recent survey by Demos, the public appreciates the difficult trade-off between tackling hate crime and protecting freedom of speech, with 32% in favour of a safety first approach, 23% in favour of protecting civil liberties, and 42% not favouring either option.
Virtual Plotters. Drones. Weaponized AI?: Violent Non-State Actors as Deadly Early Adopters
2019 Gartenstein-Ross, D., Shear, M. and Jones, D. Article
Over the past decade, violent non-state actors’ (VNSAs’) adoption of new technologies that can help their operations has tended to follow a recognizable general pattern, which this study dubs the VNSA technology adoption curve: as a consumer technology becomes widely available, VNSAs find ways to adapt it to their deadly purposes. This curve tends to progress in four stages:

1. Early Adoption – The VNSA tries to adopt a new technology, and disproportionately underperforms or fails in definable ways.
2. Iteration – The consumer technology that the VNSA is attempting to repurpose undergoes improvements driven by the companies that brought the technology to market. These improvements are designed to enhance consumers’ experience and the utility that consumers derive from the technology. The improvements help the intended end user, but also aid the VNSA, which iterates alongside the company.
3. Breakthrough – During this stage, the VNSA’s success rate with the new technology significantly improves.
4. Competition – Following the VNSA’s seemingly sudden success, technology companies, state actors, and other stakeholders develop countermeasures designed to mitigate the VNSA’s exploitation of the technology. The outcome of this phase is uncertain, as both the VNSA and its competitors enter relatively uncharted territory in the current technological environment. The authorities and VNSA will try to stay one step ahead of one another.

This report begins by explaining the adoption curve, and more broadly the manner in which VNSAs engage in organizational learning. The report then details two critical case studies of past VNSA technological adoption to illustrate how the adoption curve works in practice, and to inform our analysis of VNSA technological adoptions that are likely in the future.