Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we thus welcome contributions in all languages.

Full Listing

Title | Year | Author | Type | Links
Virtual Plotters. Drones. Weaponized AI?: Violent Non-State Actors as Deadly Early Adopters
2019 Gartenstein-Ross, D., Shear, M. and Jones, D. Report
Over the past decade, violent non-state actors’ (VNSAs) adoption of new technologies that can help their operations has tended to follow a recognizable general pattern, which this study dubs the VNSA technology adoption curve: as a consumer technology becomes widely available, VNSAs find ways to adapt it to their deadly purposes. This curve tends to progress in four stages:

1. Early Adoption – The VNSA tries to adopt a new technology, and disproportionately underperforms or fails in definable ways.
2. Iteration – The consumer technology that the VNSA is attempting to repurpose undergoes improvements driven by the companies that brought the technology to market. These improvements are designed to enhance consumers’ experience and the utility that consumers derive from the technology. The improvements help the intended end user, but also aid the VNSA, which iterates alongside the company.
3. Breakthrough – During this stage, the VNSA’s success rate with the new technology significantly improves.
4. Competition – Following the VNSA’s seemingly sudden success, technology companies, state actors, and other stakeholders develop countermeasures designed to mitigate the VNSA’s exploitation of the technology. The outcome of this phase is uncertain, as both the VNSA and its competitors enter relatively uncharted territory in the current technological environment. The authorities and VNSA will try to stay one step ahead of one another.

This report begins by explaining the adoption curve, and more broadly the manner in which VNSAs engage in organizational learning. The report then details two critical case studies of past VNSA technological adoption to illustrate how the adoption curve works in practice, and to inform our analysis of VNSA technological adoptions that are likely in the future.
Hatred Behind the Screens - A Report on the Rise of Online Hate Speech
2019 Williams, M. and de Reya, M. Report
— The reporting, recording and incidence of online hate speech have increased over the past two years.
— While the number of people personally targeted remains relatively low, large numbers of people are being exposed to online hate speech, potentially causing decreased life satisfaction. In particular, an increasingly large number of UK children (aged 12-15) report that they are exposed to hateful content online.
— Online hate speech tends to spike for 24-48 hours after key national or international events such as a terror attack, and then rapidly fall, although the baseline of online hate can remain elevated for several months. Where it reaches a certain level, online hate speech can translate into offline hate crime on the streets.
— Hate crime, including hate speech, is both hard to define and hard to prosecute. A patchwork of hate crime laws has developed over the last two decades, but there is concern the laws are not as effective as they could be, and may need to be streamlined and/or extended - for example to cover gender and age-related hate crime. The Law Commission is currently reviewing hate crime legislation, and has separately completed a preliminary review of the criminal law in relation to offensive and abusive online communications, concluding there was "considerable scope for reform".
— According to a recent survey by Demos, the public appreciates the difficult trade-off between tackling hate crime and protecting freedom of speech, with 32% in favour of a safety first approach, 23% in favour of protecting civil liberties, and 42% not favouring either option.
Right-Wing Extremists’ Persistent Online Presence: History and Contemporary Trends
2019 Conway, M., Scrivens, R. and Macnair, L. Report
This policy brief traces how Western right-wing extremists have exploited the power of the internet from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists’ online presence, which poses challenges for effectively responding to this activity moving forward.
Branding the Islamic State of Iraq and Syria
2019 Bandopadhyaya, S. Article
This article will explore three crucial parameters that have been taken into consideration to attract millennials towards the Islamic State or Islamic State of Iraq and Syria (ISIS) brand: the first is story creation around the historical significance of Islamic prophecies justifying the ISIS brand; the second is the symbolism attached to the ISIS brand and its relevance (a flag, a leader, a logo, a caliphate); and the third is the actions or the sense of attachment to the ISIS brand in the form of practising ideology, gaining recognition and appeal among millennials. The promotion of the brand has been advanced through diverse means – social media platforms, mainstream media organizations, YouTube videos – all orchestrated to gain recognition of a rising state brand on the one hand and a brand of fear and intimidation on the other.
Intersections of ISIS Media Leader Loss and Media Campaign Strategy: A Visual Framing Analysis
2019 Winkler, C., El-Damanhoury, K., Saleh, Z., Hendry, J. and El-Karhili, N. Article
The decision to target leaders of groups like ISIS to hamper their effectiveness has served as a longstanding principle of counterterrorism efforts. Yet, previous research suggests that any results may simply be temporary. Using insights from confiscated ISIS documents from Afghanistan to define the media leader roles that qualified for each level of the cascade, CTC (Combating Terrorism Center) records to identify media leaders who died, and a content analysis of all ISIS images displayed in the group’s Arabic weekly newsletter to identify the group’s visual framing strategies, this study assesses whether and how leader loss helps explain changes in the level and nature of the group’s visual output over time. ISIS’s quantity of output and visual framing strategies displayed significant changes before, during, and after media leader losses. The level of the killed leader within the group’s organizational hierarchy also corresponded to different changes in ISIS’s media framing.
Reviewing the Role of the Internet in Radicalization Processes
2019 Odağ, Ö., Leiser, A. and Boehnke, K. Article
This review presents the existing research on the role of the Internet in radicalization processes. Using a systematic literature search strategy, our paper yields 88 studies on the role of the Internet in a) right-wing extremism and b) radical jihadism. Available studies display a predominant interest in the characteristics of radical websites and a remarkable absence of a user-centred perspective. They show that extremist groups make use of the Internet to spread right-wing or jihadist ideologies, connect like-minded others in echo chambers and cloaked websites, and address particularly marginalized individuals of a society, with specific strategies for recruitment. Existing studies have thus far not sufficiently examined the users of available sites, nor have they studied the causal mechanisms that unfold at the intersection between the Internet and its users. The present review suggests avenues for future research, drawing on media and violence research and research on social identity and deindividuation effects in computer-mediated communication.
“You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks
2019 Mariconti, E., Suarez-Tangil, G., Blackburn, J., de Cristofaro, E., Kourtellis, N., Leontiadis, I., Serrano, J.L. and Stringhini, G. Article
Video sharing platforms like YouTube are increasingly targeted by aggression and hate attacks. Prior work has shown how these attacks often take place as a result of "raids," i.e., organized efforts by ad-hoc mobs coordinating from third-party communities. Despite the increasing relevance of this phenomenon, however, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human moderation. In this paper, we propose an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of videos that were targeted by raids. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with very good results (AUC up to 94%). Overall, our work provides an important first step towards deploying proactive systems to detect and mitigate coordinated hate attacks on platforms like YouTube.
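A minimal illustration of the kind of pipeline this abstract describes, training one classifier per feature group and combining them in a soft-voting ensemble scored with ROC AUC, is sketched below in Python with scikit-learn. The toy dataset, feature names, and model choices are invented placeholders, not the authors’ data or code.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Toy stand-in for a labelled corpus of videos (1 = later raided, 0 = not).
videos = pd.DataFrame({
    "transcript": [
        "welcome back everyone today we talk about history",
        "you know what to do go leave a comment over there",
        "cooking pasta the easy way step by step",
        "they are coming for this channel spread the word",
    ] * 25,
    "comment_rate": [0.2, 3.1, 0.1, 2.7] * 25,     # hypothetical metadata feature
    "dislike_ratio": [0.05, 0.6, 0.02, 0.55] * 25, # hypothetical metadata feature
    "raided": [0, 1, 0, 1] * 25,
})

X, y = videos.drop(columns="raided"), videos["raided"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# One classifier per feature group, combined by soft voting.
text_clf = Pipeline([
    ("features", ColumnTransformer([("tfidf", TfidfVectorizer(), "transcript")])),
    ("model", LogisticRegression(max_iter=1000)),
])
meta_clf = Pipeline([
    ("features", ColumnTransformer([("meta", "passthrough",
                                     ["comment_rate", "dislike_ratio"])])),
    ("model", GradientBoostingClassifier(random_state=0)),
])
ensemble = VotingClassifier([("text", text_clf), ("meta", meta_clf)], voting="soft")

ensemble.fit(X_train, y_train)
scores = ensemble.predict_proba(X_test)[:, 1]
print(f"ROC AUC on the toy hold-out set: {roc_auc_score(y_test, scores):.2f}")
```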
Islamic State's Online Activity and Responses
2019 Conway, M. and Macdonald, S. Book
'Islamic State’s Online Activity and Responses' provides a unique examination of Islamic State’s online activity at the peak of its "golden age" between 2014 and 2017 and evaluates some of the principal responses to this phenomenon. Featuring contributions from experts across a range of disciplines, the volume examines a variety of aspects of IS’s online activity, including their strategic objectives, the content and nature of their magazines and videos, and their online targeting of females and depiction of children. It also details and analyses responses to IS’s online activity – from content moderation and account suspensions to informal counter-messaging and disrupting terrorist financing – and explores the possible impact of technological developments, such as decentralised and peer-to-peer networks, going forward. Platforms discussed include dedicated jihadi forums, major social media sites such as Facebook, Twitter, and YouTube, and newer services, including Twister.

'Islamic State’s Online Activity and Responses' is essential reading for researchers, students, policymakers, and all those interested in the contemporary challenges posed by online terrorist propaganda and radicalisation. The chapters were originally published as a special issue of Studies in Conflict & Terrorism.
Digital Jihad: Online Communication and Violent Extremism
2019 Marone, F. (Ed.) Report
The internet offers tremendous opportunities for violent extremists across the ideological spectrum and at a global level. In addition to propaganda, digital technologies have transformed the dynamics of radical mobilisation, recruitment and participation. Even though the jihadist threat has seemingly declined in the West, the danger exists of the internet being an environment where radical messages can survive and even prosper. Against this background, this ISPI report investigates the current landscape of jihadist online communication, including original empirical analysis. Specific attention is also placed on potential measures and initiatives to address the threat of online violent extremism. The volume aims to present important points for reflection on the phenomenon in the West (including Italy) and beyond.
Terrorism and the Digital Right-Wing
2019 Harwood, E.T. Article
Elizabeth T. Harwood on networks of provocation.
Terrorism, Violent Extremism, and the Internet: Free Speech Considerations
2019 Killion, V. L. Report
Recent acts of terrorism and hate crimes have prompted a renewed focus on the possible links between internet content and offline violence. While some have focused on the role that social media companies play in moderating user-generated content, others have called for Congress to pass laws regulating online content promoting terrorism or violence. Proposals related to government action of this nature raise significant free speech questions, including (1) the reach of the First Amendment’s protections when it comes to foreign nationals posting online content from abroad; (2) the scope of so-called “unprotected” categories of speech developed long before the advent of the internet; and (3) the judicial standards that limit how the government can craft or enforce laws to preserve national security and prevent violence.
Too Dark To See? Explaining Adolescents’ Contact With Online Extremism And Their Ability To Recognize It
2019 Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. and Neumann, K. Article
Adolescents are considered especially vulnerable to extremists’ online activities because they are ‘always online’ and because they are still in the process of identity formation. However, so far, we know little about (a) how often adolescents encounter extremist content in different online media and (b) how well they are able to recognize extremist messages. In addition, we do not know (c) how individual-level factors derived from radicalization research and (d) media and civic literacy affect extremist encounters and recognition abilities. We address these questions based on a representative face-to-face survey among German adolescents (n = 1,061) and qualitative interviews using a think-aloud method (n = 68). Results show that a large proportion of adolescents encounter extremist messages frequently, but that many others have trouble even identifying extremist content. In addition, factors known from radicalization research (e.g., deprivation, discrimination, specific attitudes) as well as extremism-related media and civic literacy influence the frequency of extremist encounters and recognition abilities.
Antisemitism on Twitter: Collective efficacy and the role of community organisations in challenging online hate speech
2019 Ozalp, A.S., Williams, M.L., Burnap, P., Liu, H. and Mostafa, M. Article
In this paper, we conduct a comprehensive study of online antagonistic content related to Jewish identity posted on Twitter between October 2015 and October 2016 by UK-based users. We trained a scalable supervised machine learning classifier to identify antisemitic content to reveal patterns of online antisemitism perpetration at the source. We built statistical models to analyse the inhibiting and enabling factors of the size (number of retweets) and survival (duration of retweets) of information flows in addition to the production of online antagonistic content. Despite observing high temporal variability, we found that only a small proportion (0.7%) of the content was antagonistic. We also found that antagonistic content was less likely to disseminate in size or survive for a longer period. Information flows from antisemitic agents on Twitter gained less traction, while information flows emanating from capable and willing counter-speech actors (i.e. Jewish organisations) had significantly higher size and survival rates. This study is the first to demonstrate that Sampson’s classic sociological concept of collective efficacy can be observed on social media (SM). Our findings suggest that when organisations aiming to counter harmful narratives become active on SM platforms, their messages propagate further and achieve greater longevity than antagonistic messages. On SM, counter-speech posted by credible, capable and willing actors can be an effective measure to prevent harmful narratives. Based on our findings, we underline the value of the work by community organisations in reducing the propagation of cyberhate and increasing trust in SM platforms.
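As a toy illustration of the size-and-survival comparison described above (and not the authors’ statistical models), the sketch below computes an empirical survival curve over invented retweet lifetimes for hypothetical antagonistic and counter-speech flows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flow lifetimes in hours (time from first to last retweet).
lifetimes = {
    "antagonistic": rng.exponential(scale=2.0, size=200),    # dies out quickly
    "counter-speech": rng.exponential(scale=6.0, size=200),  # persists longer
}

for label, hours in lifetimes.items():
    # Empirical survival: share of flows still active after t hours.
    for t in (1, 6, 24):
        alive = (hours > t).mean()
        print(f"{label:>14}: {alive:5.1%} of flows still active after {t:>2} h")
    print(f"{label:>14}: median lifetime = {np.median(hours):.1f} h")
```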
Combating Violent Extremism: Voices of Former Right-Wing Extremists
2019 Scrivens, R., Venkatesh, V., Bérubé, M. and Gaudette, T. Article
While it has become increasingly common for researchers, practitioners and policymakers to draw from the insights of former extremists to combat violent extremism, overlooked in this evolving space has been an in-depth look at how formers perceive such efforts. To address this gap, interviews were conducted with 10 Canadian former right-wing extremists based on a series of questions provided by 30 Canadian law enforcement officials and 10 community activists. Overall, formers suggest that combating violent extremism requires a multidimensional response, largely consisting of support from parents and families, teachers and educators, law enforcement officials, and other credible formers.
Deep Context-Aware Embedding for Abusive and Hate Speech Detection on Twitter
2019 Naseem, U., Razzak, I. and Hameed, I. A. Article
Violence now spreads online, as it has spread offline in the past. With the increasing use of social media, the violence attributed to online hate speech has increased worldwide, resulting in a rise in the number of attacks on immigrants and other minorities. Analysis of such short text posts (e.g. tweets) is valuable for the identification of abusive language and hate speech. In this paper, we present a deep context-aware embedding for the detection of hate speech and abusive language on Twitter. To improve classification performance, we have enhanced the quality of the tweet representations by considering polysemy, syntax, semantics, out-of-vocabulary (OOV) words and sentiment knowledge, and concatenated these to form the input vector. We have used a BiLSTM with attention modeling to identify tweets containing hate speech. Experimental results showed a significant improvement in the classification of tweets.
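For orientation, the sketch below is a minimal PyTorch implementation of the architecture the abstract names: a BiLSTM with a simple attention layer for binary tweet classification. The vocabulary size, dimensions and random toy batch are arbitrary placeholders, and the paper’s context-aware input embedding is replaced here by a plain learned embedding.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # one attention score per time step
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))    # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq_len)
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)   # weighted sum over time
        return self.out(context)                   # logits: (batch, num_classes)

# Toy forward/backward pass on random token ids (0 is reserved for padding).
model = BiLSTMAttention()
batch = torch.randint(1, 5000, (8, 30))            # 8 "tweets" of 30 tokens each
logits = model(batch)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
print(logits.shape)                                # torch.Size([8, 2])
```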
The Topic of Terrorism on Yahoo! Answers: Questions, Answers and Users’ Anonymity
2019 Chua, A. and Banerjee, S. Article
The purpose of this paper is to explore the use of community question answering sites (CQAs) on the topic of terrorism. Three research questions are investigated: What are the dominant themes reflected in terrorism-related questions? How do answer characteristics vary with question themes? How does users’ anonymity relate to question themes and answer characteristics?

Data include 300 questions that attracted 2,194 answers on the community question answering site Yahoo! Answers. Content analysis was employed.

The questions reflected the community’s information needs ranging from the life of extremists to counter-terrorism policies. Answers were laden with negative emotions reflecting hate speech and Islamophobia, making claims that were rarely verifiable. Users who posted sensitive content generally remained anonymous.

This paper raises awareness of how CQAs are used to exchange information about sensitive topics such as terrorism. It calls for governments and law enforcement agencies to collaborate with major social media companies to develop a process for cross-platform blacklisting of users and content, as well as identifying those who are vulnerable.

Theoretically, it contributes to the academic discourse on terrorism in CQAs by exploring the type of questions asked, and the sort of answers they attract. Methodologically, the paper serves to enrich the literature around terrorism and social media that has hitherto mostly drawn data from Facebook and Twitter.
Detecting Weak and Strong Islamophobic Hate Speech on Social Media
2019 Vidgen, B. Article
Islamophobic hate speech on social media is a growing concern in contemporary Western politics and society. It can inflict considerable harm on any victims who are targeted, create a sense of fear and exclusion amongst their communities, toxify public discourse and motivate other forms of extremist and hateful behavior. Accordingly, there is a pressing need for automated tools to detect and classify Islamophobic hate speech robustly and at scale, thereby enabling quantitative analyses of large textual datasets, such as those collected from social media. Previous research has mostly approached the automated detection of hate speech as a binary task. However, the varied nature of Islamophobia means that this is often inappropriate for both theoretically informed social science and effective monitoring of social media platforms. Drawing on in-depth conceptual work we build an automated software tool which distinguishes between non-Islamophobic, weak Islamophobic and strong Islamophobic content. Accuracy is 77.6% and balanced accuracy is 83%. Our tool enables future quantitative research into the drivers, spread, prevalence and effects of Islamophobic hate speech on social media.
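Since the abstract reports both accuracy and balanced accuracy for a three-class task, the short sketch below shows how the two metrics are computed from a confusion matrix; the matrix values are an invented example, not the paper’s results.

```python
import numpy as np

# Rows = true class, columns = predicted class; classes: [none, weak, strong].
confusion = np.array([
    [900, 40, 10],
    [ 60, 70, 20],
    [ 10, 15, 25],
])

accuracy = confusion.trace() / confusion.sum()            # overall share correct
per_class_recall = confusion.diagonal() / confusion.sum(axis=1)
balanced_accuracy = per_class_recall.mean()               # weights each class equally

print(f"accuracy          = {accuracy:.3f}")   # dominated by the large 'none' class
print(f"balanced accuracy = {balanced_accuracy:.3f}")
```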
The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research
2019 Scrivens, R., Gill, P. and Conway, M. Chapter
Many researchers, practitioners, and policy-makers continue to raise questions about the role of the Internet in facilitating violent extremism and terrorism. A surge in research on this issue notwithstanding, relatively few empirically grounded analyses are yet available. This chapter provides researchers with five key suggestions for progressing knowledge on the role of the Internet in facilitating violent extremism and terrorism so that we may be better placed to determine the significance of online content and activity in the latter going forward. These five suggestions relate to (1) collecting primary data across multiple types of populations; (2) making archives of violent extremist online content accessible for use by researchers and on user-friendly platforms; (3) outreaching beyond terrorism studies to become acquainted with, for example, the Internet studies literature and engaging in interdisciplinary research with, for example, computer scientists; (4) including former extremists in research projects, either as study participants or project collaborators; and (5) drawing connections between the on- and offline worlds of violent extremists.
Disrupting the Digital Divide: Extremism's Integration of Online / Offline Practice
2019 Mattheis, A. Report
In its offline aspect, the broader right-wing movement is comprised of a range of groups and ideological variances that have traditionally had difficulty coalescing into a coherent movement with broad appeal. In its online aspect, right-wing extremist practice is focused on spreading ideology, recruiting and radicalizing, and building transnational communities.
"Pine Tree" Twitter and the Shifting Ideological Foundations of Eco-Extremism
2019 Hughes, B. Report
Eco-fascism is emerging at both the highest levels of state and the lowest reaches of the political underworld. However, this may be only part of a much larger, more idealogically complex, emerging extremist threat. The climate crisis--and the crisis of global financial capitalism from which it is inextricable--may yet be driving a realignment of extremist environmental politics. An exploratory analysis of radical environmentalist discourse on the Twitter platform reveals the emergence of an ecological extremism that confounds contemporary understandings of the left, right, authoritarian and liberal. If this represents the future of eco-extremism, it may be necessary for researchers and practitioners to reorient the frameworks that guide their assessment of emerging risks.