Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, spanning e-books, book chapters, journal articles, research reports, policy documents, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library, whether your own or another author’s, please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media
2020 Rogers, R. Article
Extreme, anti-establishment actors are being characterized increasingly as ‘dangerous individuals’ by the social media platforms that once aided in making them into ‘Internet celebrities’. These individuals (and sometimes groups) are being ‘deplatformed’ by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for such offences as ‘organised hate’. Deplatforming has prompted debate about ‘liberal big tech’ silencing free speech and taking on the role of editors, but also about the questions of whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming ‘works’ and how the deplatformed use Telegram. It discusses the effects of deplatforming for extreme Internet celebrities, alternative and mainstream social media platforms and the Internet at large. It also touches upon how social media companies’ deplatforming is affecting critical social media research, both into the substance of extreme speech and into its audiences on mainstream and alternative platforms.
From Bombs to Books, and Back Again? Mapping Strategies of Right-Wing Revolutionary Resistance
2021 Ravndal, J.A. Article
This article begins by outlining four post-WWII strategies of right-wing revolutionary resistance: vanguardism; the cell system; leaderless resistance; and metapolitics. Next, the article argues that metapolitics became a preferred strategy for many right-wing revolutionaries during the 2000s and early 2010s, and proposes three conditions that may help explain this metapolitical turn: limited opportunities for armed resistance; a subcultural style shift; and new opportunities for promoting alternative worldviews online. Finally, the article theorizes about the types of threats that may emerge in the wake of this metapolitical turn, and speculates about the likelihood of a new and more violent turn in the near future.
Detecting Markers of Radicalisation in Social Media Posts: Insights From Modified Delphi Technique and Literature Review
2021 Neo, L.S. Article
This study involved the creation of factors and indicators that can detect radicalization in social media posts. A concurrent approach of an expert knowledge acquisition process (modified Delphi technique) and literature review was utilized. Seven Singapore subject-matter experts in the field of terrorism evaluated factors that were collated from six terrorism risk assessment tools (ERG 22+, IVP, TRAP-18, MLG, VERA-2, and Cyber-VERA), and identified those of greatest relevance for detecting radicalization in social media posts. A parallel literature review on online radicalization was conducted to complement the findings from the expert panel. In doing so, 12 factors and their 42 observable indicators were derived. These factors and indicators have the potential to guide the development of cyber-focused screening tools to detect radicalization in social media posts.
New Models for Deploying Counterspeech: Measuring Behavioral Change and Sentiment Analysis
2021 Saltman, E., Kooti, F. and Vockery, K. Article
The counterterrorism and CVE community has long questioned the effectiveness of counterspeech in countering extremism online. While most evaluations of counterspeech rely on limited reach and engagement metrics, this paper explores two models to better measure behavioral change and sentiment analysis. Conducted via partnerships between Facebook and counter-extremism NGOs, the first model uses A/B testing to analyze the effects of counterspeech exposure on low-prevalence-high-risk audiences engaging with Islamist extremist terrorist content. The second model builds upon online safety intervention approaches and the Redirect Method through a search-based “get-help” module, redirecting white supremacy and neo-Nazi related search terms to disengagement NGOs.
Mobilizing extremism online: comparing Australian and Canadian right-wing extremist groups on Facebook
2021 Hutchinson, J., Amarasingam, A., Scrivens, R. and Ballsun-Stanton, B. Article
Right-wing extremist groups harness popular social media platforms to accrue and mobilize followers. In recent years, researchers have examined the various themes and narratives espoused by extremist groups in the United States and Europe, and how these themes and narratives are employed to mobilize their followings on social media. Little, however, is comparatively known about how such efforts unfold within and between right-wing extremist groups in Australia and Canada. In this study, we conducted a cross-national comparative analysis of over eight years of online content found on 59 Australian and Canadian right-wing group pages on Facebook. Here we assessed the level of active and passive user engagement with posts and identified certain themes and narratives that generated the most user engagement. Overall, a number of ideological and behavioral commonalities and differences emerged in regard to patterns of active and passive user engagement, and the character of three prevailing themes: methods of violence, and references to national and racial identities. The results highlight the influence of both the national and transnational context in negotiating which themes and narratives resonate with Australian and Canadian right-wing online communities, and the multi-dimensional nature of right-wing user engagement and social mobilization on social media.
The Case of Jihadology and the Securitization of Academia
2021 Zelin, A.Y. Article
This paper goes to the heart of this special issue by exploring the case of the web site, Jihadology, which the author founded and has managed for the past ten-plus years. It explores various issues including why such a site is necessary and/or useful, questions about dissemination and open access, lessons learned about responsibility and interaction with jihadis online, the evolution of the website that has the largest repository of jihadi content, interactions with governments and technology companies and how they viewed and dealt with the website. The paper also explores how the experience gained might help other researchers interested in creating primary source-first websites to assist in their research as well as to the benefit of others in the field. Therefore, this paper aims to shed light not only on this unique case, but also on the moral and ethical questions that have arisen through maintaining the Jihadology website for more than a decade in a time of changing online environments and more recent calls for censorship.
Online Extremism and Terrorism Research Ethics: Researcher Safety, Informed Consent, and the Need for Tailored Guidelines
2021 Conway, M. Article
This article reflects on two core issues of human subjects’ research ethics and how they play out for online extremism and terrorism researchers. Medical research ethics, on which social science research ethics are based, centers the protection of research subjects, but what of the protection of researchers? Greater attention to researcher safety, including online security and privacy and mental and emotional wellbeing, is called for herein. Researching hostile or dangerous communities does not, on the other hand, exempt us from our responsibilities to protect our research subjects, which is generally ensured via informed consent. This is complicated in data-intensive research settings, especially with the former type of communities, however. Also grappled with in this article therefore are the pros and cons of waived consent and deception and the allied issue of prevention of harm to subjects in online extremism and terrorism research. The best path forward it is argued—besides talking through the diversity of ethical issues arising in online extremism and terrorism research and committing our thinking and decision-making around them to paper to a much greater extent than we have done to-date—may be development of ethics guidelines tailored to our sub-field.
Online Extremism Detection: A Systematic Literature Review With Emphasis on Datasets, Classification Techniques, Validation Methods, and Tools
2021 Gaikwad, M., Ahirrao, S., Phansalkar, S. and Kotecha, K. Article
Social media platforms are popular for expressing personal views, emotions and beliefs, and are influential in propagating extremist ideologies for group-building, fund-raising, and recruitment. To monitor and control the outreach of extremists on social media, detection of extremism in social media is necessary. The existing extremism detection literature on social media is limited by specific ideology, subjective validation methods, and binary or tertiary classification. A comprehensive and comparative survey of datasets, classification techniques, and validation methods, together with online extremism detection tools, is essential. The systematic literature review methodology (PRISMA) was used. Sixty-four studies on extremism research were collected, including 31 from SCOPUS, Web of Science (WoS), ACM, and IEEE, and 33 theses and technical and analytical reports found using the snowballing technique. The survey highlights the role of social media in propagating online radicalization and the need for extremism detection on social media platforms. The review concludes that there is a lack of publicly available, class-balanced, and unbiased datasets for better detection and classification of social media extremism, and a lack of validation techniques to evaluate the correctness and quality of custom datasets without human intervention. The information retrieval also revealed that contemporary research work is skewed towards ISIS ideology, and that deep learning based automated extremism detection techniques outperform other techniques. The review opens research opportunities for developing an online, publicly available automated tool for extremism data collection and detection. The survey results in the conceptualization of an architecture for the construction of a multi-ideology extremism text dataset with robust data validation techniques for multiclass classification of extremism text.
Echo Chambers on Social Media: A Systematic Review of the Literature
2021 Terren, L., and Borge Bravo, R. Article
The increasing pervasiveness of social media has been matched by growing concerns regarding their potential impact on democracy and public debate. While some theorists have claimed that ICTs and social media would bring about a new independent public sphere and increase exposure to political divergence, others have warned that they would lead to polarization, through the formation of echo chambers. The issue of social media echo chambers is both crucial and widely debated. This article attempts to provide a comprehensive account of the scientific literature on this issue, highlighting the different approaches, their similarities, differences, benefits and drawbacks, and offering a consolidated and critical perspective that can hopefully support future research in this area. Concretely, it presents the results of a systematic review of 55 studies investigating the existence of echo chambers on social media, identifying patterns across their foci, methods and findings, and shedding light on the contradictory nature of the literature. We found that the results of research on this issue seem largely influenced by methodological and data collection choices. Indeed, articles that found clear evidence of echo chambers on social media were all based on digital trace data, while those that found no evidence were all based on self-reported data. Future studies should take into account the potential biases of the different approaches and the significant potential of combining self-reported data with digital trace data.
PROTOCOL: What are the effects of different elements of media on radicalization outcomes? A systematic review
2021 Wolfowicz, M., Hasisi, B. and Weisburd, D. Article
Objectives: In this systematic review and meta-analysis we will collate and synthesize the evidence on media effects for radicalization, focusing on both cognitive and behavioral outcomes. The goal is to identify the relative magnitudes of the effects for different mediums, types of content, and elements of human-media relationships.
Methodology: Random-effects meta-analysis will be used and the results will be rank-ordered according to the size of the pooled estimates for the different factors. Meta-regressions, moderator analysis, and sub-group analyses will be used to investigate sources of heterogeneity.
Implications: The results of this review will provide a better understanding of the relative magnitude of the effects of media-related factors. This information should help the development of more evidence-based policies.
Fake news: the effects of social media disinformation on domestic terrorism
Dynamics of Asymmetric Conflict Piazza, J.A. Article
This study tests whether social media disinformation contributes to domestic terrorism within countries. I theorize that disinformation disseminated by political actors online through social media heightens political polarization within countries and that this, in turn, produces an environment where domestic terrorism is more likely to occur. I test this theory using data from more than 150 countries for the period 2000–2017. I find that propagation of disinformation through social media drives domestic terrorism. Using mediation tests I also verify that disinformation disseminated through social media increases domestic terrorism by, among other processes, enhancing political polarization within society.
Comparing the Online Posting Behaviors of Violent and Non-Violent Right-Wing Extremists
2021 Scrivens, R., Wojciechowski, T.W., Freilich, J.D., Chermak, S.M. and Frank, R. Article
Despite the ongoing need for researchers, practitioners, and policymakers to identify and assess the online activities of violent extremists prior to their engagement in violence offline, little is empirically known about their online behaviors generally or differences in their posting behaviors compared to non-violent extremists who share similar ideological beliefs particularly. In this study, we drew from a unique sample of violent and non-violent right-wing extremists to compare their posting behaviors within a sub-forum of the largest white supremacy web-forum. Analyses for the current study proceeded in three phases. First, we plotted the average posting trajectory for users in the sample, followed by an assessment of the rates at which they stayed active or went dormant in the sub-forum. We then used logistic regression to examine whether specific posting behaviors were characteristic of users’ violence status. The results highlight a number of noteworthy differences in the posting behaviors of violent and non-violent right-wing extremists, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. We conclude with a discussion of the implications of this analysis, its limitations and avenues for future research.
Toward an Ethical Framework for Countering Extremist Propaganda Online
2021 Henschke, A. Article
In recent years, extremists have increasingly turned to online spaces to distribute propaganda and as a recruitment tool. While there is a clear need for governments and social media companies to respond to these efforts, such responses also bring with them a set of ethical challenges. This paper provides an ethical analysis of key policy responses to online extremist propaganda. It identifies the ethical challenges faced by policy responses and details the ethical foundations on which such policies can potentially be justified in a modern liberal democracy. We also offer an ethical framework in which policy responses to online extremism in liberal democracies can be grounded, setting clear parameters upon which future policies can be built in a fast-changing online environment.
Hate, Obscenity, and Insults: Measuring the Exposure of Children to Inappropriate Comments in YouTube
2021 Alshamrani, S., Abusnaina, A., Abuhamad, M., Nyang, D. and Mohaisen, D. Article
Social media has become an essential part of the daily routines of children and adolescents. Moreover, enormous efforts have been made to ensure the psychological and emotional well-being of young users as well as their safety when interacting with various social media platforms. In this paper, we investigate the exposure of those users to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records, and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children’s videos to be toxic, highlighting the importance of monitoring comments, particularly on children’s platforms.
Role of Public WhatsApp Groups Within the Hindutva Ecosystem of Hate and Narratives of “CoronaJihad”
2021 Nizaruddin, F. Article
This article uses the context of the widespread circulation of accounts about “CoronaJihad” in India during the COVID-19 pandemic to examine how public WhatsApp groups that participate in disseminating such accounts function within the ecosystem of hate around Hindutva majoritarianism in the country. The manner in which the WhatsApp platform operates within this ecosystem is mapped through a granular study of three public Hindutva WhatsApp groups; the messages within these groups during the first phase of the COVID-19 lockdown in India were examined during the course of this study. The patterns of messaging within the three groups that contribute to the narrative of “CoronaJihad,” which blames the minority Muslim community for the spread of the virus in India, were analyzed. The article focuses on factors including company policies and the specific sociopolitical situation in the country to understand the circumstances that make WhatsApp’s deep entanglement with the divisive politics of Hindutva majoritarianism in India possible.
Redirect Method: Canada
2021 Moonshot CVE Report
In February 2019, Moonshot launched The Redirect Method with funding from the Community Resilience Fund and in collaboration with the Canada Centre for Community Engagement and Violence Prevention at Public Safety Canada. Canada Redirect was first deployed across all 13 Canadian provinces and territories, and in June 2019 our campaigns were subdivided to incorporate 353 postcodes in Canada’s six largest cities. These localized campaigns enabled Moonshot to collect granular data on extremist search appetite, test experimental messaging, and explore the viability of providing at-risk users with content and services in their community.

Using the updated Redirect Method, Canada Redirect aimed to reach at-risk users with content that aligned as closely as possible with what they were searching for. Moonshot used subject-matter expertise and in-house knowledge to match relevant counter-narratives to their respective target audiences. This approach taps into a range of content ecosystems, such as music, gaming, and literature, to deliver alternative messaging that contrasts with the extremist content a user may be searching for, such as neo-Nazi manifestos or an ISIS nasheed. Moonshot adapted the Redirect Method to include content specific to the Canadian context in order to increase the relevance and impact of counter-narratives to the Canadian at-risk audience.

This report details Canada Redirect’s project phases, achievements, and findings from our digital campaigns, which were deployed across Canada for over a year, from 21 February 2019 to 23 March 2020.
Countering terrorism or criminalizing curiosity? The troubled history of UK responses to right-wing and other extremism
2021 Zedner, L. Article
The growth of right-wing extremism, especially where it segues into hate crime and terrorism, poses new challenges for governments, not least because its perpetrators are typically lone actors, often radicalized online. The United Kingdom has struggled to define, tackle or legislate against extremism, though it already has an extensive array of terrorism-related offences that target expression, encouragement, publication and possession of terrorist material. In 2019, the United Kingdom went further to make viewing terrorist-related material online on a single occasion a crime carrying a 15-year maximum sentence. This article considers whether UK responses to extremism, particularly those that target non-violent extremism, are necessary, proportionate, effective and compliant with fundamental rights. It explores whether criminalizing the curiosity of those who explore radical political ideas constitutes legitimate criminalization or overextends state power and risks chilling effects on freedom of speech, association, academic freedom, journalistic enquiry and informed public debate, all of which are the lifeblood of a liberal democracy.
An Ensemble Method for Radicalization and Hate Speech Detection Online Empowered by Sentic Computing
2021 Araque, O. and Iglesias, C. A. Article
The dramatic growth of the Web has motivated researchers to extract knowledge from enormous repositories and to exploit the knowledge in myriad applications. In this study, we focus on natural language processing (NLP) and, more concretely, the emerging field of affective computing to explore the automation of understanding human emotions from texts. This paper continues previous efforts to utilize and adapt affective techniques into different areas to gain new insights. This paper proposes two novel feature extraction methods that use the previous sentic computing resources AffectiveSpace and SenticNet. These methods are efficient approaches for extracting affect-aware representations from text. In addition, this paper presents a machine learning framework using an ensemble of different features to improve the overall classification performance. Following the description of this approach, we also study the effects of known feature extraction methods such as TF-IDF and SIMilarity-based sentiment projectiON (SIMON). We perform a thorough evaluation of the proposed features across five different datasets that cover radicalization and hate speech detection tasks. To compare the different approaches fairly, we conducted a statistical test that ranks the studied methods. The obtained results indicate that combining affect-aware features with the studied textual representations effectively improves performance. We also propose a criterion considering both classification performance and computational complexity to select among the different methods.
"Short is the Road that Leads from Fear to Hate": Fear Speech in Indian WhatsApp Groups
2021 Saha, P., Mathew, B., Garimella, K. and Mukherjee, A. Article
WhatsApp is the most popular messaging app in the world. Due to its popularity, WhatsApp has become a powerful and cheap tool for political campaigning, and was widely used during the 2019 Indian general election to connect to voters on a large scale. Alongside the campaigning, there have been reports that WhatsApp has also become a breeding ground for harmful speech against various protected groups and religious minorities. Many such messages attempt to instil fear among the population about a specific (minority) community. According to research on inter-group conflict, such ‘fear speech’ messages could have a lasting impact and might lead to real offline violence. In this paper, we perform the first large scale study on fear speech across thousands of public WhatsApp groups discussing politics in India. We curate a new dataset and try to characterize fear speech from this dataset. We observe that users writing fear speech messages use various events and symbols to create the illusion of fear among the reader about a target community. We build models to classify fear speech and observe that current state-of-the-art NLP models do not perform well at this task. Fear speech messages tend to spread faster and could potentially go undetected by classifiers built to detect traditional toxic speech, owing to their lower overt toxicity. Finally, using a novel methodology to target users with Facebook ads, we conduct a survey among the users of these WhatsApp groups to understand the types of users who consume and share fear speech. We believe that this work opens up new research questions that are very different from tackling hate speech, which the research community has traditionally been involved in.