Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications on various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, and spans e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility and we thus welcome contributions in all languages.

Full Listing

Title | Year | Author | Type | Links
The Social Structure of Extremist Websites
2020 Bouchard, M., Davies, G., Frank, R., Wu, E. and Joffres, K. Chapter
In this study, we select the official websites of four known extremist groups and map the networks of hyperlinked websites forming a virtual community around them. The networks are constructed using a custom-built webcrawler (TENE: Terrorism and Extremism Network Extractor) that searches the HTML of a website for all hyperlinks directing to other websites (Bouchard et al., 2014). Following all of these hyperlinks out of the initial website of interest produces the network of websites forming a community that is more or less cohesive, more or less extensive, and more or less devoted to the same cause (Bouchard and Westlake, 2016; Westlake and Bouchard, 2016). The extent to which the official website of a group contains many hyperlinks towards external websites may be an indicator of a more active community, and it may be indicative of a more active social movement.
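To illustrate the kind of hyperlink extraction this abstract describes, the snippet below is a minimal sketch that pulls the outbound domains from a single page using the requests and BeautifulSoup libraries; the seed URL is a placeholder, and this is not the TENE crawler itself.
```python
# Minimal sketch of hyperlink extraction for mapping a page's outbound links.
# NOT the TENE crawler described in the abstract; illustrative assumption only.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def outbound_domains(seed_url: str) -> set[str]:
    """Return the set of external domains hyperlinked from a single page."""
    html = requests.get(seed_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    seed_domain = urlparse(seed_url).netloc
    domains = set()
    for anchor in soup.find_all("a", href=True):
        target = urljoin(seed_url, anchor["href"])   # resolve relative links
        domain = urlparse(target).netloc
        if domain and domain != seed_domain:         # keep external sites only
            domains.add(domain)
    return domains


if __name__ == "__main__":
    # Hypothetical seed URL; replace with a real page to run.
    print(outbound_domains("https://example.org"))
```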
What is BitChute? Characterizing the “Free Speech” Alternative to YouTube
2020 Trujillo, M., Gruppi, M., Buntain, C. and Horne, B.D. Article
In this paper, we characterize the content and discourse on BitChute, a social video-hosting platform. Launched in 2017 as an alternative to YouTube, BitChute joins an ecosystem of alternative, low content moderation platforms, including Gab, Voat, Minds, and 4chan. Uniquely, BitChute is the first of these alternative platforms to focus on video content and is growing in popularity. Our analysis reveals several key characteristics of the platform. We find that only a handful of channels receive any engagement, and almost all of those channels contain conspiracies or hate speech. This high rate of hate speech on the platform as a whole, much of which is anti-Semitic, is particularly concerning. Our results suggest that BitChute has a higher rate of hate speech than Gab but less than 4chan. Lastly, we find that while some BitChute content producers have been banned from other platforms, many maintain profiles on mainstream social media platforms, particularly YouTube. This paper contributes a first look at the content and discourse on BitChute and provides a building block for future research on low content moderation platforms.
Towards the “olive trees of Rome”: exploitation of propaganda devices in the Islamic State’s flagship magazine “Rumiyah”
2020 Lakomy, M. Article
This paper aims to contribute to understanding how the last flagship magazine of the Islamic State - “Rumiyah” - attempted to influence and manipulate Internet users. Its primary objective is to analyze the propaganda methods exploited in all thirteen issues of this magazine. In order to do so, this paper utilises content analysis to investigate “propaganda devices”, a concept developed by the American Institute for Propaganda Analysis. It argues that there were four predominant groups of propaganda devices exploited in this magazine. Two of them, i.e. name-calling and glittering generalities, were utilized to create and promote an artificial, black-and-white vision of the world, composed of the “camp of kufr” (camp of disbelief) and the “camp of iman” (camp of faith), embodied by the Islamic State. The third leading propaganda method, transfer, attempted to legitimize the actions and agenda of the “Caliphate” by using the authority of not only Allah, but also the Prophet Muhammad, his companions (Sahabah), as well as selectively chosen Islamic scholars. Finally, the bandwagon served as a means of creating a sense of community between the editors and readers. Other propaganda devices, such as testimonial or plain folks, played strictly secondary roles in the narration of the magazine.
Shifts in the Visual Media Campaigns of AQAP and ISIS After High Death and High Publicity Attacks
2020 Winkler, C., McMinimy, K., El-Damanhoury, K. and Almahmoud, M. Article
Extreme militant groups use their media campaigns to share information, recruit and radicalize followers, share worldviews, and seek public diplomacy ends. While previous research documents that various on-the-ground events correspond to changes in the groups’ messaging strategies, studies of how competing militant groups influence one another’s media campaigns are nascent. This study helps fill that gap by examining how successful attacks by one militant group correspond to changes in both the perpetrating and competing groups’ visual media messaging strategies. It examines attack success through the lens of violent acts that result in direct impact (measured through death counts) and indirect impact (measured through traditional media coverage levels). The study utilizes a content analysis of 1882 authority-related images in AQAP’s al-Masra newsletter and ISIS’s al-Naba’ newsletter appearing three issues before and after each attack, and a chi-square analysis comparing four ISIS attack conditions (high death/high media, high death/low media, low death/high media, and low death/low media). The findings show that a high number of resulting deaths, rather than a high level of media coverage, correspond to changes in the media campaigns of both the perpetrators and the competing groups, with key differences in visual content based on group identity.
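The comparison across the four attack conditions rests on a chi-square test over counts of visual content. The fragment below is a generic sketch of such a contingency-table test using scipy; the categories and counts are invented placeholders, not the study's data.
```python
# Generic chi-square test over a contingency table of image-content counts.
# The numbers below are placeholders, not data from the study.
from scipy.stats import chi2_contingency

# Rows: hypothetical visual-content categories; columns: the four attack conditions
# (high death/high media, high death/low media, low death/high media, low death/low media).
observed = [
    [40, 25, 30, 20],
    [15, 35, 20, 30],
    [45, 40, 50, 50],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
```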
Posterboys und Terrorpropaganda
2020 Bötticher, A. Chapter
The term cybergrooming normally refers to the targeted approaching of mostly underage persons on the internet for the purpose of initiating sexual contact. The terror group ISIS has developed a very particular form of propaganda, combined with the personal targeting of young women and girls, who are to be lured into war and crisis zones for the purpose of marriage. ISIS has thus perfected the combination of terrorist propaganda and the targeted approach of young women and girls, and has developed its own grooming methodology, which triggers in girls receptive to propaganda the desire for a jihad marriage. Once the young women or girls have left the country, their networks are often picked up and those who “stayed behind” are urged by them to follow. The present chapter develops, from the concept of cybergrooming, a theory-guided way of observing extremist activity online and applies it to Islamist extremism, with a focus on the recruitment of women and girls.
Wie Extrem ist die Rechte in Europa? Untersuchung von Überschneidungen in der deutschen Rechtsaußenszene auf Twitter
2020 Ahmed, R. and Pisoiu, D. VOX-Pol Publication
The aim of this work is to identify the overlaps in the far-right scene on Twitter and, in particular, to establish the extent to which different groups in the scene actually talk about the same topics in the same way, despite obvious differences in tone and underlying ideology. We use a mixed-methods approach: first, we gain a surface-level insight into the extreme-right scene on Twitter across Europe, and then we carry out a detailed frame analysis of three selected groups in Germany in order to determine the implicit and explicit overlaps between them, complementing the quantitative findings so that meaning can be analysed in detail.
Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime
2020 Williams, M.L., Burnap, P., Javed, A., Liu, H. and Ozalp, S. Article
National governments now recognize online hate speech as a pernicious social problem. In the wake of political votes and terror attacks, hate incidents online and offline are known to peak in tandem. This article examines whether an association exists between both forms of hate, independent of ‘trigger’ events. Using Computational Criminology that draws on data science methods, we link police crime, census and Twitter data to establish a temporal and spatial association between online hate speech that targets race and religion, and offline racially and religiously aggravated crimes in London over an eight-month period. The findings renew our understanding of hate crime as a process, rather than as a discrete event, for the digital age.
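The temporal side of this association can be illustrated, in greatly simplified form, by correlating daily counts of hateful posts with daily counts of recorded offences at several lags; the series below are synthetic placeholders, and the article's actual models also incorporate spatial units and census covariates.
```python
# Sketch of a simple temporal association check between daily online hate-speech
# counts and daily recorded hate-crime counts. Synthetic placeholder data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=240, freq="D")   # roughly eight months
hate_tweets = pd.Series(rng.poisson(50, len(days)), index=days)
hate_crimes = pd.Series(rng.poisson(5, len(days)), index=days)

# Correlate tweet counts on day t with crime counts on day t + lag.
for lag in range(0, 8):
    r = hate_tweets.corr(hate_crimes.shift(-lag))
    print(f"lag {lag} day(s): r={r:.3f}")
```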
Cyber Swarming, Memetic Warfare and Viral Insurgency: How Domestic Militants Organize on Memes to Incite Violent Insurrection and Terror Against Government and Law Enforcement
2020 Goldenberg, A. and Finkelstein, J. Report
The Report you are about to read, “Cyber Swarming: Memetic Warfare and Viral Insurgency,” represents a breakthrough case study in the capacity to identify cyber swarms and viral insurgencies in nearly real time as they are developing in plain sight. Drawing on an analysis of over 100 million social media comments, the authors demonstrate how the “boogaloo meme,” “a joke for some, acts as a violent meme that circulates instructions for a violent, viral insurgency for others.” Using it, like turning off the transponders on 9/11, enables the extremists to hide in plain sight, disappearing into the clutter of innocent messages and other data points. It should be of particular concern, the authors note, for the military, for whom “the meme’s emphasis on military language and culture poses a special risk.”

Because most of law enforcement and the military remain ignorant of “memetic warfare,” the authors demonstrate, extremists who employ it “possess a distinct advantage over government officials and law enforcement.” As with the 9/11 terrorists, “they already realize that they are at war. Public servants cannot afford to remain ignorant of this subject because as sites, followers, and activists grow in number, memes can reach a critical threshold and tipping point, beyond which they can suddenly saturate and mainstream across entire cultures.” This Report is at once an urgent call to recognize an emerging threat and a prescription for how to counter it. As such, it offers that rarest of opportunities: the chance to stop history from repeating itself.
Do Platforms Kill?
2020 Lavi, M. Article
This Article analyzes intermediaries’ civil liability for terror attacks under the anti-terror statutes and other doctrines in tort law. It aims to contribute to the literature in several ways. First, it outlines the way intermediaries aid terrorist activities either willingly or unwittingly. By identifying the role online intermediaries play in terrorist activities, one may lay down the first step towards creating a legal policy that would mitigate the harm caused by terrorists’ incitement over the internet. Second, this Article outlines a minimum standard of civil liability that should be imposed on intermediaries for speech made by terrorists on their platforms. Third, it highlights the contradictions between intermediaries’ policies regarding harmful content and the technologies that create personalized experiences for users, which can sometimes recommend unlawful content and connections.
Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation
2020 Ganesh, B. and Bright, J. Article
Extremist exploitation of social media platforms is an important regulatory question for civil society, government, and the private sector. Extremists exploit social media for a range of reasons—from spreading hateful narratives and propaganda to financing, recruitment, and sharing operational information. Policy responses to this question fit under two headings, strategic communication and content moderation. At the center of both of these policy responses is a calculation about how best to limit audience exposure to extremist narratives and maintain the marginality of extremist views, while being conscious of rights to free expression and the appropriateness of restrictions on speech. This special issue on “Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation” focuses on one form of strategic communication, countering violent extremism. In this editorial we discuss the background and effectiveness of this approach, and introduce five articles which develop multiple strands of research into responses and solutions to extremist exploitation of social media. We conclude by suggesting an agenda for future research on how multistakeholder initiatives to challenge extremist exploitation of social media are conceived, designed, and implemented, and the challenges these initiatives need to surmount.
From Inspire to Rumiyah: does instructional content in online jihadist magazines lead to attacks?
2020 Zekulin, M. Article
Considerable time has been spent examining how groups like AQAP and ISIS used their online magazines to reach and radicalize individuals in Western democratic states. This paper continues this investigation but shifts its analysis to focus on the ‘how-to’ or instructional content of these publications, an understudied part of the literature. One of the stated goals of these magazines was to provide tactical know-how and assist supporters conducting terror plots in their home states. The question: did the tactics outlined in the magazines materialize in actual plots/attacks, and how quickly were they put into practice? The paper examines this question by creating an overview of the tactics which appear in these publications and cross referencing them with a dataset of 166 Islamist-inspired homegrown terror plots/attacks in 14 Western democratic states to determine if, and when, they first appeared in relation to their publication date. It concluded that while some of the suggested strategies did appear following their publication, this often occurred after considerable time had elapsed. This suggests the instructional content did not resonate with readers in real time.
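The cross-referencing step, checking whether and how soon a tactic appears in recorded plots after the issue suggesting it was published, amounts to a simple date comparison; the records below are invented placeholders rather than the article's dataset.
```python
# Sketch of cross-referencing a tactic's publication date with its first
# appearance in recorded plots/attacks. All records are invented placeholders.
from datetime import date

# Hypothetical publication dates of instructional content, by tactic.
tactic_published = {
    "vehicle ramming": date(2014, 10, 1),
    "knife attack": date(2016, 11, 1),
}

# Hypothetical plot/attack records: (date, tactic).
plots = [
    (date(2016, 7, 14), "vehicle ramming"),
    (date(2017, 6, 3), "knife attack"),
    (date(2018, 8, 14), "vehicle ramming"),
]

for tactic, published in tactic_published.items():
    later = sorted(d for d, t in plots if t == tactic and d >= published)
    if later:
        lag_days = (later[0] - published).days
        print(f"{tactic}: first post-publication appearance after {lag_days} days")
    else:
        print(f"{tactic}: no recorded appearance after publication")
```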
Extreme Digital Speech: Contexts, Responses and Solutions
2020 Ganesh, B. and Bright, J. (Eds.) VOX-Pol Publication
Extreme digital speech (EDS) is an emerging challenge that requires co-ordination between governments, civil society and the private sector. In this report, a range of experts on countering extremism consider the challenges that EDS presents to these stakeholders, the impact that EDS has and the responses taken by these actors to counter it. By focusing on EDS, consideration of the topic is limited to the forms of extreme speech that take place online, often on social media platforms and multimedia messaging applications such as WhatsApp and Telegram. Furthermore, by focusing on EDS rather than explicitly violent forms of extreme speech online, the report departs from a focus on violence and incorporates a broader range of issues such as hateful and dehumanising speech and the complex cultures and politics that have formed around EDS.
Many Faced Hate: A Cross Platform Study of Content Framing and Information Sharing by Online Hate Groups
2020 Phadke, S. and Mitra, T. Article
Hate groups are increasingly using multiple social media platforms to promote extremist ideologies. Yet we know little about their communication practices across platforms. How do hate groups (or “in-groups”) frame their hateful agenda against the targeted group, or the “out-group”? How do they share information? Utilizing “framing” theory from social movement research and analyzing domains in the shared links, we juxtapose the Facebook and Twitter communication of 72 Southern Poverty Law Center (SPLC) designated hate groups spanning five hate ideologies. Our findings show that hate groups use Twitter for educating the audience about problems with the out-group, maintaining a positive self-image by emphasizing the in-group’s high social status, and demanding policy changes to negatively affect the out-group. On Facebook, they use fear appeals and call for active participation in group events (membership requests), all while portraying themselves as being oppressed by the out-group and failed by the system. Our study unravels the ecosystem of cross-platform communication by hate groups, suggesting that they use Facebook for group radicalization and recruitment, and Twitter for reaching a diverse follower base.
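The link-sharing side of the analysis, identifying which domains a group points its followers to, can be sketched by extracting and counting domains from posted URLs; the example links below are placeholders.
```python
# Sketch of extracting and counting domains from links shared in posts.
# Example URLs are placeholders, not data from the study.
from collections import Counter
from urllib.parse import urlparse

shared_links = [
    "https://example-news.com/story/123",
    "https://example-blog.org/post?id=9",
    "https://example-news.com/story/456",
]

domain_counts = Counter(urlparse(link).netloc for link in shared_links)
print(domain_counts.most_common())
```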
Interactive Search and Exploration in Discussion Forums Using Multimodal Embeddings
2020 Gornishka, I., Rudinac, S. and Worring, M. Article
In this paper we present a novel interactive multimodal learning system, which facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words and concepts, which we simultaneously learn by deploying a general-purpose neural embedding model. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted in order to evaluate the quality of our multimodal representations and compare different embedding strategies. We demonstrate the capabilities of the proposed approach on a multimedia collection originating from the violent online extremism forum Stormfront, which is particularly interesting due to the high semantic level of the discussions it features.
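The interactive step, where an analyst marks a few users of interest and the system proposes similar users from a shared embedding space, can be sketched with cosine similarity over pre-computed user vectors; the random vectors below stand in for the learned multimodal embeddings described in the paper.
```python
# Sketch of finding similar users by cosine similarity in an embedding space.
# Random vectors stand in for the learned multimodal user embeddings.
import numpy as np

rng = np.random.default_rng(0)
user_ids = [f"user_{i}" for i in range(1000)]
embeddings = rng.normal(size=(1000, 64))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-normalise


def most_similar(query_ids: list[str], k: int = 5) -> list[str]:
    """Average the embeddings of users marked relevant and return the k nearest others."""
    idx = [user_ids.index(u) for u in query_ids]
    query = embeddings[idx].mean(axis=0)
    scores = embeddings @ query                  # cosine similarity (unit vectors)
    ranked = np.argsort(-scores)
    return [user_ids[i] for i in ranked if user_ids[i] not in query_ids][:k]


print(most_similar(["user_3", "user_42"]))
```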
Weaponizing white thymos: flows of rage in the online audiences of the alt-right
2020 Ganesh, B. Article
The alt-right is a growing radical right-wing network that is particularly effective at mobilizing emotion through digital communications. Introducing ‘white thymos’ as a framework to theorize the role of rage, anger, and indignation in alt-right communications, this study argues that emotive communication connects alt-right users and mobilizes white thymos to the benefit of populist radical right politics. By combining linguistic, computational, and interpretive techniques on data collected from Twitter, this study demonstrates that the alt-right weaponizes white thymos in three ways: visual documentation of white victimization, processes of legitimization of racialized pride, and reinforcement of the rectitude of rage and indignation. The weaponization of white thymos is then shown to be central to the culture of the alt-right and its connectivity with populist radical right politics.
Understanding the Incel Community on YouTube
2020 Papadamou, K., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G. and Sirivianos, M. Article
YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform also hosts inappropriate, toxic, and/or hateful content. One community that has come into the spotlight for sharing and publishing hateful content is the so-called Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues, who have often been linked to misogynistic views. In this paper, we set out to analyze the Incel community on YouTube. We collect videos shared on Incel-related communities within Reddit, and perform a data-driven characterization of the content posted on YouTube along several axes. Among other things, we find that the Incel community on YouTube is growing rapidly, that they post a substantial number of negative comments, and that they discuss a broad range of topics ranging from ideology, e.g., around the Men Going Their Own Way movement, to discussions filled with racism and/or misogyny. Finally, we quantify the probability that a user will encounter an Incel-related video by virtue of YouTube's recommendation algorithm. Within five hops when starting from a non-Incel-related video, this probability is 1 in 5, which is alarmingly high given the toxicity of said content.
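The hop-based encounter probability reported here can be approximated by simulating random walks over a recommendation graph; the toy graph and labels below are invented, whereas the paper samples YouTube's actual recommendations.
```python
# Sketch of estimating the chance of reaching a flagged video within five
# recommendation hops via random walks. The graph and labels are invented.
import random

# Hypothetical recommendation graph: video id -> list of recommended video ids.
recommendations = {
    "a": ["b", "c"], "b": ["c", "d"], "c": ["a", "e"],
    "d": ["e", "a"], "e": ["b", "d"],
}
flagged = {"e"}   # videos labelled as belonging to the community of interest


def hits_within(start: str, hops: int = 5, walks: int = 10_000) -> float:
    """Fraction of random walks from `start` that reach a flagged video within `hops` steps."""
    hit = 0
    for _ in range(walks):
        node = start
        for _ in range(hops):
            node = random.choice(recommendations[node])
            if node in flagged:
                hit += 1
                break
    return hit / walks


print(hits_within("a"))
```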
Togetherness after terror: The more or less digital commemorative public atmospheres of the Manchester Arena bombing’s first anniversary
2020 Merrill, S., Sumartojo, S., Closs Stephens, A. and Coward, M. Article
This article examines the forms and feelings of togetherness evident in both Manchester city centre and on social media during the first anniversary of the 22 May 2017 Manchester Arena bombing. To do this, we introduce a conceptual framework that conceives commemorative public atmospheres as composed of a combination of ‘more or less digital’ elements. We also present a methodological approach that combines the computational collection and analysis of Twitter content with short-term team autoethnography. First, the article addresses the concept of public atmospheres before introducing the case study and outlining our methodology. We then analyse the shifting moods of togetherness created by the official programme of commemorative events known as Manchester Together and their digital mediatisation through Twitter. We then explore a grassroots initiative, #LoveMCRBees, and how it relied on the materialisation of social media logics to connect people. Overall, we demonstrate how public atmospheres, as constituted in more and less digital ways, provide a framework for conceptualising commemorative events, and how togetherness is reworked by social media, especially in the context of responses to terrorism.
Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board
2020 Papasavva, A., Zannettou, S., De Cristofaro, E., Stringhini, G. and Blackburn, J. Article
This paper presents a dataset with over 3.3M threads and 134.5M posts from the Politically Incorrect board (/pol/) of the imageboard forum 4chan, posted over a period of almost 3.5 years (June 2016-November 2019). To the best of our knowledge, this represents the largest publicly available 4chan dataset, providing the community with an archive of posts that have been permanently deleted from 4chan and are otherwise inaccessible. We augment the data with a set of additional labels, including toxicity scores and the named entities mentioned in each post. We also present a statistical analysis of the dataset, providing an overview of what researchers interested in using it can expect, as well as a simple content analysis, shedding light on the most prominent discussion topics, the most popular entities mentioned, and the level of toxicity in each post. Overall, we are confident that our work will further motivate and assist researchers in studying and understanding 4chan as well as its role on the greater Web. For instance, we hope this dataset may be used for cross-platform studies of social media, as well as being useful for other types of research like natural language processing. Finally, our dataset can assist qualitative work focusing on in-depth case studies of specific narratives, events, or social theories.
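The kinds of descriptive statistics the abstract mentions, such as toxicity levels and frequently mentioned entities, can be computed along the following lines; the records and field names below are invented placeholders, not the dataset's actual schema.
```python
# Sketch of basic descriptive statistics over posts augmented with toxicity
# scores and named entities. Records and field names are invented placeholders.
from collections import Counter

posts = [
    {"toxicity": 0.12, "entities": ["Europe"]},
    {"toxicity": 0.87, "entities": ["Europe", "Twitter"]},
    {"toxicity": 0.45, "entities": []},
]

mean_toxicity = sum(p["toxicity"] for p in posts) / len(posts)
entity_counts = Counter(e for p in posts for e in p["entities"])

print(f"mean toxicity: {mean_toxicity:.2f}")
print("most mentioned entities:", entity_counts.most_common(5))
```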
Digital Extremisms: Readings in Violence, Radicalisation and Extremism in the Online Space
2020 Littler, M. and Lee, B. (Eds.) Book
This book explores the use of the internet by (non-Islamic) extremist groups, drawing together research by scholars across the social sciences and humanities. It offers a broad overview of the best of research in this area, including research contributions that address far-right, (non-Islamic) religious, animal rights, and nationalist violence online, as well as a discussion of the policy and research challenges posed by these unique and disparate groups. It offers an academically rigorous, introductory text that addresses extremism online, making it a valuable resource for students, practitioners and academics seeking to understand the unique characteristics such risks present.
An Approach for Radicalization Detection Based on Emotion Signals and Semantic Similarity
2020 Araque, O. and Iglesias, C.A. Article
The Internet has become an important tool for modern terrorist groups, both as a means of spreading their propaganda messages and for recruitment purposes. Previous studies have shown that the analysis of social signs can help in the analysis, detection, and prediction of radical users. In this work, we focus on the analysis of affect signs in social media and social networks, which has not previously been addressed. The article contributions are: (i) a novel dataset to be used in radicalization detection works, (ii) a method for utilizing an emotion lexicon for radicalization detection, and (iii) an application of an embedding-based semantic similarity model to the radicalization detection domain. Results show that emotion can be a reliable indicator of radicalization, as well as that the proposed feature extraction methods can yield high-performance scores.
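The two feature families named in the contributions, emotion-lexicon signals and embedding-based semantic similarity, can be sketched roughly as below; the tiny lexicon, seed text, and bag-of-words vectors are toy placeholders, not the article's actual resources or models.
```python
# Rough sketch of combining emotion-lexicon counts with a semantic-similarity
# score as features. Lexicon, seed text and vectorisation are toy placeholders;
# the article uses a proper emotion lexicon and learned embeddings.
from collections import Counter

import numpy as np

emotion_lexicon = {"anger": {"rage", "hate"}, "fear": {"afraid", "threat"}}
radical_seed = "rage threat hate"      # stands in for a reference radical corpus


def bow(text: str, vocab: list[str]) -> np.ndarray:
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)


def features(text: str) -> np.ndarray:
    tokens = set(text.lower().split())
    # One count per emotion category: overlap between the text and that category's words.
    emotion_counts = [len(tokens & words) for words in emotion_lexicon.values()]
    # Cosine similarity between the text and the seed corpus in a shared bag-of-words space.
    vocab = sorted(set(radical_seed.split()) | tokens)
    a, b = bow(text, vocab), bow(radical_seed, vocab)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return np.array(emotion_counts + [sim])


print(features("they spread hate and threat everywhere"))
```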