Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.
Our searchable database contains material in a variety of formats including downloadable PDFs, videos, and audio files comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.
All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.
We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.
If you have any material you think belongs in the Library—whether your own or another author's—please contact us at firstname.lastname@example.org and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility and we thus welcome contributions in all languages.
Decoding Hate: Using Experimental Text Analysis to Classify Terrorist Content
Islamic State (IS) propaganda at scale. Although we have used a static archive of IS material, the underly...
This is Not a Game: How Steam Harbors Extremists
Proposals for Improved Regulation of Harmful Online Content
Toxic Narratives: Monitoring Alternative-right Actors
|2017||Baldauf, J., Dittrich, M., Hermann, M., Kollberg, B., Lüdecke, R. and Rathje, J.||Report|
|Why do we use the term “toxic narrative”? The concept of “toxic communication” has been established in the English-speaking world since the 1960s. The term has also been borrowed in Germany to refer to linguistic behavior that has a negative influence on its environment. When we speak of toxic narratives, we are referring to accounts of the world that supply the pertinent “events” and interpretations for such communication.
It is necessary to process such narratives – decoding them, examining their core content and classifying them – in order to respond to them cogently and successfully. The present report is intended to make a contribution to this effort.
What are the Responsibilities of Tech Companies in an Age of International Terrorism?
|2016||Brimmer, E., Pielemeier, J., Brunner, L. and Walden, A.||Video|
|Cosponsored by the Software & Information Industry Association (SIIA) and the Greater Washington, DC Chapter of the Internet Society (ISOC-DC). This Policy Forum is convened by Dr. Susan Aaronson (IIEP/GWU) and Dr. Mark MacCarthy (SIIA).
- Professor Esther Brimmer, Professor of Practice of International Affairs, GWU, and former Assistant Secretary of State for International Organizations
- Jason Pielemeier, Business and Human Rights Section Lead, Bureau of Democracy, Human Rights, and Labor, U.S. Department of State
- Lisl Brunner, Policy and Learning Director, Global Network Initiative
- Alexandria Walden, Public Policy & Gov't Relations Counsel, Google
Moderator: Dr. Mark MacCarthy, Senior Vice President for Public Policy, Software & Information Industry Association
Are U.S. Based 'Jihadi' Inspired Terrorists Transitioning Away From Physical Training Camps To Online Training Camps?
|2018||Clayton, A.N.||MA Thesis|
|This thesis is an examination of the backgrounds of twenty-five individuals who conducted a ‘jihad’ inspired terrorist attack within the United States between the years of 2001 and 2016 to determine if terrorists use physical training camps or online training camps as the main method of training to prepare for their attacks. The debate about the existence of online training camps is beneficial to the field of terrorism studies. However, the question of what constitutes an online training camp must first be answered before it can be determined if terrorists are using online training camps. This thesis proposes a comprehensive definition for the term ‘online training camp’ in an attempt to provide an analytical basis for the examination of U.S.-based terrorist training and to further the academic discussion. Additionally, the empirical examination of U.S.-based terrorists’ training backgrounds suggests that terrorists appear to be engaging in supplemental self-directed online learning in combination with physical training, rather than abandoning physical training altogether.|
Mapping The Jihadist Information Ecosystem
|2019||Fisher, A., Prucha, N. and Winterbotham, E.||Report|
|Online disruption efforts generally aim to reduce the availability of jihadist content. Yet, the speed and agility of jihadist movements online – a multiplatform approach which a co-author of this paper has previously described as a ‘swarmcast’ – has allowed groups to evolve in response to disruption efforts and find new ways to distribute content. This paper describes a model of the flow of users between social media platforms and surface web pages to access jihadist content, using data obtained through innovative collection methods. The model provides an approximate picture of the jihadist information ecosystem and how multiple platforms are used to disseminate content.|
Generalized Gelation Theory Describes Onset of Online Extremist Support
|2018||Manrique, P., Zheng, M., Cao, Z., Restrepo, E. and Johnson, N.F.||Article|
|We introduce a generalized form of gelation theory that incorporates individual heterogeneity and show that it can explain the asynchronous, sudden appearance and growth of online extremist groups supporting ISIS (so-called Islamic State) that emerged globally post-2014. The theory predicts how heterogeneity impacts their onset times and growth profiles and suggests that online extremist groups present a broad distribution of heterogeneity-dependent aggregation mechanisms centered around homophily. The good agreement between the theory and empirical data suggests that existing strategies aiming to defeat online extremism under the assumption that it is driven by a few “bad apples” are misguided. More generally, this generalized theory should apply to a range of real-world systems featuring aggregation among heterogeneous objects.|
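The aggregation process the abstract describes can be illustrated with a toy simulation — a minimal sketch, not the authors' model: individuals carry a heterogeneity trait, and two randomly chosen clusters merge only when their mean traits are close (homophily). All parameter names and values here are illustrative assumptions.

```python
import random

def simulate_aggregation(traits, steps, tolerance, seed=0):
    """Toy homophily-driven coalescence: clusters merge when the gap
    between their mean traits falls below `tolerance`. Returns the
    size of the largest cluster after each merge attempt."""
    rng = random.Random(seed)
    clusters = [[t] for t in traits]  # every individual starts alone
    largest = []
    for _ in range(steps):
        if len(clusters) > 1:
            i, j = rng.sample(range(len(clusters)), 2)
            gap = abs(sum(clusters[i]) / len(clusters[i])
                      - sum(clusters[j]) / len(clusters[j]))
            if gap < tolerance:  # homophily: only similar clusters merge
                clusters[i].extend(clusters.pop(j))
        largest.append(max(len(c) for c in clusters))
    return largest

rng = random.Random(1)
traits = [rng.random() for _ in range(50)]  # individual heterogeneity
growth = simulate_aggregation(traits, steps=500, tolerance=0.2)
print(growth[-1])  # largest group size after 500 merge attempts
```

Because merges only ever grow clusters, the largest-group curve is non-decreasing and can jump suddenly when two large, similar clusters meet — a crude analogue of the abrupt onset the theory predicts.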
Following The Whack-a-Mole: Britain First's Visual Strategy From Facebook To Gab
|2019||Nouri,L., Lorenzo-Dus, N. and Watkin, A.L.||Report|
|The focus of this paper is on the extremist group Britain First. As such, it does not explore online terrorist activity but rather examines how a group regarded as extremist is subject to online sanctions. The removal of Britain First from Facebook in March 2018 successfully disrupted the group’s online activity, forcing it to start anew on Gab, a different and considerably smaller social media platform. The removal also resulted in the group having to seek new online followers from a much smaller, less diverse recruitment pool. This paper demonstrates the further impact of the group’s platform migration on its online strategy – particularly on its choice of images and the engagement levels they generated. The paper puts forward a number of key recommendations, most importantly that social media companies should continue to censor and remove hateful content.|
The Evolution of Online Violent Extremism In Indonesia And The Philippines
|Pro-Daesh (also known as the Islamic State of Iraq and Syria, ISIS) groups in Indonesia and the Philippines have come to rely on social media for propaganda, fundraising and disseminating instructional material, but in different ways. While Indonesian online extremism has deep roots, with local networks exploiting various online platforms over the past decade, extremist social media in the Philippines only really took off as a consequence of the May 2017 siege of the southern Philippine city of Marawi by pro-Daesh militants. This paper outlines the evolving use of online platforms by pro-Daesh groups in both countries and how this has enabled extremists to develop and strengthen their networks. Social media and encrypted chat apps have shaped the development of extremism in Indonesia and the Philippines in four main areas: branding, recruitment, fundraising, and the increasing role of women. For groups in the Philippines, direct communication with Daesh headquarters via Telegram facilitated their rebranding as the face of Daesh in Southeast Asia, more than just a local insurgency group. In both countries, social media facilitates vertical and horizontal recruitment, but not lone-actor terrorism. Extremist use of the internet for fundraising is still rudimentary – sophisticated financial cybercrime is still virtually non-existent. In all these aspects, women’s roles have become much more visible. For a long time, women had been barred from accessing extremist public spaces, let alone taking an active role as combatants. But through social media, women are now able to play more active roles as propagandists, recruiters, financiers, and even suicide bombers. This paper briefly discusses government responses to online extremism, noting that results have been mixed in both Indonesia and the Philippines.
Indonesian authorities have undoubtedly been the more successful of the two – both in terms of law enforcement and engagement with the tech sector – but their counter-terrorism police now face the problem of how to judiciously use their powers in a democratic manner. The Philippines, meanwhile, is still at the starting line in terms of dealing with online extremism, with the military more accustomed to removing threats than trying to understand them.|
Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems
|2014||O’Callaghan D., Greene D., Conway M., Carthy J. and Cunningham P.||VOX-Pol Publication|
|In addition to hosting user-generated video content, YouTube provides recommendation services, where sets of related and recommended videos are presented to users, based on factors such as co-visitation count and prior viewing history. This article is specifically concerned with extreme right (ER) video content, portions of which contravene hate laws and are thus illegal in certain countries, which are recommended by YouTube to some users. We develop a categorization of this content based on various schema found in a selection of academic literature on the ER, which is then used to demonstrate the political articulations of YouTube’s recommender system, particularly the narrowing of the range of content to which users are exposed and the potential impacts of this. For this purpose, we use two data sets of English and German language ER YouTube channels, along with channels suggested by YouTube’s related video service. A process is observable whereby users accessing an ER YouTube video are likely to be recommended further ER content, leading to immersion in an ideological bubble in just a few short clicks. The evidence presented in this article supports a shift of the almost exclusive focus on users as content creators and protagonists in extremist cyberspaces to also consider online platform providers as important actors in these same spaces.|
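The co-visitation mechanism the abstract mentions can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea, not YouTube's actual recommender: videos watched in the same session accumulate pairwise counts, and recommendations are the most frequently co-visited videos — which is exactly how viewing stays inside a cluster.

```python
from collections import Counter, defaultdict
from itertools import combinations

def covisitation_counts(sessions):
    # Count how often each pair of videos appears in the same session.
    counts = defaultdict(Counter)
    for session in sessions:
        for a, b in combinations(set(session), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def recommend(video, counts, n=2):
    # Recommend the n videos most often co-visited with the given one.
    return [v for v, _ in counts[video].most_common(n)]

# Toy watch sessions: extreme-right (er) videos cluster together, so
# co-visitation keeps recommending within the cluster — the article's
# "ideological bubble" effect in miniature.
sessions = [["er1", "er2"], ["er1", "er2", "er3"], ["er2", "er3"], ["news1", "news2"]]
counts = covisitation_counts(sessions)
print(recommend("er1", counts))  # recommendations stay inside the er cluster
```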
Shedding Light On Terrorist And Extremist Content Removal
|2019||Vegt, I.V.D., Gill, P., Macdonald, S. and Kleinberg, B.||Report|
|Social media and tech companies face the challenge of identifying and removing terrorist and extremist content from their platforms. This paper presents the findings of a series of interviews with Global Internet Forum to Counter Terrorism (GIFCT) partner companies and law enforcement Internet Referral Units (IRUs). It offers a unique view on current practices and challenges regarding content removal, focusing particularly on human-based and automated approaches and the integration of the two.|
Competition And Innovation In A Hostile Environment: How Jabhat Al Nusra And Islamic State Moved To Twitter In 2013-2014
|2018||Weimann, G. J.||Article|
|Social media offer unprecedented opportunities to terrorist groups to spread their message and target specific audiences for indoctrination and recruitment. In 2013 and 2014, social media, in particular Twitter, overtook Internet forums as the preferred space for jihadist propaganda. This article looks into Arabic statements by Jabhat al Nusra, Islamic State and jihadist forum administrators and online activists to argue that, besides the easier use of social media and the disruption and infiltration of the forums, the conflict between the jihadist groups accelerated the migration to social media and the building of a presence on Twitter that provided relative resilience to suspensions.|
The Supremacy of Online White Supremacists – an Analysis of Online Discussions by White Supremacists
|2015||Wong, M.A., Frank, R. and Allsup, R.||Journal|
|A content analysis was conducted on five different white supremacist online forums to observe the discourse and types of activities occurring within them. In addition, web link analysis was conducted on the forums to identify external links being posted and discussed by members. We found that members used the forums primarily for information provision, recruitment and networking. Based on these results, we discuss the implications that online hate speech has within offline settings, and the effects these activities have on Canadian citizens in light of the recent repeal of section 13 of the Canadian Human Rights Act (1985), the primary tool in Canada for dealing with hate speech and the other activities observed. This research provides novel insight into the sentiments and activities of the white supremacist movement online, a relatively unexplored venue of hate speech and propaganda.|
Anders Behring Breivik’s use of the Internet and social media
|2013||Aasland Ravndal, J.||Article|
|Did the Internet play a decisive role in Anders Behring Breivik’s violent radicalization? It has proven difficult to understand if and how the Internet influences radicalization processes leading to political violence (Conway 2012). The Internet constitutes only one out of a wide range of factors with a potential influence on radical and violent behavior. We also lack detailed empirical data about the online lives of modern terrorists.
The case of the Norwegian far-right terrorist Anders Behring Breivik offers unique insights into the online activities of a terrorist who used the Internet and social media in almost every thinkable way. Not only did Breivik compile his 1,516-page compendium based exclusively on Internet sources. Before the attacks, he was also an active discussant on a number of mainstream and extremist Internet forums, and a highly dedicated online gaming enthusiast.
This article reviews new sources on Breivik’s Internet adventures and road to militancy. It is primarily based on Breivik’s original posts and comments on various Internet discussion forums between 2002 and 2011. In addition, Breivik’s trial hearings introduced a wealth of new information regarding his use of the Internet. Finally, the article draws on a collection of Breivik’s private e-mails which was forwarded by Norwegian hackers to a Norwegian journalist six days after the terrorist attacks. A synthesis of the more than 7000 e-mails was later published as a book (Stormark 2012).
A key finding in this study is that Breivik likely never discussed his terrorist plans with anyone online. Moreover, his comments on various Internet forums do not stand out as particularly extreme when compared to typical far-right online discourse. In other words, Norwegian security authorities would likely not have reacted to his online postings even if he had been monitored.
Breivik’s online posts also indicate that his critical views on Islam and socialism had been established long before the so-called counterjihad blogs were created. This means that these blogs may have played a less decisive role for Breivik’s early radicalization than assumed by many. Later on, however, these blogs certainly strengthened Breivik’s radical thinking, although they come across as far less radical than his own ideological statements after 22 July.
Breivik’s e-mail correspondence shows that he first and foremost wanted to become a professional author and publisher. He proposed to establish a so-called cultural conservative print journal together with Norwegian bloggers he admired, who were also critical of Islam and multiculturalism. He also tried to impress the Norwegian blogger Peder Are Nøstvold Jensen, better known as Fjordman, with his book project, but was given the cold shoulder. The fact that he was rejected by several of the people he looked up to may have had a decisive influence on his violent radicalization.
Breivik gathered all the necessary information to build his bomb online. He also financed the terrorist attacks through an online company, and used the Internet, in particular eBay, to buy materials such as body armor, weapons components and bomb ingredients. Breivik also systematically used social media platforms such as Facebook and Twitter for propaganda purposes.
Finally, Breivik was an extremely dedicated online gaming enthusiast. Playing online games dominated his daily life during the years leading up to the attacks. One cannot dismiss theories that the extreme amount of time spent on playing online games while being isolated from friends and relatives may have had an impact on his disposition to engage in extreme violence.
In the following sections, this article describes Breivik’s use of the Internet and social media along four dimensions: (1) online radicalization, (2) online gaming, (3) online attack preparations, and (4) online propaganda.
Steganography and Terrorist Communications
|2017||Abdoun, A. and Ibrahim, J.||Journal|
|Steganography is ‘the art and science of hiding the fact that communication is taking place by hiding information in other information’ (Johnson). According to nameless “U.S. officials and experts” and “U.S. and foreign officials,” terrorist groups are “hiding maps and photographs of terrorist targets and posting instructions for terrorist activities on sports chat rooms, pornographic bulletin boards and other Web sites”. This paper informs the reader how an innocent-looking digital image can hide a deadly terrorist plan, which can destroy the world in just the click of a mouse. It also describes and discusses the process of secret communication known as steganography. The crucial argument here is that terrorists are unlikely to be employing digital steganography to facilitate secret intra-group communication as has been claimed, mainly because the use of digital steganography by terrorists is both technically and operationally dubious. The most important part of this paper discusses the likelihood that terrorists instead employ low-tech steganography such as semagrams and null ciphers. It investigates the strengths of image steganography and the reasons why terrorists are relying on it so much.|
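A null cipher, one of the low-tech techniques the paper points to, is easy to demonstrate: the hidden message is carried by an agreed-upon rule over innocent text, here the first letter of each word. This is a generic textbook illustration (the word list is invented), not an example from the paper.

```python
def encode_null_cipher(secret, cover_words):
    # Build a cover text whose word initials spell the secret message.
    # cover_words maps each letter to an innocuous word starting with it.
    return " ".join(cover_words[c] for c in secret)

def decode_null_cipher(cover_text):
    # Recover the hidden message from the first letter of every word.
    return "".join(word[0] for word in cover_text.split())

# Illustrative word list (an assumption, not from the paper)
words = {"h": "heavy", "i": "items", "d": "delivered", "e": "early"}
cover = encode_null_cipher("hide", words)
print(cover)                      # heavy items delivered early
print(decode_null_cipher(cover))  # hide
```

Unlike digital image steganography, nothing here requires special software or leaves a statistical fingerprint in a file, which is part of the paper's argument for why low-tech methods are operationally more plausible.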
Digital Terrorism and Hate 2012: The Power of Social Networking in the Digital Age
|2012||Abraham, R. and Rick Eaton, C.||Report|
|Analysis of 'digital terrorism' and hate on the Internet|
Understanding ISIS Myth and Realities
|Published on May 29, 2015
This video was streamed live by DMAPLab MIGS on 26 May 2015.
The Islamic State, also known as ISIS or ISIL, has become a household name because it films its atrocities and posts them online thanks to social media platforms such as Twitter and YouTube.
Western countries and Arab states appear to be united and see the group as a threat to international peace and security. But what do we really know about ISIS? What should the international community do to cripple ISIS on the battlefield?
Max Abrahms, professor of political science at Northeastern University and member at the Council on Foreign Relations, offers a unique perception of the relationship between the Islamic State’s propaganda and its success as an organisation.
Hashtag Palestine 2018: An Overview of Digital Rights Abuses of Palestinians
|The report notes that in 2018, the Israeli government continued to systematically target Palestinians and the right to freedom of expression via the Internet. In the year 2018, Israeli authorities arrested around 350 Palestinians in the West Bank on charges of “incitement” because of their publications on social media. 7amleh – The Arab Center for the Advancement of Social Media launched its annual report on Palestinian digital rights for the year 2018. The report details violations by governments, authorities, international technology companies and Palestinian society.|
White Supremacists, Oppositional Culture and the World Wide Web
|2005||Adams, J. and Roscigno, V.J.||Journal|
|Over the previous decade, white supremacist organizations have tapped into the ever-emerging possibilities offered by the World Wide Web. Drawing from prior sociological work that has examined this medium and its uses by white supremacist organizations, this article advances the understanding of recruitment, identity and action by providing a synthesis of interpretive and more systematic analyses of thematic content, structure and associations within white supremacist discourse. Analyses, which rely on TextAnalyst, highlight semantic networks of thematic content from principal white supremacist websites, and delineate patterns and thematic associations relative to the three requisites of social movement culture denoted in recent research - namely identity, interpretational framing of cause and effect, and political efficacy. Our results suggest that nationalism, religion and definitions of responsible citizenship are interwoven with race to create a sense of collective identity for these groups, their members and potential recruits. Moreover, interpretative frameworks that simultaneously identify threatening social issues and provide corresponding recommendations for social action are employed. Importantly, and relative to prior work, results show how the interpretation of problems, their alleged causes and the call to action are systematically linked. We conclude by discussing the framing of white supremacy issues, the organizations' potential for recruitment, and how a relatively new communication medium, the Internet, has been cheaply and efficiently integrated into the white supremacist repertoire. Broader implications for social movement theory are also explored.|
One Apostate Run Over, Hundreds Repented: Excess, Unthinkability, and Infographics from the War with I.S.I.S.
|Compared to the more spectacular elements of its media repertoire—the slick recruitment campaigns on social media, the artfully composed battlefield footage, the grisly executions—I.S.I.S.’s infographics may seem dull, even trivial. Indeed, these data visualizations have gone largely unremarked, eliciting more bemusement than serious consideration. Against the tendency to discount these images, however, I argue that when I.S.I.S. turns toward charts and diagrams to represent its operations, it launches a stealthy but substantial epistemological challenge to media outlets that depict it as backward and irrational and rely on command of information as an index of Western power. Comparing infographics produced about I.S.I.S. and those produced by the group, I demonstrate that, despite their obvious differences, both types of infographics evince common preoccupations. Like Western news sources, I.S.I.S. creates infographics to map attacks, plot territorial gains, tally and categorize casualties, and track the types of weapons deployed. News media and I.S.I.S. infographics diverge primarily in their affective resonance, as similar information signifies in radically different ways. Ultimately, by producing and circulating these infographics, I.S.I.S. simultaneously renders itself more and less intelligible to outsiders: encapsulating its story while confounding prevailing representations as it weaponizes information.|
Using KNN and SVM Based One-Class Classifier for Detecting Online Radicalization on Twitter
|2015||Agarwal, S. and Sureka, A.||Chapter|
|Twitter is the largest and most popular micro-blogging website on the Internet. Due to its low publication barrier, anonymity and wide penetration, Twitter has become an easy target or platform for extremists to disseminate their ideologies and opinions by posting hate and extremism promoting tweets. Millions of tweets are posted on Twitter every day and it is practically impossible for Twitter moderators or an intelligence and security analyst to manually identify such tweets, users and communities. However, automatic classification of tweets into pre-defined categories is a non-trivial problem due to the short text of the tweet (the maximum length of a tweet can be 140 characters) and noisy content (incorrect grammar, spelling mistakes, presence of standard and non-standard abbreviations and slang). We frame the problem of hate and extremism promoting tweet detection as a one-class or unary-class categorization problem by learning a statistical model from a training set containing only the objects of one class. We propose several linguistic features such as presence of war, religious, negative emotions and offensive terms to discriminate hate and extremism promoting tweets from other tweets. We employ a single-class SVM and KNN algorithm for the one-class classification task. We conduct a case-study on Jihad, perform a characterization study of the tweets and measure the precision and recall of the machine-learning based classifier. Experimental results on a large and real-world dataset demonstrate that the proposed approach is effective, with F-scores of 0.60 and 0.83 for the KNN and SVM classifier respectively.|
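The one-class KNN formulation the abstract describes can be sketched in pure Python: score a new item by its average distance to its k nearest neighbours in a training set that contains only the target class, and accept it if that distance falls below a threshold. The feature vectors and threshold below are toy assumptions, not the authors' features or code.

```python
from math import sqrt

def distance(a, b):
    # Euclidean distance between two equal-length feature vectors
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_one_class_score(x, train, k=3):
    # Average distance from x to its k nearest neighbours in the
    # single-class training set; larger means less similar.
    dists = sorted(distance(x, t) for t in train)
    return sum(dists[:k]) / k

def classify(x, train, threshold, k=3):
    # Accept x as a member of the target class only if it lies
    # close enough to the training examples.
    return knn_one_class_score(x, train, k) <= threshold

# Toy vectors, e.g. counts of (war terms, religious terms, offensive terms)
train = [[3, 2, 4], [4, 2, 3], [3, 3, 5], [5, 1, 4]]
print(classify([4, 2, 4], train, threshold=1.5))  # near the class -> True
print(classify([0, 0, 0], train, threshold=1.5))  # far away -> False
```

The key property of the one-class setup is that no negative examples are needed: anything sufficiently far from the known class is rejected, which suits tasks where "everything else" is too diverse to sample.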
Topic-Specific YouTube Crawling to Detect Online Radicalization
|2015||Agarwal, S. and Sureka, A.||Article|
|Online video sharing platforms such as YouTube contain numerous videos and users promoting hate and extremism. Due to the low barrier to publication and anonymity, YouTube is misused as a platform by some users and communities to post negative videos disseminating hatred against a particular religion, country or person. We formulate the problem of identification of such malicious videos as a search problem and present a focused-crawler based approach consisting of various components performing several tasks: search strategy or algorithm, node similarity computation metric, learning from exemplary profiles serving as training data, stopping criterion, node classifier and queue manager. We implement two versions of the focused crawler: best-first search and shark search. We conduct a series of experiments by varying the seed, the number of n-grams in the language-model-based comparer and the similarity threshold for the classifier, and present the results of the experiments using standard Information Retrieval metrics such as precision, recall and F-measure. The accuracy of the proposed solution on the sample dataset is 69% and 74% for the best-first and shark search respectively. We perform a characterization study (by manual and visual inspection) of the anti-India hate and extremism promoting videos retrieved by the focused crawler, based on terms present in the titles of the videos, YouTube category, average length of videos, content focus and target audience. We present the result of applying Social Network Analysis based measures to extract communities and identify core and influential users.|
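The best-first search variant of such a focused crawler can be sketched as a priority-queue traversal of a link graph, always expanding the node the relevance scorer ranks highest. This is a generic illustration under stated assumptions — the toy graph and scores stand in for the paper's n-gram language-model similarity, and none of the names come from the authors' implementation.

```python
import heapq

def best_first_crawl(seed, links, score, limit=10):
    """Best-first focused crawl over a link graph.

    links: node -> list of outgoing links; score: node -> relevance in [0, 1].
    Expands the highest-scoring frontier node next and returns the
    visited nodes in crawl order.
    """
    # heapq is a min-heap, so push negated scores to pop the best node first
    frontier = [(-score(seed), seed)]
    visited, order = set(), []
    while frontier and len(order) < limit:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        for nxt in links.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (-score(nxt), nxt))
    return order

# Toy graph: high-relevance branch (a -> c) is crawled before the
# low-relevance branch (b -> d), which is the point of focused crawling.
graph = {"seed": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": []}
relevance = {"seed": 1.0, "a": 0.9, "b": 0.2, "c": 0.8, "d": 0.1}
print(best_first_crawl("seed", graph, relevance.get))  # ['seed', 'a', 'c', 'b', 'd']
```

Shark search differs mainly in how the score is computed — child nodes inherit a decayed portion of their parent's relevance — but the queue mechanics above stay the same.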