Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.
Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.
All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library will take you to the publisher’s page, from which you can access the material.
We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.
If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at firstname.lastname@example.org and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we thus welcome contributions in all languages.
Toxic Narratives: Monitoring Alternative-right Actors
|2017||Baldauf, J., Dittrich, M., Hermann, M., Kollberg, B., Lüdecke, R. and Rathje, J.||Report|
|Why do we use the term “toxic narrative”? The concept of “toxic communication” has been established in the English-speaking world since the 1960s. The term has also been borrowed in Germany to refer to linguistic behavior that has a negative influence on its environment. When we speak of toxic narratives, we are referring to accounts of the world that supply the pertinent “events” and interpretations for such communication.
It is necessary to process such narratives – decoding them, examining their core content and classifying them – in order to respond to them cogently and successfully. The present report is intended to make a contribution to this effort.
What are the Responsibilities of Tech Companies in an Age of International Terrorism?
|2016||Brimmer, E., Pielemeier, J., Brunner, L. and Walden, A.||Video|
|Cosponsored by the Software & Information Industry Association (SIIA) and the Greater Washington, DC Chapter of the Internet Society (ISOC-DC). This Policy Forum is convened by Dr. Susan Aaronson (IIEP/GWU) and Dr. Mark MacCarthy (SIIA).
- Professor Esther Brimmer, Professor of Practice of International Affairs, GWU, and former Assistant Secretary of State for International Organizations
- Jason Pielemeier, Business and Human Rights Section Lead, Bureau of Democracy, Human Rights, and Labor, U.S. Department of State
- Lisl Brunner, Policy and Learning Director, Global Network Initiative
- Alexandria Walden, Public Policy & Gov't Relations Counsel, Google
Moderator: Dr. Mark MacCarthy, Senior Vice President for Public Policy, Software & Information Industry Association
Are U.S. Based 'Jihadi' Inspired Terrorists Transitioning Away From Physical Training Camps To Online Training Camps?
|2018||Clayton, A.N.||MA Thesis|
|This thesis examines the backgrounds of twenty-five individuals who conducted a ‘jihad’-inspired terrorist attack within the United States between 2001 and 2016 to determine whether terrorists use physical training camps or online training camps as their main method of training to prepare for their attacks. The debate about the existence of online training camps is beneficial to the field of terrorism studies. However, the question of what constitutes an online training camp must first be answered before it can be determined whether terrorists are using them. This thesis proposes a comprehensive definition of the term ‘online training camp’ in an attempt to provide an analytical basis for the examination of U.S.-based terrorist training and to further the academic discussion. Additionally, the empirical examination of U.S.-based terrorists’ training backgrounds suggests that terrorists engage in supplemental self-directed online learning in combination with physical training, rather than abandoning physical training entirely.|
Mapping The Jihadist Information Ecosystem
|2019||Fisher, A., Prucha, N. and Winterbotham, E.||Report|
|Online disruption efforts generally aim to reduce the availability of jihadist content. Yet, the speed and agility of jihadist movements online – a multiplatform approach which a co-author of this paper has previously described as a ‘swarmcast’ – has allowed groups to evolve in response to disruption efforts and find new ways to distribute content. This paper describes a model of the flow of users between social media platforms and surface web pages to access jihadist content, using data obtained through innovative collection methods. The model provides an approximate picture of the jihadist information ecosystem and how multiple platforms are used to disseminate content.|
Generalized Gelation Theory Describes Onset of Online Extremist Support
|2018||Manrique, P., Zheng, M., Cao, Z., Restrepo, E. and Johnson, N.F.||Article|
|We introduce a generalized form of gelation theory that incorporates individual heterogeneity and show that it can explain the asynchronous, sudden appearance and growth of online extremist groups supporting ISIS (so-called Islamic State) that emerged globally post-2014. The theory predicts how heterogeneity impacts their onset times and growth profiles and suggests that online extremist groups present a broad distribution of heterogeneity-dependent aggregation mechanisms centered around homophily. The good agreement between the theory and empirical data suggests that existing strategies aiming to defeat online extremism under the assumption that it is driven by a few “bad apples” are misguided. More generally, this generalized theory should apply to a range of real-world systems featuring aggregation among heterogeneous objects.|
Following The Whack-a-Mole: Britain First's Visual Strategy From Facebook To Gab
|2019||Nouri, L., Lorenzo-Dus, N. and Watkin, A.L.||Report|
|The focus of this paper is on the extremist group Britain First. As such, it does not explore online terrorist activity but rather examines how a group regarded as extremist is subject to online sanctions. The removal of the extremist group Britain First from Facebook in March 2018 successfully disrupted the group’s online activity, leading them to have to start anew on Gab, a different and considerably smaller social media platform. The removal also resulted in the group having to seek new online followers from a much smaller, less diverse recruitment pool. This paper demonstrates the further impact of the group’s platform migration on their online strategy – particularly on their choice of images and the engagement levels generated through them. The paper puts forward a number of key recommendations, most importantly that social-media companies should continue to censor and remove hateful content.|
The Evolution of Online Violent Extremism In Indonesia And The Philippines
|Pro-Daesh (also known as the Islamic State of Iraq and Syria, ISIS) groups in Indonesia and the Philippines have come to rely on social media for propaganda, fundraising and disseminating instructional material, but in different ways. While Indonesian online extremism has deep roots, with local networks exploiting various online platforms over the past decade, extremist social media in the Philippines only really took off as a consequence of the May 2017 siege of the southern Philippine city of Marawi by pro-Daesh militants. This paper outlines the evolving use of online platforms by pro-Daesh groups in both countries and how this has enabled extremists to develop and strengthen their networks. Social media and encrypted chat apps have shaped the development of extremism in Indonesia and the Philippines in four main areas: branding, recruitment, fundraising, and the increasing role of women. For groups in the Philippines, direct communication with Daesh headquarters via Telegram facilitated their rebranding as the face of Daesh in Southeast Asia, rather than just a local insurgency. In both countries, social media facilitates vertical and horizontal recruitment, but not lone-actor terrorism. Extremist use of the internet for fundraising is still rudimentary, and sophisticated financial cybercrime remains virtually non-existent. In all these respects, women’s roles have become much more visible. For a long time, women were barred from extremist public spaces, let alone from taking an active role as combatants. Through social media, however, women are now able to play more active roles as propagandists, recruiters, financiers, and even suicide bombers. This paper briefly discusses government responses to online extremism, noting mixed results in Indonesia and the Philippines.
Indonesian authorities have undoubtedly been the more successful of the two, both in law enforcement and in engagement with the tech sector, but their counter-terrorism police now face the problem of how to use their powers judiciously in a democratic manner. The Philippines, meanwhile, is still at the starting line in dealing with online extremism, with the military more accustomed to removing threats than to trying to understand them.|
Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems
|2014||O’Callaghan, D., Greene, D., Conway, M., Carthy, J. and Cunningham, P.||VOX-Pol Publication|
|In addition to hosting user-generated video content, YouTube provides recommendation services, where sets of related and recommended videos are presented to users, based on factors such as co-visitation count and prior viewing history. This article is specifically concerned with extreme right (ER) video content, portions of which contravene hate laws and are thus illegal in certain countries, which are recommended by YouTube to some users. We develop a categorization of this content based on various schema found in a selection of academic literature on the ER, which is then used to demonstrate the political articulations of YouTube’s recommender system, particularly the narrowing of the range of content to which users are exposed and the potential impacts of this. For this purpose, we use two data sets of English and German language ER YouTube channels, along with channels suggested by YouTube’s related video service. A process is observable whereby users accessing an ER YouTube video are likely to be recommended further ER content, leading to immersion in an ideological bubble in just a few short clicks. The evidence presented in this article supports a shift of the almost exclusive focus on users as content creators and protagonists in extremist cyberspaces to also consider online platform providers as important actors in these same spaces.|
GIFCT Technical Approaches Working Group: Gap Analysis and Recommendations for deploying technical solutions to tackle the terrorist use of the internet
|2021||Tech Against Terrorism||Report|
|The objective of this report is to provide strategic guidance to tech companies, government policymakers, and solution providers in order to increase and improve investment in effective technical approaches that support platforms in tackling terrorist use of internet services while respecting human rights. While stopping short of providing a detailed development roadmap, the report aims to provide an overall framework for prioritizing effort, given the complex nature of the terrorist threat and the challenges tech companies face in adapting to this threat and to increased regulatory pressure from governments.|
Shedding Light On Terrorist And Extremist Content Removal
|2019||Vegt, I.V.D., Gill, P., Macdonald, S. and Kleinberg, B.||Report|
|Social media and tech companies face the challenge of identifying and removing terrorist and extremist content from their platforms. This paper presents the findings of a series of interviews with Global Internet Forum to Counter Terrorism (GIFCT) partner companies and law enforcement Internet Referral Units (IRUs). It offers a unique view on current practices and challenges regarding content removal, focusing particularly on human-based and automated approaches and the integration of the two.|
Competition And Innovation In A Hostile Environment: How Jabhat Al Nusra And Islamic State Moved To Twitter In 2013-2014
|2018||Weimann, G. J.||Article|
|Social media offer unprecedented opportunities for terrorist groups to spread their message and target specific audiences for indoctrination and recruitment. In 2013 and 2014, social media, in particular Twitter, overtook Internet forums as the preferred space for jihadist propaganda. This article examines Arabic statements by Jabhat al Nusra, Islamic State, and jihadist forum administrators and online activists to argue that, besides the greater ease of use of social media and the disruption and infiltration of the forums, the conflict between the jihadist groups accelerated the migration to social media and the building of a presence on Twitter that provided relative resilience to suspensions.|
Recommender systems and the amplification of extremist content
|2021||Whittaker, J., Looney, S., Reed, A. and Votta, F.||Article|
|Policymakers have recently expressed concerns over recommendation algorithms and their role in forming “filter bubbles”. This is a particularly pressing concern in the context of extremist content online, as these algorithms may promote extremist content at the expense of more moderate voices. In this article, we make two contributions to this debate. Firstly, we provide a novel empirical analysis of three platforms’ recommendation systems when interacting with far-right content. We find that one platform, YouTube, does amplify extreme and fringe content, while two, Reddit and Gab, do not. Secondly, we contextualise these findings within the regulatory debate. There are currently few policy instruments for dealing with algorithmic amplification, and those that do exist largely focus on transparency. We argue that policymakers have yet to fully understand the problems inherent in “de-amplifying” legal, borderline content, and that a co-regulatory approach may offer a route towards tackling many of these challenges.|
The Supremacy of Online White Supremacists – an Analysis of Online Discussions by White Supremacists
|2015||Wong, M.A., Frank, R. and Allsup, R.||Journal|
|A content analysis was conducted on five different white supremacist online forums to observe the discourse and types of activities occurring within them. In addition, web link analysis was conducted on the forums to identify external links posted and discussed by members. We found that members used the forums primarily for information provision, recruitment and networking. Based on these results, we discuss the implications that online hate speech has in offline settings, and the effects these activities have on Canadian citizens in light of the recent repeal of section 13 of the Canadian Human Rights Act (1985), the primary tool in Canada for dealing with hate speech and the other activities observed. This research provides novel insight into the sentiments and activities of the white supremacist movement online, a relatively unexplored venue of online hate speech and propaganda.|
Anders Behring Breivik’s use of the Internet and social media
|2013||Aasland Ravndal, J.||Article|
|Did the Internet play a decisive role in Anders Behring Breivik’s violent radicalization? It has proven difficult to understand if and how the Internet influences radicalization processes leading to political violence (Conway 2012). The Internet constitutes only one out of a wide range of factors with a potential influence on radical and violent behavior. We also lack detailed empirical data about the online lives of modern terrorists.
The case of the Norwegian far-right terrorist Anders Behring Breivik offers unique insights into the online activities of a terrorist who used the Internet and social media in almost every conceivable way. Not only did Breivik compile his 1,516-page compendium exclusively from Internet sources; before the attacks he was also an active discussant on a number of mainstream and extremist Internet forums, and a highly dedicated online gaming enthusiast.
This article reviews new sources on Breivik’s Internet adventures and road to militancy. It is primarily based on Breivik’s original posts and comments on various Internet discussion forums between 2002 and 2011. In addition, Breivik’s trial hearings introduced a wealth of new information regarding his use of the Internet. Finally, the article draws on a collection of Breivik’s private e-mails which was forwarded by Norwegian hackers to a Norwegian journalist six days after the terrorist attacks. A synthesis of the more than 7000 e-mails was later published as a book (Stormark 2012).
A key finding of this study is that Breivik likely never discussed his terrorist plans with anyone online. Moreover, his comments on various Internet forums do not stand out as particularly radical when compared to typical far-right online discourse. In other words, Norwegian security authorities would likely not have reacted to his online postings even if he had been monitored.
Breivik’s online posts also indicate that his critical views on Islam and socialism had been established long before the so-called counterjihad blogs were created. This means that these blogs may have played a less decisive role for Breivik’s early radicalization than assumed by many. Later on, however, these blogs certainly strengthened Breivik’s radical thinking, although they come across as far less radical than his own ideological statements after 22 July.
Breivik’s e-mail correspondence shows that he first and foremost wanted to become a professional author and publisher. He proposed establishing a so-called cultural conservative paper journal together with Norwegian bloggers he admired, who were also critical of Islam and multiculturalism. He also tried to impress the Norwegian blogger Peder Are Nøstvold Jensen, better known as Fjordman, with his book project, but was given the cold shoulder. The fact that he was rejected by several of the people he looked up to may have had a decisive influence on his violent radicalization.
Breivik gathered all the information needed to build his bomb online. He also financed the terrorist attacks through an online company, and used the Internet, in particular eBay, to buy materials such as body armor, weapon components and bomb ingredients. He likewise made systematic use of social media platforms such as Facebook and Twitter for propaganda purposes.
Finally, Breivik was an extremely dedicated online gaming enthusiast. Playing online games dominated his daily life during the years leading up to the attacks. One cannot dismiss theories that the extreme amount of time spent on playing online games while being isolated from friends and relatives may have had an impact on his disposition to engage in extreme violence.
In the following sections, this article describes Breivik’s use of the Internet and social media along four dimensions: (1) online radicalization, (2) online gaming, (3) online attack preparations, and (4) online propaganda.
Steganography and Terrorist Communications
|2017||Abdoun, A. and Ibrahim, J.||Journal|
|‘Steganography is the art and science of hiding the fact that communication is taking place by hiding information in other information’ (Johnson). According to unnamed “U.S. officials and experts” and “U.S. and foreign officials,” terrorist groups are “hiding maps and photographs of terrorist targets and posting instructions for terrorist activities on sports chat rooms, pornographic bulletin boards and other Web sites”. This paper shows the reader how an innocent-looking digital image can hide a deadly terrorist plan that could be set in motion with a click of a mouse, and it describes and discusses the process of secret communication known as steganography. The crucial argument here is that terrorists are unlikely to be employing digital steganography to facilitate secret intra-group communication, as has been claimed, mainly because the use of digital steganography by terrorists is both technically and operationally dubious. Most importantly, the paper argues that terrorists are likely to employ low-tech steganography, such as semagrams and null ciphers, instead. It also investigates the strengths of image steganography and the reasons why terrorists might rely on it.|
Islam, Jihad, and Terrorism in Post‐9/11 Arabic Discussion Boards
|This study analyzed the contents of three of the most popular Arabic‐language online message boards regarding the attacks of September 11, 2001 on the United States. Although terrorists claimed that the attacks were committed in the name of Islam, those who posted messages on all three forums rejected this claim. More than 43% of the messages condemned the attacks as a criminal act of terrorism that contradicts the core teachings of Islam. Some 30% saw some justification behind the attacks, even if they felt sorry for the victims and their families. However, those participants viewed the attacks as a political, rather than a religious, issue.|
Extremism in the Digital Era
|This book constitutes a journey into the obscure field of sectarian-guided discourses of radical Islamist groups. It provides new insights into the ideological mechanisms utilized by such organizations to incite sectarian conflicts and recruit local and foreign guardians to their alleged cause. This book examines diverse aspects and dimensions of the discourses of Sunni-based ISIS and Shia-based al-Hashd al-Shaabi and explores manipulative and ideological discursive strategies utilized by media outlets associated with these groups. It delves into linguistic and contextual activities, implicit and explicit messages within the discourses of various media outlets operating in the heart of the Middle East. It also scrutinizes and explains aspects of politicization, religionization and sectarianization within the media discourse of terrorist groups in the digital era.|
Digital Terrorism and Hate 2012: The Power of Social Networking in the Digital Age
|2012||Cooper, A. and Eaton, R.||Report|
|Analysis of 'digital terrorism' and hate on the Internet|
Understanding ISIS: Myths and Realities
|Published on May 29, 2015
This video was streamed live by DMAPLab MIGS on 26 May 2015.
The Islamic State, also known as ISIS or ISIL, has become a household name because it films its atrocities and posts them online thanks to social media platforms such as Twitter and YouTube.
Western countries and Arab states appear to be united in seeing the group as a threat to international peace and security. But what do we really know about ISIS? What should the international community do to cripple ISIS on the battlefield?
Max Abrahms, professor of political science at Northeastern University and member of the Council on Foreign Relations, offers a unique perspective on the relationship between the Islamic State’s propaganda and its success as an organisation.|
Hashtag Palestine 2018: An Overview of Digital Rights Abuses of Palestinians
|The report notes that in 2018, the Israeli government continued to systematically target Palestinians and the right to freedom of expression via the Internet. In the year 2018, Israeli authorities arrested around 350 Palestinians in the West Bank on charges of “incitement” because of their publications on social media. 7amleh – The Arab Center for the Advancement of Social Media launched its annual report on Palestinian digital rights for the year 2018. The report details violations by governments, authorities, international technology companies and Palestinian society.|