Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author's—please contact us at and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

The Role of Propaganda in Violent Extremism and how to Counter it
2017 Ritzmann, A. Report
The 8th Euromed Survey conducted by the European Institute of the Mediterranean touches upon a number of important and complex issues related to violent extremism in the Euro-Mediterranean region, including the question of the context and drivers through which violent extremism can prosper. Echoing some of the results, this article looks into propaganda as a tool of extremist ideologies and how to counter it.
Fighting Hate Speech And Terrorist Propaganda On Social Media In Germany
2019 Ritzmann, A. Report
Lessons learned after one year of the NetzDG law.
The EU Digital Services Act (DSA): Recommendations For An Effective Regulation Against Terrorist Content Online
2020 Ritzmann, A., Farid, H. and Schindler, H-J. Policy
In 2020, the Counter Extremism Project (CEP) Berlin carried out a sample analysis to test the extent to which YouTube, Facebook and Instagram block “manifestly illegal” terrorist content upon notification. Our findings raise doubts that the currently applied compliance systems of these companies achieve the objectives of making their services safe for users. As a result, CEP proposed requirements for effective and transparent compliance regimes with a focus on automated decision-making tools and lessons learned from the regulation of the financial industry. This Policy Paper combines all the relevant lessons learned and gives concrete recommendations on how to make the DSA also an effective regulation against terrorist content online.
Hate Speech Detection on Twitter: Feature Engineering v.s. Feature Selection
2018 Robinson, D., Zhang, Z. and Tepper, J. Article
The increasing presence of hate speech on social media has drawn significant investment from governments, companies, and empirical research. Existing methods typically use a supervised text classification approach that depends on carefully engineered features. However, it is unclear if these features contribute equally to the performance of such methods. We conduct a feature selection analysis in such a task using Twitter as a case study, and show findings that challenge conventional perception of the importance of manual feature engineering: automatic feature selection can drastically reduce the carefully engineered features by over 90% and selects predominantly generic features often used by many other language related tasks; nevertheless, the resulting models perform better using automatically selected features than carefully crafted task-specific features.
Grading The Quality Of ISIS Videos: A Metric For Assessing The Technical Sophistication Of Digital Video Propaganda
2018 Robinson, M .D. & Dauber, C.E. Article
This article offers a method for systematically grading the quality of Islamic State of Iraq and Syria (ISIS) videos based on technical production criteria. Using this method revealed moments when ISIS production capacity was severely debilitated (Fall 2015) and when it began to rebuild (Spring 2016), which the article details. Uses for this method include evaluating propaganda video output across time and across groups, and the ability to assess kinetic actions against propaganda organizations. This capacity will be critical as Islamic State media production teams are pushed out of its territory as the State collapses.
Mobilization and Radicalization Through Persuasion: Manipulative Techniques in ISIS’ Propaganda
2018 Rocca, N.M. Journal
This paper explores the recent findings of empirical research concerning Islamic State of Iraq and al-Sham's (ISIS') communication and synthesizes them within the theoretical frame of propaganda's concepts and practices. Many authors have demonstrated how ISIS propaganda campaigns, in particular those deployed in cyberspace, proved effective in recruiting new members in both Western and Muslim countries. However, while most of the research has focused on the contents and narratives of ISIS's communication, few works have considered the methods and techniques used to actually deliver them. This is a regrettable omission, given that studies in communication and neuroscience demonstrate that not only what is communicated but also the techniques adopted have important consequences for the receiver's perceptions and behavior. Therefore, this article analyzes in particular the findings of research carried out by communication scholars, social psychologists, and neuro-cognitive scientists on ISIS' persuasive communication techniques and demonstrates their importance for security studies' analysis of ISIS' propaganda. It argues that ISIS' success in mobilizing people and making them prone to violent action relies on—among other factors—its knowledge and exploitation of sophisticated methods of manipulating perceptions and influencing behavior. This, in turn, demonstrates ISIS' possession of state-like soft power capabilities effectively deployed in propaganda campaigns, and therefore calls for a more complex understanding of its agency.
The homogeneity of right-wing populist and radical content in YouTube recommendations
2020 Röchert, D., Weitzel, M. and Ross, B. Article
The use of social media to disseminate extreme political content on the web, especially right-wing populist propaganda, is no longer a rarity in today's life. Recommendation systems of social platforms, which provide personalized filtering of content, can contribute to users forming homogeneous cocoons around themselves. This study investigates YouTube's recommendations system based on 1,663 German political videos in order to analyze the homogeneity of the related content. After examining two datasets (right-wing populist and politically neutral videos), each consisting of ten initial videos and their first and second level recommendations, we show that there is a high degree of homogeneity of right-wing populist and neutral political content in the recommendation network. These findings offer preliminary evidence on the role of YouTube recommendations in fueling the creation of ideologically like-minded information spaces.
A Spatial Analysis Of Boko Haram And Al-Shabaab References In Social Media In Sub-Saharan Africa
2014 Rodriguez Jr., R.M. MA Thesis
This thesis examines the role that social media can play in showing how a terrorist organization can shape people's conversations on Twitter. The two groups on which this thesis focuses are Boko Haram and Al-Shabaab. We present a new approach to analyzing terrorist organizations and the kinds of impact they may have across different cultures. In researching and writing this thesis, we first conducted a literature search on the social media phenomenon and what social media can provide. We then build on that research by using social media to identify the types of impact terrorist organizations may have on cultures, as well as how a terrorist event can affect people on social media. This thesis aims to expand on previous research on the academic uses of social media, and to add to the expanding role that social media can play for intelligence purposes.
Jihadism Online: A Study of How al-Qaida and Radical Islamist Groups Use the Internet for Terrorist Purposes
2006 Rogan, H. Report
The Internet is of major importance to the global jihadist movement today. It facilitates ideological cohesion and network-building within a geographically scattered movement, and all levels of the jihadist network are present on the Internet. The jihadist websites differ enormously in nature and are run relatively independently of each other. However, many sites are inter-related in the sense that they frequently redistribute and circulate the same material. This indicates that despite a large number of sites, the scope of new material that appears on these sites every day is not necessarily very large. Concerning the functions of the jihadist Internet, it fulfils different objectives, most importantly of communicative character. The much feared cyber terrorism, i.e. destructive attack on information systems, does not, so far, seem to be a main objective for the jihadist use of the Internet.
Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media
2020 Rogers, R. Article
Extreme, anti-establishment actors are being characterized increasingly as ‘dangerous individuals’ by the social media platforms that once aided in making them into ‘Internet celebrities’. These individuals (and sometimes groups) are being ‘deplatformed’ by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for such offences as ‘organised hate’. Deplatforming has prompted debate about ‘liberal big tech’ silencing free speech and taking on the role of editors, but also about the questions of whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming ‘works’ and how the deplatformed use Telegram. It discusses the effects of deplatforming for extreme Internet celebrities, alternative and mainstream social media platforms and the Internet at large. It also touches upon how social media companies’ deplatforming is affecting critical social media research, both into the substance of extreme speech as well as its audiences on mainstream as well as alternative platforms.
Islamic State Propaganda and Attacks: How Are They Connected?
2019 Rosenblatt, N., Winter, C. and Basra, R. Article
What is the relationship between the words and deeds of a terrorist group? Despite frequent speculation in media and policy circles, few studies have tested this relationship. This study aims to verify a potential correlation between the volume of propaganda produced by Islamic State (IS)—including statements by the group’s leadership—and the number of attacks carried out in its name. We examine this issue by comparing two datasets: one of all official propaganda produced by the Islamic State in 2016, and another of the completed, failed, and disrupted plots carried out by the group and its supporters in Europe in the same year. We find no strong and predictable correlation between the volume of propaganda Islamic State produces and the number of attacks the group and its supporters carry out. There is no regular rise in IS propaganda output before or after its attacks. In particular, there is no regular rise in attacks after leadership statements. However, the results may have identified differences in how IS central and regional media offices respond to attacks. The findings suggest that rather than merely looking at the volume of IS propaganda, it is necessary to also examine its content. As such, the deliberately broad premise of this study is intended as the first in a series of papers examining the potential relationship between IS propaganda and IS attacks.
Feature extraction and selection for automatic hate speech detection on Twitter
2019 Routar de Sousa, J. G. MA Thesis
In recent decades, information technology has undergone an explosive evolution, revolutionizing the way communication takes place: on the one hand enabling rapid, easy and almost costless digital interaction, but on the other easing the adoption of more aggressive communication styles. It is crucial to regulate and attenuate these behaviors, especially in the digital context, where they emerge at a fast and uncontrollable pace and often cause severe damage to their targets. Social networks and other entities tend to channel their efforts into minimizing hate speech, but the way each one handles the issue varies. Thus, in this thesis, we investigate the problem of hate speech detection in social networks, focusing directly on Twitter. Our first goal was to conduct a systematic literature review of the topic, covering both theoretical and practical approaches. We exhaustively collected and critically summarized mostly recent literature addressing the topic, highlighting popular definitions of hate, common targets, and different manifestations of such behaviors. Most approaches tackle the problem by adopting machine learning, focusing mostly on text mining and natural language processing techniques applied to Twitter. Other authors present novel features addressing the users themselves. Although most recent approaches target Twitter, we noticed there were few tools available that address this social network platform or tweets in particular, considering their informal and specific syntax. Thus, our second goal was to develop a tokenizer able to split tweets into their corresponding tokens, taking into account all their particularities. We performed two binary hate identification experiments, achieving the best f-score in one of them using our tokenizer. We used our tool in the experiments conducted in the following chapters.
As our third goal, we proposed to assess which text-based features and preprocessing techniques would produce the best results in hate speech detection. During our literature review, we collected the most common preprocessing, sentiment and vectorization features and extracted the ones we found suitable for Twitter in particular. We concluded that preprocessing the data is crucial to reduce its dimensionality, which is often a problem in small datasets. Additionally, the f-score also improved. Furthermore, analyzing the tweets' semantics and extracting their character n-grams were the tested features that best improved the detection of hate, enhancing the f-score by 1.5% and the hate recall by almost 5% on unseen testing data. On the other hand, analyzing the tweets' sentiment did not prove helpful. Our final goal derived from a lack of user-based features in the literature. Thus, we investigated a set of features based on profiling Twitter users, focusing on several aspects, such as the gender of authors and mentioned users, their tendency towards hateful behaviors, and other characteristics related to their accounts (e.g. number of friends and followers). For each user, we also generated an ego network and computed graph-related statistics (e.g. centrality, homophily), achieving significant improvements: f-score and hate recall increased by 5.7% and 7%, respectively.
Mining Pro-ISIS Radicalisation Signals from Social Media Users
2016 Rowe, M. and Saif, H. Article
The emergence and actions of the so-called Islamic State of Iraq and the Levant (ISIL/ISIS) have received widespread news coverage across the world, largely due to their capture of large swathes of land across Syria and Iraq, and the publishing of execution and propaganda videos. Enticed by such material published on social media and attracted to the cause of ISIS, there have been numerous reports of individuals from European countries (the United Kingdom and France in particular) moving to Syria and joining ISIS. In this paper our aim is to understand what happens to Europe-based Twitter users before, during, and after they exhibit pro-ISIS behaviour (i.e. using pro-ISIS terms, sharing content from pro-ISIS accounts), characterising such behaviour as radicalisation signals. We adopt a data-mining oriented approach to computationally determine time points of activation (i.e. when users begin to adopt pro-ISIS behaviour), characterise divergent behaviour (both lexically and socially), and quantify influence dynamics as pro-ISIS terms are adopted. Our findings show that: (i) of 154K users examined, only 727 exhibited signs of pro-ISIS behaviour, and the vast majority of those 727 users became activated with such behaviour during the summer of 2014, when ISIS shared many beheading videos online; (ii) users exhibit significant behaviour divergence around the time of their activation; and (iii) social homophily has a strong bearing on the diffusion process of pro-ISIS terms through Twitter.
Youth Online and at Risk: Radicalisation Facilitated by the Internet
2011 Royal Canadian Mounted Police Report
While the internet provides access to rich educational experiences, great entertainment, and the chance to connect with friends around the clock, it also creates a number of risks that young people, parents, and guardians need to be aware of. There are the commonly known concerns of identity theft, online predators, and cyber-bullying, but there is another issue that we need to collectively work to address: radicalisation to violence. This informational resource strives to increase awareness of how the internet is being used to radicalise and recruit youth in North America.
Hearing: Countering the Virtual Caliphate: The State Department's Performance
2016 Royce, E. Video
The United States is losing the information war to terrorists like ISIS and Hezbollah. Earlier this year, the administration rebranded the office responsible for counter messaging, but little seems to have changed. A strong, effective information offensive to counter the violent ideology being pushed by ISIS and other terrorists is long overdue. This hearing will give members an opportunity to press the administration’s top public diplomacy official on how the U.S. can be more effective.
Is IS Online Chatter Just Noise?: An Analysis of the Islamic State Strategic Communications
2020 Royo-Vela, M. and McBee, K.A. Article
The objective of this research is to analyze the potential use of strategic communication (specifically, strategic brand management and online communications directed at a foreign target) by the Islamic State (IS). For this purpose, a review of official IS online media releases was carried out to determine whether they reflect the characteristics, components, or tools of strategic communication. In this pursuit, a content analysis of a purposive sample of 381 events—photographs, infographics, English-language written content, and videos—was carried out using two independent judges. Statistically relevant results substantiate claims that the IS uses communication strategies and tactics. Descriptive and inferential statistics indicate that Delegitimize Opponents, Utopia-Religion, and Military are the three most commonly used themes, in line with the reported IS identity. The IS also deploys a breadth of contents, targets, media outlets, and formats in its online communication. As far as the authors are aware, this is the first content analysis from a strategic communication perspective used to interpret potential branding by the IS. This study is timely and important: it lends credibility to the use of communication and marketing terminology by terrorism experts, and it bridges brand management, strategic communication, and terrorism studies, identifying potential targets, messages, channels and tools for counter-arguing and positioning.
The Advocacy of Terrorism on the Internet: Freedom of Speech Issues and the Material Support Statutes
2016 Ruane, KA. Report
The development of the Internet has revolutionized communications. It has never been easier to speak to wide audiences or to communicate with people that may be located more than half a world away from the speaker. However, like any neutral platform, the Internet can be used to many different ends, including illegal, offensive, or dangerous purposes. Terrorist groups, such as the Islamic State (IS, also referred to as ISIS or ISIL), Al Qaeda, Hamas, and Al Shabaab, use the Internet to disseminate their ideology, to recruit new members, and to take credit for attacks around the world. In addition, some people who are not members of these groups may view this content and could begin to sympathize with or to adhere to the violent philosophies these groups advocate. They might even act on these beliefs. Several U.S. policymakers, including some Members of Congress, have expressed concern about the influence that terrorist advocacy may have upon those who view or read it. The ease with which such speech may be disseminated over the Internet, using popular social media services, has been highlighted by some observers as potentially increasing the ease by which persons who might otherwise have not been exposed to the ideology or recruitment efforts of terrorist entities may become radicalized. These concerns raise the question of whether it would be permissible for the federal government to restrict or prohibit the publication and distribution of speech that advocates the commission of terrorist acts when that speech appears on the Internet. Significant First Amendment freedom of speech issues are raised by the prospect of government restrictions on the publication and distribution of speech, even speech that advocates terrorism. This report discusses relevant precedent concerning the extent to which advocacy of terrorism may be restricted in a manner consistent with the First Amendment’s Freedom of Speech Clause. 
The report also discusses the potential application of the federal ban on the provision of material support to foreign terrorist organizations (FTOs) to the advocacy of terrorism, including as it relates to the dissemination of such advocacy via online services like Twitter or Facebook.
Multimodal Classification of Violent Online Political Extremism Content with Graph Convolutional Networks
2017 Rudinac, S., Gornishka, I. and Worring, M. VOX-Pol Publication
In this paper we present a multimodal approach to categorizing user posts based on their discussion topic. To integrate heterogeneous information extracted from the posts, i.e. text, visual content and the information about user interactions with the online platform, we deploy graph convolutional networks that were recently proven effective in classification tasks on knowledge graphs. As the case study we use the analysis of violent online political extremism content, a challenging task due to a particularly high semantic level at which extremist ideas are discussed. Here we demonstrate the potential of using neural networks on graphs for classifying multimedia content and, perhaps more importantly, the effectiveness of multimedia analysis techniques in aiding the domain experts performing qualitative data analysis. Our conclusions are supported by extensive experiments on a large collection of extremist posts. This research was produced with the aid of VOX-Pol Research Mobility Programme funding and supervision by VOX-Pol colleagues at Dublin City University.
“Electronic Jihad”: The Internet as Al Qaeda's Catalyst for Global Terror
2016 Rudner, M. Journal
The Internet has emerged as a key technology for Al Qaeda and other jihadist movements waging their so-called electronic jihad across the Middle East and globally, with digital multiplier effects. This study examines the evolving doctrine of "electronic jihad" and its impact on the radicalization of Muslims in Western diaspora communities. The study describes Internet-based websites that served as online libraries and repositories for jihadist literature, as platforms for extremist preachers, and as forums for radical discourse. Furthermore, the study details how Internet connectivity has come to play a more direct operational role for jihadi terrorist-related purposes, most notably for inciting prospective cadres to action; for recruiting jihadist operatives and fighters; for providing virtual training in tactical methods and the manufacture of explosives; for terrorism financing; and for actual planning and preparation of specific terror attacks. While contemporary jihadist militants may be shifting from the World Wide Web to social media such as Facebook, YouTube, and Twitter for messaging and communications, Internet-based electronic jihad remains a significant catalyst for promoting jihadist activism and for facilitating terrorist operations.