Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications on various aspects of violent online political extremism.

Our searchable database contains material in a variety of different formats including downloadable PDFs, videos, and audio files comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at and we will consider adding it to the Library. We also aim to make the Library a truly inclusive multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

The Online Regulation Series | Brazil
2020 Tech Against Terrorism Report
Brazil represents a major market for online platforms. It is the leading country in South America in terms of internet use, and a key market for Facebook and WhatsApp. WhatsApp’s popularity, and the online disinformation campaigns that have been coordinated on the platform, are essential to understanding Brazil’s approach to online regulation. The messaging app has been accused of being used “for the dissemination of fake news”, whilst critics of the “fake news” bill have said that the app’s existing limitations on message forwarding and group size served as a “standard” for new regulation in the country.
EEG distinguishes heroic narratives in ISIS online video propaganda
2020 Yoder, K.J., Ruby, K., Pape, R. and Decety, J. Article
The Islamic State (ISIS) was uniquely effective among extremist groups in the Middle East at recruiting Westerners. A major way ISIS accomplished this was by adopting Hollywood-style narrative structures for their propaganda videos. In particular, ISIS utilized a heroic martyr narrative, which focuses on an individual’s personal glory and empowerment, in addition to traditional social martyr narratives, which emphasize duty to kindred and religion. The current work presented adult participants (n = 238) video clips from ISIS propaganda which utilized either heroic or social martyr narratives and collected behavioral measures of appeal, narrative transportation, and psychological dispositions (egoism and empathy) associated with attraction to terrorism. Narrative transportation and the interaction between egoism and empathy predicted video recruitment appeal. A subset of adults (n = 80) underwent electroencephalographic (EEG) measurements while watching a subset of the video-clips. Complementary univariate and multivariate techniques characterized spectral power density differences when perceiving the different types of narratives. Heroic videos show increased beta power over frontal sites, and globally increased alpha. In contrast, social narratives showed greater frontal theta, an index of negative feedback and emotion regulation. The results provide strong evidence that ISIS heroic narratives are specifically processed, and appeal to psychological predispositions distinctly from other recruitment narratives.
Migration Moments: Extremist Adoption of Text‑Based Instant Messaging Applications
2020 Clifford, B. Report
This report examines the patchwork of online text‑based instant messaging applications preferred by jihadist and far‑right extremist groups, with a focus on charting their technical affordances and their host companies’ stances on user privacy, security and regulation. To this end, the report analyses six online messaging services (BCM, Gab Chat, Hoop Messenger, Rocket.Chat and TamTam) that have been or may be used in conjunction with Telegram by extremist groups.
Open letter on behalf of civil society groups regarding the proposal for a Regulation on Terrorist Content Online
2020 Civil Liberties Union for Europe Letter
The undersigned human rights and digital rights organizations call on the participants of the trialogue meeting on the Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online to comply with the Charter of Fundamental Rights and to discuss further amendments that fully respect the freedom of expression, freedom of information and personal data protection of internet users.
Krise und Kontrollverlust: Digitaler Extremismus im Kontext der Corona-Pandemie
2020 Guhl, J. and Gerster, L. Report
This report analyses the networks and narratives of German-speaking far-right, far-left and Islamist extremist actors on mainstream and alternative social media platforms, as well as on extremist websites, in the context of the Corona pandemic. Our findings show that extremists from Germany, Austria and Switzerland have been able to extend their reach since the introduction of lockdown measures.
The Online Regulation Series | Morocco
2020 Tech Against Terrorism Report
Morocco’s online regulatory framework consists of different laws and codes that strive to limit the spread of content that can pose a threat to the Kingdom’s “integrity, security and public order”. Central to this framework are the 2003 Anti-Terrorism Law, passed in the aftermath of the 2003 Casablanca bombings, and the 2016 Press Code, which lays out limitations on journalistic publications and public speech. However, the existing regulatory framework is not explicitly clear regarding implications for tech platforms and the government’s powers to filter the online space – something which has been criticised by civil society. According to Freedom House, the government also resorts to “extralegal means” to remove content that it deems “controversial or undesirable” by pressuring media outlets and online figures to delete such content.
The Online Regulation Series | Kenya
2020 Tech Against Terrorism Report
Kenya has “increasingly sought to remove online content”, both through requests and increased regulation, that it deems “immoral” or “defamatory”. Following terrorist attacks on civilian targets in recent years, the country has heightened its efforts around counterterrorism as well as online content regulation. Many of Kenya’s laws have been criticised by civil society for their “broadness”, “vagueness”, and potential “detrimental implications for freedom of expression”. A proposed social media bill, if enacted, could significantly affect social media companies and their users in Kenya, for example through strict regulations on user content.
Covid-19 : la réponse des plateformes en ligne face à l’ultradroite
2020 Deverell, F. and Janin, M. Report
Terrorists and extremists are above all manipulators who seek to exploit the stress factors present in our societies. The Covid-19 pandemic, the lockdown measures that followed from it, and the related spread of online mis- and disinformation therefore provided ideal conditions for certain malicious actors to exploit. Far-right supporters, in particular, quickly seized the opportunity offered by the Covid-19 crisis to anchor their ideas in public debate and to recruit new members. Although it manifested itself mainly online, this exploitation was not limited to the virtual sphere and materialised in real-world events, for example when violent extremists joined protests against lockdowns and health restrictions, and when plans for terrorist attacks were thwarted. Whilst the tech sector responded promptly to the wave of disinformation, the rapid changes made to online content moderation policies already have significant consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, in particular social networks and content-hosting platforms, reacted especially quickly to the spread of Covid-19-related disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right supporters exploited the instability caused by the Covid-19 pandemic, and what the tech sector’s response means for online freedom of expression and platform accountability.
Covid-19: far right violent extremism and tech platforms’ response
2020 Deverell, F. and Janin, M. Report
Terrorists and violent extremists are manipulators seeking to exploit stress factors in our societies. The Covid-19 pandemic, its ensuing lockdown measures, as well as the spread of related mis- and disinformation online, thus represented an almost ideal opportunity for malevolent actors to exploit. Far-right violent extremists, in particular, quickly jumped on the opportunity offered by the Covid-19 crisis to anchor their hateful ideas into the mainstream and recruit new members. Whilst manifesting itself mostly online, this exploitation was not limited to the online sphere. It materialised in real world events as violent extremists blended themselves into anti-lockdown protests and as terrorists’ plans were eventually thwarted. Whilst the tech sector promptly responded to the wave of mis- and disinformation, the rapid changes in content moderation policy bear important consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, especially social media and content-hosting platforms, was particularly quick to respond to the spread of Covid-19 disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right violent extremists exploited the instability caused by the Covid-19 pandemic and what the tech sector’s response means for online freedom of expression and platforms’ accountability.
A Better Web: Regulating to Reduce Far-Right Hate Online
2020 HOPE not hate Report
Though aiming to remedy genuine harms, government regulation of our online lives also raises legitimate concerns over privacy and freedom of expression. We must address online harms whilst ensuring harms are not also inflicted through unfairly infringing on people’s freedoms. HOPE not hate recognises the importance of this balancing act, and encourages a form of regulation of platforms that places democratic rights front-and-centre. In a world increasingly infused with the web, the significance of this legislation cannot be overstated and it is undoubtedly the case that getting it right will take rigorous reflection. To that end, we encourage debate of the recommendations proposed in this report.
Cross-national level report on digital sociability and drivers of self-radicalisation in Europe
2020 DARE: Dialogue about Radicalisation and Equality Report
In this report, we present an empirical cross-national study of the activities and interactions of supporters of right-wing extremism (RWE) and Islamist extremism (ISE) on Twitter. The study is based on ethnographic and automatic text and network analyses of data from Belgian, British, Dutch, French, German, Greek and Norwegian female and male Twitter accounts.
Stop the virus of disinformation: the risk of malicious use of social media during COVID-19 and the technology options to fight it
2020 United Nations Interregional Crime and Justice Research Institute (UNICRI) Report
This report describes how terrorist, violent extremist and organized criminal groups are trying to take advantage of the Coronavirus disease (COVID-19) pandemic to expand their activities and jeopardize the efficacy and credibility of response measures by governments.
Content Regulation and Human Rights: Analysis and Recommendations
2020 Global Network Initiative Report
The multistakeholder Global Network Initiative (GNI) reviewed more than twenty recent governmental initiatives that claim to address various forms of online harm related to user-generated content — a practice we refer to broadly as “content regulation.” We focused on proposals that could shift existing responsibilities and incentives related to user-generated content. Our analysis illustrates the ways that good governance and human rights principles provide time-tested guidance for how laws, regulations, and policy actions can be most appropriately and effectively designed and carried out. Because content regulation is primarily focused on and likely to impact digital communication and content, we use international human rights principles related to freedom of expression and privacy as our primary lens.
Mitigating the Impact of Media Reporting of Terrorism: Case Study of the #BringBackOurGirls Campaign
2020 Adebiyi, K. Report
This report looks at journalism and social media reporting in Nigeria. The author raises key implications in journalistic reporting by looking at the 2014 #BringBackOurGirls social media campaign. This study importantly takes both a local and a global perspective on Nigeria’s media reporting.

This report is part of a wider project on “Mitigating the Impact of Media Reporting of Terrorism”, led by the International Centre for Counter-Terrorism (ICCT) – The Hague and funded by the EU Devco. The project aims to produce evidence-based guidance and capacity-building outputs based on original, context-sensitive research into the risks and opportunities in media reporting of terrorism and terrorist incidents. The role of media reporting on terrorism has been under-investigated and is an underutilised dimension of a holistic counter-terrorism strategy. How the media reports on terrorism has the potential to affect counter-terrorism (CT) perspectives positively or negatively.
Examining the Developmental Pathways of Online Posting Behavior in Violent Right-Wing Extremist Forums
2020 Scrivens, R. Article
Many researchers, practitioners, and policymakers are concerned about online communities that are known to facilitate violent right-wing extremism, but little is empirically known about these digital spaces in general and the developmental posting behaviors that make up these spaces in particular. In this study, group-based trajectory modeling—derived from a criminal career paradigm—was used to identify posting trajectories found in the open-access sections of the Iron March and Fascist Forge forums, both of which have gained notoriety for their members’ online advocacy of violence and acts of violence carried out by them. Multinomial logistic regression and analysis of variance were then used to assess whether posters’ time of entry into the violent forums predicted trajectory group assignment. Overall, the results highlight a number of similarities and differences in posting behaviors within and across platforms, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. We conclude with a discussion of the implications of this analysis, followed by a discussion of study limitations and avenues for future research.
The Online Regulation Series | Turkey
2020 Tech Against Terrorism Report
Online content regulation in Turkey is characterised by extensive removal of material that has resulted in a large number of Turkish and international websites being blocked in recent years. Further, the Turkish government recently introduced a Social Media Bill, implementing a wide range of new regulations and steep penalties for social media companies, which critics say poses further threats to online freedom of expression in the country.
The Online Regulation Series | The United Kingdom
2020 Tech Against Terrorism Report
The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, aiming to make the UK “the safest place in the world to be online” by countering various online harms ranging from cyberbullying to terrorist content. This is yet to come into effect, but the UK has approved an interim regime to fulfil obligations under the European Union Directive, which the UK needs to comply with during Brexit negotiations. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
Tracking far-right extremist searches in Bosnia & Herzegovina
2020 Moonshot CVE Report
Between 20 March and 14 September 2020, Moonshot investigated online far-right extremist searches in Bosnia & Herzegovina by analysing at-risk audience engagement with far-right extremist themes.

Our results show that a significant number of searches were for far-right extremist themes relating to the region’s history of ethnic conflict, as well as searches for international far-right memes and narratives. Interestingly, we found that at-risk users primarily search for and engage with far-right extremist terms in the English language, seeking out terms which have their roots in the region but are now used internationally, such as ‘Serbia Strong’ and ‘Remove Kebab’.
The Online Regulation Series | Germany
2020 Tech Against Terrorism Report
Germany has an extensive framework for regulating online content, particularly with regards to hate speech and violent extremist and terrorist material. Experts also note that Germany’s regulatory framework has to some extent helped set the standard for the European, and possibly global, regulatory landscape.
Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
2020 Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G. and West, R. Report
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.