Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of different formats including downloadable PDFs, videos, and audio files comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author's—please contact us at and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

The Online Regulation Series | Pakistan
2020 Tech Against Terrorism Report
Over the last five years, Pakistan has introduced various measures aimed at regulating terrorist content online, including the 2020 Citizen Protection (Against Online Harm) Rules, which directly target content posted on social media, and the 2016 Prevention of Electronic Crimes Act, which prohibits use of the internet for terrorist purposes.

These regulations supplement the Anti-Terrorism Act of 1997 (ATA), which provides the baseline legal framework for counterterrorism measures in the country. The ATA does not specifically target terrorist use of the internet; however, under section 11W it treats the dissemination of digital content “which glorifies terrorists or terrorist activities” as an offence. The same section also prohibits the dissemination of content that incites hatred or “gives projection” to a terrorist actor.
The Online Regulation Series | Australia
2020 Tech Against Terrorism Report
Harmful and illegal online content has been regulated in Australia since the late 1990s via the Broadcasting Services Amendment (Online Services) Act of 1999, which established the legislative framework for online content regulation in the country.
The Online Regulation Series | The United States
2020 Tech Against Terrorism Report
Online regulation and content moderation in the United States are defined by the First Amendment right to freedom of speech and by Section 230 of the Communications Decency Act 1996, which establishes a unique level of immunity from legal liability for tech platforms. Section 230 has broadly shaped the innovation of the modern internet, with effects extending well beyond the US. Recently, however, the Trump Administration issued an executive order directing independent rule-making agencies to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices. This shook the online regulation framework and resulted in a wave of proposed bills and Section 230 amendments from both government and civil society.
The Online Regulation Series | Canada
2020 Tech Against Terrorism Report
Canada’s approach to online regulation has, so far, been characterised by its support for tech sector self-regulation as opposed to government-led regulation of online content. However, concerns over foreign interference in Canadian politics and over online hate speech and extremism have led to public discussion of introducing legislation on harmful online content and of making tech companies liable for content shared on their platforms.
Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
2020 Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G. and West, R. Report
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
The Online Regulation Series | The United Kingdom
2020 Tech Against Terrorism Report
The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, aiming to make the UK “the safest place in the world to be online” by countering various online harms ranging from cyberbullying to terrorist content. This has yet to come into effect, but the UK has approved an interim regime to fulfil obligations under the relevant European Union Directive, with which the UK must comply during Brexit negotiations. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
The Online Regulation Series | Germany
2020 Tech Against Terrorism Report
Germany has an extensive framework for regulating online content, particularly with regards to hate speech and violent extremist and terrorist material. Experts also note that Germany’s regulatory framework has to some extent helped set the standard for the European, and possibly global, regulatory landscape.
The Online Regulation Series | France
2020 Tech Against Terrorism Report
France is, alongside New Zealand, an initiator of the Christchurch Call to Action to eliminate terrorist and violent extremist content online. Prior to the Christchurch Call, France had made tackling terrorist use of the internet a key pillar of its counterterrorism policy,[1] supporting the EU proposal on Preventing the Dissemination of Terrorist Content Online, including the requirement for tech platforms to remove flagged terrorist content within one hour.
Migration Moments: Extremist Adoption of Text‑Based Instant Messaging Applications
2020 Clifford, B. Report
This report examines the patchwork of online text‑based instant messaging applications preferred by jihadist and far‑right extremist groups, with a focus on charting their technical affordances and their host companies’ stances on user privacy, security and regulation. To this end, the report analyses six online messaging services (BCM, Gab Chat, Hoop Messenger, Rocket.Chat and TamTam) that have been or may be used in conjunction with Telegram by extremist groups.
The Online Regulation Series | Morocco
2020 Tech Against Terrorism Report
Morocco’s online regulatory framework consists of different laws and codes that strive to limit the spread of content that can pose a threat to the Kingdom’s “integrity, security and public order”. Central to this framework are the 2003 Anti-Terrorism Law, passed in the aftermath of the 2003 Casablanca bombings, and the 2016 Press Code, which lays out limitations on journalistic publications and public speech. However, the existing regulatory framework is not explicitly clear regarding its implications for tech platforms and the government’s powers to filter the online space – something which has been criticised by civil society. According to Freedom House, the government also resorts to “extralegal means” to remove content that it deems “controversial or undesirable” by pressuring media outlets and online figures to delete such content.
Krise und Kontrollverlust: Digitaler Extremismus im Kontext der Corona-Pandemie
2020 Guhl, J. and Gerster, L. Report
This report analyses the networks and narratives of German-speaking far-right, far-left, and Islamist-extremist actors on mainstream and alternative social media platforms, as well as on extremist websites, in the context of the Corona pandemic. Our findings show that extremists from Germany, Austria, and Switzerland have been able to expand their reach since the introduction of lockdown measures.
Covid-19: far right violent extremism and tech platforms’ response
2020 Deverell, F. and Janin, M. Report
Terrorists and violent extremists are manipulators seeking to exploit stress factors in our societies. The Covid-19 pandemic, its ensuing lockdown measures, as well as the spread of related mis- and disinformation online, thus represented an almost ideal opportunity for malevolent actors to exploit. Far-right violent extremists, in particular, quickly jumped on the opportunity offered by the Covid-19 crisis to anchor their hateful ideas in the mainstream and recruit new members. Whilst manifesting itself mostly online, this exploitation was not limited to the online sphere. It materialised in real-world events as violent extremists blended into anti-lockdown protests and as terrorists’ plans were eventually thwarted. Whilst the tech sector promptly responded to the wave of mis- and disinformation, the rapid changes in content moderation policy carry important consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, especially social media and content-hosting platforms, was particularly quick to respond to the spread of Covid-19 disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right violent extremists exploited the instability caused by the Covid-19 pandemic and what the tech sector’s response means for online freedom of expression and platforms’ accountability.
Covid-19 : la réponse des plateformes en ligne face à l’ultradroite
2020 Deverell, F. and Janin, M. Report
Terrorists and extremists are above all manipulators who seek to exploit the stress factors present in our societies. The Covid-19 pandemic, the ensuing lockdown measures, and the related spread of online mis- and disinformation therefore offered ideal conditions for certain malevolent actors to exploit. Far-right supporters, in particular, quickly seized the opportunity offered by the Covid-19 crisis to anchor their ideas in public debate and recruit new members. Although it manifested mainly online, this exploitation was not limited to the virtual sphere: it materialised in real-world events, for example when violent extremists joined protests against lockdowns and health restrictions, and when plans for terrorist attacks were thwarted. While the tech sector reacted quickly to the wave of disinformation, the rapid changes made to online content moderation policies already have important consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, particularly social networks and content-hosting platforms, responded especially quickly to the spread of Covid-19-related disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right supporters exploited the instability caused by the Covid-19 pandemic, and what the tech sector’s response means for online freedom of expression and platform accountability.
Beyond Limiting and Countering: How to Promote Quality Content to Prevent Violent Extremism and Terrorism on Online Platforms
2020 Barata, J. Report
This paper analyses the policy and legal implications related to the promotion of quality online content that supports and reinforces institutional and societal efforts to prevent, counteract and deflate radical discourses leading to violent behaviour. This analysis will focus on content disseminated via online platforms or intermediaries, and in particular on the intermediaries providing hosting services, who offer a relatively wide range of services for online storage, distribution, and sharing; social networking, collaborating and gaming; or searching and referencing.
A Better Web: Regulating to Reduce Far-Right Hate Online
2020 HOPE not hate Report
Though aiming to remedy genuine harms, government regulation of our online lives also raises legitimate concerns over privacy and freedom of expression. We must address online harms whilst ensuring harms are not also inflicted through unfairly infringing on people’s freedoms. HOPE not hate recognises the importance of this balancing act, and encourages a form of regulation of platforms that places democratic rights front-and-centre. In a world increasingly infused with the web, the significance of this legislation cannot be overstated and it is undoubtedly the case that getting it right will take rigorous reflection. To that end, we encourage debate of the recommendations proposed in this report.
The Online Regulation Series | Brazil
2020 Tech Against Terrorism Report
Brazil represents a major market for online platforms. It is the leading country in terms of internet use in South America, and a key market for Facebook and WhatsApp. WhatsApp’s popularity, and the online disinformation campaigns that have been coordinated on the platform, are essential to understanding Brazil’s approach to online regulation. The messaging app has been accused of being used “for the dissemination of fake news”, whilst critics of the “fake news” bill have said that the app’s existing limitations on message forwarding and group size served as a “standard” for new regulation in the country.
The Online Regulation Series | Colombia
2020 Tech Against Terrorism Report
With a growing internet penetration rate (69%) and an increasing number of active social media users (35 million, at a growth rate of 11% between 2019 and 2020), the online space in Colombia remains governed by the principle of net neutrality.
Content Regulation and Human Rights: Analysis and Recommendations
2020 Global Network Initiative Report
The multistakeholder Global Network Initiative (GNI) reviewed more than twenty recent governmental initiatives that claim to address various forms of online harm related to user-generated content — a practice we refer to broadly as “content regulation.” We focused on proposals that could shift existing responsibilities and incentives related to user-generated content. Our analysis illustrates the ways that good governance and human rights principles provide time-tested guidance for how laws, regulations, and policy actions can be most appropriately and effectively designed and carried out. Because content regulation is primarily focused on and likely to impact digital communication and content, we use international human rights principles related to freedom of expression and privacy as our primary lens.
Mitigating the Impact of Media Reporting of Terrorism: Case Study of the #BringBackOurGirls Campaign
2020 Adebiyi, K. Report
This report looks at journalism and social media reporting in Nigeria. The author raises key implications in journalistic reporting by looking at the 2014 #BringBackOurGirls social media campaign. This study importantly takes both a local and a global perspective on Nigeria’s media reporting.

This report is part of a wider project, “Mitigating the Impact of Media Reporting of Terrorism”, led by the International Centre for Counter-Terrorism (ICCT) – The Hague and funded by the EU Devco. The project aims to produce evidence-based guidance and capacity-building outputs based on original, context-sensitive research into the risks and opportunities in media reporting of terrorism and terrorist incidents. The role of media reporting on terrorism has been under-investigated and is an underutilised dimension of a holistic counter-terrorism strategy. How the media reports on terrorism has the potential to impact counter-terrorism (CT) perspectives positively or negatively.
The Online Regulation Series | Tech Sector Initiatives
2020 Tech Against Terrorism Report
Although regulatory frameworks for terrorist and harmful online content have been passed by governments in recent years, regulation in practice remains mostly a matter of self-regulation by the tech sector: companies draft and apply their own rules for moderating user-generated content on their platforms, or voluntarily comply with standards shared across the tech sector (the Global Internet Forum to Counter Terrorism is one example), without such standards being enforced by law. This, coupled with increased public pressure to address the potentially harmful impact of certain online content – in particular terrorist material – has led major tech companies to develop their own councils, consortiums, and boards to oversee their content moderation and its impact on freedom of speech online. In this blogpost, we provide an overview of some of the prominent tech sector initiatives in this area.