Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications on various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library links to the publisher’s page, from which you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online library in the field.

If you have any material you think belongs in the Library, whether your own or another author’s, please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we thus welcome contributions in all languages.

Full Listing

Title | Year | Author | Type | Links
The Online Regulation Series | Kenya
2020 | Tech Against Terrorism | Report
Kenya has “increasingly sought to remove online content” that it deems “immoral” or “defamatory”, both through requests and increased regulation. Following terrorist attacks on civilian targets in recent years, the country has stepped up its counterterrorism efforts as well as its regulation of online content. Much of Kenya’s legislation has been criticised by civil society for its “broadness”, “vagueness”, and potential “detrimental implications for freedom of expression”. A proposed social media bill, if enacted, could significantly affect social media companies and their users in Kenya, for example through strict regulation of user content.
Covid-19 : la réponse des plateformes en ligne face à l’ultradroite
2020 | Deverell, F. and Janin, M. | Report
Terrorists and extremists are, above all, manipulators who seek to exploit the stress factors present in our societies. The Covid-19 pandemic, the ensuing lockdown measures, and the related spread of online mis- and disinformation therefore offered ideal conditions for certain malicious actors to exploit. Far-right supporters, in particular, quickly seized the opportunity presented by the Covid-19 crisis to anchor their ideas in public debate and to recruit new members. Although it manifested itself mainly online, this exploitation was not confined to the virtual sphere and materialised in real-world events, for example when violent extremists joined protests against lockdowns and health restrictions and when planned terrorist attacks were thwarted. While the tech sector reacted quickly to the wave of disinformation, the rapid changes made to online content moderation policies already carry significant consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, notably social media and content-hosting platforms, responded especially quickly to the spread of Covid-19-related disinformation and conspiracy theories. In this briefing, Tech Against Terrorism analyses how far-right supporters exploited the instability caused by the Covid-19 pandemic, and what the tech sector’s response means for online freedom of expression and platform accountability.
Covid-19: far right violent extremism and tech platforms’ response
2020 | Deverell, F. and Janin, M. | Report
Terrorists and violent extremists are manipulators seeking to exploit stress factors in our societies. The Covid-19 pandemic, its ensuing lockdown measures, and the spread of related mis- and disinformation online thus represented an almost ideal opportunity for malevolent actors to exploit. Far-right violent extremists, in particular, quickly jumped on the opportunity offered by the Covid-19 crisis to anchor their hateful ideas in the mainstream and recruit new members. Whilst manifesting itself mostly online, this exploitation was not limited to the online sphere: it materialised in real-world events as violent extremists blended into anti-lockdown protests and as terrorist plots were eventually thwarted. Whilst the tech sector promptly responded to the wave of mis- and disinformation, the rapid changes in content moderation policy carry important consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, especially social media and content-hosting platforms, was particularly quick to respond to the spread of Covid-19 disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right violent extremists exploited the instability caused by the Covid-19 pandemic and what the tech sector’s response means for online freedom of expression and platforms’ accountability.
A Better Web: Regulating to Reduce Far-Right Hate Online
2020 | HOPE not hate | Report
Though aiming to remedy genuine harms, government regulation of our online lives also raises legitimate concerns over privacy and freedom of expression. We must address online harms whilst ensuring that new harms are not inflicted by unfairly infringing on people’s freedoms. HOPE not hate recognises the importance of this balancing act and encourages a form of platform regulation that places democratic rights front and centre. In a world increasingly infused with the web, the significance of this legislation cannot be overstated, and getting it right will take rigorous reflection. To that end, we encourage debate of the recommendations proposed in this report.
Mapping The Extremist Narrative Landscape In Afghanistan
2020 | Winter, C. and Alrhmoun, A. | Report
This report, which maps how Violent Extremist Organisations (VEOs) are seeking to influence and shape the trajectory of Afghan politics today, aims to inform and support the development of strategic communications programming that meaningfully counters extremist narratives, and to enable more targeted, effective responses to the long-term challenges posed by VEO appeals.
Cross-national level report on digital sociability and drivers of self-radicalisation in Europe
2020 | DARE: Dialogue about Radicalisation and Equality | Report
In this report, we present an empirical cross-national study of the activities and interactions of right-wing extremist (RWE) and Islamist extremist (ISE) supporters on Twitter. The study is based on ethnographic and automatic text and network analyses of data from Belgian, British, Dutch, French, German, Greek and Norwegian female and male Twitter accounts.
Stop the virus of disinformation: the risk of malicious use of social media during COVID-19 and the technology options to fight it
2020 | United Nations Interregional Crime and Justice Research Institute (UNICRI) | Report
This report describes how terrorist, violent extremist and organized criminal groups are trying to take advantage of the Coronavirus disease (COVID-19) pandemic to expand their activities and jeopardize the efficacy and credibility of response measures by governments.
Content Regulation and Human Rights: Analysis and Recommendations
2020 | Global Network Initiative | Report
The multistakeholder Global Network Initiative (GNI) reviewed more than twenty recent governmental initiatives that claim to address various forms of online harm related to user-generated content, a practice we refer to broadly as “content regulation.” We focused on proposals that could shift existing responsibilities and incentives related to user-generated content. Our analysis illustrates the ways that good governance and human rights principles provide time-tested guidance for how laws, regulations, and policy actions can be most appropriately and effectively designed and carried out. Because content regulation is primarily focused on and likely to impact digital communication and content, we use international human rights principles related to freedom of expression and privacy as our primary lens.
Mitigating the Impact of Media Reporting of Terrorism: Case Study of the #BringBackOurGirls Campaign
2020 | Adebiyi, K. | Report
This report looks at journalism and social media reporting in Nigeria. The author identifies key implications for journalistic reporting by examining the 2014 #BringBackOurGirls social media campaign. Importantly, the study takes both a local and a global perspective on Nigeria’s media reporting.

This report is part of a wider project on “Mitigating the Impact of Media Reporting of Terrorism”, led by the International Centre for Counter-Terrorism (ICCT) – The Hague and funded by EU Devco. The project aims to produce evidence-based guidance and capacity-building outputs based on original, context-sensitive research into the risks and opportunities in media reporting of terrorism and terrorist incidents. The role of media reporting on terrorism has been under-investigated and is an underutilised dimension of a holistic counter-terrorism strategy. How the media reports on terrorism has the potential to impact counter-terrorism (CT) perspectives positively or negatively.
Examining the Developmental Pathways of Online Posting Behavior in Violent Right-Wing Extremist Forums
2020 | Scrivens, R. | Article
Many researchers, practitioners, and policymakers are concerned about online communities that are known to facilitate violent right-wing extremism, but little is empirically known about these digital spaces in general and the developmental posting behaviors that make up these spaces in particular. In this study, group-based trajectory modeling—derived from a criminal career paradigm—was used to identify posting trajectories found in the open-access sections of the Iron March and Fascist Forge forums, both of which have gained notoriety for their members’ online advocacy of violence and acts of violence carried out by them. Multinomial logistic regression and analysis of variance were then used to assess whether posters’ time of entry into the violent forums predicted trajectory group assignment. Overall, the results highlight a number of similarities and differences in posting behaviors within and across platforms, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. We conclude with a discussion of the implications of this analysis, followed by a discussion of study limitations and avenues for future research.
The Online Regulation Series | Turkey
2020 | Tech Against Terrorism | Report
Online content regulation in Turkey is characterised by extensive removal of material that has resulted in a large number of Turkish and international websites being blocked in recent years. Further, the Turkish government recently introduced a Social Media Bill, implementing a wide range of new regulations and steep penalties for social media companies, which critics say poses further threats to online freedom of expression in the country.
The Online Regulation Series | The United Kingdom
2020 | Tech Against Terrorism | Report
The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, aiming to make the UK “the safest place in the world to be online” by countering various online harms ranging from cyberbullying to terrorist content. This framework is yet to come into effect, but the UK has approved an interim regime to fulfil its obligations under the European Union Directive with which the UK needs to comply during Brexit negotiations. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
Tracking far-right extremist searches in Bosnia & Herzegovina
2020 | Moonshot CVE | Report
Between 20 March and 14 September 2020, Moonshot investigated online far-right extremist searches in Bosnia & Herzegovina by analysing at-risk audience engagement with far-right extremist themes.

Our results show a significant number of searches were for far-right extremist themes relating to the region’s history of ethnic conflict, as well as searches for international far-right memes and narratives. Interestingly, we found that at-risk users primarily search for and engage with far-right extremist terms in the English language, seeking out terms which have their roots in the region but are now used internationally, such as ‘Serbia Strong’ and ‘Remove Kebab’.
The Online Regulation Series | Germany
2020 | Tech Against Terrorism | Report
Germany has an extensive framework for regulating online content, particularly with regards to hate speech and violent extremist and terrorist material. Experts also note that Germany’s regulatory framework has to some extent helped set the standard for the European, and possibly global, regulatory landscape.
Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
2020 | Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G. and West, R. | Report
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
The Online Regulation Series | France
2020 | Tech Against Terrorism | Report
France is, alongside New Zealand, an initiator of the Christchurch Call to Action to eliminate terrorist and violent extremist content online. Even prior to the Christchurch Call, France had elevated tackling terrorist use of the internet to a key pillar of its counterterrorism policy,[1] supporting the EU proposal on Preventing the Dissemination of Terrorist Content Online, including the requirement for tech platforms to remove flagged terrorist content within one hour.
Turning the Tap Off: The Impacts of Social Media Shutdown After Sri Lanka’s Easter Attacks
2020 | Amarasingam, A. and Rizwie, R. | Report
This report examines the social media shutdown in the wake of the Easter Attacks in Sri Lanka and its impacts on journalists and post-incident communal violence. By highlighting the shutdown’s limitations, social costs and impact on misinformation, this report presents key recommendations for policy-makers, journalists and other key stakeholders. This report is part of a wider project on “Mitigating the Impact of Media Reporting of Terrorism”, led by the International Centre for Counter-Terrorism (ICCT) – The Hague and funded by EU Devco. The project aims to produce evidence-based guidance and capacity-building outputs based on original, context-sensitive research into the risks and opportunities in media reporting of terrorism and terrorist incidents. The role of media reporting on terrorism has been under-investigated and is an underutilised dimension of a holistic counter-terrorism strategy. How the media reports on terrorism has the potential to impact counter-terrorism (CT) perspectives positively or negatively.
The Online Regulation Series | The European Union
2020 | Tech Against Terrorism | Report
The European Union (EU) is an influential voice in the global debate on regulation of online speech. For that reason, two upcoming regulatory regimes might – in addition to shaping EU digital policy – create global precedents for how to regulate both online speech generally and terrorist content specifically.
Youth Exposure to Hate in the Online Space: An Exploratory Analysis
2020 | Harriman, N., Shortland, N., Su, M., Cote, T., Testa, M.A. and Savoia, E. | Article
Today’s youth have almost universal access to the internet and frequently engage in social networking activities using various social media platforms and devices. This is a phenomenon that hate groups are exploiting when disseminating their propaganda. This study seeks to better understand youth exposure to hateful material in the online space by exploring predictors of such exposure, including demographic characteristics (age, gender and race), academic performance, online behaviours, online disinhibition, risk perception, and parents’/guardians’ supervision of online activities. We implemented a cross-sectional study design, using a paper questionnaire, in two high schools in Massachusetts (USA), focusing on students 14 to 19 years old. Logistic regression models were used to study the association between independent variables (demographics, online behaviours, risk perception, parental supervision) and exposure to hate online. Results revealed an association between exposure to hate messages in the online space and time spent online, academic performance, communicating with a stranger on social media, and benign online disinhibition. In our sample, benign online disinhibition was also associated with students’ risk of encountering someone online who tried to convince them of racist views. This study represents an important first step in understanding young people’s risk factors for exposure to hateful material online.
The Online Regulation Series | Canada
2020 | Tech Against Terrorism | Report
Canada’s approach to online regulation has, so far, been characterised by its support for tech sector self-regulation as opposed to government-led regulation of online content. However, concerns over foreign interference in Canadian politics, online hate speech and extremism have led to public discussions about introducing legislation on harmful online content and about making tech companies liable for content shared on their platforms.