Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications on various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library will take you to the publisher’s page, from where you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

Title | Year | Author | Type | Links
The Online Regulation Series | Colombia
2020 Tech Against Terrorism Report
With a growing internet penetration rate (69%) and an increasing number of active social media users (35 million, at a growth rate of 11% between 2019 and 2020), the online space in Colombia remains governed by the principle of net neutrality.
The Online Regulation Series | Brazil
2020 Tech Against Terrorism Report
Brazil represents a major market for online platforms. It is the leading country in terms of internet use in South America, and a key market for Facebook and WhatsApp. WhatsApp’s popularity, and the online disinformation campaigns that have been coordinated on the platform, are essential to understanding Brazil’s approach to online regulation. The messaging app has been accused of being used “for the dissemination of fake news”, whilst critics of the “fake news” bill have said that the app’s existing limitations on message forwarding and group size served as a “standard” for new regulation in the country.
EEG distinguishes heroic narratives in ISIS online video propaganda
2020 Yoder, K.J., Ruby, K., Pape, R. and Decety, J. Article
The Islamic State (ISIS) was uniquely effective among extremist groups in the Middle East at recruiting Westerners. A major way ISIS accomplished this was by adopting Hollywood-style narrative structures for its propaganda videos. In particular, ISIS utilized a heroic martyr narrative, which focuses on an individual’s personal glory and empowerment, in addition to traditional social martyr narratives, which emphasize duty to kindred and religion. The current work presented adult participants (n = 238) with video clips from ISIS propaganda that utilized either heroic or social martyr narratives, and collected behavioral measures of appeal, narrative transportation, and psychological dispositions (egoism and empathy) associated with attraction to terrorism. Narrative transportation and the interaction between egoism and empathy predicted video recruitment appeal. A subset of adults (n = 80) underwent electroencephalographic (EEG) measurements while watching a subset of the video clips. Complementary univariate and multivariate techniques characterized spectral power density differences when perceiving the different types of narratives. Heroic videos showed increased beta power over frontal sites and globally increased alpha. In contrast, social narratives showed greater frontal theta, an index of negative feedback and emotion regulation. The results provide strong evidence that ISIS heroic narratives are processed in a distinct manner and appeal to psychological predispositions differently from other recruitment narratives.
Migration Moments: Extremist Adoption of Text‑Based Instant Messaging Applications
2020 Clifford, B. Report
This report examines the patchwork of online text‑based instant messaging applications preferred by jihadist and far‑right extremist groups, with a focus on charting their technical affordances and their host companies’ stances on user privacy, security and regulation. To this end, the report analyses six online messaging services (BCM, Gab Chat, Hoop Messenger, Riot.im, Rocket.Chat and TamTam) that have been or may be used in conjunction with Telegram by extremist groups.
Krise und Kontrollverlust: Digitaler Extremismus im Kontext der Corona-Pandemie
2020 Guhl, J. and Gerster, L. Report
This report analyses the networks and narratives of German-speaking right-wing extremist, left-wing extremist, and Islamist-extremist actors on mainstream and alternative social media platforms, as well as on extremist websites, in the context of the Corona pandemic. Our findings show that extremists from Germany, Austria, and Switzerland have been able to expand their reach since the introduction of the lockdown measures.
The Online Regulation Series | Morocco
2020 Tech Against Terrorism Report
Morocco’s online regulatory framework consists of different laws and codes that strive to limit the spread of content that can pose a threat to the Kingdom’s “integrity, security and public order”. Central to this framework are the 2003 Anti-Terrorism Law, passed in the aftermath of the 2003 Casablanca bombings, and the 2016 Press Code, which lays out limitations on journalistic publications and public speech. However, the existing regulatory framework is not explicitly clear regarding implications for tech platforms and the government’s powers to filter the online space – something which has been criticised by civil society. According to Freedom House, the government also resorts to “extralegal means” to remove content that it deems “controversial or undesirable” by pressuring media outlets and online figures to delete such content.
The Online Regulation Series | Kenya
2020 Tech Against Terrorism Report
Kenya has “increasingly sought to remove online content”, both through requests and increased regulation, that it deems “immoral” or “defamatory”. Following terrorist attacks on civilian targets in recent years, the country has heightened its efforts around counterterrorism as well as online content regulation. Much of Kenya’s legislation has been criticised by civil society for its “broadness”, “vagueness”, and potential “detrimental implications for freedom of expression”. A proposed social media bill, if enacted, could significantly affect social media companies and their users in Kenya, for example through strict regulations on user content.
Covid-19 : la réponse des plateformes en ligne face à l’ultradroite
2020 Deverell, F. and Janin, M. Report
Terrorists and extremists are, above all, manipulators who seek to exploit the stress factors present in our societies. The Covid-19 pandemic, the ensuing lockdown measures, and the related spread of online mis- and disinformation thus provided ideal conditions for certain malicious actors to exploit. Far-right supporters, in particular, quickly seized the opportunity offered by the Covid-19 crisis to anchor their ideas in public debate and recruit new members. Although it manifested itself mainly online, this exploitation was not limited to the virtual sphere: it materialised in real-world events, for example when violent extremists joined protests against lockdowns and health restrictions, and when planned terrorist attacks were thwarted. While the tech sector reacted quickly to the wave of disinformation, the rapid changes made to online content moderation policies already carry important consequences for the future of content moderation and online freedom of expression. Indeed, the global tech sector, notably social media and content-hosting platforms, responded particularly quickly to the spread of Covid-19-related disinformation and conspiracy theories. In this brief, Tech Against Terrorism analyses how far-right supporters exploited the instability caused by the Covid-19 pandemic, and what the tech sector’s response means for online freedom of expression and platform accountability.
Covid-19: far right violent extremism and tech platforms’ response
2020 Deverell, F. and Janin, M. Report
Terrorists and violent extremists are manipulators seeking to exploit stress factors in our societies. The Covid-19 pandemic, its ensuing lockdown measures, and the spread of related mis- and disinformation online thus represented an almost ideal opportunity for malevolent actors to exploit. Far-right violent extremists, in particular, quickly jumped on the opportunity offered by the Covid-19 crisis to anchor their hateful ideas in the mainstream and recruit new members. Whilst manifesting itself mostly online, this exploitation was not limited to the online sphere. It materialised in real-world events as violent extremists blended into anti-lockdown protests and as terrorist plans were eventually thwarted. Whilst the tech sector promptly responded to the wave of mis- and disinformation, the rapid changes in content moderation policy bear important consequences for the future of content moderation and freedom of expression online. Indeed, the global tech sector, especially social media and content-hosting platforms, was particularly quick to respond to the spread of Covid-19 disinformation and conspiracy theories. In this insight piece, Tech Against Terrorism analyses how far-right violent extremists exploited the instability caused by the Covid-19 pandemic and what the tech sector’s response means for online freedom of expression and platforms’ accountability.
A Better Web: Regulating to Reduce Far-Right Hate Online
2020 HOPE not hate Report
Though aiming to remedy genuine harms, government regulation of our online lives also raises legitimate concerns over privacy and freedom of expression. We must address online harms whilst ensuring harms are not also inflicted through unfairly infringing on people’s freedoms. HOPE not hate recognises the importance of this balancing act, and encourages a form of regulation of platforms that places democratic rights front-and-centre. In a world increasingly infused with the web, the significance of this legislation cannot be overstated and it is undoubtedly the case that getting it right will take rigorous reflection. To that end, we encourage debate of the recommendations proposed in this report.
Stop the virus of disinformation: the risk of malicious use of social media during COVID-19 and the technology options to fight it
2020 United Nations Interregional Crime and Justice Research Institute (UNICRI) Report
This report describes how terrorist, violent extremist and organized criminal groups are trying to take advantage of the Coronavirus disease (COVID-19) pandemic to expand their activities and jeopardize the efficacy and credibility of response measures by governments.
Examining the Developmental Pathways of Online Posting Behavior in Violent Right-Wing Extremist Forums
2020 Scrivens, R. Article
Many researchers, practitioners, and policymakers are concerned about online communities that are known to facilitate violent right-wing extremism, but little is empirically known about these digital spaces in general and the developmental posting behaviors that make up these spaces in particular. In this study, group-based trajectory modeling—derived from a criminal career paradigm—was used to identify posting trajectories found in the open-access sections of the Iron March and Fascist Forge forums, both of which have gained notoriety for their members’ online advocacy of violence and acts of violence carried out by them. Multinomial logistic regression and analysis of variance were then used to assess whether posters’ time of entry into the violent forums predicted trajectory group assignment. Overall, the results highlight a number of similarities and differences in posting behaviors within and across platforms, many of which may inform future risk factor frameworks used by law enforcement and intelligence agencies to identify credible threats online. We conclude with a discussion of the implications of this analysis, followed by a discussion of study limitations and avenues for future research.
The Online Regulation Series | Turkey
2020 Tech Against Terrorism Report
Online content regulation in Turkey is characterised by extensive removal of material that has resulted in a large number of Turkish and international websites being blocked in recent years. Further, the Turkish government recently introduced a Social Media Bill, implementing a wide range of new regulations and steep penalties for social media companies, which critics say poses further threats to online freedom of expression in the country.
The Online Regulation Series | The United Kingdom
2020 Tech Against Terrorism Report
The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, aiming to make the UK “the safest place in the world to be online” by countering various online harms ranging from cyberbullying to terrorist content. This is yet to come into effect, but the UK has approved an interim regime to fulfil obligations under the European Union Directive, which the UK needs to comply with during Brexit negotiations. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
The Online Regulation Series | Germany
2020 Tech Against Terrorism Report
Germany has an extensive framework for regulating online content, particularly with regards to hate speech and violent extremist and terrorist material. Experts also note that Germany’s regulatory framework has to some extent helped set the standard for the European, and possibly global, regulatory landscape.
Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
2020 Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G. and West, R. Report
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
The Online Regulation Series | France
2020 Tech Against Terrorism Report
France is, alongside New Zealand, an initiator of the Christchurch Call to Action to eliminate terrorist and violent extremist content online. Prior to the Christchurch Call, France had elevated tackling terrorist use of the internet to a key pillar of its counterterrorism policy, supporting the EU proposal on Preventing the Dissemination of Terrorist Content Online, including the requirement for tech platforms to remove flagged terrorist content within one hour.
Turning the Tap Off: The Impacts of Social Media Shutdown After Sri Lanka’s Easter Attacks
2020 Amarasingam, A. and Rizwie, R. Report
This report examines the social media shutdown in the wake of the Easter Attacks in Sri Lanka, and its impacts on journalists and post-incident communal violence. By highlighting the shutdown’s limitations, social costs and impact on misinformation, this report presents key recommendations for policy-makers, journalists and other key stakeholders. This report is part of a wider project, led by the International Centre for Counter-Terrorism (ICCT) – The Hague, and funded by the EU Devco, on “Mitigating the Impact of Media Reporting of Terrorism”. This project aims to produce evidence-based guidance and capacity-building outputs based on original, context-sensitive research into the risks and opportunities in media reporting of terrorism and terrorist incidents. The role of media reporting on terrorism has been under-investigated and is an underutilised dimension of a holistic counter-terrorism strategy. How the media reports on terrorism has the potential to affect counter-terrorism (CT) perspectives positively or negatively.
The Online Regulation Series | The European Union
2020 Tech Against Terrorism Report
The European Union (EU) is an influential voice in the global debate on regulation of online speech. For that reason, two upcoming regulatory regimes might – in addition to shaping EU digital policy – create global precedents for how to regulate both online speech generally and terrorist content specifically.
Youth Exposure to Hate in the Online Space: An Exploratory Analysis
2020 Harriman, N., Shortland, N., Su, M., Cote, T., Testa, M.A. and Savoia, E. Article
Today’s youth have almost universal access to the internet and frequently engage in social networking activities using various social media platforms and devices. This is a phenomenon that hate groups are exploiting when disseminating their propaganda. This study seeks to better understand youth exposure to hateful material in the online space by exploring predictors of such exposure including demographic characteristics (age, gender and race), academic performance, online behaviours, online disinhibition, risk perception, and parents/guardians’ supervision of online activities. We implemented a cross-sectional study design, using a paper questionnaire, in two high schools in Massachusetts (USA), focusing on students 14 to 19 years old. Logistic regression models were used to study the association between independent variables (demographics, online behaviours, risk perception, parental supervision) and exposure to hate online. Results revealed an association between exposure to hate messages in the online space and time spent online, academic performance, communicating with a stranger on social media, and benign online disinhibition. In our sample, benign online disinhibition was also associated with students’ risk of encountering someone online that tried to convince them of racist views. This study represents an important first step in understanding youth’s risk factors of exposure to hateful material online.