Library

Welcome to VOX-Pol’s online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, from where you can access the material.

We will continue to add more material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we thus welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Hezbollah’s “Virtual Entrepreneurs” - How Hezbollah Is Using The Internet To Incite Violence In Israel
The Internet Police
2019 Breinholt, J. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus
Unraveling The Impact Of Social Media On Extremism
2019 Susarla, A. Report
Social media has been remarkably effective in bringing together groups of individuals at a scale and speed unthinkable just a few years ago. While there is a positive aspect of digital activism in raising awareness and mobilizing for equitable societal outcomes, it is equally true that social media has a dark side in enabling political polarization and radicalization. This paper highlights that algorithmic bias and algorithmic manipulation accentuate these developments. We review some of the key technological aspects of social media and its impact on society, while also outlining remedies and implications for regulation. For the purpose of this paper we will define a digital platform as a technology intermediary that enables interaction between groups of users (such as Amazon or Google) and a social media platform as a digital platform for social media.
Disinformation In Terrorist Content Online
2019 Jankowicz, N. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
EU Policy - Preventing The Dissemination Of Terrorist Content Online
2019 Krasenberg, J. Report
The use of the internet for recruitment and the dissemination of violent extremist materials raises significant policy challenges for the European Union (EU), its Member States, and content sharing platforms (CSPs) alike. This problem requires – through the eyes of the EU – a combination of legislative, non-legislative, and voluntary measures based on collaboration between authorities and CSPs with respect for fundamental (human) rights.
Social Media, Terrorist Content Prohibitions, And The Rule Of Law
2019 MacDonald, S. Report
The importance of the rule of law to an effective counterterrorism strategy is widely accepted. Adherence to rule of law values protects both the legitimacy and moral authority of counterterrorism policies and legislation. This paper focuses on two specific rule of law values: minimalism and certainty. Minimalism is concerned with issues of scope. Laws should be as narrowly drawn as possible in order to preserve individuals’ autonomy and freedom to choose, to the fullest extent possible. Certainty is concerned with issues of clarity. Laws should be worded as clearly as possible so that individuals are aware of their responsibilities and able to make informed choices about their actions. Narrowly, clearly drawn laws also limit the discretion vested in officials, thus providing protection against inconsistent or inappropriate decision-making by those tasked with implementing the law.
The rule of law is traditionally associated with public institutions, not private technology companies. In the contemporary realm of counterterrorism, however, a steadfast public-private distinction is difficult to maintain. Indeed, many have urged the importance of public-private partnership in responding to terrorists’ use of the internet. One specific issue that has generated much discussion has been social media companies’ regulation of extremist content on their platforms. Facebook’s Community Standards, the Twitter Rules and YouTube’s Community Guidelines all expressly prohibit content that promotes terrorism. Most of the discussion of these prohibitions has focused on the speed with which they are enforced, particularly following the attacks in Christchurch, New Zealand. This paper seeks instead to evaluate the prohibitions from the different, but equally important, perspective of the rule of law values of minimalism and certainty.
To inform the discussion, the paper draws on the debates that have surrounded the U.K. ‘Encouragement of Terrorism’ criminal offence. Created by the Terrorism Act 2006, and recently amended by the Counter-Terrorism and Border Security Act 2019, this offence has proved controversial from its inception for two principal reasons. First, the offence expressly encompasses both direct and indirect encouragement. Critics have argued that the concept of indirect encouragement is too nebulous and gives the offence too wide a scope. Second, the framing of the offence focuses not on the purpose of the speaker, but on whether the potential effect of the statement is to encourage terrorism.
This too, it has been argued, gives the offence too wide a scope. In terms of the social media companies’ prohibitions on terrorism-promoting content, this paper accordingly asks two questions. Do the prohibitions encompass indirect, as well as direct, encouragement? And, for the prohibitions to apply, must the encouragement of terrorism have been the purpose and/or the likely effect of the relevant content? The answer to neither question is clear from the wording of the prohibitions themselves. The paper will argue that, in terms of the values of minimalism and certainty, it is important that the answers to both questions are made explicit. It will also suggest how both questions should be answered and provide a proposed reformulation of the social media companies’ prohibitions on terrorism-promoting content.
Lessons from the Information War: Applying Effective Technological Solutions to the Problems of Online Disinformation and Propaganda
2019 Maddox, J. D. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Counterterrorism is a Public Function: Resetting the Balance Between Public and Private Sectors in Preventing Terrorist use of the Internet
2019 Guittard, A. Report
In the closing scene of The Social Network, one of Mark Zuckerberg’s lawyers marveled at Facebook’s global expansion, asking “In Bosnia, they don’t have roads, but they have Facebook?” While the statement (and much of the film) was factually incorrect, it captured the “move fast and break things” mentality of companies like Facebook as they revolutionized the way people around the world communicate. Despite its benefits, this revolutionary shift in communications has posed several public policy challenges, from election integrity to the erosion of local journalism to terrorism. As someone who has worked in counterterrorism for nearly a decade, first in government and now from the private sector, I’ve seen this evolution firsthand. To date, most efforts to deny terrorists the benefits of a free and open internet are voluntary and industry-led. These include the Global Internet Forum to Counter Terrorism and its Hash Sharing Consortium, the expansion of dedicated counterterrorism teams at Facebook and Google and the launch of initiatives such as YouTube Creators for Change. These are positive and socially responsible initiatives that should be encouraged to grow.
However, the U.S. government – both its political leadership and its CT experts – should not take the convenient route of outsourcing difficult public policy issues to private companies. These issues should be addressed legislatively and in partnership with industry. Curbing terrorists’ use of the internet raises important social questions about the limits of free speech, the definition of terrorism, and national sovereignty over the internet at a time when the U.S. public is increasingly skeptical of the ability of internet companies to act in the public interest.
By examining similar experiences balancing security with technological advancement, CT policy makers will see that cooperation with the private sector is often contentious at first, with industry eschewing new regulation. This paper will examine three such cases: the restriction of radio in WWI, the introduction of counter-money laundering requirements on banks, and the introduction of airline passenger screening. These cases show that when the government acts within its Constitutional authorities to set clear expectations and works with industry in good faith, industry, government, and the public all benefit.
Fighting Hate Speech And Terrorist Propaganda On Social Media In Germany
2019 Ritzmann, A. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus
Three Constitutional Thickets: Why Regulating Online Violent Extremism is Hard
2019 Keller, D. Report
In May of 2019, two months after an attacker horrified the world by livestreaming his massacre of worshippers in two New Zealand mosques, leaders of Internet platforms and governments around the world convened in Paris to formulate their response. In the resulting agreement, known as the Christchurch Call, they committed “to eliminate terrorist and violent extremist content online,” while simultaneously protecting freedom of expression. The exact parameters of the commitment, and the means to balance its two goals, were left vague – unsurprising in a document embraced by signatories from such divergent legal cultures as Canada, Indonesia, and Senegal. The U.S. did not sign, though it endorsed similar language through the G7 as recently as 2018, and will be asked to do so again in 2019.

In this paper, I review U.S. constitutional considerations for lawmakers seeking to
balance terrorist threats against free expression online. The point is not to advocate for any particular rule. In particular, I do not seek to answer moral or norms-based questions about what content Internet platforms should take down. I do, however, note the serious tensions between calls for platforms to remove horrific but First Amendment-protected extremist content – a category that probably includes the Christchurch shooter’s video – and calls for them to function as “public squares” by leaving up any speech the First Amendment permits. To lay out the issue, I draw on analysis developed at greater length in previous publications. This analysis concerns large user-facing platforms like Facebook and Google, and the word “platform” as used here refers to those large companies, not their smaller counterparts.

The paper’s first section covers territory relatively familiar to U.S. lawyers concerning the speech Congress can limit under anti-terrorism laws. This law is well-summarized elsewhere, so my discussion is quite brief. The second section explores a less widely understood issue: Congress’s power to hold Internet platforms liable for their users’ speech. The third section ventures farthest afield, reviewing constitutional implications when platforms themselves set the speech rules, prohibiting legal speech under their Terms of Service (TOS). I will conclude that paths forward for U.S. lawmakers who want to both restrict violent extremist content and protect free expression are rocky, and that non-U.S. laws are likely to be primary drivers of platform behavior in this area in the coming years.
Leveraging CDA 230 to Counter Online Extremism
2019 Bridy, A. M. Report
Current events make it plain that social media platforms have become vectors for the global spread of extremism, including the most virulent forms of racial and religious hatred. In October 2018, a white supremacist murdered 11 people at a synagogue in Pittsburgh, Pennsylvania. The shooter was an active user of the far-right social network Gab, on which he had earlier complained that a refugee-aid organization linked to the synagogue was importing foreign “invaders” to fight a “war against #WhitePeople.” Journalists searching the shooter’s social media accounts for a motive discovered a trail of anti-Semitic posts, including notorious Jewish conspiracy memes widely shared within the far-right’s online ecosystem. In March 2019, another white supremacist massacred 51 people at two mosques in Christchurch, New Zealand. Minutes before the attack, he shared links on 8chan to his Facebook page and a rambling racist manifesto.
Then he live-streamed the carnage to Facebook, which didn’t intervene in time to keep the footage from going viral on YouTube and elsewhere. To say that extremist content online caused the Pittsburgh and Christchurch tragedies would be a gross oversimplification. At the same time, however, we must reckon with the fact that both shooters were enmeshed in extremist online communities whose members have cultivated expertise in using social media to maximize the reach of their messages. YouTube’s Chief Product Officer described the Christchurch massacre as “a tragedy…designed for the purpose of going viral.”
As offline violence with demonstrable links to online extremism escalates, regulators have made it clear that they expect the world’s largest social media platforms to more actively police harmful online speech, including that of terrorist organizations and organized hate groups. In the aftermath of the Christchurch shooting, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron urged governments and tech companies to join together in the Christchurch Call, a “commitment…to eliminate terrorist and violent extremist content online.” As their part of the bargain, Facebook, YouTube, Twitter, and several other tech companies agreed to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”
A Plan for Preventing and Countering Terrorist and Violent Extremist Exploitation of Information and Communications Technology in America
2019 Alexander, A. Report
Policymakers in the United States know that terrorists and violent extremists exploit information and communications technologies (ICTs), but the government still struggles to prevent and counter these threats. Although the U.S. does not face these challenges alone, the strategies and policies emphasized by some of its greatest allies are not viable or suitable frameworks for domestic policymakers. Since these threats persist, however, the U.S. government must develop a cohesive strategy to prevent and counter terrorist and violent extremist exploitation of ICTs. The approach should rest on the pillars of pragmatism, proportionality, and respect for the rule of law, and aim to disrupt terrorist and violent extremist networks in the digital sphere. To pursue this objective, the following brief calls for political leaders to create an interagency working group to formalize leadership and conduct a comprehensive assessment of terrorist and violent extremist abuse of ICTs. The evaluation must also weigh the costs and benefits associated with responses to these threats. Then, government officials should work to enhance the capability and coordination of government-led efforts, pursue partnerships with non-governmental entities, and facilitate productive engagements with the technology industry. In short, this approach would allow the government to use legislation, redress, and strategic outreach to empower more players to responsibly prevent and counter terrorist and violent extremist exploitation of ICTs.
Online Terrorist Propaganda, Recruitment, and Radicalization
2019 Vacca, J. R. Book
Online Terrorist Propaganda, Recruitment, and Radicalization is the most complete treatment of the rapidly growing phenomenon of how terrorists' online presence is utilized for terrorism funding, communication, and recruitment purposes. The book offers in-depth coverage of the history and development of online "footprints" to target new converts, broaden their messaging, and increase their influence. Chapters present the emergence of various groups; the advancement of terrorist groups' online presences; their utilization of video, chat rooms, and social media; and the current capability for propaganda, training, and recruitment.

With contributions from leading experts in the field—including practitioners and terrorism researchers—the coverage moves from general factors to specific groups' practices as they relate to the Islamic State of Iraq and the Levant (ISIL) and numerous other groups. Chapters also examine the lone wolf phenomenon as a part of the disturbing trend of self-radicalization. A functional, real-world approach is used regarding the classification of the means and methods by which an online presence is often utilized to promote and support acts of terrorism.

Online Terrorist Propaganda, Recruitment, and Radicalization examines practical solutions for identifying the threat posed by terrorist propaganda and U.S. government efforts to counter it, with a particular focus on ISIS, the Dark Web, and national and international measures to identify, thwart, and prosecute terrorist activities online. As such, it will be an invaluable resource for intelligence professionals, terrorism and counterterrorism professionals, those researching terrorism funding, and policy makers looking to restrict the spread of terrorism propaganda online.
From “Incel” To “Saint” - Analyzing The Violent Worldview Behind The 2018 Toronto Attack
2019 Baele, S. J., Brace, L. & Coan, T. G. Journal
This paper combines qualitative and quantitative content analysis to map and analyze the “Incel” worldview shared by members of a misogynistic online community ideologically linked to several recent acts of politically motivated violence, including Alek Minassian’s van attack in Toronto (2018) and Elliot Rodger’s school shooting in Isla Vista (2014). Specifically, the paper analyses how support and motivation for violence results from the particular structure this worldview presents in terms of social categories and causal narratives.
Social Media and Terrorist Financing: What are the Vulnerabilities and How Could Public and Private Sectors Collaborate Better?
2019 Keatinge, T. and Keen, F. Article
• Social media companies should recognise the political importance of counterterrorist financing (CTF) by explicitly reflecting the priorities of the UN Security Council and the Financial Action Task Force (FATF) in their policies, strategies and transparency reports.
• Furthermore, social media companies identified as being at high risk of exploitation should update their terms of service and community standards to explicitly reference and outlaw terrorist financing (consistent with universally applicable international law and standards such as those of the FATF) and actions that contravene related UN Security Council resolutions and sanctions.
• Social media companies should clearly demonstrate that they understand and apply appropriate sanctions designations; at the same time, policymakers should ensure that sanctions designations include, where possible, information such as email addresses, IP addresses and social media handles that can support sanctions implementation by social media companies. The more granular the information provided by governments on designated entities, the more efficiently the private sector can comply with sanctions designations.
• Social media companies should more tightly control functionality to ensure that raising terrorist funding through social media videos, such as big-brand advertising and Super Chat payments, is disabled.
• Researchers and policymakers should avoid generalisations and make a clear distinction between forms of social media and the various terrorist-financing vulnerabilities that they pose, recognising the different types of platforms available, and the varied ways in which terrorist financiers could abuse them.
• Policymakers should encourage both inter-agency and cross-border collaboration on the threat of using social media for terrorist financing, ensuring that agencies involved are equipped with necessary social media investigative capabilities.
• International law enforcement agencies such as Interpol and Europol should facilitate the development of new investigation and prosecution standard operating procedures for engaging with operators of servers and cloud services based in overseas jurisdictions to ensure that necessary evidence can be gathered in a timely fashion. This would also encourage an internationally harmonised approach to using social media as financial intelligence.
• Policymakers should encourage the building of new, and leveraging of existing, public–private partnerships to ensure social media company CTF efforts are informed and effective.
The Conflict In Jammu And Kashmir And The Convergence Of Technology And Terrorism
2019 Taneja, K. and Shah, K. M. Article
This paper provides recommendations for what government and social media companies can do in the context of Jammu and Kashmir’s developing online theatre of both potential radicalisation and recruitment.
Hidden Resilience And Adaptive Dynamics Of The Global Online Hate Ecology
2019 Johnson, N. F., Leahy, R., Johnson Restrepo, N., Velasquez, N., Zheng, M., Manrique, P., Devkota, P. and Wuchty, S. Article
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes and an alarming increase in youth suicides that result from social media vitriol; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and sex-trafficking of girls as fighter brides; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours, such as financial fraud.
Defining Online Hate And Its Public Lives- What Is The Place For Extreme Speech?
2019 Gagliardone, I. Article
Following Sahana Udupa and Matti Pohjonen’s (2019) invitation to move the debate beyond a normative understanding of hate speech, this article seeks to build a foundation for conceptual and empirical inquiry of speech commonly considered deviant and disturbing. It develops in three stages. It first maps the public lives of terms that refer to online vitriol and how they have been used by different communities of researchers, politicians, advocacy groups, and national organizations. Second, it shows how different types of “haters” have been interpreted as parts of “swarms” or “armies,” depending on whether their violent potential emerges around critical incidents or whether they respond to longer-term strategies through which communities and their leaders tie their speech acts to explicit narratives. The article concludes by locating “extreme speech” within this broader conceptual tapestry, arguing that the paternalistic gaze that characterizes a lot of research on online hate speech is tied to what Chantal Mouffe has referred to as the “moralization of politics,” a phenomenon that cannot be matched by responses that are themselves moral.
Engaging With Online Extremist Material: Experimental Evidence
2019 Reeve, Z. VOX-Pol Publication
Despite calls from governments to clamp down on violent extremist material in the online sphere, in the name of preventing radicalisation and therefore terrorism, research investigating how people engage with extremist material online is surprisingly scarce. The current paper addresses this gap in knowledge with an online experiment. A fictional extremist webpage was designed and (student) participants chose how to engage with it. A mortality salience prime (being primed to think of death) was also included. Mortality salience did not influence engagement with the material, but the material itself may have led to disidentification with the ingroup. Whilst interaction with the material was fairly low, those that did engage tended to indicate preference for hierarchy and dominance in society, stronger identification with the ingroup, higher levels of radicalism, and outgroup hostility. More engagement with the online extremist material was also associated with increased likelihood of explicitly supporting the extremist group. These findings show that indoctrination, socialisation, and ideology are not necessarily required for individuals to engage attitudinally or behaviourally with extremist material. The study did not select on the dependent variable, thereby also shedding light on individuals who do not engage with extremist material.
Challenging Extremist Views on Social Media: Developing a Counter-Messaging Response
2019 Eerten, J. van and Doosje, B. Book
This book is a timely and significant examination of the role of counter-messaging via social media as a potential means of preventing or countering radicalization to violent extremism. In recent years, extremist groups have developed increasingly sophisticated online communication strategies to spread their propaganda and promote their cause, enabling messages to be spread more rapidly and effectively. Counter-messaging has been promoted as one of the most important measures to neutralize online radicalizing influences and is intended to undermine the appeal of messages disseminated by violent extremist groups. While many such initiatives have been launched by Western governments, civil society actors, and private companies, there are many questions regarding their efficacy. Focusing predominantly on efforts countering Salafi-Jihadi extremism, this book examines how feasible it is to prevent or counter radicalization and violent extremism with counter-messaging efforts. It investigates important principles to consider when devising such a program. The authors provide both a comprehensive theoretical overview and a review of the available literature, as well as policy recommendations for governments and the role they can play in counter-narrative efforts. As this is the first book to critically examine the possibilities and pitfalls of using counter-messaging to prevent radicalization or stimulate de-radicalization, it is essential reading for policymakers and professionals dealing with this issue, as well as researchers in the field.