Library

Welcome to VOX-Pol’s online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where a publication is accessible only by subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility, and we thus welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Predicting Behavioural Patterns in Discussion Forums using Deep Learning on Hypergraphs
2019 Arya, D., Rudinac, S. and Worring, M. VOX-Pol Publication
Online discussion forums provide an open workspace allowing users to share information, exchange ideas, address problems, and form groups. These forums feature multimodal posts, and analyzing them requires a framework that can integrate heterogeneous information extracted from the posts, i.e. text, visual content, and information about user interactions with the online platform and each other. In this paper, we develop a generic framework that can be trained to identify communication behavior and patterns in relation to an entity of interest, be it user, image, or text in internet forums. As a case study, we use the analysis of violent online political extremism content, which has been a major challenge for domain experts. We demonstrate the generalizability and flexibility of our framework in predicting relational information between multimodal entities by conducting extensive experimentation around four practical use cases.
Hezbollah’s “Virtual Entrepreneurs” - How Hezbollah Is Using The Internet To Incite Violence In Israel
2019 Shkolnik, M. and Corbeil, A. Article
In recent years, Hezbollah has used social media to recruit Israeli Arabs and West Bank-based Palestinians to attack Israeli targets. A recent innovation in terrorist tactics has given rise to “virtual entrepreneurs,” which to date have been largely associated with the Islamic State’s online recruitment efforts. Hezbollah’s virtual planners, similar to those in the Islamic State, use social media to establish contact with potential recruits before transitioning to more encrypted communications platforms, transferring funds, and issuing instructions to form cells, conduct surveillance, and carry out terrorist attacks. Online recruitment presents a low-cost option that offers plausible deniability for Hezbollah. While every virtual plot led by Hezbollah that targeted Israel has been foiled thus far, Israeli authorities spend time and resources disrupting these schemes at the expense of other more pressing threats. By digitally recruiting Palestinians to attack Israel, Hezbollah and its patron Iran are seeking to cultivate a new front against Israel amid rising regional hostilities.
What Do Closed Source Data Tell Us About Lone Actor Terrorist Behavior? A Research Note
2019 Gill, P., Corner, E., McKee, A., Hitchen, P. and Betley, P. Article
This article contributes to the growing body of knowledge on lone-actor terrorism with the incorporation of closed-source data. The analyses presented investigate the antecedent behaviors of U.K.-based lone-actor terrorists leading up to their planning or conducting a terrorist event. The results suggest that prior to their attack or arrest the vast majority of lone-actor terrorists each demonstrated elements concerning (a) their grievance, (b) an escalation in their intent to act, (c) gaining capability—both psychologically and technically—and (d) attack planning. The results also disaggregate our understanding of lone-actor terrorists in two ways. First, we compare the behaviors of the jihadist actors to those of the extreme-right. Second, we visualize Borum’s (2012) continuums of loneness, direction, and motivation. Collectively the results provide insight into the threat assessment and management of potential lone actors.
Online news media and propaganda influence on radicalized individuals: Findings from interviews with Islamist prisoners and former Islamists
2019 Neumann, K. and Baugut, P. Article
This study is the first to explore the twin influences of online propaganda and news media on Islamists. We conducted 44 in-depth interviews with cognitively and behaviorally radicalized Islamist prisoners in Austria as well as former Islamists in Germany and Austria. We found that online propaganda and news media had interdependent influences on Islamists’ rejections of non-Muslims and Western politics, as well as on their willingness to use violence and commit suicide. Cognitively radicalized individuals were influenced by propaganda that blamed non-Muslims for opposing Islam; this was reinforced by online mainstream news reports of right-wing populism and extremism that propagandists selectively distributed via social media. Among behaviorally radicalized individuals, exposure to propaganda and news reports depicting Muslim war victims contributed to the radicalized individuals’ willingness to use violence. Moreover, propaganda and media reports that extensively personalized perpetrators of violence strengthened radicalized individuals’ motivations to imitate the use of violence.
Report of the Special Rapporteur on the promotion and protection of the freedom of opinion and expression
2019 United Nations Report
The Secretary-General has the honour to transmit to the General Assembly the report prepared by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, submitted in accordance with Human Rights Council resolution 34/18. In this report, the Special Rapporteur evaluates the human rights law that applies to the regulation of online ‘hate speech’.
Islamic State Propaganda and Attacks: How Are They Connected?
2019 Rosenblatt, N., Winter, C. and Basra, R. Article
What is the relationship between the words and deeds of a terrorist group? Despite frequent speculation in media and policy circles, few studies have tested this relationship. This study aims to verify a potential correlation between the volume of propaganda produced by Islamic State (IS)—including statements by the group’s leadership—and the number of attacks carried out in its name. We examine this issue by comparing two datasets: one of all official propaganda produced by the Islamic State in 2016, and another of the completed, failed, and disrupted plots carried out by the group and its supporters in Europe in the same year. We find no strong and predictable correlation between the volume of propaganda Islamic State produces and the number of attacks the group and its supporters carry out. There is no regular rise in IS propaganda output before or after its attacks. In particular, there is no regular rise in attacks after leadership statements. However, the results may have identified differences in how IS central and regional media offices respond to attacks. The findings suggest that rather than merely looking at the volume of IS propaganda, it is necessary to also examine its content. As such, the deliberately broad premise of this study is intended as the first in a series of papers examining the potential relationship between IS propaganda and IS attacks.
A Philosophical and Historical Analysis of “Generation Identity”: Fascism, Online Media, and the European New Right
2019 Richards, I. Article
This article analyzes ideological and organizational characteristics of the pan-European youth movement, “Generation Identity” (GI), through a philosophical and historical lens. With a synoptic perspective on existing and original research, it outlines an analysis of key GI literature as well as its ideological influences, activist behavior, and media strategies. This research reveals that, like other twentieth and twenty-first century examples of neo-fascism, the movement is syncretic and attempts to legitimize its political aims through reference to historical quasi- and proto-fascist cases, in combination with popular left and right-wing political ideals. A reflection on GI’s activist behavior, on the other hand, demonstrates that the movement is relatively unique in the field of current far-right politics; particularly in the extent to which it draws practical inspiration from the tactics and propagandizing strategies of contemporary left-wing movements. GI’s online presence, including its leaders’ promotion of gamification, also illustrates its distinctive appeal to young, relatively affluent, countercultural and digitally literate populations. Finally, while in many respects GI is characteristic of the “European New Right” (ENR), the analysis finds that its spokespersons’ various promotion of capitalism and commodification, including through their advocacy of international trade and sale of merchandise, diverges from the anti-capitalist philosophizing of contemporary ENR thinkers.
Personal Statement from James Watkins to Committee on Homeland Security 8Chan enquiry
2019 Watkins, J. Article
Chairman Thompson and Members of the Committee: Today, James Watkins appears for a congressional deposition addressing your Committee’s concern over social media companies’ efforts to address online extremist content. We have prepared this statement in an effort to assist the Committee in understanding how careful and responsible a platform 8chan is. While Mr. Watkins is empathetic to the victims of mass shootings in America, 8chan has never tolerated illegal speech and has a consistent track record of working with law enforcement agencies when appropriate. After the current disruption of service, 8chan has taken steps to improve its ability to identify illegal content and to act more quickly in doing so. To these ends, it hopes to be of continued assistance to law enforcement officers in times of need. Mindful of tragedies America has faced, Mr. Watkins also believes in the exceptional promise of the First Amendment. 8chan is the only platform featuring a full commitment to free speech—a one-of-a-kind discussion board where anonymous users shared tactics about French democracy protests, how to circumvent censorship in repressive countries, and the best way to beat a classic video game. In this hodgepodge of chaotic discussion, down-home recipes are traded, sorrows lifted, and a small minority of users post hateful and ignorant items. As Justice Hugo Black once noted, the “First Amendment provides the only kind of security system that can preserve a free government – one that leaves the way wide open for people to favor, discuss, advocate, or incite causes and doctrines however obnoxious and antagonistic such views may be to the rest of us.” It is with this in mind that Mr. Watkins is proud to host the only platform compatible with the First Amendment.
Elites and foreign actors among the alt-right: The Gab social media platform
2019 Zhou, Y., Dredze, M., Broniatowski, D. A. and Adler, W. D. Article
Content regulation and censorship of social media platforms is increasingly discussed by governments and the platforms themselves. To date, there has been little data-driven analysis of the effects of regulated content deemed inappropriate on online user behavior. We therefore compared Twitter — a popular social media platform that occasionally removes content in violation of its Terms of Service — to Gab — a platform that markets itself as completely unregulated. Launched in mid-2016, Gab is, in practice, dominated by individuals who associate with the “alt right” political movement in the United States. Despite its billing as “The Free Speech Social Network,” Gab users display more extreme social hierarchy and elitism when compared to Twitter. Although the framing of the site welcomes all people, Gab users’ content is more homogeneous, preferentially sharing material from sites traditionally associated with the extremes of American political discourse, especially the far right. Furthermore, many of these sites are associated with state-sponsored propaganda from foreign governments. Finally, we discovered a significant presence of German language posts on Gab, with several topics focusing on German domestic politics, yet sharing significant amounts of content from U.S. and Russian sources. These results indicate possible emergent linkages between domestic politics in European and American far right political movements. Implications for regulation of social media platforms are discussed.
The Internet Police
2019 Breinholt, J. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Unraveling The Impact Of Social Media On Extremism
2019 Susarla, A. Report
Social media has been remarkably effective in bringing together groups of individuals at a scale and speed unthinkable just a few years ago. While there is a positive aspect of digital activism in raising awareness and mobilizing for equitable societal outcomes, it is equally true that social media has a dark side in enabling political polarization and radicalization. This paper highlights that algorithmic bias and algorithmic manipulation accentuate these developments. We review some of the key technological aspects of social media and its impact on society, while also outlining remedies and implications for regulation. For the purpose of this paper we will define a digital platform as a technology intermediary that enables interaction between groups of users (such as Amazon or Google) and a social media platform as a digital platform for social media.
Disinformation In Terrorist Content Online
2019 Jankowicz, N. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
EU Policy - Preventing The Dissemination Of Terrorist Content Online
2019 Krasenberg, J. Report
The use of the internet for recruitment and the dissemination of violent extremist materials raises significant policy challenges for the European Union (EU), its Member States, and content sharing platforms (CSPs) alike. This problem requires – through the eyes of the EU – a combination of legislative, non-legislative, and voluntary measures based on collaboration between authorities and CSPs with respect for fundamental (human) rights.
Social Media, Terrorist Content Prohibitions, And The Rule Of Law
2019 MacDonald, S. Report
The importance of the rule of law to an effective counterterrorism strategy is widely accepted. Adherence to rule of law values protects both the legitimacy and moral authority of counterterrorism policies and legislation. This paper focuses on two specific rule of law values: minimalism and certainty. Minimalism is concerned with issues of scope. Laws should be as narrowly drawn as possible in order to preserve individuals’ autonomy and freedom to choose, to the fullest extent possible. Certainty is concerned with issues of clarity. Laws should be worded as clearly as possible so that individuals are aware of their responsibilities and able to make informed choices about their actions. Narrowly, clearly drawn laws also limit the discretion vested in officials, thus providing protection against inconsistent or inappropriate decision-making by those tasked with implementing the law.
The rule of law is traditionally associated with public institutions, not private technology companies. In the contemporary realm of counterterrorism, however, a steadfast public private distinction is difficult to maintain. Indeed, many have urged the importance of public-private partnership in responding to terrorists’ use of the internet. One specific issue that has generated much discussion has been social media companies’ regulation of extremist content on their platforms. Facebook’s Community Standards, the Twitter Rules and YouTube’s Community Guidelines all expressly prohibit content that promotes terrorism. Most of the discussion of these prohibitions has focused on the speed with which they are enforced, particularly following the attacks in Christchurch, New Zealand.2 This paper seeks instead to evaluate the prohibitions from the different, but equally important, perspective of the rule of law values of minimalism and certainty.
To inform the discussion, the paper draws on the debates that have surrounded the U.K. ‘Encouragement of Terrorism’ criminal offence. Created by the Terrorism Act 2006, and recently amended by the Counter-Terrorism and Border Security Act 2019, this offence has proved controversial from its inception for two principal reasons. First, the offence expressly encompasses both direct and indirect encouragement. Critics have argued that the concept of indirect encouragement is too nebulous and gives the offence too wide a scope. Second, the framing of the offence focuses not on the purpose of the speaker, but on whether the potential effect of the statement is to encourage terrorism.
This too, it has been argued, gives the offence too wide a scope. In terms of the social media companies’ prohibitions on terrorism-promoting content, this paper accordingly asks two questions. Do the prohibitions encompass indirect, as well as direct, encouragement? And, for the prohibitions to apply, must the encouragement of terrorism have been the purpose and/or the likely effect of the relevant content? The answer to neither question is clear from the wording of the prohibitions themselves. The paper will argue that, in terms of the values of minimalism and certainty, it is important that the answers to both questions are made explicit. It will also suggest how both questions should be answered and provide a proposed reformulation of the social media companies’ prohibitions on terrorism-promoting content.
Lessons from the Information War: Applying Effective Technological Solutions to the Problems of Online Disinformation and Propaganda
2019 Maddox, J. D. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Counterterrorism is a Public Function: Resetting the Balance Between Public and Private Sectors in Preventing Terrorist use of the Internet
2019 Guittard, A. Report
In the closing scene of The Social Network, one of Mark Zuckerberg’s lawyers marveled at Facebook’s global expansion, asking “In Bosnia, they don’t have roads, but they have Facebook?” While the statement (and much of the film) was factually incorrect, it captured the “move fast and break things” mentality of companies like Facebook as they revolutionized the way people around the world communicate. Despite its benefits, this revolutionary shift in communications has posed several public policy challenges, from election integrity to the erosion of local journalism to terrorism. As someone who has worked in counterterrorism for nearly a decade, first in government and now from the private sector, I’ve seen this evolution firsthand. To date, most efforts to deny terrorists the benefits of a free and open internet are voluntary and industry-led. These include the Global Internet Forum to Counter Terrorism and its Hash Sharing Consortium, the expansion of dedicated counterterrorism teams at Facebook and Google and the launch of initiatives such as YouTube Creators for Change. These are positive and socially responsible initiatives that should be encouraged to grow.
However, the U.S. government – both its political leadership and its CT experts – should not take the convenient route of outsourcing difficult public policy issues to private companies. These issues should be addressed legislatively and in partnership with industry. Curbing terrorists’ use of the internet begs important social questions about the limits of free speech, the definition of terrorism, and national sovereignty over the internet at a time when the U.S. public is increasingly skeptical of the ability of internet companies to act in the public interest.
By examining similar experiences balancing security with technological advancement, CT policy makers will see that cooperation with the private sector is often contentious at first, with industry eschewing new regulation. This paper will examine three such cases: the restriction of radio in WWI, the introduction of counter-money laundering requirements on banks and the introduction of airline passenger screening. These cases show when the government acts within its Constitutional authorities to set clear expectations and work with industry in good faith, industry, government and the public benefit.
Fighting Hate Speech And Terrorist Propaganda On Social Media In Germany
2019 Ritzmann, A. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Three Constitutional Thickets: Why Regulating Online Violent Extremism is Hard
2019 Keller, D. Report
In May of 2019, two months after an attacker horrified the world by livestreaming his massacre of worshippers in two New Zealand mosques, leaders of Internet platforms and governments around the world convened in Paris to formulate their response. In the resulting agreement, known as the Christchurch Call, they committed “to eliminate terrorist and violent extremist content online,” while simultaneously protecting freedom of expression. The exact parameters of the commitment, and the means to balance its two goals, were left vague – unsurprising in a document embraced by signatories from such divergent legal cultures as Canada, Indonesia, and Senegal. The U.S. did not sign, though it endorsed similar language through the G7 as recently as 2018, and will be asked to do so again in 2019.

In this paper, I review U.S. constitutional considerations for lawmakers seeking to balance terrorist threats against free expression online. The point is not to advocate for any particular rule. In particular, I do not seek to answer moral or norms-based questions about what content Internet platforms should take down. I do, however, note the serious tensions between calls for platforms to remove horrific but First Amendment-protected extremist content – a category that probably includes the Christchurch shooter’s video – and calls for them to function as “public squares” by leaving up any speech the First Amendment permits. To lay out the issue, I draw on analysis developed at greater length in previous publications. This analysis concerns large user-facing platforms like Facebook and Google, and the word “platform” as used here refers to those large companies, not their smaller counterparts.

The paper’s first section covers territory relatively familiar to U.S. lawyers concerning the speech Congress can limit under anti-terrorism laws. This law is well-summarized elsewhere, so my discussion is quite brief. The second section explores a less widely understood issue: Congress’s power to hold Internet platforms liable for their users’ speech. The third section ventures farthest afield, reviewing constitutional implications when platforms themselves set the speech rules, prohibiting legal speech under their Terms of Service (TOS). I will conclude that paths forward for U.S. lawmakers who want to both restrict violent extremist content and protect free expression are rocky, and that non-U.S. laws are likely to be primary drivers of platform behavior in this area in the coming years.
Leveraging CDA 230 to Counter Online Extremism
2019 Bridy, A. M. Report
Current events make it plain that social media platforms have become vectors for the global spread of extremism, including the most virulent forms of racial and religious hatred. In October 2018, a white supremacist murdered 11 people at a synagogue in Pittsburgh, Pennsylvania. The shooter was an active user of the far-right social network Gab, on which he had earlier complained that a refugee-aid organization linked to the synagogue was importing foreign “invaders” to fight a “war against #WhitePeople.” Journalists searching the shooter’s social media accounts for a motive discovered a trail of anti-Semitic posts, including notorious Jewish conspiracy memes widely shared within the far-right’s online ecosystem. In March 2019, another white supremacist massacred 51 people at two mosques in Christchurch, New Zealand. Minutes before the attack, he shared links on 8chan to his Facebook page and a rambling racist manifesto.
Then he live-streamed the carnage to Facebook, which didn’t intervene in time to keep the footage from going viral on YouTube and elsewhere. To say that extremist content online caused the Pittsburgh and Christchurch tragedies would be a gross oversimplification. At the same time, however, we must reckon with the fact that both shooters were enmeshed in extremist online communities whose members have cultivated expertise in using social media to maximize the reach of their messages. YouTube’s Chief Product Officer described the Christchurch massacre as “a tragedy…designed for the purpose of going viral.”
As offline violence with demonstrable links to online extremism escalates, regulators have made it clear that they expect the world’s largest social media platforms to more actively police harmful online speech, including that of terrorist organizations and organized hate groups. In the aftermath of the Christchurch shooting, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron urged governments and tech companies to join together in the Christchurch Call, a “commitment…to eliminate terrorist and violent extremist content online.” As their part of the bargain, Facebook, YouTube, Twitter, and several other tech companies agreed to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”
A Plan for Preventing and Countering Terrorist and Violent Extremist Exploitation of Information and Communications Technology in America
2019 Alexander, A. Report
Policymakers in the United States know that terrorists and violent extremists exploit information and communications technologies (ICTs), but the government still struggles to prevent and counter these threats. Although the U.S. does not face these challenges alone, the strategies and policies emphasized by some of its greatest allies are not viable or suitable frameworks for domestic policymakers. Since these threats persist, however, the U.S. government must develop a cohesive strategy to prevent and counter terrorist and violent extremist exploitation of ICTs. The approach should rest on the pillars of pragmatism, proportionality, and respect for the rule of law, and aim to disrupt terrorist and violent extremist networks in the digital sphere. To pursue this objective, the following brief calls for political leaders to create an interagency working group to formalize leadership and conduct a comprehensive assessment of terrorist and violent extremist abuse of ICTs. The evaluation must also weigh the costs and benefits associated with responses to these threats. Then, government officials should work to enhance the capability and coordination of government-led efforts, pursue partnerships with non-governmental entities, and facilitate productive engagements with the technology industry. In short, this approach would allow the government to use legislation, redress, and strategic outreach to empower more players to responsibly prevent and counter terrorist and violent extremist exploitation of ICTs.