Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, where you can access the material.

We will continue to add material as it becomes available, with the aim of making this the most comprehensive online library in the field.

If you have any material you think belongs in the Library – whether your own or another author’s – please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility and we thus welcome contributions in all languages.

Full Listing

Title | Year | Author | Type | Links
Branding A Caliphate In Decline: The Islamic State’s Video Output (2015-2018)
2019 Nanninga, P. Report
Although video releases have been central to the Islamic State’s efforts to represent itself to its audiences, an extensive quantitative and qualitative study of these sources over a longer period of time is still lacking. This paper therefore provides an overview and analysis of the entire corpus of official videos released by the Islamic State between 1 July 2015 and 30 June 2018. It particularly focuses on how the Islamic State’s decline in Iraq and Syria during this period is reflected in its video output and how the group has responded to its setbacks. The paper demonstrates a strong correlation between the group’s mounting troubles and its video production: the number of videos decreased dramatically and their content reflects the Islamic State’s (re)transformation from a territory-based ‘state’ to an insurgent group relying on guerrilla tactics and terrorist attacks. Nevertheless, this paper argues that the Islamic State’s multi-faceted response to its setbacks might ensure the group’s appeal to its target audience in the years to come.
Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance
2014 United Nations Human Rights Council Report
The unprecedented, rapid development of new communication and information technologies, such as the Internet and social media, has enabled wider dissemination of racist and xenophobic content that incites racial hatred and violence. In response, States, international and regional organizations, civil society and the private sector have undertaken a variety of legal and policy initiatives. In the present report, the Special Rapporteur examines the context, key trends and the manifestations of racism on the Internet and social media, and provides an overview of the legal and policy frameworks and the measures taken at international, regional and national levels, as well as some of the regulatory norms adopted by Internet and social network providers. He presents examples of measures taken to respond to the use of the Internet and social media to propagate racism, hatred, xenophobia and related intolerance, while highlighting the overall positive contribution of the Internet and social media as an effective tool for combating racism, discrimination, xenophobia and related intolerance.
Media And Information Literacy: Reinforcing Human Rights, Countering Radicalization And Extremism
2016 Singh, J., Kerr, P. and Hamburger, E. Report
2016 is the first year of the implementation of the sustainable development goals. A renewed emphasis on a Human Rights-Based Approach to all forms of development is apt and timely. While migration and peace building as development challenges are not new to humankind, the world is faced with ongoing wars and conflicts as well as new forms of violent extremism triggering levels of migration that rival only those that occurred during the Second World War. As a negative and undesirable consequence, all over the world there has been a sudden rise in incidents of individuals using hate speech against migrants, forced migrants and minority communities or social groups, blaming them for their nations’ struggles. The words used in politics, in the news, in social media, in research studies, national reports and general literature or debate about these human phenomena have consequences. History has shown that rhetorical excesses and unbalanced or biased historical accounts of certain events in relation to any ethnic group, place, culture or religion can give rise to a climate of prejudice, discrimination, and violence. It is these prejudices, discrimination and violence that often compromise individual rights or equal rights for all – the right to cultural and religious expression, the right to security and peace, the right to freedom of expression, the right to education, the right to information, the right to associate or connect, among others. Here, Article 1 of the Universal Declaration of Human Rights – “All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood” – is breached. It is this reasoning and conscience that the acquisition of media and information literacy (MIL) competencies can stir in all peoples. Furthermore, the ideological beliefs and dogmas that we firmly hold emanate from our socialization. Socialization is embedded in information and communication and is increasingly taking place through technological platforms, media and all forms of learning environments. Taken together with the use of social media by extremist and violent organizations to radicalize and recruit, especially among the young, this makes the relevance of MIL in enabling citizens to challenge their own beliefs and critically engage with these topics – and thus the integration of MIL in formal, non-formal and informal settings – all the more urgent. A rights-based approach to media and information literacy and to sustainable development – including countering hate, radicalization and violent extremism – can play a crucial role in shaping perceptions of the “other” by encouraging reporting, research and analysis as well as the design and implementation of development interventions that are objective, evidence-based, inclusive, reliable, ethical and accurate, and by encouraging individuals to take sound actions based on their rights and the rights of others.
Beyond The "Big Three": Alternative Platforms For Online Hate Speech
2019 European Union’s Rights, Equality and Citizenship Programme (2014-2020) Report
In recent years, most international studies on hate speech online have focused on the three platforms traditionally considered the most influential: Facebook, YouTube and Twitter. However, their predominance as the biggest international social networks is no longer uncontested. Other networks are on the rise and young users especially are losing interest in the ‘old’ platforms. In April 2019, Instagram had more active accounts globally than Twitter and came fifth in terms of global page impressions, after Facebook, Pinterest, Twitter and YouTube. Additionally, recent studies into the social media use of minors and young adults showed that Instagram is more important than Facebook to users younger than 30 in several countries. Since hate groups and extremists move their propaganda to the channels where they can reach their target audience most easily, it is important to take those changes in the social media landscape into consideration. Facebook, Twitter, YouTube and Instagram are all parties to the Code of Conduct on countering illegal hate speech online, established by the European Commission in 2016, agreeing to take stronger and swifter action against hate speech on their platforms. Google+ also joined the Code of Conduct in 2018. However, as the network was shut down in April 2019, it is no longer included in this analysis. As hate speech moderation increases on the major social media platforms, hate groups and extremists turn to other networks where community guidelines against hate speech are less strictly enforced. Some of those alternative platforms, like VK.com or Gab.ai, have acquired a broad international audience and are considered ‘safe havens’ by far-right or right-wing extremist activists. Other platforms have a more local audience or are only relevant in specific countries. This analysis offers an overview of the most prevalent social media platforms and websites used for disseminating hate speech in the countries of the sCAN project partners.
A Study Of Outlinks Contained In Tweets Mentioning Rumiyah
2019 Macdonald, S., Grinnell, D., Kinzel, A. and Lorenzo-Dus, N. Report
This paper focuses on the attempts by Daesh (also known as the Islamic State of Iraq and Syria, ISIS) to use Twitter to disseminate its online magazine, Rumiyah. It examines a data set of 11,520 tweets mentioning Rumiyah that contained an outlink, to evaluate the success of Daesh’s attempts to use Twitter as a gateway to issues of its magazine.
Following The Whack-a-Mole: Britain First’s Visual Strategy From Facebook To Gab
2019 Nouri, L., Lorenzo-Dus, N. and Watkin, A.L. Report
The focus of this paper is on the extremist group Britain First. As such, it does not explore online terrorist activity but rather examines how a group regarded as extremist is subject to online sanctions. The removal of Britain First from Facebook in March 2018 successfully disrupted the group’s online activity, forcing it to start anew on Gab, a different and considerably smaller social media platform. The removal also resulted in the group having to seek new online followers from a much smaller, less diverse recruitment pool. This paper demonstrates the further impact of the group’s platform migration on its online strategy – particularly on its choice of images and the engagement levels generated through them. The paper puts forward a number of key recommendations, most importantly that social media companies should continue to censor and remove hateful content.
Mapping The Jihadist Information Ecosystem
2019 Fisher, A., Prucha, N. and Winterbotham, E. Report
Online disruption efforts generally aim to reduce the availability of jihadist content. Yet, the speed and agility of jihadist movements online – a multiplatform approach which a co-author of this paper has previously described as a ‘swarmcast’ – has allowed groups to evolve in response to disruption efforts and find new ways to distribute content. This paper describes a model of the flow of users between social media platforms and surface web pages to access jihadist content, using data obtained through innovative collection methods. The model provides an approximate picture of the jihadist information ecosystem and how multiple platforms are used to disseminate content.
The Evolution of Online Violent Extremism In Indonesia And The Philippines
2019 Nuraniyah, N. Report
Pro-Daesh (also known as the Islamic State of Iraq and Syria, ISIS) groups in Indonesia and the Philippines have come to rely on social media for propaganda, fundraising and disseminating instructional material, but in different ways. While Indonesian online extremism has deep roots, with local networks exploiting various online platforms over the past decade, extremist social media in the Philippines only really took off as a consequence of the May 2017 siege of the southern Philippine city of Marawi by pro-Daesh militants. This paper outlines the evolving use of online platforms by pro-Daesh groups in both countries and how this has enabled extremists to develop and strengthen their networks. Social media and encrypted chat apps have shaped the development of extremism in Indonesia and the Philippines in four main areas: branding, recruitment, fundraising, and the increasing role of women. For groups in the Philippines, direct communication with Daesh headquarters via Telegram facilitated their rebranding as the face of Daesh in Southeast Asia, more than just a local insurgency group. In both countries, social media facilitates vertical and horizontal recruitment, but not lone-actor terrorism. Extremist use of the internet for fundraising is still rudimentary – sophisticated financial cybercrime is still virtually non-existent. In all these aspects, women’s roles have become much more visible. For a long time, women had been barred from accessing extremist public spaces, let alone taking an active role as combatants. But through social media, women are now able to play more active roles as propagandists, recruiters, financiers, and even suicide bombers. This paper briefly discusses government responses to online extremism, noting that results have been mixed in Indonesia and the Philippines. Indonesian authorities have undoubtedly been the more successful of the two – both in terms of law enforcement and engagement with the tech sector – but their counter-terrorism police now face the problem of how to use their powers judiciously in a democratic manner. The Philippines, meanwhile, is still at the starting line in terms of dealing with online extremism, with the military more accustomed to removing threats than trying to understand them.
Shedding Light On Terrorist And Extremist Content Removal
2019 Vegt, I.V.D., Gill, P., Macdonald, S. and Kleinberg, B. Report
Social media and tech companies face the challenge of identifying and removing terrorist and extremist content from their platforms. This paper presents the findings of a series of interviews with Global Internet Forum to Counter Terrorism (GIFCT) partner companies and law enforcement Internet Referral Units (IRUs). It offers a unique view on current practices and challenges regarding content removal, focusing particularly on human-based and automated approaches and the integration of the two.
Caught In The Net: The Impact Of "Extremist" Speech Regulations On Human Rights Content
2019 Jaloud, A. R. A., Al Khatib, K., Deutch, J., Kayyali, D. and York, J. C. Report
Social media companies have long struggled with what to do about extremist content on their platforms. While most companies include provisions about “extremist” content in their community standards, until recently such content was often vaguely defined, giving policymakers and content moderators wide latitude in determining what to remove and what to allow. Unfortunately, companies have responded with overbroad and vague policies and practices that have led to mistakes at scale that are decimating human rights content.
The Conflict In Jammu And Kashmir And The Convergence Of Technology And Terrorism
2019 Taneja, K. and Shah, K. M. Report
This paper provides recommendations for what governments and social media companies can do in the context of Jammu and Kashmir’s developing online theatre of both potential radicalisation and recruitment.
Social Media and Terrorist Financing: What are the Vulnerabilities and How Could Public and Private Sectors Collaborate Better?
2019 Keatinge, T. and Keen, F. Report
• Social media companies should recognise the political importance of counterterrorist financing (CTF) by explicitly reflecting the priorities of the UN Security Council and the Financial Action Task Force (FATF) in their policies, strategies and transparency reports.
• Furthermore, social media companies identified as being at high risk of exploitation should update their terms of service and community standards to explicitly reference and outlaw terrorist financing (consistent with universally applicable international law and standards such as those of the FATF) and actions that contravene related UN Security Council resolutions and sanctions.
• Social media companies should clearly demonstrate that they understand and apply appropriate sanctions designations; at the same time, policymakers should ensure that sanctions designations include, where possible, information such as email addresses, IP addresses and social media handles that can support sanctions implementation by social media companies. The more granular the information provided by governments on designated entities, the more efficiently the private sector can comply with sanctions designations.
• Social media companies should more tightly control functionality, such as big-brand advertising and Super Chat payments, to ensure that raising terrorist funding through social media videos is disabled.
• Researchers and policymakers should avoid generalisations and make a clear distinction between forms of social media and the various terrorist-financing vulnerabilities that they pose, recognising the different types of platforms available, and the varied ways in which terrorist financiers could abuse them.
• Policymakers should encourage both inter-agency and cross-border collaboration on the threat of using social media for terrorist financing, ensuring that agencies involved are equipped with necessary social media investigative capabilities.
• International law enforcement agencies such as Interpol and Europol should facilitate the development of new investigation and prosecution standard operating procedures for engaging with operators of servers and cloud services based in overseas jurisdictions to ensure that necessary evidence can be gathered in a timely fashion. This would also encourage an internationally harmonised approach to using social media as financial intelligence.
• Policymakers should encourage the building of new, and leveraging of existing, public–private partnerships to ensure social media company CTF efforts are informed and effective.
Briefing Note ‘El Rubio’ Lives: The Challenge Of Arabic Language Extremist Content On Social Media Platforms
2019 Ayad, M. Report
This briefing outlines research uncovering thousands of users viewing Arabic-language extremist content across mainstream social platforms, including Facebook and YouTube. The findings emerged as world leaders, policymakers, and technology companies gathered in Jordan earlier this month to discuss counter-terrorism and extremism as part of the Aqaba Process and the convening of the Global Internet Forum to Counter Terrorism (GIFCT).

Researchers identified:

• More than 77 pieces of Arabic content promoting influential Islamist extremists from al-Qaeda and the Islamic State of Iraq and Syria (ISIS), as well as affiliates of both organizations and precursors to both groups, on both YouTube and Facebook;
• More than 275,000 users have watched the videos on both Facebook and YouTube;
• Evidence of Islamist extremist supporters sharing content between sites, spreading the content further than their primary YouTube channels and/or Facebook pages and groups; approximately 138 individual users have shared links from YouTube to their networks on Facebook.
Cyber Swarming, Memetic Warfare and Viral Insurgency: How Domestic Militants Organize on Memes to Incite Violent Insurrection and Terror Against Government and Law Enforcement
2020 Goldenberg, A. and Finkelstein, J. Report
The Report you are about to read, “Cyber Swarming: Memetic Warfare and Viral Insurgency,” represents a breakthrough case study in the capacity to identify cyber swarms and viral insurgencies in nearly real time as they are developing in plain sight. Drawing on an analysis of over 100 million social media comments, the authors demonstrate how the “boogaloo meme,” “a joke for some, acts as a violent meme that circulates instructions for a violent, viral insurgency for others.” Using it, like turning off the transponders on 9/11, enables the extremists to hide in plain sight, disappearing into the clutter of innocent messages and other data points. It should be of particular concern, the authors note, for the military, for whom “the meme’s emphasis on military language and culture poses a special risk.”

Because most of law enforcement and the military remain ignorant of “memetic warfare,” the authors demonstrate, extremists who employ it “possess a distinct advantage over government officials and law enforcement.” As with the 9/11 terrorists, “they already realize that they are at war. Public servants cannot afford to remain ignorant of this subject because as sites, followers, and activists grow in number, memes can reach a critical threshold and tipping point, beyond which they can suddenly saturate and mainstream across entire cultures.” This Report is at once an urgent call to recognize an emerging threat and a prescription for how to counter it. As such, it offers that rarest of opportunities: the chance to stop history from repeating itself.
Counterterrorism is a Public Function: Resetting the Balance Between Public and Private Sectors in Preventing Terrorist Use of the Internet
2019 Guittard, A. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Daesh Propaganda, Before and After its Collapse
2019 Winter, C. Report
This report compares two archives of official Daesh media that were compiled four years apart. It explores the nuances of the group’s worldview and tracks how external and internal situational exigencies impacted the group during its formative years as a caliphate. It finds that the organisation’s media infrastructure was about one-tenth as productive in mid-2019 as it was in mid-2015. The data also show that it was spending more time covering the pursuits of its global network in 2019 than in 2015. Finally, the data point towards a substantial thematic rearrangement in the organisation’s overarching propaganda narrative that manifested in it shifting its story away from millenarian utopianism and towards military denialism. In sum, the data indicate that by 2019 Daesh’s propagandists were far less productive and their aggregate product was more international and less focused on civilian issues. This shift points towards a new phase in the group’s political marketing trajectory, one focused more on survival than on expansion.
A Plan for Preventing and Countering Terrorist and Violent Extremist Exploitation of Information and Communications Technology in America
2019 Alexander, A. Report
Policymakers in the United States know that terrorists and violent extremists exploit information and communications technologies (ICTs), but the government still struggles to prevent and counter these threats. Although the U.S. does not face these challenges alone, the strategies and policies emphasized by some of its greatest allies are not viable or suitable frameworks for domestic policymakers. Since these threats persist, however, the U.S. government must develop a cohesive strategy to prevent and counter terrorist and violent extremist exploitation of ICTs. The approach should rest on the pillars of pragmatism, proportionality, and respect for the rule of law, and aim to disrupt terrorist and violent extremist networks in the digital sphere. To pursue this objective, the following brief calls for political leaders to create an interagency working group to formalize leadership and conduct a comprehensive assessment of terrorist and violent extremist abuse of ICTs. The evaluation must also weigh the costs and benefits associated with responses to these threats. Then, government officials should work to enhance the capability and coordination of government-led efforts, pursue partnerships with non-governmental entities, and facilitate productive engagements with the technology industry. In short, this approach would allow the government to use legislation, redress, and strategic outreach to empower more players to responsibly prevent and counter terrorist and violent extremist exploitation of ICTs.
Leveraging CDA 230 to Counter Online Extremism
2019 Bridy, A. M. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Three Constitutional Thickets: Why Regulating Online Violent Extremism is Hard
2019 Keller, D. Report
In this paper, I review U.S. constitutional considerations for lawmakers seeking to balance terrorist threats against free expression online. The point is not to advocate for any particular rule. In particular, I do not seek to answer moral or norms-based questions about what content Internet platforms should take down. I do, however, note the serious tensions between calls for platforms to remove horrific but First Amendment-protected extremist content – a category that probably includes the Christchurch shooter’s video – and calls for them to function as “public squares” by leaving up any speech the First Amendment permits. To lay out the issue, I draw on analysis developed at greater length in previous publications. This analysis concerns large user-facing platforms like Facebook and Google, and the word “platform” as used here refers to those large companies, not their smaller counterparts.
Lessons from the Information War: Applying Effective Technological Solutions to the Problems of Online Disinformation and Propaganda
2019 Maddox, J. D. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.