Welcome to VOX-Pol’s online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.
Our searchable database contains material in a variety of different formats including downloadable PDFs, videos, and audio files comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.
All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.
We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.
If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at email@example.com and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.
Radikal Online – The Internet and the Radicalisation of Young People: A Meta-Analysis of the Research Field
…approach to the research field of online radicalisation in the German research discourse, supplemented by re...
Countering Online Propaganda and Extremism: The Dark Side of Digital Diplomacy
Digital diplomacy is now part of the regular conduct of ...
More Support Needed for Smaller Technology Platforms to Counter Terrorist Content
|The present Trends Alert was prepared by CTED in accordance with Security Council resolution 2395 (2017), which reaffirms the essential role of CTED within the United Nations to identify and assess issues, trends and developments relating to the implementation of Council resolutions 1373 (2001), 1624 (2005) and 2178 (2014) and other relevant resolutions. CTED Trends Alerts are designed to increase awareness, within the Security Council Counter-Terrorism Committee and among United Nations agencies and policymakers, of emerging trends identified through CTED’s engagement with Member States on their implementation of the relevant Council resolutions. The Alerts also include relevant evidence-based research conducted by members of the CTED Global Research Network (GRN) and other researchers.|
OK Google, Show Me Extremism: Analysis of YouTube’s Extremist Video Takedown Policy and Counter-Narrative Program
|2018||Counter Extremism Project||Report|
|ISIS and other extremist groups, as well as their online supporters, have continued to exploit and misuse Google’s platforms to disseminate propaganda material, despite the company having repeatedly announced increased measures to combat online extremism. On July 21, 2017, Google announced the launch of one such measure: its Redirect Method Pilot Program. The program is intended to target individuals searching for ISIS-related content on YouTube and direct them to counter-narrative videos, which try to undermine the messaging of extremist groups. The Counter Extremism Project (CEP) monitors and tracks ISIS and other terrorist organizations’ material on YouTube. Between August 2 and August 3, 2018, CEP reviewed a total of 649 YouTube videos for extremist and counter-narrative content. The result of CEP’s searches highlights the extent of the enduring problem of terrorist content on YouTube and undermines claims touting the efficacy of the company’s efforts to combat online extremism.|
Artificial or Human: A New Era of Counterterrorism Intelligence?
|A new revolution has begun in counterterrorism – the Artificial Intelligence (AI) revolution. The AI revolution has had a significant impact on many areas of security and intelligence. The use of AI and big data in general, and in the field of intelligence and counterterrorism in particular, has led to intense debates between supporters of the continuation and expansion of the use of this technology and those who oppose it. The traditional delicate balance between effectiveness in the fight against terrorism and the liberal democratic values of society becomes even more crucial when counterterrorism engages with AI and big data technology.|
Applying Local Image Feature Descriptions to Aid the Detection of Radicalization Processes in Twitter
|2018||López-Sánchez, D., Corchado, J.||Report|
|This paper was presented at the 2nd European Counter-Terrorism Centre (ECTC) Advisory Group conference, 17-18 April 2018, at Europol Headquarters, The Hague.|
The views expressed are the authors’ own and do not necessarily represent those of Europol.
All You Need Is “Love”: Evading Hate Speech Detection
|2018||Grondahl, T., Pajola, L., Juuti, M., Conti, M. and Asokan, N.||Article|
|With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective – a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and using character-level features makes the models systematically more attack-resistant than using word-level features.|
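The three evasion strategies described in the abstract (inserting typos, removing word boundaries, and appending innocuous words) can be illustrated as simple string transformations. This is a minimal sketch for intuition only; the function names and the sample text are ours, not taken from the paper, and real attacks in the study were applied against trained classifiers:

```python
import random


def insert_typo(word: str, rng: random.Random) -> str:
    """Swap two adjacent characters, simulating a typo that
    breaks exact word-level vocabulary lookups."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]


def remove_word_boundaries(text: str) -> str:
    """Delete all spaces, defeating whitespace tokenisation
    while leaving the text human-readable."""
    return text.replace(" ", "")


def add_innocuous_word(text: str, word: str = "love") -> str:
    """Append a benign word to dilute the classifier's score
    for the message as a whole."""
    return text + " " + word


if __name__ == "__main__":
    rng = random.Random(0)
    sample = "you are awful"  # stand-in for a flagged message
    print(insert_typo("awful", rng))          # e.g. "awufl"
    print(remove_word_boundaries(sample))     # "youareawful"
    print(add_innocuous_word(sample))         # "you are awful love"
```

The paper's finding that character-level features resist these attacks better than word-level features follows directly from the sketch: the first two transformations leave most character n-grams intact while destroying the word tokens a word-level model depends on.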
Disrupting Daesh: Measuring Takedown of Online Terrorist Material and Its Impacts
|2018||Conway, M., Khawaja, M., Lakhani, S., Reffin, J., Robertson, A., & Weir, D.||VOX-Pol Publication|
|This article contributes to public and policy debates on the value of social media disruption activity with respect to terrorist material. In particular, it explores aggressive account and content takedown, with the aim of accurately measuring this activity and its impacts. The major emphasis of the analysis is the so-called Islamic State (IS) and disruption of their online activity, but a catchall “Other Jihadi” category is also utilized for comparison purposes. Our findings challenge the notion that Twitter remains a conducive space for pro-IS accounts and communities to flourish. However, not all jihadists on Twitter are subject to the same high levels of disruption as IS, and we show that there is differential disruption taking place. IS’s and other jihadists’ online activity was never solely restricted to Twitter; it is just one node in a wider jihadist social media ecology. This ecology is described, and some preliminary analysis of disruption trends within it is supplied.|
Social Media in Africa - A double-edged sword for security and development
|2018||Cox, K., Marcellino, W., Bellasio, J., Ward, A., Galai, K., Meranto, S., Paoli, P.G.||Featured|
|There is an on-going debate over the role of online activities in the radicalisation process. However, much of this debate has focused on Western countries, particularly in relation to ISIL’s online influence on homegrown terrorism and on foreign fighter travel to Iraq and Syria. Less is known about patterns of online radicalisation in Africa and about the extent to which African national governmental strategies focus on addressing this issue.|
To address this gap in knowledge, the United Nations Development Programme (UNDP) commissioned RAND Europe to explore social media use and online radicalisation in Africa.
Germany's NetzDG: A key test for combatting online hate
|2018||Echikson, W. and Knodt, O.||Article|
|Germany’s Network Enforcement Act, or NetzDG law represents a key test for combatting hate speech on the internet. Under the law, which came into effect on January 1, 2018, online platforms face fines of up to €50 million for systemic failure to delete illegal content. Supporters see the legislation as a necessary and efficient response to the threat of online hatred and extremism. Critics view it as an attempt to privatise a new ‘draconian’ censorship regime, forcing social media platforms to respond to this new painful liability with unnecessary takedowns.|
This study shows that the reality is in between these extremes. NetzDG has not provoked mass requests for takedowns. Nor has it forced internet platforms to adopt a ‘take down, ask later’ approach. At the same time, it remains uncertain whether NetzDG has achieved significant results in reaching its stated goal of preventing hate speech.
This paper begins by explaining the background that led to the development and passage of NetzDG. It examines the reaction to the law by civil society, platforms and the government. It concludes with suggestions, for platforms, civil society and the authorities, on ways to improve the law to be effective in the fight against online hate while keeping the internet open and free.
CEPS acknowledges the Counter Extremism Project’s support for this research. The study was conducted in complete independence. It is based on interviews with regulators, company representatives, and civil society activists. The authors take full responsibility for its findings.
Government Responses to Malicious Use of Social Media
|2018||Bradshaw, S., Neudert, L. M. and Howard, P. N.||Article|
Since 2016, at least 43 countries around the globe have proposed or implemented regulations specifically designed to tackle different aspects of influence campaigns, including both real and perceived threats of fake news, social media abuse, and election interference. Some governments are in the early stages of designing regulatory measures specifically for digital contexts so they can tackle issues related to the malicious use of social media. For others, existing legal mechanisms regulating speech and information are already well established, and the digital aspect merely adds an additional dimension to law enforcement.
Our research team conducted an analysis of proposed or implemented regulations and identified a number of interventions. Some measures target social media platforms, requiring them to take down content, improve transparency, or tighten data protection mechanisms. Other measures focus on civil actors and media organisations, on supporting literacy and advocacy efforts, and on improving standards for journalistic content production and dissemination. A third group of interventions target governments themselves, obligating them to invest in security and defence programs that combat election interference, or to initiate formal inquiries into such matters. Finally, a fourth group of interventions take aim at the criminalisation of automated message generation and disinformation.
Netwar in Cyberia: Decoding the Media Mujahidin
|At the dawn of mass access to the internet, Douglas Rushkoff wrote Cyberia: Life in the Trenches of Hyperspace. In his book, he observed a very special moment in our recent history in which it was possible to imagine the path ahead, before most of what daily users of the internet now experience even existed.|
Measuring the Impact of ISIS Social Media Strategy
|2018||Alfifi, M., Kaghazgaran P., Caverlee, J., Morstatter F.||Report|
|Terrorist groups like the Islamic State of Iraq and Syria (ISIS) have exploited social media such as Twitter to spread their propaganda and to recruit new members. In this work we study the extent to which ISIS is able to spread its message beyond its immediate supporters. Are they operating in their own sphere with limited interaction with the overall community? Or are they well rooted among normal users? We find that three-quarters of the interactions ISIS received on Twitter in 2015 actually came from accounts that were eventually suspended, raising questions about the potential number of ISIS-related accounts and about how organic ISIS’s audience is. Towards tackling these questions, we have created a unique dataset of 17 million ISIS-related tweets posted in 2015. This dataset is available for research purposes upon request.|
New EU Proposal on the Prevention of Terrorist Content Online
|In the course of the last years, the European Union (EU) institutions, and the Commission (EC) in particular, have shown a growing concern regarding the use of online intermediary platforms for the dissemination of illegal content, particularly content of a terrorist nature. Despite a lack of complete certainty, and differences between Member States, about what terrorist content the law prohibits, or even can prohibit consistent with fundamental rights to free expression, there is a broad consensus among national authorities that legislative and regulatory measures should be enacted at both the European and national levels in order to guarantee the swift and almost automatic detection and removal of content related to the commission of acts of terrorism. The political positions and non-binding documents produced so far have progressively incorporated the notion of “responsibility” for intermediaries, although this cannot necessarily be equated with a straightforward intention to impose conventional legal liability obligations on such actors. In particular, the initiatives undertaken so far by the EU institutions have basically aimed at promoting platforms’ voluntary cooperation with public authorities to detect and remove illegal online content (including terrorist content). Such initiatives include the Code of Conduct on Countering Illegal Hate Speech Online, the Recommendation on measures to effectively tackle illegal speech online, the Guidelines on Freedom of Expression Online and Offline, and the EU Internet Forum. Above all these recommendations, agreements and soft-law standards, the generally applicable legal regime has remained intact since its approval in 2000: Directive 2000/31/EC, known as the e-Commerce Directive, establishes liability exemptions for intermediaries under certain conditions of lack of knowledge of illegal activity or information and of expeditious removal and disabling upon knowledge (Article 14). The Directive also includes an important provision regarding the absence of any legal obligation for providers to monitor content (Article 15). The newly proposed Regulation on preventing the dissemination of terrorist content online, which was the object of a first discussion on September 19-20 during the meeting of EU leaders in Salzburg under the Austrian Presidency, may represent a change in the above-mentioned approach.|
Promoting Extreme Violence: Visual and Narrative Analysis of Select Ultraviolent Terror Propaganda Videos Produced by the Islamic State of Iraq and Syria (ISIS) in 2015 and 2016
|2018||Venkatesh,V., Podoshen, J., Wallin, J., Rabah, J., Glass, D.||Article|
|This paper examines aspects of violent, traumatic terrorist video propaganda produced by the Islamic State of Iraq and Syria (ISIS) within the theoretical confines of abjection and the use of utopian/dystopian themes. These themes have been present in a number of studies that have examined consumption of the dark, dystopic variety. We seek to elucidate the use of specific techniques and narratives that are relatively new to the global propaganda consumerspace and that relate to horrific violence. Our work here is centered on interpretative analysis and theory building that we believe can assist in understanding and interpreting post-apocalyptic and abject-oriented campaigns in the age of social media and rapid transmission of multimedia communications. In the present analysis, we examine eight ISIS videos created and released in 2015 and 2016. All of the videos chosen for analysis have utilized techniques related to abjection, shock, and horror, often culminating in the filming of the murder of ISIS’s enemies or place-based destruction of holy sites in the Middle East. We use inductive content analytic techniques in the contexts of consumer culture, “cinemas of attraction,” and pornography of violence to propose an extension of existing frameworks of terrorism and propaganda theory.|
Exploring the “Demand Side” of Online Radicalization: Evidence from the Canadian Context
|2018||Bastug, M., Douai, A., Akca, D.||Journal|
|We examined whether and how social media play a role in the process of radicalization, and whether and for what purposes extremists use social media after they become radicalized within a sample of fifty-one Canadian extremists. Differences between converts and non-converts in terms of their radicalization process, involvement in terrorism, and social media usage were also investigated. Data were collected from a combination of media reports via an in-depth LexisNexis search and court records obtained from The Canadian Legal Information Institute database. The results confirm that social media played a role either during or after the radicalization process of the majority of the sample and converts are more vulnerable to online radicalization than non-converts.|
Online discontent: comparing Western European far-right groups on Facebook
|2018||Klein, O., Muis, J.||Journal|
|Far-right groups increasingly use social media to interact with other groups and reach their followers. Social media also enable ‘ordinary’ people to participate in online discussions and shape political discourse. This study compares the networks and discourses of Facebook pages of Western European far-right parties, movements and communities. Network analyses of pages indicate that the form of far-right mobilization is shaped by political opportunities. The absence of a strong far-right party offline seems to be reflected in an online network in which non-institutionalized groups are the most prominent actors, rather than political parties. In its turn, the discourse is shaped by the type of actor. Content analyses of comments of followers show that parties address the political establishment more often than immigration and Islam, compared to non-institutionalized groups. Furthermore, parties apply less extreme discursive practices towards ‘the other’ than non-institutionalized groups.|
Taking North American White Supremacist Groups Seriously: The Scope and Challenge of Hate Speech on the Internet
|This article aims to address two questions: how does hate speech manifest on North American white supremacist websites; and is there a connection between online hate speech and hate crime? Firstly, hate speech is defined and the research methodology upon which the article is based is explained. The ways that ‘hate’ groups utilize the Internet and their purposes in doing so are then analysed, with the content and the functions of their websites as well as their agenda examined. Finally, the article explores the connection between hate speech and hate crime. I argue that there is sufficient evidence to suggest that speech can and does inspire crime. The article is based in the main on primary sources: a study of many ‘hate’ websites; and interviews and discussions with experts in the field.|
The Alt-Right Twitter Census: Defining and Describing the Audience for Alt-Right Content on Twitter
|2018||Berger, J.||VOX-Pol Publication|
|The so-called ‘alt-right’ is an amorphous but synchronized collection of far-right people and movements, an umbrella label for a number of loosely affiliated social movements around the world, although its centre of gravity is in the United States. Many factors have contributed to the alt-right’s rise to prominence, but one of the most visible is its online presence. Alt-right views have been promoted online by a small army of trolls and activists staging harassment campaigns, pushing hashtags and posting links to extremist content and conspiracy theories on social media. Since 2016, the alt-right and its allies have held an increasingly prominent place in American and European politics, rallying support behind a variety of causes and candidates.|
This study seeks to evaluate the alt-right’s online presence with robust metrics and an analysis of content shared by adherents. The alt-right has many components online; this report will primarily examine its presence on Twitter, in part because the movement is particularly active on that platform, and in part because Twitter’s data access policies allow for more robust evaluation than is possible on other platforms.
This report will:
• Create a demographic and identity snapshot of a representative portion of the audience for alt-right supporters on Twitter
• Examine content shared within the dataset
• Describe the methodology used to derive these findings
• Propose avenues for further research based on this analysis
Alternative Influence: Broadcasting the Reactionary Right on YouTube
|This report identifies and names the Alternative Influence Network (AIN): an assortment of scholars, media pundits, and internet celebrities who use YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism. Content creators in the AIN claim to provide an alternative media source for news and political commentary. They function as political influencers who adopt the techniques of brand influencers to build audiences and “sell” them on far-right ideology.|
This report presents data from approximately 65 political influencers across 81 channels. This network is connected through a dense system of guest appearances, mixing content from a variety of ideologies. This cross-promotion of ideas forms a broader “reactionary” position: a general opposition to feminism, social justice, or left-wing politics.
Members of the AIN cast themselves as an alternative media system by:
• Establishing an alternative sense of credibility based on relatability, authenticity, and accountability.
• Cultivating an alternative social identity using the image of a social underdog, and countercultural appeal.
Members of the AIN use the proven engagement techniques of brand influencers to
spread ideological content:
• Ideological Testimonials
• Political Self-Branding
• Search Engine Optimization
• Strategic Controversy
The AIN as a whole facilitates radicalization through social networking practices:
• Audiences are able to easily move from mainstream to extreme content through guest appearances and other links.
• Political influencers themselves often shift to more radical positions following interactions with other influencers or their own audiences.
When viewers engage with this content, it is framed as lighthearted, entertaining, rebellious, and fun. This fundamentally obscures the impact that these issues have on vulnerable and underrepresented populations—the LGBTQ community, women, immigrants, and people of color. And in many ways, YouTube is built to incentivize this behavior. The platform needs to assess not only what channels say in their content, but also who they host and what their guests say. In a media environment consisting of networked influencers, YouTube must respond with policies that account for influence and amplification, as well as social networks.
Ideological Transmission III: Political and Religious Organisations
|2018||Lee, B., Knott, K.||Journal|
|This is the third and final research review in the CREST series on ideological transmission (the first was on the family, and the second on peers, education and prisons). It focuses on the process by which religious and political groups – from small cells and organisations to large movements, networks and milieus – pass on ideas, beliefs and values. Academic research on how, where and why these are transmitted, and by whom, is considered. Ideological transmission is interpreted as the passing on of ideology from one person to another, or from a group to its internal and external audiences. We treat ideology as a broad concept, encompassing both political and religious ideas, and including beliefs, values, and their related practices.|
Two main persuasive orientations were considered in this review: (i) external awareness-raising by groups, and (ii) their internal attempts to influence members and supporters. Three analytical concepts provided the focus: propaganda, framing and learning.
1. How do ideological groups make potential supporters and other outsiders aware of their views (awareness-raising/persuasion/propaganda)?
2. How is ideological material (beliefs, events, issues, etc.) framed by groups as they seek to raise awareness, gain recruits and energise followers?
3. How do members and other supporters acquire ideological knowledge within groups (learning/indoctrination)?
These questions are interconnected by the concept of ‘persuasion’, more specifically the active attempts made by external agents to persuade individuals. The review draws on a range of evidence from multiple disciplines and contexts. Extremist groups – violent and non-violent – provide the principal examples, including a case study on the jihadist group al-Muhajiroun. However, it is clear that an understanding of how such groups communicate internally and externally needs to be set in the broader context of research on why organisations in general transmit ideas, beliefs and values (e.g. for group survival, recruitment, solidarity or coercion), how they go about doing so (formally or informally, top-down or peer-to-peer), what role ideological transmission plays in their goals, and how effective it is. In the case of extremist groups, the relationship between ideological transmission and radicalisation, recruitment, mobilisation and the move to violence is also important.
Online-Radicalisation: Myth or Reality?
|The proliferation of extremist, jihadist and violence-inciting websites, blogs and channels in social media has long since become a major theme in security policy. Extremists and terrorists use the new technological tools to communicate with each other, to organise themselves and to publicise their ideas. Whereas terrorists in the previous millennium were still dependent on journalists to report their acts and to draw attention to their group and their ideology, potentially violent groups today are in a position to publish their story and their intentions unfiltered on the web, and to communicate with each other swiftly and effectively across national borders. Ever since the case of Australian teenager Jake Bilardi, who travelled to the territories of the so-called Islamic State (IS) and in 2015, at the age of 19, committed a suicide attack in Ramadi (Iraq), however, it is not just online communication by extremists that is in focus, but also the phenomenon of online radicalisation. According to the current state of information, Bilardi converted to Islam without any direct influences from his immediate environment, radicalised himself exclusively via online media, and travelled to Syria with the help of online contacts. His case, and many other cases of Western recruits, raised the question of whether a process of radicalisation can take place exclusively online, or if online propaganda is only one facilitating factor that promotes and perhaps accelerates radicalisation but is in itself not sufficient to explain the whole process. Unfortunately, there are still not enough systematic, empirical studies on this subject area, and our knowledge is generally limited to known perpetrator profiles. Nevertheless, some general statements can be made regarding online radicalisation.|