Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only through subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility and we thus welcome contributions in all languages.

Full Listing

Diaspora Involvement in Insurgencies: Insights from the Khalistan and Tamil Eelam Movements
2005 Fair, C.C. Journal
This article exposits and contrasts the roles of two diasporas in ethnic conflict waged in their homelands, namely the Sikh diaspora's involvement in the Punjab insurgency in north India and the Sri Lankan Tamil diaspora's role in Sri Lanka's Tamil insurgency. It draws out the various similarities and distinctions between the two in their use of technology, means of mobilization and identity production, and the geographical and political reach of their institutional arrangements. The article argues that the varying means by which these diasporas came into being affected the ways in which they mobilized and the positions they espoused towards homeland politics. It finds that the abilities of the two diasporas to contribute to events “back home” differed in part because of the scope of their respective institutional arrangements.
Developing an Explanatory Model for the Process of Online Radicalisation and Terrorism
2013 Torok, R. Journal
While the use of the internet and social media as a tool for extremists and terrorists has been well documented, understanding the mechanisms at work has been much more elusive. This paper begins with a grounded theory approach guided by a new theoretical approach to power that utilises both terrorism cases and extremist social media groups to develop an explanatory model of radicalisation. Preliminary hypotheses are developed, explored and refined in order to develop a comprehensive model which is then presented. This model utilises and applies concepts from social theorist Michel Foucault, including the use of discourse and networked power relations in order to normalise and modify thoughts and behaviors. The internet is conceptualised as a type of institution in which this framework of power operates and seeks to recruit and radicalise. Overall, findings suggest that the explanatory model presented is well suited, yet still incomplete, in explaining the process of online radicalisation.
Determining The Role Of The Internet In Violent Extremism And Terrorism: Six Suggestions For Progressing Research
2016 Conway, M. VOX-Pol Publication
Some scholars and others are sceptical of a significant role for the Internet in processes of violent radicalisation. There is increasing concern on the part of other scholars, and increasingly also policymakers and publics, that easy availability of violent extremist content online may have violent radicalising effects. This article identifies a number of core questions regarding the interaction of violent extremism and terrorism and the Internet, particularly social media, that have yet to be adequately addressed and supplies a series of six follow-up suggestions, flowing from these questions, for progressing research in this area. These suggestions relate to (1) widening the range of types of violent online extremism being studied beyond violent jihadis; (2) engaging in more comparative research, not just across ideologies, but also groups, countries, languages, and social media platforms; (3) deepening our analyses to include interviewing and virtual ethnographic approaches; (4) up-scaling or improving our capacity to undertake “big data” collection and analysis; (5) outreaching beyond terrorism studies to become acquainted with, for example, the Internet Studies literature and engaging in interdisciplinary research with, for example, computer scientists; and (6) paying more attention to gender as a factor in violent online extremism. This research was produced with the aid of VOX-Pol Research Mobility Programme funding and supervision by VOX-Pol colleagues at Dublin City University.
Detection Of Jihadism In Social Networks Using Big Data
2019 Rebollo, C. S., Puente, C., Palacios, R., Piriz, C., Fuentes, J. P. and Jarauta, J. Article
Social networks are being used by terrorist organizations to distribute messages with the intention of influencing people and recruiting new members. The research presented in this paper focuses on the analysis of Twitter messages to detect the leaders orchestrating terrorist networks and their followers. A big data architecture is proposed to analyze messages in real time in order to classify users according to different parameters, such as level of activity, the ability to influence other users, and the contents of their messages. Graphs have been used to analyze how the messages propagate through the network, and this involves a study of the followers based on retweets and general impact on other users. Then, fuzzy clustering techniques were used to classify users into profiles, with the advantage over other classification techniques of providing a probability for each profile instead of a binary categorization. Algorithms were tested using a public database from Kaggle and other Twitter extraction techniques. The resulting profiles detected automatically by the system were manually analyzed, and the parameters that describe each profile correspond to the type of information that any expert may expect. Future applications are not limited to detecting terrorist activism. Human resources departments can apply the power of profile identification to automatically classify candidates, security teams can detect undesirable clients in the financial or insurance sectors, and immigration officers can extract additional insights with these techniques.
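The fuzzy-clustering step described in this abstract can be illustrated with a minimal sketch (not the authors’ implementation): a fuzzy c-means style soft assignment of a user’s feature vector to profile centroids, yielding a membership probability per profile rather than a binary label. The profile names, feature dimensions, and values below are purely illustrative.

```python
import math

def memberships(x, centroids, m=2.0):
    """Fuzzy c-means style soft assignment: return one membership
    degree per centroid, summing to 1. Illustrative sketch only."""
    dists = [math.dist(x, c) for c in centroids]
    # If the point coincides with a centroid, assign it fully there.
    for i, d in enumerate(dists):
        if d == 0:
            return [1.0 if j == i else 0.0 for j in range(len(centroids))]
    power = 2.0 / (m - 1.0)  # standard fuzzy c-means exponent
    return [1.0 / sum((di / dj) ** power for dj in dists) for di in dists]

# Hypothetical user features: (activity level, influence score)
profiles = [(0.9, 0.9),   # "leader" profile centroid
            (0.7, 0.2),   # "active follower"
            (0.1, 0.1)]   # "bystander"
user = (0.8, 0.6)
probs = memberships(user, profiles)
print([round(p, 3) for p in probs])  # highest membership on the "leader" profile
```

The output is a probability distribution over profiles, which is the advantage the abstract highlights: a user who sits between “leader” and “active follower” retains partial membership in both instead of being forced into one class.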
Detection And Monitoring Of Improvised Explosive Device Education Networks Through The World Wide Web
2009 Stinson, R.T. MA Thesis
As the information age comes to fruition, terrorist networks have moved mainstream by promoting their causes via the World Wide Web. In addition to their standard rhetoric, these organizations provide anyone with an Internet connection the ability to access dangerous information involving the creation and implementation of Improvised Explosive Devices (IEDs). Unfortunately for governments combating terrorism, IED education networks can be very difficult to find and even harder to monitor. Regular commercial search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses the potential evaluation and monitoring techniques to be used in conjunction with the proposed algorithm.
Detection And Classification Of Social Media Based Extremist Affiliations Using Sentiment Analysis Techniques
2019 Ahmad, S., Asghar, M. Z., Alotaibi, F. M. and Awan, I. Article
Identification and classification of extremist-related tweets is a pressing problem. Extremist gangs have been involved in using social media sites like Facebook and Twitter for propagating their ideology and recruitment of individuals. This work aims at proposing a terrorism-related content analysis framework with the focus on classifying tweets into extremist and non-extremist classes. Based on user-generated social media posts on Twitter, we develop a tweet classification system using deep learning-based sentiment analysis techniques to classify the tweets as extremist or non-extremist. The experimental results are encouraging and provide a gateway for future researchers.
Detection And Analysis Of Online Extremist Communities
2017 Benigni, M.C. PhD Thesis
Online social networks have become a powerful venue for political activism. In many cases large, insular online communities form that have been shown to be powerful diffusion mechanisms of both misinformation and propaganda. In some cases, these groups’ users advocate actions or policies that could be construed as extreme along nearly any distribution of opinion and are thus called Online Extremist Communities (OECs). Although these communities appear increasingly common, little is known about how these groups form or the methods used to influence them. The work in this thesis provides researchers a methodological framework to study these groups by answering three critical research questions:
• How can we detect large dynamic online activist or extremist communities?
• What automated tools are used to build, isolate, and influence these communities?
• What methods can be used to gain novel insight into large online activist or extremist communities?
These groups’ members’ social ties can be inferred based on the various affordances offered by OSNs for group curation. By developing heterogeneous, annotated graph representations of user behavior I can efficiently extract online activist discussion cores using an ensemble of unsupervised machine learning methods. I call this technique Ensemble Agreement Clustering. Through manual inspection, these discussion cores can then often be used as training data to detect the larger community. I present a novel supervised learning algorithm called Multiplex Vertex Classification for network bipartition on heterogeneous, annotated graphs. This methodological pipeline has also proven useful for social botnet detection, and a study of large, complex social botnets used for propaganda dissemination is provided as well. Throughout this thesis, I provide Twitter case studies including communities focused on the Islamic State of Iraq and al-Sham (ISIS), the ongoing Syrian Revolution, the Euromaidan Movement in Ukraine, as well as the alt-Right.
Detecting Weak and Strong Islamophobic Hate Speech on Social Media
2019 Vidgen, B. Article
Islamophobic hate speech on social media is a growing concern in contemporary Western politics and society. It can inflict considerable harm on any victims who are targeted, create a sense of fear and exclusion amongst their communities, toxify public discourse and motivate other forms of extremist and hateful behavior. Accordingly, there is a pressing need for automated tools to detect and classify Islamophobic hate speech robustly and at scale, thereby enabling quantitative analyses of large textual datasets, such as those collected from social media. Previous research has mostly approached the automated detection of hate speech as a binary task. However, the varied nature of Islamophobia means that this is often inappropriate for both theoretically informed social science and effective monitoring of social media platforms. Drawing on in-depth conceptual work we build an automated software tool which distinguishes between non-Islamophobic, weak Islamophobic and strong Islamophobic content. Accuracy is 77.6% and balanced accuracy is 83%. Our tool enables future quantitative research into the drivers, spread, prevalence and effects of Islamophobic hate speech on social media.
Detecting the Hate Code on Social Media
2017 Magu, R., Joshi, K. and Luo, J. Article
Social media has become an indispensable part of the everyday lives of millions of people around the world. It provides a platform for expressing opinions and beliefs, communicated to a massive audience. However, this ease with which people can express themselves has also allowed for the large scale spread of propaganda and hate speech. To prevent violating the abuse policies of social media platforms and also to avoid detection by automatic systems like Google’s Conversation AI, racists have begun to use a code (a movement termed Operation Google). This involves substituting references to communities with benign words that seem out of context, in hate-filled posts or Tweets. For example, users have used the words Googles and Bings to represent the African-American and Asian communities, respectively. By generating the list of users who post such content, we move a step forward from classifying tweets by allowing us to study the usage pattern of this concentrated set of users.
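The substitution scheme this abstract describes lends itself to a first-pass lexicon lookup, sketched below. The word list, user names, and sample posts are illustrative; the paper itself goes further, using contextual cues to separate coded usage from benign mentions, which a bare lookup like this cannot do.

```python
import re
from collections import defaultdict

# Small illustrative subset of "Operation Google" code words mapped to
# the community they target (not the paper's full lexicon).
CODE_WORDS = {"googles": "African-American", "bings": "Asian", "skypes": "Jewish"}

def flag_code_words(tweets):
    """Return, per user, the set of known code words found in their posts."""
    hits = defaultdict(set)
    for user, text in tweets:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in CODE_WORDS:
                hits[user].add(token)
    return dict(hits)

sample = [
    ("user_a", "I cannot stand those googles and skypes"),  # hypothetical coded post
    ("user_b", "Searching on Google for a new laptop"),     # benign brand mention
]
flags = flag_code_words(sample)
print(flags)  # only user_a is flagged
```

Note the limitation: a genuinely benign plural such as “he googles everything” would also be flagged, which is exactly why the authors combine the lexicon with out-of-context detection rather than relying on lookup alone.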
Detecting Potential Warning Behaviors of Ideological Radicalization in an Alt-Right Subreddit
2019 Grover, T. and Mark, G. Article
Over the past few years, new ideological movements like the Alt-Right have captured the attention and concern of mainstream media, policy makers, and scholars alike. Today, the radicalization of right-wing extremists increasingly takes place within social media platforms and online communities. However, no research has yet investigated methods for proactively detecting online communities that may be displaying overall warning signs of mass ongoing ideological and political radicalization. In our work, we use a variety of text analysis methods to investigate the behavioral patterns of a radical right-wing community on Reddit (r/altright) over a 6-month period until right before it was banned for violation of Reddit terms of service. We find that this community showed aggregated behavioral patterns that aligned with past literature on warning behaviors of individual extremists in online environments, and that these behavioral patterns were not seen in a comparison group of eight other online political communities, similar in size and user engagement. Our research helps build upon the established literature on the detection of extremism in online environments, and has implications for proactive monitoring of online communities.
Detecting Markers of Radicalisation in Social Media Posts: Insights From Modified Delphi Technique and Literature Review
2021 Neo, L.S. Article
This study involved the creation of factors and indicators that can detect radicalization in social media posts. A concurrent approach of an expert knowledge acquisition process (modified Delphi technique) and literature review was utilized. Seven Singapore subject-matter experts in the field of terrorism evaluated factors that were collated from six terrorism risk assessment tools (ERG 22+, IVP, TRAP-18, MLG, VERA-2, and Cyber-VERA), and identified those most relevant for detecting radicalization in social media posts. A parallel literature review on online radicalization was conducted to complement the findings from the expert panel. In doing so, 12 factors and their 42 observable indicators were derived. These factors and indicators have the potential to guide the development of cyber-focused screening tools to detect radicalization in social media posts.
Detecting Linguistic Markers for Radical Violence in Social Media
2014 Cohen, K., Johansson, F., Kaati, L. and Clausen Mork, J. Journal
Lone-wolf terrorism is a threat to the security of modern society, as was tragically shown in Norway on July 22, 2011, when Anders Behring Breivik carried out two terrorist attacks that resulted in a total of 77 deaths. Since lone wolves are acting on their own, information about them cannot be collected using traditional police methods such as infiltration or wiretapping. One way to attempt to discover them before it is too late is to search for various ‘‘weak signals’’ on the Internet, such as digital traces left in extremist web forums. With the right tools and techniques, such traces can be collected and analyzed. In this work, we focus on tools and techniques that can be used to detect weak signals in the form of linguistic markers for potential lone wolf terrorism.
Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network
2018 Zhang, Z., Robinson, D. and Tepper, J. Article
In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and the state of the art on the largest collection of publicly available Twitter datasets to date, and show that compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets a new benchmark by outperforming on 6 out of 7 datasets by between 1 and 13% in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.
Design And Control Of Resilient Interconnected Microgrids For Reliable Mass Transit Systems
2019 Egan, T. J. G. MA Thesis
Mass transit systems are relied on a daily basis to transport millions of passengers and bring billions of dollars' worth of economic goods to market. While some forms of mass transit rely on a fuel, electrified railway systems are dependent on the electric grid. The electric grid is becoming more vulnerable to disruptions, due to extreme weather, changing supply and demand patterns, and cyber-terrorism. An interruption to the energy supply of a railway infrastructure can have cascading effects on the economy and social livelihood. Resilient interconnected microgrids are proposed to maintain reliable operation of electrified railway infrastructures. An engineering design framework, and supporting methods and techniques, is proposed for an electrified railway infrastructure to be upgraded from its existing form to one with resilient interconnected microgrids. The sizing of the interconnected microgrids is performed using an iterative sizing analysis, considering multiple resiliency key performance indicators to inform the designer of the trade-offs in sizing options. Hierarchical control is proposed to monitor and control the interconnected microgrids. A multi-objective problem cast in the tertiary level of control is proposed to be solved using game theory. The proposed designs are modelled and simulated in Simulink. Four case studies of railway infrastructures in Canada and the United Kingdom are used to demonstrate the effectiveness of the proposed designs. While results for each case study vary, resilient interconnected microgrids for railway infrastructures demonstrate a reduced dependence on the electric grid. The examples here are all scalable and can perform within the framework of any available energy system. The results are both extremely impressive and promising towards a more resilient and stable energy future for our railway and other critical infrastructures.
Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media
2020 Rogers, R. Article
Extreme, anti-establishment actors are being characterized increasingly as ‘dangerous individuals’ by the social media platforms that once aided in making them into ‘Internet celebrities’. These individuals (and sometimes groups) are being ‘deplatformed’ by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for such offences as ‘organised hate’. Deplatforming has prompted debate about ‘liberal big tech’ silencing free speech and taking on the role of editors, but also about the questions of whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming ‘works’ and how the deplatformed use Telegram. It discusses the effects of deplatforming for extreme Internet celebrities, alternative and mainstream social media platforms and the Internet at large. It also touches upon how social media companies’ deplatforming is affecting critical social media research, both into the substance of extreme speech as well as its audiences on mainstream as well as alternative platforms.
Delivering Hate: How Amazon’s Platforms Are Used to Spread White Supremacy, Anti-Semitism, and Islamophobia and How Amazon Can Stop It
2018 The Action Center on Race and the Economy and The Partnership for Working Families Report
Amazon has been called the “everything store,” but today it is much more than just a store, with publishing, streaming, and web services businesses. Its reach and influence are unparalleled: Most U.S. online shopping trips begin at Amazon, Amazon dominates the U.S. e-book business, and the company’s web services division has over 60 percent of the cloud computing services market. All this adds up for Amazon and its owners. The company posted record profits of $1.9 billion in the last three quarters of 2017, and CEO Jeff Bezos’s wealth soared to $140 billion in 2018, largely because of the value of Amazon stock. A close examination of Amazon’s various platforms and services reveals that for growing racist, Islamophobic, and anti-Semitic movements, the breadth of Amazon’s business combined with its weak and inadequately enforced policies provides a number of channels through which hate groups can generate revenue, propagate their ideas, and grow their movements. We looked at several areas of Amazon’s business, including its online shops, digital music platform, Kindle and CreateSpace publishing platforms, and web services business.
Defining Online Hate And Its Public Lives: What Is The Place For Extreme Speech?
2019 Gagliardone, I. Article
Following Sahana Udupa and Matti Pohjonen’s (2019) invitation to move the debate beyond a normative understanding of hate speech, this article seeks to build a foundation for conceptual and empirical inquiry of speech commonly considered deviant and disturbing. It develops in three stages. It first maps the public lives of terms that refer to online vitriol and how they have been used by different communities of researchers, politicians, advocacy groups, and national organizations. Second, it shows how different types of “haters” have been interpreted as parts of “swarms” or “armies,” depending on whether their violent potential emerges around critical incidents or whether they respond to longer-term strategies through which communities and their leaders tie their speech acts to explicit narratives. The article concludes by locating “extreme speech” within this broader conceptual tapestry, arguing that the paternalistic gaze that characterizes a lot of research on online hate speech is tied to what Chantal Mouffe has referred to as the “moralization of politics,” a phenomenon that cannot be matched by responses that are themselves moral.
Defending an Open, Global, Secure and Resilient Internet
2013 Council on Foreign Relations, US Policy
A balkanized Internet beset by hostile cyber-related activities raises a host of questions and problems for the U.S. government, American corporations, and American citizens. The Council on Foreign Relations launched this Task Force to define the scope of this rapidly developing issue and to help shape the norms, rules, and laws that should govern the Internet. This is the report published by the Task Force.
Deep Learning for Hate Speech Detection in Tweets
2017 Badjatiya, P., Gupta, S., Gupta, M. and Varma, V. Article
Hate speech detection on Twitter is critical for applications like controversial event extraction, building AI chatterbots, content recommendation, and sentiment analysis. We define this task as being able to classify a tweet as racist, sexist or neither. The complexity of the natural language constructs makes this task very challenging. We perform extensive experiments with multiple deep learning architectures to learn semantic word embeddings to handle this complexity. Our experiments on a benchmark dataset of 16K annotated tweets show that such deep learning methods outperform state-of-the-art char/word n-gram methods by ~18 F1 points.