Library

Welcome to VOX-Pol’s Online Library, a research and teaching resource that collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of formats, including downloadable PDFs, videos, and audio files, comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where the publications are only accessible through subscription, the Library will take you to the publisher’s page from where you can access the material.

We will continue to add more material as it becomes available with the aim of making it the most comprehensive online Library in this field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us at onlinelibrary@voxpol.eu and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive multilingual facility and we thus welcome contributions in all languages.

Featured

Full Listing

Title | Year | Author | Type | Links
Crisis and Loss of Control: German-Language Digital Extremism in the Context of the COVID-19 Pandemic
2020 Guhl, J. and Gerster, L. Report
This report analyses the networks and narratives of German-speaking far-right, far-left and Islamist extremist actors on mainstream and alternative social media platforms and extremist websites in the context of the COVID-19 pandemic. Our results show: Extremists from Germany, Austria and Switzerland have been able to increase their reach since the introduction of the lockdown measures.
The Anti-Hate Brigade: How a Group of Thousands Responds Collectively to Online Vitriol
2020 Buerger, C. Report
#jagärhär is by far the largest and best-organized collective effort to respond directly to hatred online, anywhere in the world, as far as we know. It is also one of only two civil society efforts against hatred online to have been replicated in numerous other countries. In this detailed account of its efforts – the first qualitative study of such a group – Cathy Buerger shares her findings on how and why #jagärhär members do what they do, how working collectively influences members’ ability and willingness to respond to hatred, and how the group’s strategy is carefully designed to take advantage of Facebook’s algorithms and influence ideas and discourse norms among the general public – not necessarily the ones writing the hateful comments.
Indonesia: Social Grievances and Violent Extremism
2020 Moonshot CVE Report
Extremist groups’ online recruitment mechanisms frequently exploit the wide range of grievances and vulnerabilities experienced by individuals at risk of radicalisation. While it is widely accepted that mental health and wellbeing play a vital role in resilience to violent extremism, most approaches tend to focus on preventing violent extremism through purely ideological means and are not sufficiently tailored to the at-risk individual.

In an effort to understand this audience further and, more importantly, the most effective means of preventing violent extremism, Moonshot conducted an experiment to assess the propensity among at-risk users in Indonesia to engage with ideological counter-content compared to psychosocial support content. The data gathered during this pilot indicate that psychosocial support is an area of unmet need among some of the individuals most vulnerable to violent jihadist recruitment online in Indonesia, and that this population is open to engaging with online support.
Facebook Redirect Programme: Moonshot Evaluation
2020 Moonshot CVE Report
The Facebook Redirect Programme (FRP) is designed to combat violent extremism and dangerous organisations by redirecting users who have entered hate or violence-related search queries towards educational resources and outreach groups. A pilot of the programme was launched with delivery partners Life After Hate in May 2019 and Exit Australia in September 2019. It was specifically designed to ensure that individuals searching for white supremacist and/or neo-Nazi communities on Facebook would be offered authentic, meaningful and impactful support off-platform. The purpose of this pilot was to test the programme design and inform future deployments targeting both new geographies and other hate-based communities. Moonshot was contracted by Facebook to evaluate the pilot period of programme performance and make recommendations for future deployments. This report evaluates the pilot programme by examining:
-Facebook’s use of keywords and the safety module as a method of redirecting people off-platform;
-The full user journey from Facebook to delivery partner landing pages;
-The extent to which the pilot can be considered a proof of concept for future deployments.
Shades of hatred online: 4chan duplicate circulation surge during hybrid media events
2020 Zelenkauskaite, A., Toivanen, P., Huhtamäki, J. and Valaskivi, K. Article
The 4chan /pol/ platform is a controversial online space on which a surge in hate speech has been observed. While recent research indicates that events may lead to more hate speech, empirical evidence on the phenomenon remains limited. This study analyzes 4chan /pol/ user activity during the mass shootings in Christchurch and Pittsburgh and compares the frequency and nature of user activity prior to these events. We find not only a surge in the use of hate speech and anti-Semitism but also increased circulation of duplicate messages, links, and images, and an overall increase in messages from users who self-identify as “white supremacist” or “fascist”, primarily voiced from English-speaking IP-based locations: the U.S., Canada, Australia, and Great Britain. Finally, we show how these hybrid media events share the arena with other prominent events involving different agendas, such as the U.S. midterm elections. The significant increase in duplicates during the hybrid media events in this study is interpreted beyond their memetic logic. This increase can be interpreted through what we refer to as activism of hate. Our findings indicate that there is either a group of dedicated users who are compelled to support the causes for which the shootings took place and/or that users use automated means to achieve duplication.
Upvoting Extremism: Collective Identity Formation and the Extreme Right on Reddit
2020 Gaudette, T., Scrivens, R., Davies, G. and Frank, R. Article
Since the advent of the Internet, right-wing extremists and those who subscribe to extreme right views have exploited online platforms to build a collective identity among the like-minded. Research in this area has largely focused on extremists’ use of websites, forums, and mainstream social media sites, but overlooked in this research has been an exploration of the popular social news aggregation site Reddit. The current study explores the role of Reddit’s unique voting algorithm in facilitating “othering” discourse and, by extension, collective identity formation among members of a notoriously hateful subreddit community, r/The_Donald. The results of the thematic analysis indicate that those who post extreme-right content on r/The_Donald use Reddit’s voting algorithm as a tool to mobilize like-minded members by promoting extreme discourses against two prominent out-groups: Muslims and the Left. Overall, r/The_Donald’s “sense of community” facilitates identity work among its members by creating an environment wherein extreme right views are continuously validated.
The “Great Meme War:” the Alt-Right and its Multifarious Enemies
2020 Dafaure, M. Article
In this essay, I discuss how the alt-right has brought back into fashion traditional tenets of the reactionary, xenophobic, and often racist far-right, as demonstrated by George Hawley, and how it has managed to make these tenets appear novel, provocative, and updated for 21st-century U.S. society and its digital environment. I argue that to do so, alt-righters relied heavily on the creation, and sometimes reappropriation, of enemy images, with the ultimate goals of provoking outrage, instilling fear and/or hatred towards specific groups, reinforcing a sense of belonging within their own community, or more broadly manipulating collective perceptions and representations, first online then in real life. Indeed, the election of Donald Trump was hailed by the online alt-right as one of their major successes. With the help of irony, subversion, and often carefully engineered propaganda-like messages and images, the alt-right, it boasts, “meme’d into office” the Republican candidate. This paper consequently leads to an analysis of real-life repercussions of such adversarial rhetoric, notably through examples of recent far-right domestic terrorism in the US, and to a reflection on their place in an age of post-truth, fake news, and alternative facts. This contribution focuses on several enemy images. The first is that of the civilizational enemy from the outside, which uses the traditional process of othering. This theme is linked to Trump’s campaign and to his attacks against two major “enemies” of the U.S., namely Hispanics and Muslims. With the alt-right, refugees for example become “rapefugees,” which easily appeals to rampant Islamophobia. The second enemy image created by the alt-right consists of its ideological opponents. Here, the function of the enemy image is to discredit opponents and their views (“cuckservative,” “feminazi,” or the sarcastic “Social Justice Warrior”). The third enemy image establishes a link between the first two. It depicts what I would call the “enemy within,” a common thread (or threat) in far-right ideologies. Indeed, cultural Marxism, a widespread conspiracy theory among the alt-right, is what its proponents believe to be the hidden reason for the perceived decline of Western civilization. According to this worldview, the ideological opponents push a conspiracy against the West and its values. The recurring claims of a liberal bias among the media and academia also belong to this conspiracy theory. It also embraces elements of anti-Semitism, as well as traditional aspects of anti-communism, reminiscent of the historical Red Scares. Such a theory thus provides its believers with a broader narrative, as well as with a common enemy to rally against, and therefore builds a form of intersectionality among various online fringe groups.
Trans-Atlantic Journeys of Far-Right Narratives Through Online-Media Ecosystems
2020 Institute for Strategic Dialogue Report
This research briefing explores if and how far-right narratives from the United States (US), France and Germany gain traction in domestic mainstream media, or move across borders between the US on the one hand, and France and Germany on the other. It tests what will be referred to as the mainstreaming hypothesis (far-right ideas start out in far-right alternative media but eventually move to the mainstream) and the transnationalisation hypothesis (far-right ideas spread between national media ecosystems).
Mapping right-wing extremism in Victoria: Applying a gender lens to develop prevention and deradicalisation approaches
2020 Agius, C., Cook, K., Nicholas, L., Ahmed, A., bin Jehangir, H., Safa, N., Hardwick, T. and Clark, S. Report
This project aims to map right-wing extremism in Victoria through the lens of gender. It begins from the premise that there is an underexplored connection between antifeminist sentiment and far-right extremist sentiment. It does this by focusing on select Victorian-based online groups that have an anti-feminist and far-right profile. The project also works with stakeholders who work in the areas of gender and family violence, to gain insight into their practices and experiences.
Digital Dog Whistles: The New Online Language of Extremism
The International Journal of Security Studies Weimann, G. Article
Terrorist and extremist groups communicate sometimes openly but very often in concealed formats. Recently, far-right extremists, including white supremacists, anti-Semitic groups, racists and neo-Nazis, have started using a coded "New Language". Alarmed by police and security forces’ attempts to find them online and by social platforms’ attempts to remove their content, they apply this new language of codes and doublespeak. This study explores the emergence of this new language, a system of code words developed by far-right extremists. What are the characteristics of this new language? How is it transmitted? How is it used? Our survey of online far-right content reveals extremists’ use of visual and textual codes. These hidden languages enable extremists to hide in plain sight and allow others to easily identify like-minded individuals. There is no doubt that the "new language" used online by far-right groups comprises all the known attributes of a language: it is creative, productive and instinctive, and relies on exchanges of verbal or symbolic utterances shared by certain individuals and groups. These findings should serve both law enforcement and private-sector bodies interested in preventing hate speech online.
The Interplay Between Australia’s Political Fringes on the Right and Left: Online Messaging on Facebook
2020 Guerin, C., Davey, J., Peucker, M. and Fisher, T.J. Report
This research briefing outlines findings from an analysis of the far-right and far-left Facebook ecosystem in Australia in the first seven months of 2020. It analyses how the far-right and far-left discuss each other on Facebook and how narratives about the other side of the political spectrum shape the online activity of these groups. It also seeks to understand how central discussion about the ‘other side’ is to the far-right and far-left and how it fits within the broader online activities of these movements.
The Online Regulation Series | Insights from Academia II
2020 Tech Against Terrorism Report
To follow up on our previous blogpost on academic analysis of the state of global online regulation, we take a future-oriented approach here and provide an overview of academics’ and experts’ suggestions and analysis of what the future of online regulation might bring.
(Young) Women’s Usage of Social Media and Lessons for Preventing Violent Extremism
2020 Krasenberg, J. and Handle, J. Policy
The RAN small-scale expert meeting on (young) women’s usage of social media and lessons learned for preventing violent extremism (PVE) was aimed at unpacking some of the gaps. This paper summarises the highlights of the discussion, discusses the vulnerabilities that are specific to (young) women, explains how recruiters use these vulnerabilities online and, finally, presents the recommendations that the experts stressed during the meeting.
Birds of a Feather Get Recommended Together: Algorithmic Homophily in YouTube’s Channel Recommendations in the United States and Germany
2020 Kaiser, J. and Rauchfleisch, A. Article
Algorithms and especially recommendation algorithms play an important role online, most notably on YouTube. Yet, little is known about the network communities that these algorithms form. We analyzed the channel recommendations on YouTube to map the communities that the social network is creating through its algorithms and to test the network for homophily, that is, the connectedness between communities. We find that YouTube’s channel recommendation algorithm fosters the creation of highly homophilous communities in the United States (n = 13,529 channels) and in Germany (n = 8,000 channels). Factors that seem to drive YouTube’s recommendations are topics, language, and location. We highlight the issue of homophilous communities in the context of politics where YouTube’s algorithms create far-right communities in both countries.
Evaluating the scale, growth, and origins of right-wing echo chambers on YouTube
2020 Hosseinmardi, H., Ghasemian, A., Clauset, A., Rothschild, D.M., Mobius, M. and Watts, D.J. Article
Although it is understudied relative to other social media platforms, YouTube is arguably the largest and most engaging online media consumption platform in the world. Recently, YouTube's outsize influence has sparked concerns that its recommendation algorithm systematically directs users to radical right-wing content. Here we investigate these concerns with large-scale longitudinal data on individuals' browsing behavior spanning January 2016 through December 2019. Consistent with previous work, we find that political news content accounts for a relatively small fraction (11%) of consumption on YouTube, and is dominated by mainstream and largely centrist sources. However, we also find evidence for a small but growing "echo chamber" of far-right content consumption. Users in this community show higher engagement and greater "stickiness" than users who consume any other category of content. Moreover, YouTube accounts for an increasing fraction of these users' overall online news consumption. Finally, while the size, intensity, and growth of this echo chamber present real concerns, we find no evidence that they are caused by YouTube recommendations. Rather, consumption of radical content on YouTube appears to reflect broader patterns of news consumption across the web. Our results emphasize the importance of measuring consumption directly rather than inferring it from recommendations.
The Online Regulation Series | Insights from Academia I
2020 Tech Against Terrorism Report
In this post, we look at academic analysis of global efforts to regulate online content and speech.
Social media and terrorism discourse: the Islamic State’s (IS) social media discursive content and practices
2020 KhosraviNik, M. and Amer, M. Article
The paper maintains the complementary nature of technological practice and discursive content in the process of meaning-making in digital jihadist discourse. The study shows that digital practices of strategic sharing, distribution and campaigns to re-upload textual materials are made possible by exploiting SMC communicative affordances. As for the analysis of discursive content, the paper focuses on YouTube and highlights strategic patterns and covert references in an IS-produced flagship video. It illustrates how IS discourse constructs its envisaged in-group/outgroup by (re-)symbolising current events within historical, political and ideological conflict scenarios, i.e. the incessant resistance and legitimacy of forces of virtue vs evil. By foregrounding symbolic references to military outgroup actors, IS legitimises its own violence and projects a powerful self-identity against a (perceived) global hegemony. The paper shows how the combination of a technologically savvy operation and a resistant, anti-hegemonic narrative, embedded in a strategically framed symbolism of Islam, may resonate with global (quasi)-diasporic digital consumers.
The Online Regulation Series | Tech Sector Initiatives
2020 Tech Against Terrorism Report
Although regulatory frameworks for terrorist and harmful content online have been passed by governments in recent years, regulation in practice remains mostly a matter of solo or self-regulation by the tech sector. That is, companies draft and apply their own rules for moderating user-generated content on their platforms, or voluntarily comply with standards shared amongst the tech sector (the Global Internet Forum to Counter Terrorism is one example), without such standards being enforced by law. This, coupled with increased public pressure to address the potentially harmful impact of certain online content – in particular terrorist material – has led major tech companies to develop their own councils, consortiums, and boards to oversee their content moderation and its impact on freedom of speech online. In this blogpost, we provide an overview of some of the prominent tech sector initiatives in this area.
The Online Regulation Series | Colombia
2020 Tech Against Terrorism Report
With a growing internet penetration rate (69%) and an increasing number of active social media users (35 million, at a growth rate of 11% between 2019 and 2020), the online space in Colombia remains governed by the principle of net neutrality.
The Online Regulation Series | Brazil
2020 Tech Against Terrorism Report
Brazil represents a major market for online platforms. It is the leading country in terms of internet use in South America, and a key market for Facebook and WhatsApp. WhatsApp’s popularity and the online disinformation campaigns that have been coordinated on the platform are essential to understanding Brazil’s approach to online regulation. The messenger app has been accused of being used “for the dissemination of fake news”, whilst critics of the “fake news” bill have said that it served as a “standard” for new regulation in the country based on the app’s existing limitations on forwarding messages and group size.