Welcome to VOX-Pol’s online Library, a research and teaching resource, which collects in one place a large volume of publications related to various aspects of violent online political extremism.

Our searchable database contains material in a variety of different formats including downloadable PDFs, videos, and audio files comprising e-books, book chapters, journal articles, research reports, policy documents and reports, and theses.

All open access material collected in the Library is easy to download. Where publications are accessible only by subscription, the Library will take you to the publisher’s page, from which you can access the material.

We will continue to add more material as it becomes available, with the aim of making this the most comprehensive online Library in the field.

If you have any material you think belongs in the Library—whether your own or another author’s—please contact us and we will consider adding it to the Library. It is also our aim to make the Library a truly inclusive, multilingual facility, and we therefore welcome contributions in all languages.


Full Listing

Elites and foreign actors among the alt-right: The Gab social media platform
2019 Zhou, Y., Dredze, M., Broniatowski, D. A. and Adler, W. D. Article
Content regulation and censorship of social media platforms are increasingly discussed by governments and the platforms themselves. To date, there has been little data-driven analysis of the effects of regulated content deemed inappropriate on online user behavior. We therefore compared Twitter — a popular social media platform that occasionally removes content in violation of its Terms of Service — to Gab — a platform that markets itself as completely unregulated. Launched in mid-2016, Gab is, in practice, dominated by individuals who associate with the “alt right” political movement in the United States. Despite its billing as “The Free Speech Social Network,” Gab users display more extreme social hierarchy and elitism when compared to Twitter. Although the framing of the site welcomes all people, Gab users’ content is more homogeneous, preferentially sharing material from sites traditionally associated with the extremes of American political discourse, especially the far right. Furthermore, many of these sites are associated with state-sponsored propaganda from foreign governments. Finally, we discovered a significant presence of German language posts on Gab, with several topics focusing on German domestic politics, yet sharing significant amounts of content from U.S. and Russian sources. These results indicate possible emergent linkages between domestic politics in European and American far right political movements. Implications for regulation of social media platforms are discussed.
The Internet Police
2019 Breinholt, J. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Unraveling The Impact Of Social Media On Extremism
2019 Susarla, A. Report
Social media has been remarkably effective in bringing together groups of individuals at a scale and speed unthinkable just a few years ago. While there is a positive aspect of digital activism in raising awareness and mobilizing for equitable societal outcomes, it is equally true that social media has a dark side in enabling political polarization and radicalization. This paper highlights that algorithmic bias and algorithmic manipulation accentuate these developments. We review some of the key technological aspects of social media and its impact on society, while also outlining remedies and implications for regulation. For the purpose of this paper we will define a digital platform as a technology intermediary that enables interaction between groups of users (such as Amazon or Google) and a social media platform as a digital platform for social media.
Disinformation In Terrorist Content Online
2019 Jankowicz, N. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
EU Policy - Preventing The Dissemination Of Terrorist Content Online
2019 Krasenberg, J. Report
The use of the internet for recruitment and the dissemination of violent extremist materials raises significant policy challenges for the European Union (EU), its Member States, and content sharing platforms (CSPs) alike. This problem requires – through the eyes of the EU – a combination of legislative, non-legislative, and voluntary measures based on collaboration between authorities and CSPs with respect for fundamental (human) rights.
Social Media, Terrorist Content Prohibitions, And The Rule Of Law
2019 MacDonald, S. Report
The importance of the rule of law to an effective counterterrorism strategy is widely accepted. Adherence to rule of law values protects both the legitimacy and moral authority of counterterrorism policies and legislation. This paper focuses on two specific rule of law values: minimalism and certainty. Minimalism is concerned with issues of scope. Laws should be as narrowly drawn as possible in order to preserve individuals’ autonomy and freedom to choose, to the fullest extent possible. Certainty is concerned with issues of clarity. Laws should be worded as clearly as possible so that individuals are aware of their responsibilities and able to make informed choices about their actions. Narrowly, clearly drawn laws also limit the discretion vested in officials, thus providing protection against inconsistent or inappropriate decision-making by those tasked with implementing the law.
The rule of law is traditionally associated with public institutions, not private technology companies. In the contemporary realm of counterterrorism, however, a steadfast public-private distinction is difficult to maintain. Indeed, many have urged the importance of public-private partnership in responding to terrorists’ use of the internet. One specific issue that has generated much discussion has been social media companies’ regulation of extremist content on their platforms. Facebook’s Community Standards, the Twitter Rules and YouTube’s Community Guidelines all expressly prohibit content that promotes terrorism. Most of the discussion of these prohibitions has focused on the speed with which they are enforced, particularly following the attacks in Christchurch, New Zealand. This paper seeks instead to evaluate the prohibitions from the different, but equally important, perspective of the rule of law values of minimalism and certainty.
To inform the discussion, the paper draws on the debates that have surrounded the U.K. ‘Encouragement of Terrorism’ criminal offence. Created by the Terrorism Act 2006, and recently amended by the Counter-Terrorism and Border Security Act 2019, this offence has proved controversial from its inception for two principal reasons. First, the offence expressly encompasses both direct and indirect encouragement. Critics have argued that the concept of indirect encouragement is too nebulous and gives the offence too wide a scope. Second, the framing of the offence focuses not on the purpose of the speaker, but on whether the potential effect of the statement is to encourage terrorism.
This too, it has been argued, gives the offence too wide a scope. In terms of the social media companies’ prohibitions on terrorism-promoting content, this paper accordingly asks two questions. Do the prohibitions encompass indirect, as well as direct, encouragement? And, for the prohibitions to apply, must the encouragement of terrorism have been the purpose and/or the likely effect of the relevant content? The answer to neither question is clear from the wording of the prohibitions themselves. The paper will argue that, in terms of the values of minimalism and certainty, it is important that the answers to both questions are made explicit. It will also suggest how both questions should be answered and provide a proposed reformulation of the social media companies’ prohibitions on terrorism-promoting content.
Lessons from the Information War: Applying Effective Technological Solutions to the Problems of Online Disinformation and Propaganda
2019 Maddox, J. D. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Counterterrorism is a Public Function: Resetting the Balance Between Public and Private Sectors in Preventing Terrorist use of the Internet
2019 Guittard, A. Report
In the closing scene of The Social Network, one of Mark Zuckerberg’s lawyers marveled at Facebook’s global expansion, asking “In Bosnia, they don’t have roads, but they have Facebook?” While the statement (and much of the film) was factually incorrect, it captured the “move fast and break things” mentality of companies like Facebook as they revolutionized the way people around the world communicate. Despite its benefits, this revolutionary shift in communications has posed several public policy challenges, from election integrity to the erosion of local journalism to terrorism. As someone who has worked in counterterrorism for nearly a decade, first in government and now from the private sector, I’ve seen this evolution firsthand. To date, most efforts to deny terrorists the benefits of a free and open internet are voluntary and industry-led. These include the Global Internet Forum to Counter Terrorism and its Hash Sharing Consortium, the expansion of dedicated counterterrorism teams at Facebook and Google and the launch of initiatives such as YouTube Creators for Change. These are positive and socially responsible initiatives that should be encouraged to grow.
However, the U.S. government – both its political leadership and its CT experts – should not take the convenient route of outsourcing difficult public policy issues to private companies. These issues should be addressed legislatively and in partnership with industry. Curbing terrorists’ use of the internet raises important social questions about the limits of free speech, the definition of terrorism, and national sovereignty over the internet at a time when the U.S. public is increasingly skeptical of the ability of internet companies to act in the public interest.
By examining similar experiences balancing security with technological advancement, CT policy makers will see that cooperation with the private sector is often contentious at first, with industry eschewing new regulation. This paper will examine three such cases: the restriction of radio in WWI, the introduction of counter-money laundering requirements on banks and the introduction of airline passenger screening. These cases show that when the government acts within its Constitutional authorities to set clear expectations and works with industry in good faith, industry, government and the public all benefit.
Fighting Hate Speech And Terrorist Propaganda On Social Media In Germany
2019 Ritzmann, A. Report
This paper, part of the Legal Perspectives on Tech Series, was commissioned in conjunction with the Congressional Counterterrorism Caucus.
Three Constitutional Thickets: Why Regulating Online Violent Extremism is Hard
2019 Keller, D. Report
In May of 2019, two months after an attacker horrified the world by livestreaming his massacre of worshippers in two New Zealand mosques, leaders of Internet platforms and governments around the world convened in Paris to formulate their response. In the resulting agreement, known as the Christchurch Call, they committed “to eliminate terrorist and violent extremist content online,” while simultaneously protecting freedom of expression. The exact parameters of the commitment, and the means to balance its two goals, were left vague – unsurprising in a document embraced by signatories from such divergent legal cultures as Canada, Indonesia, and Senegal. The U.S. did not sign, though it endorsed similar language through the G7 as recently as 2018, and will be asked to do so again in 2019.

In this paper, I review U.S. constitutional considerations for lawmakers seeking to balance terrorist threats against free expression online. The point is not to advocate for any particular rule. In particular, I do not seek to answer moral or norms-based questions about what content Internet platforms should take down. I do, however, note the serious tensions between calls for platforms to remove horrific but First Amendment-protected extremist content – a category that probably includes the Christchurch shooter’s video – and calls for them to function as “public squares” by leaving up any speech the First Amendment permits. To lay out the issue, I draw on analysis developed at greater length in previous publications. This analysis concerns large user-facing platforms like Facebook and Google, and the word “platform” as used here refers to those large companies, not their smaller counterparts.

The paper’s first section covers territory relatively familiar to U.S. lawyers concerning the speech Congress can limit under anti-terrorism laws. This law is well-summarized elsewhere, so my discussion is quite brief. The second section explores a less widely understood issue: Congress’s power to hold Internet platforms liable for their users’ speech. The third section ventures farthest afield, reviewing constitutional implications when platforms themselves set the speech rules, prohibiting legal speech under their Terms of Service (TOS). I will conclude that paths forward for U.S. lawmakers who want to both restrict violent extremist content and protect free expression are rocky, and that non-U.S. laws are likely to be primary drivers of platform behavior in this area in the coming years.
Leveraging CDA 230 to Counter Online Extremism
2019 Bridy, A. M. Report
Current events make it plain that social media platforms have become vectors for the global spread of extremism, including the most virulent forms of racial and religious hatred. In October 2018, a white supremacist murdered 11 people at a synagogue in Pittsburgh, Pennsylvania. The shooter was an active user of the far-right social network Gab, on which he had earlier complained that a refugee-aid organization linked to the synagogue was importing foreign “invaders” to fight a “war against #WhitePeople.” Journalists searching the shooter’s social media accounts for a motive discovered a trail of anti-Semitic posts, including notorious Jewish conspiracy memes widely shared within the far-right’s online ecosystem. In March 2019, another white supremacist massacred 51 people at two mosques in Christchurch, New Zealand. Minutes before the attack, he shared links on 8chan to his Facebook page and a rambling racist manifesto.
Then he live-streamed the carnage to Facebook, which didn’t intervene in time to keep the footage from going viral on YouTube and elsewhere. To say that extremist content online caused the Pittsburgh and Christchurch tragedies would be a gross oversimplification. At the same time, however, we must reckon with the fact that both shooters were enmeshed in extremist online communities whose members have cultivated expertise in using social media to maximize the reach of their messages. YouTube’s Chief Product Officer described the Christchurch massacre as “a tragedy…designed for the purpose of going viral.”
As offline violence with demonstrable links to online extremism escalates, regulators have made it clear that they expect the world’s largest social media platforms to more actively police harmful online speech, including that of terrorist organizations and organized hate groups. In the aftermath of the Christchurch shooting, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron urged governments and tech companies to join together in the Christchurch Call, a “commitment…to eliminate terrorist and violent extremist content online.” As their part of the bargain, Facebook, YouTube, Twitter, and several other tech companies agreed to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”
A Plan for Preventing and Countering Terrorist and Violent Extremist Exploitation of Information and Communications Technology in America
2019 Alexander, A. Report
Policymakers in the United States know that terrorists and violent extremists exploit information and communications technologies (ICTs), but the government still struggles to prevent and counter these threats. Although the U.S. does not face these challenges alone, the strategies and policies emphasized by some of its greatest allies are not viable or suitable frameworks for domestic policymakers. Since these threats persist, however, the U.S. government must develop a cohesive strategy to prevent and counter terrorist and violent extremist exploitation of ICTs. The approach should rest on the pillars of pragmatism, proportionality, and respect for the rule of law, and aim to disrupt terrorist and violent extremist networks in the digital sphere. To pursue this objective, the following brief calls for political leaders to create an interagency working group to formalize leadership and conduct a comprehensive assessment of terrorist and violent extremist abuse of ICTs. The evaluation must also weigh the costs and benefits associated with responses to these threats. Then, government officials should work to enhance the capability and coordination of government-led efforts, pursue partnerships with non-governmental entities, and facilitate productive engagements with the technology industry. In short, this approach would allow the government to use legislation, redress, and strategic outreach to empower more players to responsibly prevent and counter terrorist and violent extremist exploitation of ICTs.
Jihadist online communication and Finland
2019 Malkki, L. and Pohjonen, M. Article
This study focuses on jihadist online communication in 2014–2018 from the perspective of Finland. In particular, it examines and analyses the visibility of Finland and persons connected to Finland in jihadist online communication and investigates the types of content that persons who are or were living in Finland have produced and disseminated on different online platforms and channels. The report focuses on jihadist material that was openly available online during this period. It also contains a section describing the development of jihadist online communication more generally, thus helping to put observations in a broader international context. Until the early 2010s, Finland was conspicuous by its almost total absence from jihadist content. This situation changed in the early stages of the conflict in Syria, at which time jihadist online communication became more prolific in Western countries. Around the same time, Finland also became visible in international jihadist discussions for the first time. During the early years of the conflict, the volume of jihadist online communication related to Finland was higher than ever before. This volume should not be exaggerated, however, as the quantity of online communication linked to Finland remained relatively small by international comparison. Finland has been mentioned, and persons coming from Finland have appeared, a number of times in ISIS publications and videos. Persons originating from Finland have mainly featured in stories and videos intended for the Western public in a wider sense. In ISIS publications, Finland is also cited as one entry on the organisation’s long list of enemies and as a heathen country where orthodox Muslims are not understood. References to Finland in jihadist groups’ publications have been few and far between, and they do not add up to a clear picture of Finland as an important target.
It is likely that ISIS has used persons from Finland in jihadist publications and videos mainly to target its communication at broader Western audiences. Numerous persons originating from other Western countries also appear in similar stories and videos. These stories and videos offer a new point of comparison, however, which may make identification with jihadist activity seem more natural in Finland as well. At this stage, the actual impact this has ultimately had on the development of jihadism in Finland should be considered an open question.
In addition to established jihadist groups, jihadist online content has also been produced independently by their supporters. Many of those who have travelled to Syria and Iraq from Finland have been active on social media. In addition to everyday messages typical of social media discussions, this activity has included reporting on local conditions, responding to questions, and encouraging others to go to Syria or Iraq. The communication has also contained some threats against Finland and, for example, against Shia Muslims. While the preferred channels for these activities appear to have been Facebook and Twitter, such content was also found on other discussion platforms. Persons who are or were living in Finland have disseminated and, in some cases, also produced jihadist content in the Finnish language. The majority of the content available and distributed outside social media, including on websites and in blogs or online discussion forums, has been translated from other languages. The volume of Finnish content has been quite modest, and as far as we can conclude, it is unlikely to have attracted a large number of readers.
The volume of open jihadist online communication has dropped significantly in the last three years, as technology companies have started actively deleting content inciting or advocating violence. Apart from a few exceptions, the content discussed in this study is no longer available online. The golden days of open jihadist online communication are now mostly over, and this also applies to content related to Finland or provided in Finnish. While online content related to Finland has almost completely disappeared from public platforms, this does not mean that it no longer exists elsewhere. As disseminating such content publicly has become more difficult, jihadist online communication has moved to closed and encrypted channels. The study found signs indicating that jihadist content related to Finland has been shared, and interaction associated with it has occurred and may still occur, on these closed and encrypted channels.
The impact of jihadist online communication on Finland is not limited to content directly linked to Finland, and content in which “Finland is mentioned” in some way is not always automatically the most significant for Finland. Online communication is produced in countless other languages, and it is frequently also consumed in languages other than the audience’s native language. Rather than living in the same country, the producers and consumers of jihadist online communication are typically part of the complex online milieu of international movements. Interpersonal relationships, even highly significant ones, may be established through online communication. This possibility may facilitate attachment to jihadist activity, especially for people who find it difficult to find persons with a similar ideological predisposition close by. At the same time, it should be noted that, according to research findings, face-to-face interaction still almost always plays a significant role in recruitment to jihadist activism.
Modeling Islamist Extremist Communications on Social Media using Contextual Dimensions: Religion, Ideology, and Hate
2019 Kursuncu, U., Gaur, M., Castillo, C., Alambo, A. Thirunarayan, K., Shalin, V., Achilov, D., Budak Arpinar, I. and Sheth, A. Article
Terror attacks have been linked in part to online extremist content. Although tens of thousands of Islamist extremism supporters consume such content, they are a small fraction relative to peaceful Muslims. The efforts to contain the ever-evolving extremism on social media platforms have remained inadequate and mostly ineffective. Divergent extremist and mainstream contexts challenge machine interpretation, with a particular threat to the precision of classification algorithms. Our context-aware computational approach to the analysis of extremist content on Twitter breaks down this persuasion process into building blocks that acknowledge inherent ambiguity and sparsity that likely challenge both manual and automated classification. We model this process using a combination of three contextual dimensions -- religion, ideology, and hate -- each elucidating a degree of radicalization and highlighting independent features to render them computationally accessible. We utilize domain-specific knowledge resources for each of these contextual dimensions, such as the Qur'an for religion, the books of extremist ideologues and preachers for political ideology, and a social media hate speech corpus for hate. Our study makes three contributions to reliable analysis: (i) Development of a computational approach rooted in the contextual dimensions of religion, ideology, and hate that reflects strategies employed by online Islamist extremist groups, (ii) An in-depth analysis of relevant tweet datasets with respect to these dimensions to exclude likely mislabeled users, and (iii) A framework for understanding online radicalization as a process to assist counter-programming. Given the potentially significant social impact, we evaluate the performance of our algorithms to minimize mislabeling, where our approach outperforms a competitive baseline by 10.2% in precision.
Online Terrorist Propaganda, Recruitment, and Radicalization
2019 Vacca, J. R. Book
Online Terrorist Propaganda, Recruitment, and Radicalization is the most complete treatment of the rapidly growing phenomenon of how terrorists' online presence is utilized for terrorism funding, communication, and recruitment purposes. The book offers in-depth coverage of the history and development of online "footprints" to target new converts, broaden their messaging, and increase their influence. Chapters present the emergence of various groups; the advancement of terrorist groups' online presences; their utilization of video, chat room, and social media; and the current capability for propaganda, training, and recruitment.

With contributions from leading experts in the field, including practitioners and terrorism researchers, the coverage moves from general factors to specific groups' practices as they relate to the Islamic State of Iraq and the Levant (ISIL) and numerous other groups. Chapters also examine the lone wolf phenomenon as a part of the disturbing trend of self-radicalization. A functional, real-world approach is used regarding the classification of the means and methods by which an online presence is often utilized to promote and support acts of terrorism.

Online Terrorist Propaganda, Recruitment, and Radicalization examines practical solutions in identifying the threat posed by terrorist propaganda and U.S. government efforts to counter it, with a particular focus on ISIS, the Dark Web, and national and international measures to identify, thwart, and prosecute terrorist activities online. As such, it will be an invaluable resource for intelligence professionals, terrorism and counterterrorism professionals, those researching terrorism funding, and policymakers looking to restrict the spread of terrorism propaganda online.
From “Incel” To “Saint” - Analyzing The Violent Worldview Behind The 2018 Toronto Attack
2019 Baele, S. J., Brace, L. and Coan, T. G. Article
This paper combines qualitative and quantitative content analysis to map and analyze the “Incel” worldview shared by members of a misogynistic online community ideologically linked to several recent acts of politically motivated violence, including Alek Minassian’s van attack in Toronto (2018) and Elliot Rodger’s school shooting in Isla Vista (2014). Specifically, the paper analyses how support and motivation for violence result from the particular structure that this worldview presents in terms of social categories and causal narratives.
Social Media and Terrorist Financing: What are the Vulnerabilities and How Could Public and Private Sectors Collaborate Better?
2019 Keatinge, T. and Keen, F. Article
• Social media companies should recognise the political importance of counterterrorist financing (CTF) by explicitly reflecting the priorities of the UN Security Council and the Financial Action Task Force (FATF) in their policies, strategies and transparency reports.
• Furthermore, social media companies identified as being at high risk of exploitation should update their terms of service and community standards to explicitly reference and outlaw terrorist financing (consistent with universally applicable international law and standards such as those of the FATF) and actions that contravene related UN Security Council resolutions and sanctions.
• Social media companies should clearly demonstrate that they understand and apply appropriate sanctions designations; at the same time, policymakers should ensure that sanctions designations include, where possible, information such as email addresses, IP addresses and social media handles that can support sanctions implementation by social media companies. The more granular the information provided by governments on designated entities, the more efficiently the private sector can comply with sanctions designations.
• Social media companies should more tightly control functionality to ensure that features that can be used to raise terrorist funding through social media videos, such as big-brand advertising and Super Chat payments, are disabled.
• Researchers and policymakers should avoid generalisations and make a clear distinction between forms of social media and the various terrorist-financing vulnerabilities that they pose, recognising the different types of platforms available, and the varied ways in which terrorist financiers could abuse them.
• Policymakers should encourage both inter-agency and cross-border collaboration on the threat of using social media for terrorist financing, ensuring that agencies involved are equipped with necessary social media investigative capabilities.
• International law enforcement agencies such as Interpol and Europol should facilitate the development of new investigation and prosecution standard operating procedures for engaging with operators of servers and cloud services based in overseas jurisdictions to ensure that necessary evidence can be gathered in a timely fashion. This would also encourage an internationally harmonised approach to using social media as financial intelligence.
• Policymakers should encourage the building of new, and leveraging of existing, public–private partnerships to ensure social media company CTF efforts are informed and effective.
The Conflict In Jammu And Kashmir And The Convergence Of Technology And Terrorism
2019 Taneja, K. and Shah, K. M. Article
This paper provides recommendations for what government and social media companies can do in the context of Jammu and Kashmir’s developing online theatre of both potential radicalisation and recruitment.
Hidden Resilience And Adaptive Dynamics Of The Global Online Hate Ecology
2019 Johnson, N. F., Leahy, R., Johnson Restrepo, N., Velasquez, N., Zheng, M., Manrique, P., Devkota, P. and Wuchty, S. Article
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes and an alarming increase in youth suicides that result from social media vitriol; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and sex-trafficking of girls as fighter brides; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and its top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours, such as financial fraud.
Defining Online Hate And Its Public Lives- What Is The Place For Extreme Speech?
2019 Gagliardone, I. Article
Following Sahana Udupa and Matti Pohjonen’s (2019) invitation to move the debate beyond a normative understanding of hate speech, this article seeks to build a foundation for conceptual and empirical inquiry into speech commonly considered deviant and disturbing. It develops in three stages. It first maps the public lives of terms that refer to online vitriol and how they have been used by different communities of researchers, politicians, advocacy groups, and national organizations. Second, it shows how different types of “haters” have been interpreted as parts of “swarms” or “armies,” depending on whether their violent potential emerges around critical incidents or whether they respond to longer-term strategies through which communities and their leaders tie their speech acts to explicit narratives. The article concludes by locating “extreme speech” within this broader conceptual tapestry, arguing that the paternalistic gaze that characterizes a lot of research on online hate speech is tied to what Chantal Mouffe has referred to as the “moralization of politics,” a phenomenon that cannot be matched by responses that are themselves moral.