Adversarial Shifts and the Availability of Extremist Content Online

By Connor Rees

Online extremist activity is not a new phenomenon. Terrorist and Violent Extremist (TVE) use of the Internet has been increasingly well researched since the turn of the millennium. This growing body of knowledge has improved our understanding of why TVEs use the Internet, for purposes including recruitment, spreading propaganda, and fundraising (Weimann, 2004; Conway, 2006). How terrorists use the Internet is constantly changing and evolving, however. To keep pace, researchers continue to draw on numerous data types, including open-source data, to analyse online TVE activity. This blog provides a roadmap of TVE use of the Internet (past, present, and future) and proposes that locating, and therefore analysing, online extremist content may only become more difficult.

First Contact

Collecting open-source intelligence on online extremist groups, actors, and communities is a well-established practice. TVE groups and actors have a longstanding and well-documented history with the Internet. Like the rest of society, extremists were quick to exploit the Web's then-newfound benefits, using its services for global and immediate communication.

The prolific use of the Internet by TVEs has taken many forms. The first documented success was the development of Terrorist/Violent Extremist Operated Websites (T/VEOWs), e.g., Stormfront. Since its launch in 1996, this white nationalist platform has provided a centralised, public-facing safe haven in which the far-right could develop a sense of community. Stormfront (and equivalent T/VEOWs) has since offered ample opportunities for data collection and analysis. Between its creation in 1996 and 2017, the site amassed over 13 million posts (Conway, 2018; Prisk, 2017), the majority of which were public-facing and available for collection by researchers conducting open-source data analysis. Provided researchers knew where to look, data collection was relatively straightforward during this period.

Recent History

Despite this longstanding and well-documented history, TVE use of the Internet did not receive worldwide attention and notoriety until 2012 and beyond, when extremists began to engage intensively with social media (Gill, 2016). During this early period of social media's lifecycle, the full spectrum of extremist groups sought to mobilise in this new, public-facing online environment. Conway et al., for example, referred to 2014 as the 'Golden Age' of ISIS on Twitter. Researchers active during this period had a wealth of open-source extremist content to draw on.

Following this, social media companies began working proactively to improve the moderation of content found on their platforms. These efforts came in response to public outcry, which in turn prompted international governmental regulation (Hui et al., 2022). This wave of terrorist content regulation largely followed the introduction of Germany's Network Enforcement Act (NetzDG) in 2017. The subsequent wave of international online regulation generally required Internet companies to identify terrorist use of their services and remove it, often within short removal deadlines, with fines levied where regulations were not met.

If, today, terrorists were able to maintain a stable and overt presence on mainstream social media platforms, they almost certainly would do so. Mainstream social media platforms have very large user bases that TVE actors wish to exploit for purposes including disseminating propaganda to as wide an audience as possible. Maintaining such a presence is no longer as easy as it once was, however. Increased content moderation has produced adversarial shifts, whereby extremists and terrorists have been forced to change their behaviour and/or their preferred platform(s) to evade widespread takedowns (Vegt et al., 2019).

Looking Forward

Terrorist content is certainly still present on the larger tech platforms to some degree, though it is far less prevalent than in previous years. Online extremist and terrorist content does not cease to exist when it meets the content moderation wall, however. When pushed off major social media platforms, the content goes back to where it started.

At the beginning of this post, it was noted that violent extremist and terrorist content first appeared on websites and discussion forums before the focus shifted to social media. The findings of a 2022 report by Tech Against Terrorism (TAT) indicate that TVE groups are once again utilising T/VEOWs. The TAT report was based on analysis of 198 T/VEOWs spanning the full spectrum of extremist ideologies and groups, all used in similar ways for similar purposes. To indicate the scale of this adversarial shift, a sample of just 33 of the 198 T/VEOWs had combined average monthly visits of 1.54 million.

T/VEOWs offer a number of affordances, most notably that TVEs are far less exposed to content moderation practices, since these websites' service providers either lack the capacity or the intention to moderate extremist content. Consequently, T/VEOWs effectively undermine international efforts to combat terrorist use of the Internet. They do little to inhibit open-source data collection, however.

The point of concern lies in the direction these T/VEOWs may take. If extremists continue the trend of using their own platforms, countering TVE content may only become more difficult (Tech Against Terrorism, 2022). Increased adoption of alternative spaces, including end-to-end encrypted messaging services, the 'Dark Web,' or decentralised hosting technologies, would see this concern realised. This raises concern not only about the Internet industry's capacity to remove extremist content, but also about researchers' capacity to access and analyse data from these platforms at all.

Connor Rees is a PhD student at Swansea University, studying the relationship between the extreme right and hybrid human-automated extremist content removal. Twitter: @Connor_Rees67