Lone Actors in Digital Environments

By Alexander Ritzmann

The Halle attacker – who killed two people, injured two others and had aimed to kill dozens more at a synagogue on 9 October 2019 – was inspired and motivated by online manifestos. He also streamed his attack live and posted his own manifesto online. His attack has been described as a typical ‘lone actor’ attack.

‘Lone wolves’, ‘lone actors’, ‘solo terrorists’, ‘loners’, ‘lone attackers’ – all of these terms suggest a single individual who finds his or her way into an extremist ideology without affiliating with a group or network, and who operates alone. However, most so-called ‘lone actors’ who carry out attacks subscribed to the narratives of certain unorganised collectives. These collectives are leaderless and lack clear hierarchies, but their followers are connected and united by shared narratives, values and enemies. During his trial, for example, the Halle shooter said that he had not joined any group because he assumed they would all be under surveillance. Yet he made clear that he saw himself as a soldier fighting for the “white race”.

Radicalisation Awareness Network (RAN) Practitioners have organised a number of meetings on the topic of “lone actors”, and one outcome of a recent meeting is very clear: so-called lone actors are usually neither alone, nor do they feel lonely. This makes the narrative of ‘lone’ actors inaccurate and potentially harmful: it can lead to an underestimation of the milieus and informal networks that provide ideological, moral and sometimes logistical support to lone attackers. Online and social media platforms clearly provide the means for “lone actors” to be recruited or to self-recruit, to meet like-minded others and to consume terrorist narratives. That said, terrorism in most of its forms existed long before social media.

So-called lone actors usually are neither alone nor do they feel lonely

Social media companies have invested resources in content moderation tools and procedures to reduce violent, hateful and extremist speech online. They also partner with specialist organisations in the field of preventing and countering violent extremism (P/CVE), such as Moonshot CVE. A concrete example of this is the one-to-one online intervention approach, designed to fill a gap: there had been no systematised attempts to supplement counter-speech efforts with direct online messaging and engagement at scale. Delivered on Facebook to date and working across violent right-wing extremism (VRWE) and violent Islamist ideologies, the programme offers individuals showing clear signs of radicalisation the opportunity to meet and engage with someone who can support their exit from hate. Furthermore, Moonshot CVE helps the platforms identify hotspots, narratives and terminology related to online radicalisation.

However, significant challenges remain, and the companies should invest more resources in making their services safer for users. While the immense volume of user data may make it difficult to search for terrorist content, the platforms are nevertheless able to scan, process and monetise all of that data. First-line practitioners face a related challenge: they are often told to ‘be present online’, but do not necessarily know which platforms or websites are relevant, how to behave and communicate online, or which risk indicators to look out for. In certain milieus especially, such as the recent phenomenon of violent incels, so-called shitposting and violent language are a key element of everyday communication. Differentiating between noise and relevant signals, and between low-risk and high-risk individuals, is therefore hard.

Secondly, the digital landscape in which lone actors operate changes constantly. For social media companies and online platforms, it is a ‘cat and mouse’ game of constant pursuit, near captures and frequent comebacks. When extremists are removed from platforms such as Facebook, they simply move to others, such as 8chan or Telegram.

Trends, narratives, memes, insignia

The recommendations for preventing and countering “lone actors” from turning to violence largely resemble P/CVE strategies that have been in place within RAN for many years. It is key to maintain existing efforts to properly understand individuals and their pathways into radicalisation, and to learn from debriefings, trials and research. Practitioners need to be trained to recognise the trends, narratives, memes and insignia of online extremist milieus, and to be able to engage in conversations with individuals vulnerable to radicalisation. This enables them to detect specific warning signs.

On the policy side, the dialogue with, and pressure on, social media and online-gaming companies must be kept up regarding their efforts not only to deplatform terrorist content but also to proactively identify potential digital “lone actor” terrorists. Lastly, it is key to invest in policy-oriented research on this phenomenon, in particular in identifying the digital milieus where potential “lone actors” are active, so that we do not misunderstand, and then miscalculate, the threat posed by terrorists without formal ties to organisations.


Alexander Ritzmann is a Senior Advisor to RAN and the Counter Extremism Project. On Twitter @alexRitzmann.

This article is republished with permission from the April 2021 edition of Spotlight magazine, ‘Digital Challenges’. Spotlight is a publication from the European Commission’s Radicalisation Awareness Network for RAN’s network of practitioners. Image credit: Unsplash.