Now You See It, Now You Don’t? Moving Beyond Account & Content Removal in Digital Counter-Extremism Operations

By Lorand Bodo

On 25–26 April 2018, a major multinational digital content takedown operation was conducted against the Islamic State (IS), targeting the major online media outlets directly associated with the group. The operation reportedly succeeded in collecting digital evidence about IS activities, including the seizure of servers located in Canada, the Netherlands and the US.

This multinational takedown operation was coordinated by the European Union Internet Referral Unit (EU IRU), which was established in 2015 as part of the European Counter Terrorism Centre (ECTC) based at the headquarters of the European Union Agency for Law Enforcement Cooperation, more commonly known as Europol.

A few days later, Mina al-Lami, a BBC Monitoring jihadist media specialist, analysed the Europol operation’s efficacy. Al-Lami found that during the operational period, little change was observable on the messaging platform Telegram regarding IS media output. In fact, Islamic State media operatives continued to disseminate propaganda using Telegram. Furthermore, al-Lami rightly pointed out that it was unclear what Europol’s precise objectives were. In this context, two main points stand out. First, Europol’s operational press release did not mention Telegram, either as a target or as a partner. Second, it is clear that the operation took down some servers, but we do not know how many were affected in total, and it could therefore be argued that IS still has plenty of other options to disseminate its propaganda across the surface internet, deep web and dark net.

We will probably need to wait to see the full results of this operation, such as arrests of IS media operatives on the basis of digital evidence acquired during it, but we can already consult the existing literature on the efficacy of such takedown operations.

There are only a few studies that have examined this so-called ‘hard’ approach to policing extremist digital content. One of the first was conducted by J.M. Berger and Jonathon Morgan (2015) in The ISIS Twitter Census, which aimed to define and describe the population of ISIS supporters on Twitter. Inter alia, Berger and Morgan found that account suspensions “have concrete effects in limiting the reach and scope” of IS activities on Twitter. However, both authors advise against this approach, as it would lead to the loss of valuable intelligence and result in networks that become more internally focused, which could, in turn, accelerate or intensify the radicalisation of the individuals remaining in them.

In another study, Berger and Perez (2016) examined English-language IS supporters from June to October 2015 and found that suspensions had an overall flattening effect on the size and reach of the network, significantly reducing the reach of individual users who had repeatedly been targeted. Most interestingly, Berger and Perez observed the adoption of online counter-measures by IS supporters, including methods to quickly generate new accounts and ways to find other suspended users.

A more recent study, conducted by Conway et al. (2017), aimed to accurately measure takedowns of terrorist material online and pro-IS Twitter accounts. Their findings suggest that aggressive account removals had a positive impact in making the Twitter platform less appealing to IS supporters, although some propaganda material was still surfacing on Twitter. Additionally, they observed that this aggressive approach was primarily targeting IS accounts, while other jihadist-related accounts appeared to be subject to less pressure.

From these studies, three things are apparent. First, the ‘hard’ approach does appear to have had, in some cases, a positive impact in terms of limiting the reach of the group. But, second, as in the case of IS on Twitter, the targeted group has quickly adapted to these new circumstances and found new ways to disseminate its extremist materials. And third, the Islamic State appears to be the main target of this hard approach, with less attention paid, at least to date, to other extremist groups, whether other jihadi-inspired violent extremist groups or extreme right- or left-wing movements.

The eternal truth that more research is needed applies in this case, particularly research that goes beyond Twitter to include other (fringe) social media sites, and that thoroughly examines a given group’s reaction to account removals, so as to gain more granular insight into the efficacy of the ‘hard’ approach. Furthermore, future research needs to examine the impact of takedowns on various other issues, such as their gendered effects (Pearson, 2017), to include other extremist ideologies and to use comparative research designs.

These operations need to be mindful that targeted groups are often resilient and can readily adapt to overcome content takedowns and account suspensions. Since even multinational efforts offer little more than a forlorn hope of effectively policing something as vast as the Internet, law enforcement authorities must be responsive to the multiplicity of alternative methods available for (so-called “bulletproof”) hosting and distribution of content online, as well as to the steady proliferation of alternative social networking platforms to which targeted groups can migrate.

To illustrate this point, let’s return to Europol and its latest action against IS. Only a few days after A’maq News Agency had been taken down, IS relaunched its website, according to Charlie Winter, “complete with thirty months’ worth of operation claims, videos and infographics”. Ironically, the new website uses the .eu country-code top-level domain, according to Jihadoscope, which may or may not be a conscious effort to troll Europol.

We can observe similar counter-measures adopted by other extremist groups, for example the UK far-right group Britain First. In December 2017, Britain First and its leaders, Paul Golding and Jayda Fransen, were banned from Twitter, which led the leadership to urge their followers to move to an alternative platform, namely Gab.ai, a social media platform that puts “people and free speech first” and welcomes users banned or suspended from other social networks. A few months later, Facebook banned the group from its platform, which reinforced Britain First’s decision to ‘migrate’ to Gab, where the likelihood of being banned is close to zero.

Given these manifest shortcomings of an overly narrow approach to ‘takedown’ operations, what are the alternatives? First, law enforcement operations must be planned in the knowledge that they will generate effects: not only the intended effects of benefit to law enforcement, but also reactive changes in the behaviour of targeted groups (and, given the attendant publicity of such operations, in the behaviour of other groups that learn lessons from the coverage). This adds complexity to the operational planning phase, and should result in a more sophisticated operational plan that pre-emptively addresses these second-order effects. In other words, a simple ‘hard’ approach will deliver content removals and account suspensions in the short run, but targeted groups will quickly adapt, using other means to distribute their content and engage in online activities, with the result that law enforcement and security agencies may ‘lose sight of’ these groups as they migrate to other platforms and adopt revised communications-security protocols. This highlights the complexity of the problem and the corresponding need for a more complex operational response.

Rather than solely focusing on distinct and isolated issues, e.g. extremist content online, operational decision makers should embrace a more holistic way of thinking about countering violent extremism in the digital space.

One suggestion could be to build critical-thinking and media-literacy skills among youths and adults. Why this is more important than simply taking down accounts and content is demonstrated by the following statement from Abu Yahya al-Libi, who is considered to have been a leading thinker behind Al-Qaeda’s propaganda strategy:

“As you all know, most common people are not familiar with scientific discussions and political analysis. They act according to their emotions. Even a photograph which is attached to an inspiring song will motivate a large number of them [for example, a photograph of] injured or poor people. This will cause merchants to give money for the sake of Allah.” (Clarion Project, 2018)

In short, successfully evoking specific emotions is key to any effective propaganda strategy: improved critical-thinking and media-literacy skills must therefore play an important role in countering this strategy. This relates directly to the notion of community cohesion and resilience as a vital element of national security, as articulated by the former UK security and intelligence coordinator, Sir David Omand, in his (2010) book Securing the State.

Recently, the Institute for Strategic Dialogue published a report entitled Digital Resilience: Stronger Citizens Online, which explored the educational impact of a curriculum aimed at fostering digital resilience. The report suggests that “the curriculum had a positive impact, increasing students’ sense of responsibility for their actions online, as well as their self-reported knowledge on a range of topics that are critical to safe and resilient use of the Internet”.

Overall, given the considerable difficulty, if not impossibility, of completely removing violent extremist content from the Internet, citizens can be forgiven for scepticism about whether governments should continue to spend large sums of tax-payers’ money on takedown operations that resemble a never-ending game of ‘Whac-a-Mole’. Certainly, from an intelligence-gathering perspective, leaving sites in play for longer enables more effective exploitation and monitoring to generate operational insights into these extremist groups. So, there is an inevitable trade-off between intelligence collection and operations to remove digital content transmission channels. Surveillance rather than disruption can lead to improved understanding and perhaps actionable intelligence, but a more successful effort to remove content could obviously better disrupt a group’s propaganda machine and potentially reduce the size of its online audience.

More broadly, to counter violent extremism in the round, we need to interpret this as more than a narrow problem amenable only to law enforcement or intelligence solutions: a social and educational approach will be integral, drawing on the notion of a community’s or wider society’s resilience both ‘in real life’ and in the digital domain.


Lorand Bodo is a researcher at Ridgeway Information focusing on open source intelligence (OSINT) and online extremism. You can find him on Twitter @LorandBodo.

This article was originally published on Medium.com.
