Why outsourcing counter-terrorism online won’t work in future

By David Wells

Policing online hate speech currently falls into a murky
space shared between governments and big tech.

The past five years have seen the creation of a latticework of overlapping methods to identify and remove online terrorist content and hate speech. Notably, a significant proportion of these efforts have effectively been privatised, outsourced to the tech sector.

So, how concerned should governments be that the big tech bubble has begun to burst? Falling share prices have led to thousands of job cuts across the tech sector over the past few months. While there is little evidence yet – with the exception of Twitter – that those cuts have directly impacted content moderation staff, the direction of travel is not a positive one.

Up to now, and under pressure from both governments and advertisers, tech platforms have invested significantly in people and technology to proactively remove or block harmful content. Some national and regional initiatives have introduced legal requirements for tech companies to remove content identified by authorities as being illegal or harmful. To improve the efficacy and cross-platform nature of these content moderation efforts, the UN Security Council has also called for voluntary tech sector engagement in collaborative initiatives and encouraged public-private partnerships.

These efforts came to fruition at a time when counter-terrorism was one of the highest-priority issues, if not the highest, for many countries – and during a period of unparalleled big tech growth and profit margins. But a decline in the relative focus on counter-terrorism for governments is now coinciding with financial pressure on Big Tech. Yet arguably, countering terrorism and violent extremism online is more complex than at any time since these countermeasures were established.

Rather than primarily countering Islamic State, a universally recognised terrorist group whose online propaganda materials included formal logos and standardised images – identifiable and removable using digital hashing technology – the current threat environment is significantly more diverse. Online content is therefore more difficult to place on a continuum between distasteful but legal speech and terrorist incitement.
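The hash-matching approach mentioned above can be sketched in a few lines. This is an illustrative simplification, not a description of any platform's actual system: industry tools such as the GIFCT hash-sharing database rely on perceptual hashes (for example PDQ or PhotoDNA) that tolerate cropping and re-encoding, whereas the SHA-256 digest below only matches byte-identical copies. That limitation is instructive in itself, since exact matching works well against standardised propaganda imagery of the kind Islamic State produced, but struggles with the diverse, mutated content described in this article.

```python
import hashlib


def content_hash(data: bytes) -> str:
    """Return a hex digest that identifies a piece of content exactly."""
    return hashlib.sha256(data).hexdigest()


def is_known_bad_content(data: bytes, shared_hash_db: set[str]) -> bool:
    """Check an upload against a shared database of known-bad hashes."""
    return content_hash(data) in shared_hash_db


# A platform seeds its blocklist from a shared industry database
# (hypothetical example bytes standing in for a propaganda image).
hash_db = {content_hash(b"known propaganda image bytes")}

# An exact copy is caught; anything even slightly altered is not.
print(is_known_bad_content(b"known propaganda image bytes", hash_db))    # True
print(is_known_bad_content(b"novel, previously unseen content", hash_db))  # False
```

Because a single changed byte produces an entirely different digest, real content moderation pipelines layer perceptual hashing and classifiers on top of this basic lookup.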

Alongside Islamic State, al-Qaeda, and the growing threat posed by the transnational far right, there has been a rise in threats posed by individuals motivated by mixed ideologies, drawing inspiration from a cocktail of mainstream and fringe narratives. The loose, unstructured and overlapping nature of these online movements, and the ideological connections between them and mainstream political and media commentators, make it increasingly difficult and politically sensitive for Western governments to decide whether content is illegal or harmful.

Instead, an increasing number of decisions continue to be outsourced to the private sector. Tech platforms use terms of service and content moderation policies to remove material and ban or shadow-ban individuals and specific accounts, with limited transparency or opportunity for redress – to the understandable ire of human rights bodies.

Despite legitimate concerns, this approach has largely succeeded in pushing overtly terrorist and violent extremist actors onto less tightly moderated online spaces (including gaming platforms), under-resourced smaller platforms, or pro-free speech apps.

However, all major platforms have continued to face challenges in relation to less overt but increasingly dangerous hate speech, particularly in connection with far-right and nationalist movements globally.

In these less overt contexts, informed decision-making requires language skills, local and cultural context, and knowledge of the narratives propagated by hate groups – particularly when the far right so often hides the true meaning of its messaging behind irony, memes and GIFs. None of this is easy or cheap, especially for platforms that are global in scale.

Which brings us back to the recent job cuts impacting big tech. Although existing content moderation teams and structures for these platforms appear safe thus far, the exception to the rule – Twitter – is perhaps illustrative of the potential impact of cuts elsewhere.

There are early indications that terrorist groups will seek to take advantage of weakening content moderation rules, and that Twitter will be more welcoming of far-right figures who seek to spread hate speech (particularly targeted at the LGBT+ community) and who had previously been consigned to the online fringes.

For now, this is likely to remain atypical among the larger platforms, which have more stable leadership and bigger profit margins to absorb the economic pressure. But Twitter’s shift in approach demonstrates the fragility of the existing model, and the extent to which many established norms were voluntary, not legal requirements.

Twitter’s approach may also provide other platforms with a lowest common denominator that monopolises global headlines and scrutiny from governments, media and civil society. In the process, it may allow its rivals to quietly cut costs or reduce their own efforts, safe in the knowledge that, relative to Twitter, they will be perceived as satisfactory.

As belts are tightened, questions may also be asked about the value of collaborative, public-private activities that previously delivered valuable PR in relation to a high-priority and high-profile issue.

If these two trends emerge, governments across the world may need to recalibrate their approach to, and expectations of, big tech in the years to come, and strike a new balance that relies less on the carrot of public-private partnerships and more on the stick of regulation. They should be aware, however, that taking a more hands-on role comes with huge challenges.


This article first appeared in The Interpreter, and is republished from the Lowy Institute.
