The Overlap Between Terrorist Content Online, Disinformation, and the Tech Sector Response

By Anne Craanen and Charley Gleeson

Terrorist use of the internet is a complex and multi-faceted threat that requires an equally nuanced response. Terrorists exploit a wide ecosystem of tech platforms for a variety of purposes, spanning external communications (public propaganda dissemination) and internal communications (operational maintenance). Our analysis shows that, within this ecosystem, terrorists clearly target a wide range of small and micro-platforms.

These platforms range in purpose from file-sharing, video hosting, and image sharing to archiving, messaging, and paste sites. Tech Against Terrorism assesses that this is a significant concern because small tech companies often lack the capacity and capability to deal with terrorist exploitation. We therefore support the global tech sector in countering violent extremist and terrorist use of their services whilst respecting human rights.

Using a wide variety of platforms ensures content longevity, because content removal practices vary widely across platforms. Large tech platforms have the capacity to build and maintain automated content moderation systems, whereas smaller platforms tend to lack this capability. Multiple large platforms state that over 90% of removed content is detected, flagged, and removed by automated methods before a user is able to view it. In contrast, small and micro-platforms rely almost exclusively on manual, human moderation, which takes significantly more time to detect and remove harmful content. Terrorist actors are also increasingly using online tools such as mirroring services to spread content further and ensure its longevity online. As such, the threat posed by online terrorist content is one that requires a collaborative and cooperative approach to counter effectively.
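To illustrate the automated side of this disparity, below is a minimal, hypothetical sketch in Python of hash-based matching against a shared database of known terrorist material, one common automated technique. The hash values and function names are illustrative assumptions, not any platform's actual system.

import hashlib

# Hypothetical set of hashes of known terrorist content, for example contributed
# through an industry hash-sharing arrangement. The value below is simply the
# SHA-256 digest of the sample upload used in the example at the bottom.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def moderate_upload(data: bytes) -> str:
    """Block an upload automatically if its hash matches known content;
    otherwise leave it to human review and user reports."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        return "blocked before publication"
    return "published, pending manual review"

# A re-uploaded copy of known content is caught before any user can view it.
print(moderate_upload(b"test"))

Smaller platforms without a pipeline of this kind must instead rely on moderators reviewing reports after the content is already live, which is where much of the time difference arises.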

The evolution of terrorist content online has seen it develop into the realm of disinformation and conspiracy theories, especially among far-right violent extremist entities. As the lines between terrorist content, disinformation, and conspiracy theories blur, content moderators operate in a grey zone where they must determine which content meets the threshold of illegality. Some studies have shown the prevalence of conspiratorial thinking within extremism and have highlighted conspiracy theories as having a functional role in violent extremist groups. Tech Against Terrorism has noted the use of COVID-19 conspiracy theories and disinformation as recruitment tools for far-right violent extremist movements, a trend highlighted by the increasing overlap between these types of online content. Similarly, some Islamist terrorist groups have published propaganda featuring messages against COVID-19 vaccinations, stating that the virus is an act of God. The manipulation of disinformation and conspiracy theories, especially those surrounding COVID-19, by terrorist groups presents a heightened threat, as it is likely to be used to further recruitment and radicalisation into terrorist organisations.

Despite the difficulties in moderating online content, tech companies are at the forefront of disrupting terrorist use of the internet through content moderation, using both automated and manual methods. The Terrorist Content Analytics Platform (TCAP), developed by Tech Against Terrorism, uses a combination of human and automated Open Source Intelligence (OSINT) methods to identify and verify terrorist content. The TCAP then flags URLs containing terrorist content to tech platforms, assisting content moderators in identifying it. By alerting moderators to verified terrorist content, the TCAP has begun to bridge the gap in content moderation capabilities across a wide range of platforms. Since its inception in November 2020, the TCAP has flagged over 13,000 URLs containing terrorist content to 68 platforms; 93% of this content has now been taken down.
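As a rough illustration of the alerting workflow described above, the sketch below (in Python) shows how a verified URL might be matched to the platform hosting it and packaged into an alert. The contact mapping, function names, and human-in-the-loop check are assumptions for illustration only and do not describe the TCAP's actual implementation.

from dataclasses import dataclass
from typing import Optional
from urllib.parse import urlparse

# Hypothetical mapping from hosting domain to a platform's moderation contact.
PLATFORM_CONTACTS = {
    "files.example.com": "trust-and-safety@example.com",
}

@dataclass
class Alert:
    url: str
    platform_contact: str

def build_alert(url: str, verified_by_analyst: bool) -> Optional[Alert]:
    """Prepare an alert for a URL only once a human analyst has verified it,
    and only if the hosting platform has been onboarded to receive alerts."""
    if not verified_by_analyst:
        return None  # automated OSINT lead still awaiting human verification
    domain = urlparse(url).netloc
    contact = PLATFORM_CONTACTS.get(domain)
    if contact is None:
        return None  # platform not yet onboarded
    return Alert(url=url, platform_contact=contact)

print(build_alert("https://files.example.com/abc123", verified_by_analyst=True))

The human verification step reflects the combination of automated detection and analyst review described above, which helps keep alerts proportionate and accurate.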

Tech Against Terrorism believes that countering terrorist use of the internet requires a comprehensive approach, ensuring that all potential avenues of terrorist exploitation are analysed so that effective countermeasures can be implemented. Inter-organisational collaboration that reinforces the relationship between public and private entities should form the basis of countering terrorist use of the internet, to ensure that responses are proportionate and appropriate to the threat. Furthermore, as shown through the TCAP's transparency-by-design approach, we ground all our practices in the rule of law and ensure that thorough review processes are undertaken by independent reviewers.

To conclude, countering terrorist use of the internet requires collaboration between public and private entities, so that terrorist content can be swiftly removed from online spaces while the rule of law and human rights are thoroughly upheld.


Anne Craanen is a Senior Research Analyst at Tech Against Terrorism, researching Islamist terrorism and violent far-right extremism, as well as the role of gender in terrorism. On Twitter @CraanenAnne.

Charley Gleeson is a TCAP Analyst at Tech Against Terrorism. Charley’s work on the TCAP is focused on OSINT and policy development.

If you want to hear more about Tech Against Terrorism, sign up to the weekly digest. On Twitter @TechvsTerrorism and @TCAPAlerts. For more questions on its work, get in touch at: contact@techagainstterrorism.org.

This article is republished with permission from the March 2022 edition of Spotlight magazine, ‘Digital Ecosystem’. Spotlight is a publication from the European Commission’s Radicalisation Awareness Network for RAN’s network of practitioners. Image credit: pngtree.

