The EU’s Terrorist Content Regulation: Concerns about Effectiveness and Impact on Smaller Tech Platforms

This is the third in a series of posts and responses addressing the EU’s regulation on online terrorist content; the first post is HERE and the second HERE. [Ed.]

By Adam Hadley and Jacob Berntsson

Terrorist use of the internet is a significant threat that has become almost inseparable from terrorism itself. Terrorist groups use a plethora of online platforms for both strategic and operational purposes, and it is clear they have acquired the expertise needed to exploit existing and emerging internet technologies such as messaging apps, video platforms, and content storage sites.

In this context, it is understandable that a number of countries, along with the European Union, are exploring regulation designed to remove online terrorist content. The measures proposed by the EU include an obligation for tech platforms to remove material within an hour of receiving removal orders from various Member States’ “competent authorities”. Failure to comply will result in financial penalties.

The proposal introduced by the European Commission, which was partially altered in the EU Parliament’s reading (a final version is still pending), also includes a requirement for companies to introduce “proactive measures” (often labelled “upload filters”) and gives Member States the right to submit content referrals.

Many of the legislative efforts around the world, whether proposed or already introduced, have been criticised for placing freedom of expression at risk. But a simple truth, often overlooked, is that some of these laws will promote measures that are ineffective and potentially counter-productive from a counterterrorism point of view. As Stanford academic Daphne Keller has pointed out, there is often a notable absence of counterterrorism experts in the debate regarding online regulation.

It was therefore disappointing to read the Counter Extremism Project’s (CEP) opinion piece on the proposed regulation, a piece that dismissed legitimate concerns about freedom of expression and suggested that digital rights organisations and MEPs should ignore challenges around automated removal tools in the interests of swiftly introducing the regulation.

We believe that it is unhelpful to label concerns about the regulation as “false commentary”: criticisms of the proposal have been made by three different UN Special Rapporteurs; the Council of Europe; the EU’s own Fundamental Rights Agency (which warned that the proposal may violate the EU Charter of Fundamental Rights); academics; and a range of civil society groups.

As an initiative that works with the tech sector to help it scale its response to terrorist exploitation, Tech Against Terrorism also has concerns about the effectiveness of the regulation and its impact on smaller tech platforms.

It is well established that smaller and newer tech platforms are most at risk of terrorist exploitation. This is borne out by our own research and by statistics from the EU Internet Referral Unit. Several of these smaller platforms are run by just one or two people. These platforms will inevitably struggle to comply with the proposed one-hour removal deadline and with the legislation’s requirement that companies appoint a point of contact who is contactable 24/7.

Our concern is that the proposed regulation could restrict competition in the tech sector: it would punish companies for failing to meet demands that are, for them, impossible, while doing nothing to help them improve their response to the threat of terrorist exploitation.

Allowing just one hour for verification and removal of content also raises significant concerns about freedom of expression. The regulation places no expectation of subject-matter expertise on the “competent authorities” in each Member State. This might lead to fragmentation between states in terms of what content is reported; moreover, given the one-hour removal deadline, companies will feel obliged to remove content even when authorities get it wrong.

Similar concerns were voiced by France’s Constitutional Council, which recently blocked France’s “cyberhate law” after concluding that a requirement to remove flagged content within 24 hours did not leave tech companies enough time to verify whether reported content was manifestly unlawful under French law.

Another challenge concerns so-called “upload filters”. CEP is correct to highlight automated solutions, alongside human verification, as an effective way to identify and remove terrorist content. But introducing data-driven solutions is technically complex, and many smaller companies cannot afford it. We know this because much of our work focuses on the detail of implementation: password-protecting Jihadology took a team of four people more than 12 months, with funding provided by the GIFCT. This is not straightforward work.
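To illustrate what even the most basic form of automated matching involves, below is a minimal, hypothetical sketch in Python of hash-based matching against a shared list of known content, the principle behind industry hash-sharing efforts such as the GIFCT’s database. Real deployments use perceptual hashing to catch altered copies, classifier models, and human review; it is that surrounding infrastructure, not the matching step itself, that smaller platforms cannot easily afford.

# Illustrative sketch only, not any platform's actual implementation.
# Real systems use perceptual hashing (robust to re-encoding and edits)
# and route matches to human moderators; this demo uses exact SHA-256
# matching against a hypothetical local hash list for simplicity.
import hashlib

# Hypothetical set of hex digests of previously identified content.
# (The entry below is the well-known SHA-256 digest of empty input,
# included purely so the demo call returns True.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_for_review(upload: bytes) -> bool:
    """Return True if the upload matches a known hash and should be
    queued for human review rather than removed automatically."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_HASHES

if __name__ == "__main__":
    print(flag_for_review(b""))  # True: the empty file's digest is in the demo set

Even this toy version presupposes a curated hash list, storage, and an escalation path for matches, which is exactly the operational overhead a one- or two-person platform lacks.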

Rather than punishing companies for not having the resources to introduce proactive measures, we should find ways to support them. Similarly, instead of dismissing legitimate concerns from civil society groups about the impact automated solutions can have on human rights, we should listen to the points they make and introduce appropriate mechanisms to reduce this risk.

That is why Tech Against Terrorism is developing the Terrorist Content Analytics Platform to support smaller platforms in moderating terrorist content on their sites. We have actively sought out and responded to concerns from civil society in our development plan. As part of our co-leadership of the GIFCT’s working group on technical approaches, we will work with a range of cross-sector partners to identify data-driven solutions to counter terrorist use of the internet, with a particular focus on smaller internet platforms and their needs.

There are other concerns with the proposed EU regulation. The original version (removed from the parliamentary draft) gave Member States the power to “refer” content to tech platforms. The difference from removal orders is that, under a referral, companies are asked to assess content against their own Terms of Service rather than against a legal definition of terrorism.

It is true that some referral mechanisms have proven effective, including those used by the EU’s Internet Referral Unit. But critics have noted that referrals allow governments to request removals outside the legal process, and therefore hand tech companies more power to decide whether terrorist content remains online. Keller has called this “the rule of Terms of Service”.

It is vital that democratic governments lead the way in ensuring that the rule of law is upheld, particularly on an issue like terrorism, and that counterterrorism experts endorse this approach. That is why we call on governments to improve the designation of terrorist groups. Doing so will create definitional clarity, which will make it easier for tech companies large and small to take decisive action to remove online terrorist propaganda. Furthermore, we believe there is more to be done by governments to bolster open-source intelligence (OSINT) capabilities.

We have worked closely with the EU for several years, for example via the EU Internet Forum and with Europol, and we support much of its work and approach. However, as counterterrorism professionals it would be remiss of us not to encourage further debate about matters of such importance. Instead of punishing smaller companies for failing to comply with disproportionate requirements, we should focus our efforts on providing them with the collaborative tools they need to improve their response.


Tech Against Terrorism is a public-private partnership that supports the tech industry, and in particular smaller platforms, in tackling terrorist use of the internet whilst respecting human rights. It does so through open-source intelligence analysis, mentorship, and operational support. Tech Against Terrorism works closely with the UN and the Global Internet Forum to Counter Terrorism, and is on Twitter @techvsterrorism.

Adam Hadley is Director of Tech Against Terrorism and can be found on Twitter @adamihad. Jacob Berntsson is Tech Against Terrorism’s Research Manager and is on Twitter @berntsonjacob.

