Moderating Terrorist and Extremist Content


By Joan Barata

According to the latest figures provided by Facebook, 99.6% of the content actioned on grounds of terrorism (mostly related to the Islamic State, al-Qaeda, and their affiliates) was found and flagged before any user reported it. That said, it is also worth noting that big platforms already have a long record of mistaken or harmful moderation decisions in this area, both inside and outside the United States.

In any case, there is a tendency, particularly in the European political arena, to argue that “platforms should do more”. This is at least the political argument used to push new and controversial pieces of legislation, such as the recent EU proposal on terrorist content online. If adopted, this legislation would require intermediaries to detect, identify, remove or disable access to, and even prevent the re-uploading of certain pieces or types of content, as well as to remove content expeditiously (within one hour) upon receiving an order from the competent authority.

The existing legislation is, in any case, insufficient and ill-suited to deal with the complexity of the matters that platforms need to handle, especially as the most immediate instances of online speech regulation. The statutory imposition of obligations to thoroughly monitor content would contravene applicable liability exemptions (also recognised by regional and international legal standards), transform these private entities into unaccountable law enforcement agencies, and seriously affect users’ exercise of the right to freedom of expression.

What is needed is a coordinated effort between tech companies, policy makers and experts. This effort should aim, first and foremost, at establishing a common understanding and shared knowledge of the way platforms operate, the different manners in which terrorist organisations or individuals may use platforms for their purposes, the panoply of measures and instruments available to State authorities and platforms, and the implications and trade-offs derived from different possible choices.

Once again, current discussions revolve around the options available to platforms, law enforcement agencies or the judiciary to properly assess and detect problematic content, and the steps to be taken once such content is found. In this context, the use of content matching techniques based on a database of digital fingerprints or hashes is a tool that still needs to be properly studied in terms of its impact on freedom of expression, its actual effectiveness, and the possible harm to vulnerable groups. In particular, the Global Internet Forum to Counter Terrorism (GIFCT) was established in 2017 by Facebook, Twitter, Microsoft and YouTube as an industry initiative “to prevent terrorists and violent extremists from exploiting digital platforms”. In fact, this initiative must also be seen as an outcome of the “platforms should do more” motto developed by Governments, particularly in Europe. The GIFCT is currently on the path to becoming an independently managed body. However, concerns about its lack of transparency, its vulnerability to pressure from Governments, and its impact on the actions of civil society groups still make it quite a controversial instrument.
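
To make this concrete, the sketch below illustrates, in highly simplified form, what hash-based content matching involves: an upload is reduced to a fingerprint and compared against a shared database of fingerprints of previously identified material. This is only an illustrative assumption, not a description of GIFCT’s or any platform’s actual system; real deployments typically rely on perceptual hashes (such as PDQ for images) that tolerate re-encoding and minor edits, whereas the example uses an exact SHA-256 digest purely to stay self-contained. The KNOWN_HASHES set and the review-queue step are hypothetical placeholders.

```python
import hashlib

# Hypothetical shared database of fingerprints of previously identified
# content. In a real hash-sharing arrangement this would be maintained and
# queried across participating companies; the value below is a placeholder.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(file_bytes: bytes) -> str:
    """Return an exact cryptographic fingerprint of an uploaded file.

    Production systems generally rely on perceptual hashes, which still match
    after compression or small edits; SHA-256 is used here only to keep the
    sketch self-contained and runnable.
    """
    return hashlib.sha256(file_bytes).hexdigest()


def matches_known_content(file_bytes: bytes) -> bool:
    """Check an upload against the shared hash database before publication."""
    return fingerprint(file_bytes) in KNOWN_HASHES


if __name__ == "__main__":
    upload = b"...uploaded media bytes..."
    if matches_known_content(upload):
        print("Match: route to human review / removal queue")
    else:
        print("No match: continue normal publication flow")
```

Even in this reduced form, the trade-off discussed above is visible: whatever ends up in the shared database is, in practice, blocked or flagged across every participating service, which is precisely why the transparency and accountability concerns surrounding the GIFCT matter.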

Considering all these elements, it is clear that engaging in and promoting counter-narratives can be a useful tool for tech companies in order to: a) prove their commitment to counter-terrorism policies and alleviate legal and regulatory pressure, b) avoid the risk of accusations of curtailing legitimate speech (particularly political speech disseminated by minority communities), and c) present themselves to users as safe environments.

Platforms’ engagement with counter-narratives has taken place through different modalities, including providing content or messages directly, financially supporting certain groups or initiatives, promoting the visibility of certain types of content in the context of radical groups, conversations or searches, or facilitating ad-targeting tactics for non-profit organisations. In some cases, this has been the result of active engagement or collaboration between platforms and different stakeholders.

The effects of these initiatives have not yet been properly documented, and the initiatives may thus be perceived as a predominantly PR effort. Further, many rely on micro-targeting and fall short of responding to coordinated or systematic adversarial action. In addition to these shortcomings, there are no clear and straightforward demands or directives from Governments and civil society as to how these projects should be designed, which audiences they should target, and how their potential risks and benefits should be assessed. At this stage, platforms have neither formulated successful general internal policies regarding the dissemination of counter- or alternative narratives nor engaged in systematic and comprehensive efforts with all relevant stakeholders to discuss, understand and articulate proper strategies in this area. In this sense, it is important to underscore that, for example, the regular reports issued by major companies on content moderation policies and their enforcement focus almost exclusively on measures adopted to restrict or remove illegal or objectionable content.


Joan Barata is an Intermediary Liability Fellow at Stanford University’s Cyber Policy Center. On Twitter @JoanBarata

This article is one chapter of the report, Beyond Limiting and Countering: How to Promote Quality Content to Prevent Violent Extremism and Terrorism on Online Platforms, published by Resonant Voices. Republished here with permission.

Image credit: Caution, licensed under CC BY 2.0.  
