Some Recent Trends in the Use of the Internet/ICT for Terrorist Purposes – Part II

The following is the second of three blog posts reporting on discussions at a workshop held at the Swiss Federal Institute of Technology, ETH Zurich, Switzerland on 25 August 2016 under the auspices of the UN Counter-Terrorism Committee Executive Directorate (UNCTED) and the Swiss-based ICT4Peace Foundation and their joint project on ‘Private Sector Engagement in Responding to the Use of the Internet and ICT for Terrorist Purposes.’ Supported by Microsoft, Facebook, Kaspersky Lab, and the governments of Spain and Switzerland, the project seeks to deepen understanding of how the private sector, notably technology and social media companies, is responding to terrorist use of the internet and ICT, in the context of a number of UN Security Council Resolutions and reports.

During this part of the workshop, participants tackled challenges related to the definition of terrorism and its application to ICT, particularly in content removal guidelines. Issues of legitimacy, cultural norms, necessary expertise, and the expectations placed on ICT companies by governments and society at large were discussed, along with more technical details about the mechanics of content removal.

From Policy to Action: Emerging Practices and Persistent Challenges

It was immediately emphasised that there is no agreed definition of “terrorism” in international law, which in turn makes it impossible to determine precisely what constitutes terrorist content online. In the absence of an accepted definition, companies tend to rely on sanctions lists to inform their decisions. The United Nations, the European Union and various governments actively maintain lists of proscribed terrorist groups and individuals. Both Google and Facebook use the US terrorist list to inform their content policies, while Microsoft announced at a special meeting of the Security Council on 11 May 2016 that it would amend its terms of use to specifically prohibit the posting of terrorist content by, or in support of, organizations included on the Consolidated United Nations Security Council Sanctions List.

The list includes all individuals and entities subject to sanctions measures imposed by the Security Council. Problems of transparency and delisting processes were raised, along with the possibility that such lists could be abused by governments to persecute rival organisations; however, significant effort has been made by the EU and UN to address these issues. One participant stressed that the UN Consolidated List remains the only international framework that exists to inform such decisions and to encourage respect for the rule of law.

There remains a difficulty in formulating adequate guidelines on what content counts as ‘terrorist’ and whether it should be taken down. The European Commission stressed the importance of reaching out to smaller companies, particularly startups, to make them aware of how terrorists could exploit their platforms. Speakers also highlighted the importance of educating users on the Terms of Service and promoting user-driven self-regulation, and there was general agreement on the need to better inform the public about updates to intermediaries’ Terms of Service.

Civil society representatives stressed that symbols or indicators of terrorist content are not necessarily obvious to the layperson, and decisions relating to content removal therefore require significant expertise. Anjem Choudary’s trial in the United Kingdom shows how some individuals can use their knowledge of domestic law to avoid conviction for promoting terrorist causes. Where ICT companies do base content removal policies on the local laws of different jurisdictions, there is concern that the legal and judicial environment can change rapidly, making it difficult for the companies to keep track of developments. This is a particular challenge for smaller internet companies, which do not necessarily have the resources to continuously revise their Terms of Service.

Framework and Legitimacy of Content Removal Practices

There is a zero-tolerance policy among private sector ICT companies when it comes to hosting terrorist content on their platforms. The importance of transparency and consistency with international human rights standards in handling take-down requests was stressed, with particular reference to the United Nations Guiding Principles on Business and Human Rights and a number of Sector Guides developed by the EU.

The ICT company representatives stressed further that the people of a given country, rather than executives at an ICT company, should decide what constitutes appropriate content, and that international norms should shape their content policies.

Representatives from human rights organizations stressed that content removal practices are often opaque and that governments and technology and social media companies need to invest more in meaningful transparency rather than irregularly published reports that do not provide sufficient detail. Some participants also raised concerns that there are no open processes to ensure redress for social media users who feel that their content has been unfairly or unjustifiably removed.

Operationalising EU Content Removal Policy

In 2015, the EU established an Internet Referral Unit (IRU) at Europol, aimed at reducing the accessibility of terrorist and violent extremist propaganda on the Internet by identifying relevant online content and referring it to the hosting internet service provider, together with a clear assessment of why it constitutes terrorist material and may be in breach of the provider’s terms and conditions. The EU IRU also supports Member States with operational and strategic analysis. Since its inception, the IRU – modelled on the UK Counter Terrorism IRU – has assessed, and processed for subsequent referral to the internet service providers concerned, over 11,000 violent extremist messages across 31 online platforms in eight languages. Around 90 per cent of the content referred has been removed from the platforms by the online service providers. The process is a voluntary one: it is ultimately for the companies to decide whether or not they wish to remove the terrorist content from their platforms. In terms of procedure, the IRU flags relevant operational content to the private sector either after receiving a request from a Member State or through its own searches for terrorist material.

All reported content goes through a human analyst or translator. According to some participants, trust between governments and ICT companies is essential for dealing with content-related issues, and it is critical for IRUs to avoid operating in grey zones that could lead to accusations of censorship. At the same time, a number of representatives from ICT companies and civil society groups questioned why governments prefer to rely on perceived violations of company Terms of Service rather than established legal norms. Concern was expressed that IRUs lack transparency and accountability because of the secrecy that surrounds their work. In response, the Europol IRU stressed that the European Parliament and Member States provide oversight of its work in this area, while transparency is achieved through its annual activity reports.

Europol was commended for its openness by one civil society member, but it was pointed out that much remains to be done: most IRUs are not open to such public scrutiny. Some participants highlighted the need for any reaction or process to be swift, since harm can only be mitigated if material is taken down quickly; any process taking days or weeks would prove nugatory given the speed at which terrorist material is disseminated. Some participants questioned whether IRUs should be presented as a good or best practice, expressing concern that governments might increasingly use them to circumvent due process and international human rights obligations through growing dependence on these kinds of voluntary public-private partnerships.

Other Public-Private Engagement Challenges

Discussions during the workshop highlighted a number of additional challenges relating to new forms of public-private engagement in response to terrorist use of the Internet and ICT. Some participants suggested that more needs to be done to improve technical co-operation and information sharing between companies and governments to address the “whack-a-mole” problem of removed content reappearing elsewhere. The European Commission stated that this was a particular concern of EU Ministers and that it looked forward to hearing back from the companies as to how it might be addressed. In this regard, some participants suggested that users should be able to flag objectionable content more easily, and raised the possibility of developing a central “clearing-house” to improve how technology and social media companies co-ordinate content removal requests across different platforms and jurisdictions.
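
To make the clearing-house idea concrete, the following is a minimal, purely illustrative sketch of how such an exchange might work: participating platforms deposit fingerprints (for example, hashes) of content they have removed, and other platforms can query the registry when reviewing new uploads. The class, method and platform names are assumptions made for illustration only; they do not describe any existing system or agreed standard.

```python
# Illustrative sketch only: a shared registry ("clearing-house") where
# platforms deposit fingerprints of removed content and query them later.
# All names here are hypothetical; no real system or API is implied.
from dataclasses import dataclass, field


@dataclass
class ClearingHouse:
    # fingerprint -> set of platforms that have reported it
    registry: dict = field(default_factory=dict)

    def submit(self, platform: str, fingerprint: str) -> None:
        """A platform reports the fingerprint of an item it has removed."""
        self.registry.setdefault(fingerprint, set()).add(platform)

    def lookup(self, fingerprint: str) -> set:
        """Return the platforms (if any) that have already reported this fingerprint."""
        return self.registry.get(fingerprint, set())


# Hypothetical usage: one platform removes an item, another checks a new upload.
hub = ClearingHouse()
hub.submit("platform_a", "sha256:example-fingerprint")
if hub.lookup("sha256:example-fingerprint"):
    print("Reported elsewhere before - prioritise for human review")
```

Whether a single shared registry of this kind would be acceptable in practice depends on exactly the transparency, oversight and redress questions raised elsewhere in the discussion.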

A representative from one ICT company stressed that there was a risk of a “one-way street” in terms of data sharing whereby governments place all of the information-sharing burden on the ICT companies. In general, workshop participants agreed that it is essential to improve information sharing between companies and governments.

The Mechanics of Removing Content from ICT Platforms

Even where specific content has been identified and reported, most technology and social media companies generally do not support so-called “recursive takedowns”, in which similar items of content are removed from across the entire platform, particularly when posted by third parties. One company representative suggested that image-hashing techniques could be applied to help identify such recursive content; however, this would present challenges, given that content used by terrorists and their supporters is often ambiguous and is sometimes re-used for legitimate purposes such as news reporting or commentary. While content relating to child protection is usually unambiguous, journalists and researchers often reuse material originally produced by terrorists, and terrorist content is intrinsically more difficult to identify algorithmically given the wider range of potential image patterns.
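
As a rough illustration of what such image hashing involves, the sketch below computes a simple perceptual (“average”) hash and compares it against hashes of previously removed images using a Hamming-distance threshold. The file names, the 64-bit hash scheme and the threshold of 5 bits are assumptions chosen for illustration; production systems use more robust fingerprints, and a match would still only indicate that the image is the same, not that the new use is illegitimate, so human review remains necessary.

```python
# A minimal sketch of perceptual ("average") hashing for spotting re-uploads
# of already-flagged images. Everything here (file names, hash scheme,
# threshold) is an illustrative assumption, not any platform's actual method.
from PIL import Image  # requires Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, convert to greyscale, and set one bit per pixel
    that is brighter than the mean, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(h1 ^ h2).count("1")


# Hypothetical usage: compare a new upload against fingerprints of removed items.
flagged = {average_hash("previously_removed.jpg")}
new_hash = average_hash("new_upload.jpg")
if any(hamming_distance(new_hash, h) <= 5 for h in flagged):
    print("Near-duplicate of previously removed content - queue for human review")
```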

While all the major technology and social media companies have reportedly established dedicated take-down teams, many of these are overwhelmed by spurious reports of content violations. For example, many users report innocuous content (such as Justin Bieber videos) and in the process create noise that is difficult for the companies to filter out. Furthermore, a number of participants from large technology and social media companies stressed the unprecedented volume of content removal requests, explaining how difficult it is for their teams to analyse and act upon such a large amount of information.

Other workshop participants emphasised that the removal of content from websites should also be considered, as should the accessibility of such content through search engines. Websites can be hosted in any country, and terrorists are able to exploit this loophole to circumvent local jurisdictional oversight. An example was given of a German-language blog that was closed in Germany and then moved beyond German jurisdiction by relocating to a host in another country.

One telecommunications company representative stressed that his company does not have the legal power to access the content of communications, only the metadata, and then only in a number of restricted situations. In this regard he highlighted that telecommunications companies are much more heavily regulated than social media and Internet companies.

Resource Constraints

During the workshop, representatives from startups emphasised that small or emerging companies are primarily focussed on growth and so often do not have the time, capabilities or resources to consider how their technologies may be used by terrorists. Some may not be aware of the broader security issues while others may not be aware that their platforms or services are being used by terrorists until law enforcement agencies come knocking at their door.

Furthermore, many startups are not large enough to have the capacity and resources to fully engage with governments or larger technology and social media companies on these issues. For instance, the nature of many startups means that a successful product with several million users can be developed and supported by just one or two individuals. Finally, in some cases a start-up’s technology or platform can remain fully functional even if the business behind it no longer exists, posing a potential challenge when it comes to dealing with violations of Terms of Service.

Some start-up representatives highlighted the importance of engaging small companies on these risk issues from the outset. Conversely, for governments and international organisations it can be costly to identify and build relationships with startups, given their high number and short lifespans. Some suggested that a longer-term approach could involve engaging with technical universities, business schools and others, potentially even venture capitalists, to ensure a multidisciplinary approach to the field and an acknowledgement of the potential risks and associated costs relating to ICT use from a very early stage.
