Internet Intermediaries, Human Rights, and Extremist Content

Content removal on social media platforms often takes place through semi-automated or automated processes. Algorithms are widely used for content filtering and content removal [1], including on social media platforms, directly impacting freedom of expression and raising rule of law concerns (e.g. questions of legality, legitimacy and proportionality). While large social media platforms such as Google or Facebook have frequently claimed that all content is removed by human beings [2], large parts of the process are in fact automated or semi-automated [3].
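
By way of illustration, the sketch below shows how such a semi-automated pipeline might be structured: an automated classifier scores each item, matches above a high confidence threshold are removed without human involvement, and borderline cases are routed to a human review queue. The classifier, the thresholds and the queue structure are hypothetical and do not describe any particular platform's system.

```python
# Minimal sketch of a semi-automated moderation pipeline.
# The thresholds, the score_content() placeholder and the queues are
# hypothetical illustrations, not any platform's actual system.
from dataclasses import dataclass, field
from typing import List

AUTO_REMOVE_THRESHOLD = 0.95   # assumed cut-off for fully automated removal
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed cut-off for routing to human moderators

@dataclass
class ModerationQueues:
    removed: List[str] = field(default_factory=list)
    pending_review: List[str] = field(default_factory=list)
    published: List[str] = field(default_factory=list)

def score_content(text: str) -> float:
    """Placeholder for a policy-violation classifier returning a score in [0, 1]."""
    banned_terms = ("example-banned-phrase",)  # stand-in for a real model
    return 1.0 if any(term in text.lower() for term in banned_terms) else 0.0

def triage(text: str, queues: ModerationQueues) -> None:
    score = score_content(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        queues.removed.append(text)          # fully automated removal, no human involved
    elif score >= HUMAN_REVIEW_THRESHOLD:
        queues.pending_review.append(text)   # semi-automated: flagged for human review
    else:
        queues.published.append(text)        # left online

queues = ModerationQueues()
for post in ["harmless holiday photo", "text containing example-banned-phrase"]:
    triage(post, queues)
```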

According to a report from the British Intelligence and Security Committee of Parliament, various automated techniques exist for identifying content believed to break the terms of service of the respective provider, be it because of extremist content, child exploitation or illegal acts such as incitement to violence. These techniques may also be used to disable or automatically suspend user accounts [4]. A particular challenge in this context is that intermediaries are encouraged to remove this content voluntarily, without a clear legal basis. This lack of a legal basis for ‘voluntary’ automated content removal makes it even more difficult to ensure that basic legal guarantees such as accountability, transparency or due process are upheld [5].
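
The account-level measures mentioned in the report can likewise be automated. The sketch below illustrates one plausible form, a strike-based suspension rule, purely as an assumption for illustration; the strike limit and data structures are not drawn from any provider's documented practice.

```python
# Minimal sketch of automated account suspension based on accumulated
# detected violations ("strikes"). The strike limit and the in-memory
# stores are hypothetical, not any provider's actual policy.
from collections import defaultdict

STRIKE_LIMIT = 3            # assumed number of detected violations before suspension
strikes = defaultdict(int)  # account id -> number of automatically detected violations
suspended = set()           # account ids suspended without any human decision

def record_detected_violation(account_id: str) -> None:
    """Register one automatically detected terms-of-service violation."""
    strikes[account_id] += 1
    if strikes[account_id] >= STRIKE_LIMIT:
        suspended.add(account_id)

for _ in range(3):
    record_detected_violation("user-123")
print("user-123" in suspended)  # True
```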

In the US, the Obama administration advocated the use of automated detection and removal of extremist videos and images. Additionally, there have been proposals to modify search algorithms in order to “hide” websites that incite and support extremism. An automated filtering mechanism for extremist videos has been adopted by Facebook and YouTube. However, no information has been released about the process or about the criteria used to establish which videos are “extremist” or show “clearly illegal content”.
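
Public reporting suggests that such filtering compares digital fingerprints (hashes) of uploads against a shared database of previously flagged material. The sketch below illustrates the idea using a plain cryptographic hash for simplicity; deployed systems reportedly rely on perceptual hashes that tolerate re-encoding, and the hash database shown here is a hypothetical stand-in.

```python
# Minimal sketch of hash-based matching of uploads against a database of
# previously flagged videos. A plain SHA-256 digest is used for simplicity;
# deployed systems reportedly use perceptual hashes that survive re-encoding.
# The flagged_hashes set is an illustrative stand-in, not a real database.
import hashlib

flagged_hashes = {
    # Digests of material previously identified by reviewers (illustrative value).
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_material(path: str) -> bool:
    """True if the upload is byte-identical to previously flagged material."""
    return sha256_of_file(path) in flagged_hashes
```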

In the wake of reports from The Times of London and the Wall Street Journal that ads were appearing on YouTube videos espousing “extremism” and “hate speech”, YouTube responded by applying its algorithm for detecting “not advertiser-friendly” content more strictly, which has reportedly affected independent media outlets, including comedians, political commentators and experts. Similar initiatives have been developed in Europe, where intermediary service providers, in response to public and political pressure, have committed themselves to actively countering online hate speech through automated techniques that detect and delete all illegal content.

While the necessity of effectively confronting hate speech is not in dispute, such arrangements have been criticised for delegating law enforcement responsibilities from states to private companies, for creating the risk of excessive interference with the right to freedom of expression, and for their lack of compliance with the principles of legality, proportionality and due process.

Requiring intermediaries to restrict access to content based on vague notions such as “extremism” obliges them to monitor all flows of communication and data online in order to detect what may be illegal content. It therefore runs counter to the established principle that intermediaries should be under no general monitoring obligation, which is enshrined in EU law and in relevant Council of Europe policy guidelines.

Due to the significant chilling effect that such monitoring has on freedom of expression, this principle is also reiterated in the draft recommendation on the roles and responsibilities of internet intermediaries prepared by the Council of Europe’s Committee of Experts on Internet Intermediaries in September 2017.

Moreover, by ordering the intermediary to decide for itself what to remove as “extremist” and what to leave online, the public authority passes the choice of tools and measures onto a private party, which can then implement solutions (such as content removal or restriction) that the public authorities themselves could not legally prescribe. Public-private partnerships may thus allow public actors “to impose regulations on expression that could fail to pass constitutional muster” [6], in contravention of rule of law standards. In addition, these kinds of demands made by public institutions of private actors lead to overbroad and automated monitoring and filtering of content.

One year after its launch in July 2015, the Europol Internet Referral Unit had assessed and processed 11,000 messages containing violent extremist material across 31 online platforms in eight languages, reportedly leading to the removal of 91.4% of that content from the platforms. Steps have reportedly been taken to automate this system with the introduction of the Joint Referral Platform, announced in April 2016.

While the imperative of acting decisively against the spread of hate messages and incitement to racially motivated offences is indisputable, such practices raise considerable concerns about the foreseeability and legality of interferences with freedom of expression. Notably, the data on extremist online content that Europol processes refers not just to content that is illegal in Council of Europe Member States, but also to material that merely violates the terms of service of an internet intermediary.

Moreover, in many situations extremist content or material inciting violence is difficult to identify even for a trained human being, because of the complexity of disentangling factors such as cultural context and humour. Algorithms are today not capable of detecting irony or critical analysis. Filtering speech through algorithms to eliminate harmful content therefore carries a high risk of over-blocking and removing speech that is not only harmless but can contribute positively to public debate.
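
A simple example makes the over-blocking risk concrete. In the sketch below, a naive keyword filter flags a news report, a legal analysis and an explicit piece of counter-speech alike, because it matches a phrase rather than assessing intent or context; the blocked phrase and the example sentences are invented for illustration.

```python
# Illustration of the over-blocking risk: a naive keyword filter cannot
# distinguish incitement from reporting, quotation or counter-speech.
# The blocked phrase and the example sentences are entirely hypothetical.
BLOCKED_PHRASES = ("join the armed struggle",)

def naive_filter_blocks(text: str) -> bool:
    """Return True if simple keyword matching would block the text."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

examples = [
    "Recruiters urged followers to join the armed struggle.",                      # news reporting
    "The court described how the group told youths to join the armed struggle.",   # legal analysis
    "Nobody should ever join the armed struggle.",                                  # counter-speech
]

for text in examples:
    # Every example is blocked, although none of them incites violence.
    print(naive_filter_blocks(text), "-", text)
```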

According to the European Court of Human Rights, Article 10 also protects content that may shock, offend or disturb. Algorithmic blocking, filtering or removal of content may thus have a significant adverse impact on legitimate content. The already widespread problem of large amounts of legal content being removed under the terms of service of internet platforms is further exacerbated by the pressure placed on platforms to actively filter according to vague notions such as “extremist”, “hate speech” or “clearly illegal content”.

According to the European Court of Human Rights, any obligation to filter or remove certain types of user comments from online platforms places an “excessive and impracticable” burden on the operators and risks obliging them to install a monitoring system “capable of undermining the right to impart information on the internet”. The Venice Commission has likewise called for efforts to strengthen human rights safeguards and to avoid excessive burdens being placed on providers of electronic communication networks and systems.

References:

1: Urban, Jennifer M., Joe Karaganis, and Brianna L. Schofield. 2016. ‘Notice and Takedown in Everyday Practice’. Available at SSRN 2755628.

2: Buni, Catherine and Soraya Chemaly. 2016. ‘The Secret Rules of the Internet’. The Verge.

3: Wagner, Ben. 2016. Global Free Expression: Governing the Boundaries of Internet Content. Cham, Switzerland: Springer International Publishing.

4: Rifkind, Malcolm. 2014. Report on the Intelligence Relating to the Murder of Fusilier Lee Rigby. Intelligence and Security Committee of Parliament.

5: Fernández Pérez, Maryant. 2017. ‘Parliamentarians Encourage Online Platforms to Censor Legal Content’. EDRi.

6: Mueller, Milton. 2010. Networks and States: The Global Politics of Internet Governance. MIT Press.


The above is an excerpt from the Council of Europe’s Committee of Experts on Internet Intermediaries’ (MSI-NET) Study on the Human Rights Dimensions of Automated Data Processing Techniques (In Particular Algorithms) and Possible Regulatory Implications.

The report was authored by Wolfgang Schulz and colleagues and originally published on the Council of Europe website. It is republished here with permission from the Council of Europe Media & Internet Division.
