Who Needs Courts? A Deeper Look At the European Commission’s Plans to Speed Up Content Takedowns

By Emma Llansó

In early March the European Commission released its “Recommendation on measures to effectively tackle illegal content online”, which presents the Commission’s ideas for how to speed up removal of allegedly illegal content. (CDT’s full analysis of the Recommendation is here.) The Recommendation includes a number of departures from the traditional court-order process, which provides both substantive and procedural protections for individuals whose speech is challenged under the law. Instead, the Commission relies on several approaches to speedy censorship that circumvent the courts and provide the public with no way to hold the government accountable for declaring that someone’s speech violates the law.

Below, we provide a closer look at these alternative censorship models, which have been gaining traction in Europe over the past few years.

  1. Trusted flaggers (with government backing)

The Commission recommends that providers employ “trusted flaggers”, individuals or entities who would be given greater influence than typical users, including priority placement in notification queues. The Commission did not invent “trusted flaggers” in this set of recommendations. Trusted flaggers are the linchpin of the Commission’s Code of Conduct on Illegal Hate Speech, as well as Germany’s much-maligned NetzDG. Providers such as YouTube and Facebook have used them for years to identify material that violates their Terms of Service. But the Commission, in the Code and in this Recommendation, seeks to delegate substantial state responsibility to these trusted flaggers, without any safeguards.

To wit: the Commission seeks to have these trusted flaggers perform the role of law enforcement in identifying and pursuing action against illegal speech online. The Commission also implies that these NGOs and other trusted flaggers may act as final arbiters of the illegality of the speech they flag, instructing providers to treat notices “with an appropriate degree of confidence as regards their accuracy.” (Or perhaps the appropriate degree of confidence is zero?)

While the Commission continues to push for trusted flaggers to fulfill governmental obligations, it does not recommend any kind of accreditation or verification process that would independently evaluate the quality of the notices trusted flaggers provide. The Commission also does not contemplate any kind of penalty or accountability for a trusted flagger who abuses the powers they are given. Instead, providers are directed to “fast-track” notices received from trusted flaggers.

  2. Internet Referral Units

The Commission also recommends that all Member States institute Internet Referral Units (IRUs), programs in which law enforcement officials notify providers of allegedly illegal content that may also violate a provider’s Terms of Service. The Commission further recommends that providers implement “[f]ast-track procedures” to respond to referrals submitted by IRUs.

IRUs, which started with the UK Metropolitan Police in 2010 and gained traction within Europol and various European countries, are controversial. They empower law enforcement officials to unilaterally declare that speech violates the law and to seek its removal from online platforms, without ever having an independent arbiter evaluate the speech under the law at issue. IRUs are an intentional circumvention of the traditional court-order process, which is slower and more cumbersome (arguably, this is by design). Proponents attempt to justify the use of IRUs in part by arguing that, because the provider receiving the notification is evaluating the content under their own Terms of Service, it is essentially no different than having speech flagged to them by a private party.

But this defense misses several key points: First, government actors must be held accountable for the action they take to censor speech, even if another party is the ultimate implementer of the decision. A law enforcement officer who pressures a bookstore owner not to sell a certain book, or a radio station not to air a certain program, does not need to purge the inventory or turn off the microphone with her own hand to have engaged in censorship.

Second, IRUs provide no accountability mechanisms to ensure that officials are only pursuing speech that violates the law. We have also seen various officials pressuring companies to expand the definitions of what is not allowed in their Terms of Service, to better guarantee that if a law enforcement officer flags content to a provider, it will come down. Regardless of whether a post violates a provider’s Terms of Service, it violates fundamental human rights principles for a government official to target lawful speech for censorship.

The Commission’s recommendations also include a call for Member States to report quarterly to the Commission about the referrals submitted by their IRUs and the decisions taken against the content. While any amount of systematic transparency from existing IRUs would be welcome, a shallow report of takedown rates alone will not provide meaningful insight into the operation of these units. Rather, the Commission should be examining meaningful accountability measures for IRUs that ensure independent review of notifications sent by law enforcement, and that provide individuals whose speech has been targeted by the government with their rightful access to appeal and remedy.

  3. Hash databases

The third structure the Commission proposes for speeding up the removal of allegedly unlawful speech is the hash database. Hashes are digital fingerprints of files that can be used to identify whether a newly uploaded file matches one that has been previously identified and hashed. Hashes are currently used in the detection and deletion of child sexual abuse material and allegedly copyright-infringing files. Hash-matching as a technique for moderating online content can be very useful when it is appropriate to treat every copy of a file the same way, but it is less appropriate when context is necessary to determine whether a post violates a law or a company’s Terms of Service. For example, it is essentially universally agreed that there is no context in which an image of child sexual abuse is lawful. On the other hand, a person may make fair, lawful use of a copyrighted work, and a simple hash-match will not take account of that.
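
To make the mechanics concrete, here is a minimal sketch of hash-matching in Python. It is illustrative only: the file contents and function names are hypothetical, and deployed systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and resizing, rather than the plain cryptographic hash used here for simplicity.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that serves as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical store of hashes of previously identified files. In a real
# deployment this would be a large database, possibly shared across providers.
known_file = b"bytes of a previously identified image or video"
KNOWN_HASHES = {fingerprint(known_file)}

def matches_known_content(uploaded: bytes) -> bool:
    """Check whether a newly uploaded file matches a previously hashed one."""
    return fingerprint(uploaded) in KNOWN_HASHES

print(matches_known_content(known_file))                         # True: exact copy
print(matches_known_content(known_file + b" slightly altered"))  # False: any change breaks the match
```

With a cryptographic hash, any byte-level change defeats the match, which is one reason production systems favour perceptual hashing; either way, the match itself says nothing about the context in which the file is posted.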

At the end of 2016, four major online service providers (Facebook, YouTube, Microsoft, and Twitter) agreed to establish a communal hash database in which they could share the hashes of files each had deemed “the most extreme and egregious terrorist images and videos”. The consortium has since grown to include seven additional companies and 40,000 hashes of videos and images. The participating companies say they do not automatically remove every item in the database from their services, and instead use it to identify and evaluate the content according to their own Terms of Service. Because there is no systematic reporting about the contents or operation of this database, it is impossible to know how often the inclusion of a file in the database leads to that content coming down across the web. There are no published mechanisms for appealing the inclusion of a file in the database.

This is the sort of system, in its opacity and lack of any apparent accountability mechanisms, that the Commission encourages industry to create. As we noted when the hash-sharing consortium was first announced, a private centralized database creates a tempting target for governments. And while the terrorist propaganda hash database initiative began as a voluntary industry effort, albeit under intense pressure, the Commission calls for “working arrangements between all relevant parties, including where appropriate Europol” to ensure “a consistent and effective approach” to content removal through the database.

The more involved government becomes in this kind of hash-sharing database, the more it looks like an unadjudicated blacklist of content that is prohibited from appearing across the web.  Centralized databases for the purpose of censorship will always raise significant threats to freedom of expression. The risk that such a database becomes politicized is very high, and companies will face pressures from governments around the world about their inclusion or exclusion of a wide variety of speech and opinions. The Commission should not promulgate a blacklist of verboten content, and should not encourage the development of these databases, especially without substantial safeguards against the very obvious dangers they pose.

***

In a rule-of-law system, the procedures for evaluating whether someone’s speech violates the law include safeguards such as review by an independent arbiter, the opportunity to defend one’s self and one’s speech, and the ability to appeal a determination of illegality. The friction these safeguards introduce in the process is a part of the protection they provide — censorship is not supposed to be easy.

Unfortunately, in its quest to tackle high volumes of allegedly illegal content online, the Commission has jettisoned the traditional court-order process and the safeguards that come with it. Rather than promote these various structures that completely circumvent the essential role of courts in determining when someone’s speech violates the law, the Commission should investigate how to scale up the traditional procedures without sacrificing independent oversight, notice, and opportunities to appeal the government’s determination that speech violates the law.

As it stands, the Commission’s recommended approaches would sever the public’s ability to hold the government to account for declaring certain speech off-limits; this is absolutely the wrong direction for democratic societies that value the rule of law.


Emma Llansó is the Director of CDT’s Free Expression Project, which works to promote law and policy that support users’ free expression rights in the United States and around the world. You can follow her on Twitter @ellanso.

This article originally appeared on the Center for Democracy & Technology (CDT) website. Republished here with permission.
