Takedown Collaboration by Private Companies Creates Troubling Precedent

By Emma Llansó

On 5 December 2016, Facebook, Microsoft, Twitter, and YouTube announced their intent to begin collaborating on the removal of terrorist propaganda across their services. The Center for Democracy & Technology (CDT) is deeply concerned that this joint project will create a precedent for cross-site censorship and will become a target for governments and private actors seeking to suppress speech across the web.

Governments have a legitimate interest in preventing the commission of terrorist acts, but security concerns can also motivate policies that jeopardize fundamental rights. Yesterday’s announcement comes after several years of demands from governments in the EU, the US, and around the world for these companies to do more to stop the spread of messages from terrorist organizations that seek to recruit individuals and inspire violent acts.

Under this intense pressure, these companies have started a dangerous slide down the slippery slope to centralized censorship of speech online. Below, we describe what is known about this collaboration and discuss the significant risks this approach poses to free expression online, given the complex nature of information about terrorist activity and the dominant role of these companies in the online environment. We offer preliminary recommendations for transparency, remedy, and accountability that must be incorporated if the companies do move forward with this proposal, and urge these companies, independent service providers, and governments to reject the trend toward centralized censorship for the Internet.

Notice and Takedown and Takedown and Takedown and . . . 

Four leading US-based internet companies – Facebook, Microsoft, Twitter, and YouTube – plan to begin sharing information about images and video depicting “violent terrorist imagery” in a centralized database, in order to facilitate the removal of this material from their services.  As described in yesterday’s announcement, if a participating company identifies an image or video that violates its Terms of Service against “content that promotes terrorism”, that company can submit the hash, or digital fingerprint, of that file to a database.  Other participating companies will then be able to scan for matches to this hash among the files that are already on, or are newly uploaded to, their servers. If a participating company finds a match to one of these hashes, the company will “independently determine” whether that file violates its own Terms of Service.
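The announcement does not specify the hashing scheme the companies will use, so the sketch below is purely illustrative: it models the shared database as a set of hash strings and uses a plain SHA-256 digest as the “digital fingerprint” (a real deployment would more likely rely on a perceptual hash that survives re-encoding, as discussed further below). The function names and structure are our own assumptions, not details of the announced system.

```python
import hashlib

# Illustrative only: the shared database modeled as a set of hash strings.
shared_hash_db = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a digest of the file's bytes. SHA-256 is used here for
    simplicity; an exact hash like this only matches byte-identical copies."""
    return hashlib.sha256(file_bytes).hexdigest()

def submit_hash(file_bytes: bytes) -> None:
    """A participating company submits the hash of a file it has judged to
    violate its own Terms of Service; only the hash, not the file, is shared."""
    shared_hash_db.add(fingerprint(file_bytes))

def matches_database(file_bytes: bytes) -> bool:
    """Another company checks an existing or newly uploaded file against the
    shared hashes; a match flags the file for that company's own review."""
    return fingerprint(file_bytes) in shared_hash_db
```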

The announcement underscores that this system is voluntary: participating companies retain discretion over whether to submit hashes to the database and whether to remove any matching files from their own services. The announcement states that “matching content will not be automatically removed.” This is an important distinction: an automated takedown system, in which matching files are immediately removed across all platforms without any additional review, would be extraordinarily vulnerable to abuse and to over-blocking lawful speech. Automated takedown would ensure that any removal propagated rapidly across all participating platforms, with no consideration of context and no opportunity to catch mistakes.
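To make the distinction concrete, here is a minimal sketch – again our own construction, not anything the companies have published – contrasting automatic removal on a hash match with the review step the announcement describes, in which a match only places the file in a queue for human review against that company’s own Terms of Service.

```python
from collections import deque

review_queue = deque()  # files awaiting human review under this company's own ToS

def remove_from_service(file_id: str) -> None:
    print(f"removed {file_id}")  # placeholder for an actual takedown action

def handle_match_automated(file_id: str) -> None:
    # The dangerous design: a hash match anywhere triggers removal here too,
    # with no consideration of context and no opportunity to catch mistakes.
    remove_from_service(file_id)

def handle_match_with_review(file_id: str) -> None:
    # What the announcement describes: a match is only a notice; the company
    # then "independently determines" whether the file violates its own
    # Terms of Service before deciding to remove anything.
    review_queue.append(file_id)
```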

It appears that this database of hashes is intended to operate more as a centralized notice mechanism: it flags content for participating companies’ review and signals that one of their peers has judged the content to be among “the most extreme and egregious terrorist images and videos”. Even without automated takedown, however, this centralized system still creates risks to free expression.

Centralized database creates target for government censorship efforts 

The agreement focuses on collaboration among the participating companies and mentions government only in noting that “each company will continue to apply its practice of transparency and review for any government requests”. But these companies will undoubtedly face substantial pressure from governments and private actors across the globe to expand the scope of the database and include additional content in it.  Moreover, the existence of this coordination agreement will almost certainly embolden governments to demand that Internet companies collaborate on censorship efforts for everything from hate speech to alleged copyright infringement.

It would not be difficult to imagine, for example, the EU proposing this kind of coordinated filtering scheme as a way to implement the dangerous monitoring proposals being pushed in the Copyright and AVMS Directives. While the participating companies may not have changed any of their existing Terms of Service or moderation standards under yesterday’s agreement, this hash database represents a new point of centralized control that governments and others will seek to exploit.

This database itself appears to be modeled after the National Center for Missing and Exploited Children’s (NCMEC) hash database of images that appear to be child pornography, which these and other companies use, via Microsoft’s PhotoDNA product, to block such images from their services. While that system has its own weaknesses – service providers are required by federal law to report images to NCMEC, but the majority of those images are never adjudicated to be illegal by a court – child abuse imagery is distinct from “terrorist content” in that there is no context in which the publication or distribution of child pornography is lawful.
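PhotoDNA itself is proprietary and its algorithm is not public, so the sketch below is only a generic illustration of how perceptual matching differs from the exact-hash lookup sketched earlier: visually similar images produce similar fingerprints, and a match is declared when two fingerprints differ by fewer bits than some threshold. The toy hash function and the threshold value are assumptions made for illustration; they are not PhotoDNA.

```python
def average_hash(pixels: list) -> int:
    """Toy perceptual hash: each bit records whether a pixel of a downsampled,
    grayscale image is brighter than the image's mean. Visually similar images
    yield fingerprints that differ in only a few bits."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_match(a: int, b: int, threshold: int = 5) -> bool:
    """Unlike an exact hash, a perceptual match tolerates small differences
    (re-encoding, resizing, minor edits) up to a chosen threshold."""
    return hamming_distance(a, b) <= threshold
```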

Lack of clear definition of what speech will be targeted creates opportunity for scope creep

There is no internationally agreed-upon definition of terrorist propaganda. Companies are free to restrict more speech than the government could prohibit – it is highly likely that much of the content in this database will be lawful speech in the US – and each has developed its own idiosyncratic definitions of “content intended to recruit for terrorist organizations”, “dangerous organizations”, and “violent threats (direct or indirect)”. The announcement emphasizes that “[e]ach company will continue to apply its own policies and definitions of terrorist content”. While it would be troubling for the participating companies to converge on a lowest-common-denominator, most-restrictive definition of “terrorist content”, the lack of clear parameters for what material may end up in this database creates its own risks.

Without a bright line denoting what can – and cannot – be submitted to the database, the terms of the agreement are vulnerable to mission creep. Participating companies will face external and internal pressure to include disturbing, graphic, and violent content of many kinds in this database. A clear definition that sets a very high bar for inclusion of images and video in the database would create a bulwark against the inevitable onslaught of proposals for other content to be included.

Incentives are stacked in favor of takedown

The agreement describes the participating companies’ intention to continue conducting their own review of any material brought to their notice under this collaborative system. It’s important for each company to engage in careful review of this content, since they each have different content policies and different standards for newsworthiness or other exceptions. It’s entirely possible that one company would decide to remove an image (posted alone in a tweet, for example) while another would decide to leave the same image up (as a featured image in a news article, or otherwise contextualized). Different services have access to different degrees of context about their users and the material they post.

However, the existence of this centralized database, and the fact that another leading internet company has declared an image or video to be “the most extreme and egregious” content, could create a legal expectation that all participating companies are on notice of allegedly illegal content that may appear on their sites.  At the very least, it will create a normative presumption against the sharing of this material, which the agreement describes as “content most likely to violate all of our respective companies’ content policies”, and could require participating companies to spend time justifying why they are hosting terrorism-related material that their peers have rejected.

Moreover, if the participating companies move forward with the plans to “involve additional companies in the future”, it seems likely that smaller companies, with fewer resources to spare on in-depth qualitative content moderation, will respond to any hash in the database by simply blocking that content.  It could be difficult for a small company to justify, as a financial or reputational matter, a decision to continue hosting “violent terrorist imagery”, even if there are eminently defensible reasons to do so.  But this is the danger at the core of a system designed to expedite takedown of content across platforms.  The chance that this database becomes anything other than a repository of material prohibited across all participating services seems razor thin.

Uncertain plans for transparency, remedy, and accountability raise more questions 

In the announcement, the companies refer to their existing processes for transparency reporting and appealing content-takedown decisions, and note that they “seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way”. Accountability is a crucial component of any content moderation process and it’s important that the companies have these independent processes in place.  But this unprecedented system for coordinating takedowns raises fundamental questions about what transparency and access to remedy should look like across these services.

It’s not clear what information the participating companies plan to make public about the collaboration project or their independent activity. Will participating companies publish reports about the material they take down based on matches to this hash database? Currently, none of the participating companies publishes regular reports on its Terms of Service enforcement activity. If a company is alerted to a propaganda video by a government actor (e.g., through an Internet Referral Unit), will that company submit the video’s hash to the database (and thereby extend the government actor’s influence to other companies)? What sort of information will be provided about the database itself? How will the companies provide information about what is and isn’t included?

Regarding remedy, if a user’s image or video is taken down across many platforms in a short time, will she know whether this was the result of hash-sharing amongst the companies?  Will she be able to submit an appeal to the centralized function, or will she have to petition each participating company individually? (Not all participating companies offer post-level appeals of takedown decisions. For example, Facebook only allows users to appeal the removal of a profile or page.)  If a company decides that it mistakenly removed a post and submitted it to the database, will it be able to signal to the other participating companies that they should re-evaluate their decisions, too?

Recommendations

We have just begun to analyze this program and its potential effect on free expression online, but offer the following preliminary recommendations to respond to some of the worst risks:

To avert government pressure or co-optation of the database: 

  • Clearly and unequivocally state that under no circumstances will the companies accept a contribution of a hash by someone acting on behalf of a government.
  • Commit to publicly announcing any such effort or request from a government, and to challenging any such demand that is accompanied by a gag order.

To prevent mission creep and expansion of the substantive scope of the database: 

  • Develop and publish a clear bright-line rule for content that sets a very high bar for what is properly submitted to the database.
  • Create a mechanism to ensure that images or video that form the basis of news stories or are shared by journalists do not get swept up in repeated takedown efforts.
  • Limit the ability to contribute hashes to this database to senior staff within a participating company, and only following an internal process of escalation and careful review.
  • Create a mechanism for participating companies to challenge another company’s contribution of a hash, to identify and rectify mistaken submissions.

To provide transparency, access to remedy, and other accountability measures:

  • Require all participating companies to offer content-level appeals to their users for mistaken takedown decisions.
  • Commit to regular reporting by participating companies of the functioning of this database, including the nature and type of material it comprises and the influence it has on companies’ decisions to take material down.
  • Institute a delay in submitting hashes to the database to provide time for the affected user to initiate an appeal.
  • Allow independent third-party assessment, on a periodic basis, of the material that is registered within this database, in order to verify that the substantive scope of the database remains very narrow.

The dangers of centralizing control over information

We are in the midst of a wide-ranging debate over the role of internet companies in facilitating our access to information and shaping our societies. With no clear guidelines for what speech may be targeted and no independent assurance that participating companies are sticking to the terms of their agreement, this announcement will further undermine user trust in the information they see on these services. If an image or video can reliably be suppressed from all the major social media platforms at once, what will happen to people’s ability to find this material to inform news coverage, political debate, academic research, policy discussions, art, literature, or any of the myriad other wholly legitimate purposes for posting and accessing even horrific examples of terrorist propaganda?

Even considering our recommendations, this proposal is a stark example of the fundamental threat that centralization poses to freedom of expression and underscores the importance of an information ecosystem that includes not only diversity of voices, but diversity of content hosts and service providers.  Fortunately, the internet as a medium still retains its ability to, as John Gilmore said in 1993, “interpret censorship as damage and route around it.” We encourage the development and proliferation of independent content hosts, social media networks, and other service providers, and urge operators to retain their independence.

Governments and companies are understandably motivated to respond to the threat of terrorism and the recruitment of individuals, especially young people, to commit terrorist acts on behalf of ISIS and other organizations. But any policy response to this terrible challenge must be evaluated for its likely efficacy at preventing terrorist acts and for its impact on fundamental rights.  Proposals such as this one that focus on smoothing the way for coordinated censorship create substantial dangers for the future of the information society. We urge these companies to reconsider their proposal and, at a minimum, implement our recommendations to limit the impact on free expression online.


Emma Llansó is the Director of CDT’s Free Expression Project, which works to promote law and policy that support users’ free expression rights in the United States and around the world. She tweets at @ellanso. This article originally appeared on the Center for Democracy & Technology (CDT) website. Republished here with permission.
