Six Technical Steps for Fighting ISIS on Twitter

Last week, Twitter General Counsel Vijaya Gadde contributed an editorial to The Washington Post entitled ‘Here’s How We’re Trying to Stop Abuse while Preserving Free Speech.’ Yesterday, Twitter’s Director of Product Management, Shreyas Doshi, released new information on ‘Policy and Product Updates Aimed at Combating Abuse’ in a Twitter Blog post, including an update on their Violence and Threats: Abusive Behaviour Policy.


by J.M. Berger

ISIS supporters are pushing back on Twitter suspensions with a series of countermeasures. These steps have allowed the network to recover from the low point it hit after suspensions began, although it remains degraded relative to last September, and to its high point in May and June 2014.

Most of the activity in the contested portion of the network at this stage revolves around users who create new accounts after being suspended, or users who create multiple accounts. Between 2,000 and 10,000 accounts are in a constant state of churn, with repeat offenders being knocked down and then creating new accounts, over and over again.

Here are six technical steps Twitter can take to offset ISIS supporters’ efforts to get around suspensions, if the company is not already taking them. It may well be using some of these measures already, although certain patterns in the data suggest it is not.

None of these are perfect solutions to the challenge presented by the ISIS-supporting network and its persistent supporters, but each will help. The goal, as always, is most correctly understood as degrading this network rather than destroying it, which would require different tools.

1. Score new accounts for indicators that the user is a repeat offender.  

Completely automating the detection and elimination of repeat offenders would result in substantial collateral damage, with journalists and opponents of ISIS suspended alongside its supporters. But accounts can be scored for risk factors, and those scores can be used to help guide suspension decisions, whether proactive or in response to reports. Risk factors include:

  • Account’s bio, profile or first tweets state outright that the user’s previous account was suspended. This is a no-brainer.
  • Significant number of other users tweet that the account was previously suspended.
  • Account handle is a previously suspended handle with a sequential number appended.
  • Account adds followers at a pace much faster than the typical new user.
  • Account changes username more than once a month (more on this below).
  • Account immediately follows or is followed by a large number of other accounts that have been frequently reported for abuse but not suspended.
  • New followers and follows appear to be selected by an app rather than by a human.

These scores can be used in a number of ways. For instance, if an abuse report is received for an account that has a high score on these criteria, the account could be provisionally suspended immediately, as Twitter does with certain accounts that display criteria associated with spam. If the scores can be refined over time to a high degree of accuracy, accounts that pass a certain threshold could be automatically flagged for review.
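
To make the scoring idea concrete, here is a rough sketch in Python of how such a heuristic might work. The signals mirror the risk factors listed above, but the weights, thresholds and field names are purely illustrative assumptions, not Twitter’s actual criteria.

```python
# Illustrative risk-scoring sketch. Weights, thresholds and field names are
# hypothetical assumptions, not Twitter's actual criteria.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    bio_admits_suspension: bool       # bio/profile/first tweets say a prior account was suspended
    peer_suspension_mentions: int     # other users tweeting that this is a returning account
    handle_is_numbered_variant: bool  # suspended handle with a sequential number appended
    followers_per_day: float          # follower growth since account creation
    name_changes_this_month: int      # username changes in the past 30 days
    risky_follow_overlap: float       # 0-1 share of early follows/followers often reported for abuse
    follows_look_automated: bool      # follow pattern consistent with an app, not a human


def risk_score(s: AccountSignals) -> float:
    """Return a 0-100 score; higher means more likely a returning repeat offender."""
    score = 0.0
    if s.bio_admits_suspension:
        score += 30                   # the "no-brainer" signal
    score += min(s.peer_suspension_mentions, 10) * 2
    if s.handle_is_numbered_variant:
        score += 20
    if s.followers_per_day > 200:     # far faster than a typical new user
        score += 15
    if s.name_changes_this_month > 1:
        score += 10
    score += s.risky_follow_overlap * 15
    if s.follows_look_automated:
        score += 10
    return min(score, 100.0)


def triage(score: float, has_abuse_report: bool) -> str:
    """Map a score, plus any pending abuse report, onto a handling decision."""
    if has_abuse_report and score >= 60:
        return "provisional_suspension"   # analogous to how suspected spam is handled
    if score >= 80:
        return "flag_for_review"          # only if scores prove accurate over time
    return "no_action"
```

The particular numbers would need tuning against real data; the point is that every risk factor listed above is machine-readable and can feed a single score.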

2. Force developers to associate apps with a unique web page.

When a developer creates a Twitter app, he or she enters a name for the app and a URL for the app’s home page, which is then used to identify tweets generated by the app. These fields can be falsified to make it look as though tweets are being sent through ordinary Twitter clients. Twitter can require developers to link to a web page containing a unique identifier in order to better verify the method by which a tweet has been sent. This would allow Twitter and others working against ISIS to get a better view of how ISIS is using technology to manipulate the system, in addition to making more work for those who would abuse it. It would also have obvious additional utility in countering a wide variety of spam and influence operations.
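
As a sketch of what that check could look like (the token scheme and page-fetch approach here are my own assumptions, not an existing Twitter mechanism): issue each registered app a unique token, then periodically confirm that the developer’s stated home page actually serves it.

```python
# Sketch of verifying an app's registered home page via a unique token.
# The scheme is an assumption for illustration, not an existing Twitter API.
import secrets
import urllib.request


def issue_app_token() -> str:
    """Token handed to the developer when the app is registered."""
    return secrets.token_urlsafe(16)


def app_page_verified(registered_url: str, issued_token: str, timeout: float = 5.0) -> bool:
    """Return True only if the registered home page actually serves the issued token."""
    try:
        with urllib.request.urlopen(registered_url, timeout=timeout) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except (OSError, ValueError):
        return False  # unreachable or malformed URL fails verification
    return issued_token in page
```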

3. Force apps that automatically follow other accounts for a user to post a permanent tweet disclosing this action.

This would also help shed light on the methods by which ISIS users speedily follow accounts returning from suspension, and whether this problem needs to be approached from a standpoint of app activity or network activity.
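
One simple way to picture the requirement (the client interface below is hypothetical and stands in for any Twitter API wrapper) is that an app-driven follow becomes inseparable from a disclosure tweet:

```python
# Sketch of pairing every app-driven follow with a disclosure tweet.
# The client interface is hypothetical, not an existing Twitter library.
class DisclosingFollower:
    def __init__(self, client, app_name: str):
        self.client = client      # any object exposing follow() and tweet()
        self.app_name = app_name

    def follow(self, target_handle: str) -> None:
        self.client.follow(target_handle)
        # The disclosure is posted automatically; under the proposal it would be permanent.
        self.client.tweet(
            f"This account used {self.app_name} to automatically follow @{target_handle}."
        )
```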

4. Limit Twitter handle changes. 

One countermeasure being used by ISIS supporters is to frequently change their usernames. While this is not necessarily effective at hiding from Twitter, it can be effective at thwarting activists who seek to report Twitter accounts for abuse. While there are counter-countermeasures available to the most serious ISIS-hunters, the frequent name changes create a higher bar for participation in reporting campaigns. If Twitter wants to continue to handle this problem primarily on the basis of user reporting, it should make that easier to do. There are very few legitimate reasons to change usernames on a regular basis; frequent changes mostly serve spam and abuse. Limiting username changes to once a month provides adequate flexibility for those who need the function, while significantly capping abuse.
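
A minimal sketch of the once-a-month cap, assuming a simple user record keyed to the account’s immutable numeric ID (the data structure is illustrative, not how Twitter actually stores accounts):

```python
# Sketch of capping username changes at one per 30 days.
from datetime import datetime, timedelta

RENAME_WINDOW = timedelta(days=30)


class UserRecord:
    def __init__(self, user_id: int, screen_name: str):
        self.user_id = user_id        # immutable numeric ID
        self.screen_name = screen_name
        self.last_rename = None       # datetime of most recent change, if any

    def try_rename(self, new_name: str, now: datetime = None) -> bool:
        """Allow at most one screen-name change per RENAME_WINDOW; reject anything sooner."""
        now = now or datetime.utcnow()
        if self.last_rename is not None and now - self.last_rename < RENAME_WINDOW:
            return False              # too soon: rename rejected
        self.screen_name = new_name
        self.last_rename = now
        return True
```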

5. Abuse reports should include the user ID.

I would hope this is already the case, but just in case it is not: abuse reports should be tied to the account’s underlying user ID, so that they travel with a user who changes his or her screen name.
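
A toy sketch of the principle, keying reports to the immutable numeric ID rather than the screen name (the structures are illustrative):

```python
# Sketch of keying abuse reports to the immutable user ID so they survive renames.
from collections import defaultdict


class AbuseLedger:
    def __init__(self):
        self.reports_by_user_id = defaultdict(list)  # user_id -> list of report records

    def file_report(self, user_id: int, report: str) -> None:
        self.reports_by_user_id[user_id].append(report)

    def reports_for(self, user_id: int) -> list:
        # A screen-name change never touches user_id, so the report history is preserved.
        return self.reports_by_user_id[user_id]
```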

6. IP or device tagging/banning of offending users.

While there are known countermeasures to IP tracking, those countermeasures require an investment of time and resources to implement. Automatically detecting repeat offenders through their IPs would provide some benefit, although not a total solution. Furthermore, there are other device-specific steps that could be taken with repeat offenders. For instance, if someone is using a Twitter smartphone app, that app could be permanently disabled when a user is suspended, or if a user is suspended more than once. Countermeasures could be applied to this approach, of course, but again, the goal here is to increase the cost of participation.
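
A rough sketch of how suspensions could be indexed by IP and device fingerprint follows, with the caveat that the thresholds are arbitrary and that a shared IP alone should only raise an account’s risk score, never trigger action by itself:

```python
# Sketch of indexing suspensions by IP and device fingerprint to flag repeat offenders.
# Thresholds are arbitrary; IP matches are weak evidence and only raise a risk score.
from collections import defaultdict


class RepeatOffenderIndex:
    def __init__(self):
        self.ip_suspensions = defaultdict(int)      # IP address -> suspensions seen from it
        self.device_suspensions = defaultdict(int)  # device fingerprint -> suspensions

    def record_suspension(self, ip: str, device_id: str) -> None:
        self.ip_suspensions[ip] += 1
        self.device_suspensions[device_id] += 1

    def signup_risk(self, ip: str, device_id: str) -> str:
        if self.device_suspensions[device_id] >= 2:
            return "block_device"          # e.g. permanently disable the smartphone app
        if self.ip_suspensions[ip] > 0:
            return "raise_risk_score"      # feed into the scoring described in step 1
        return "clear"
```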


This post was first published in Intelwire on 13 April 2015. Cross-posted here with permission.

J.M. Berger (with Jessica Stern) is the author of the recently published ISIS: The State of Terror. Berger is a researcher, analyst and writer covering extremism, with a special focus on extremist activities in the U.S. and extremist use of social media. He is an Associate Fellow at VOX-Pol partner ICSR.

Photo Illustration: Yahoo News
