Can AI help us identify and stop terrorist attacks?

This is an abridged version of a presentation delivered by David Wells at the World Counter Terror Congress in London on 4 May 2017.

Intro

Since leaving the intelligence world in late 2014, I’ve researched and written about counter-terrorism. My primary focus has been a controversial and misunderstood area of modern-day intelligence practice – how big data and counter-terrorism interrelate. Today I’m going to be looking at one specific element of this subject – how intelligence agencies might use artificial intelligence to help navigate the ever-growing data pool they’re faced with.

This can be a complex subject. I’m going to try to keep it as simple as possible, and focus on the implications for practitioners rather than the technical ins and outs.

I want to briefly answer three quite big questions:

  • Firstly, why? What elements within the current counter-terrorism climate point towards the need for AI?
  • Secondly, how might this work in practice? I’m going to focus specifically on one element of counter-terrorism, covert online interaction with extremists.
  • And finally, I’ll look at the advantages and, just as importantly, the disadvantages associated with this approach.

Current CT climate

We all know that the current size and scale of the Islamist terrorist threat is unprecedented. And that the events of the past 5 years in Iraq and Syria will resonate for decades to come.

Most importantly in the context of today however, the past 5 years have seen a fundamental shift in the way terrorists communicate with each other. And how they communicate with the rest of the world.

The move towards encryption as standard has been central to this shift. Terrorist groups have greater confidence that they can protect their communications from the prying eyes of the authorities. And do so without compromising the ease and speed of their communications.

Alongside this trend, there has also been a move towards greater communications availability. We’ve seen an extraordinary growth in the speed and availability of mobile communications. Multiple devices, multiple apps and multiple communications methods are now the norm, regardless of whether you’re a terrorist or not.

So, we’re collectively faced with more targets or potential targets than ever before. And they’re generating more data than ever before.

As we’ve heard, terrorist methodology has shifted towards unsophisticated attacks with limited lead-in time or obvious precursors. That means Government agencies need to be able to make quick assessments of who they should prioritise. And who they shouldn’t.

Broadly speaking, there are two routes to achieve this. Technical intelligence or human intelligence.

For the former, the encryption challenge I’ve referenced makes it difficult to gain a quick insight into the threat posed by an individual or group, particularly for law enforcement agencies. Traditional wiretaps or warrants are ineffective when the communications provider doesn’t have an unencrypted version of the communications.

Human intelligence does offer us options, through targeted surveillance or developing human assets. But these require significant time and resources, with no guarantee of success.

In this context, it is unsurprising that intelligence agencies have looked at a blended approach. One that combines the monitoring of extremist online activity with direct, covert interaction with individuals espousing extremist ideology.

What better way to discover the identity, location and intentions of an individual than by asking them? And doing so in a low risk, resource effective and deniable way?

The covert nature of these types of programs makes it difficult to assess their value in any detail.

Instead, I’m going to briefly look at media reporting of one such program to highlight the potential difficulties of the current approach. And from there, speculate as to the unique advantages that AI might offer in this space.

The program in question is run out of a US Air Force base in Florida. Called WebOps, the Defence Department initiative scours social media to identify potential Islamic State recruits. Having done so, the program uses covert personas to interact with them and attempts to dissuade them from travelling to Iraq or Syria.

Before we start looking at it in detail, a caveat. My knowledge of it is based entirely on media reporting. I make no comment on the report’s accuracy and intend no criticism of the program itself. Rather, I use it to illustrate the types of issues similar programs face.

So let’s imagine you’re recruiting individuals for this program. I’m sure you can all think of a few ideal qualities.

  • Firstly, you’d be looking for language capabilities. Arabic would be an obvious choice for many in 2017, but which language would depend on the nature of the threat your country faces. Clearly, this won’t be static.
  • Secondly, you’d hope that your candidate had a working knowledge of Islam and Islamic theology.
  • Thirdly, you’d want regional or country knowledge relating to the supposed origin of your persona.

In the US Air Force base example, the media report suggests their staff ticked the first two boxes. Most were fluent Arabic speakers of North African origin, with a reasonable knowledge of Islam.

Unfortunately, however, the base’s focus was on Syria, Iraq and Yemen. And the staff members knew nothing of Middle Eastern history or, in particular, of the importance of sectarian issues.

Assuming your candidate has the above, you would also want them to understand the current counter-terrorism and counter-extremism climate. And ideally be a skilled communicator, capable of developing and sharing messaging across a range of media.

It seems unlikely that one individual would combine all of those attributes. The obvious solution is to develop a team containing individuals with complementary skill-sets. A team that works closely together to ensure that each piece of the covert communications puzzle is covered.

Even if you could recruit individuals with all of these skills however, there are significant issues around vetting, security, cost and resource allocation.

And in a hypothetical scenario where this is all achievable, the US Air Force example highlights further issues.

Most obviously, a hierarchical and risk-averse organisational culture is at odds with the speed required when interacting online.

We’re used to messaging being instant. In the context of a rapid-fire exchange of messages, a delay to get your response signed off could risk exposure. Or at the very least, the undermining of trust.

And if this type of sign-off is required, it makes it difficult to run your persona 24 hours a day, 7 days a week. This is critical. Any persona only active during office hours between Monday and Friday is likely to stand out to the extremist online community.

None of these challenges need to be insurmountable. I’m sure that using this type of approach has helped deliver significant investigative progress for many in this room. This approach can and does work, albeit with significant limitations.

AI

That’s broadly where we’re at in 2017. Moving onto our second question then, what opportunities could technology – and specifically artificial intelligence – offer to streamline and improve this type of program?

Before going any further, I should make clear what I mean when I use the term AI.

I like the term ‘anthropic computing’ coined by Stanford computer scientist Jerry Kaplan.

Broadly then, a program meant to behave like or interact with humans. Or as Alan Turing suggested in the 1950s, a program that convinces people it is interacting with a human.

This is no longer science fiction. Perhaps the most familiar example would be online customer support bots on company websites.

They are becoming more widespread and better at what they do. Particularly in their ability to answer more complicated questions by retrieving information from across a company website.
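
To give you a sense of how simple the underlying pattern can be, here’s a minimal, hedged sketch in Python: match an incoming question against a small hand-written FAQ and return the closest stored answer. The questions, answers and matching threshold below are all invented for illustration; commercial bots use far more sophisticated language understanding.

  # A minimal sketch of the retrieval idea behind customer-support bots: match an
  # incoming question against a small knowledge base and return the closest canned
  # answer. The FAQ entries are invented for illustration only.
  import difflib

  FAQ = {
      "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
      "what are your opening hours": "Our support team is available 9am to 5pm, Monday to Friday.",
      "how do i cancel my subscription": "You can cancel at any time from the Billing section of your account.",
  }

  def reply(question):
      # Find the stored question that most closely resembles the user's wording.
      match = difflib.get_close_matches(question.lower(), FAQ.keys(), n=1, cutoff=0.3)
      return FAQ[match[0]] if match else "Sorry, I didn't catch that. Could you rephrase?"

  print(reply("How can I reset my password?"))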

That said, a prolonged interaction with one of these bots leaves you in little doubt that you’re talking to a program not a person. They don’t sound real. And ironically, a large reason for this is that they’ve been programmed too well – we don’t typically associate politeness, perfect grammar and formal language with customer service operatives!

Despite these limitations, customer service bots appear to fit my definition of AI. But I think we can add more nuance.

Customer service bots are an example of software carrying out tasks that, while time-consuming, are not intellectually demanding. I think you’ll all agree that interacting with extremists online doesn’t fit into this category.

So perhaps we should refine things further. Charles Isbell, a professor at Georgia Tech College of Computing, has a complementary definition (see image above).

He believes AI requires two things that clearly differentiate it from automation.

This feels like a more accurate description of what I’m talking about today. Not just automating existing processes. Or using algorithmic analysis of big data to identify the most interesting trends. Both of these are incredibly valuable, but they are already happening in 2017.

What I’m thinking about is AI software that replaces some or all aspects of covert online interaction currently undertaken by humans.

I’m fully aware that it sounds like I’m veering into the realms of science fiction. But bear with me.

The last few years have seen dramatic improvements in the ability of computer programs to outwit human opponents.

It’s 20 years since a computer program – Deep Blue – defeated Garry Kasparov, the reigning Chess World Champion. While widely seen as a major breakthrough in AI, it managed to defeat Kasparov using brute force. It could scan 300 million positions per second and analyse up to 74 moves ahead. This was a victory for processing power, not intelligence.
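
For those interested in what ‘brute force’ means in practice, here is a toy sketch of the idea: exhaustively search every move in a trivial stick-taking game (take 1 to 3 sticks, whoever takes the last stick wins) to a fixed depth, and pick the move with the best guaranteed outcome. The game, the depth and the crude evaluation are my own illustrative choices, not Deep Blue’s actual search.

  # Toy brute-force game-tree search (minimax), illustrating the general idea rather
  # than Deep Blue's actual algorithm. Game: players alternately take 1-3 sticks;
  # whoever takes the last stick wins.
  def legal_moves(sticks):
      return [m for m in (1, 2, 3) if m <= sticks]

  def minimax(sticks, maximising, depth):
      # Best achievable score from this position: +1 is a win for us, -1 a loss.
      if sticks == 0:
          return -1 if maximising else 1    # the previous player took the last stick
      if depth == 0:
          return 0                          # search depth exhausted: crude neutral evaluation
      scores = [minimax(sticks - m, not maximising, depth - 1) for m in legal_moves(sticks)]
      return max(scores) if maximising else min(scores)

  def best_move(sticks, depth=10):
      return max(legal_moves(sticks), key=lambda m: minimax(sticks - m, False, depth - 1))

  print(best_move(13))   # prints 1: leaving a multiple of 4 sticks is the winning strategy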

Last year, Google’s AlphaGo AI defeated the World Champion at Go, an ancient Chinese board game. Go is yet more complicated than chess, with trillions of possible moves that make for almost endless possible outcomes. As a result, success is more about intuition than probabilities. It takes more than just processing power.

Central to AlphaGo’s development was reinforcement learning – the program repeatedly played against itself, adjusting its strategy based on trial and error. It learned.
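
To give a flavour of what self-play looks like in code, the sketch below applies the same idea to the toy stick game from the previous example: two ‘players’ share a single table of action values, and every finished game nudges that table towards the moves that led to a win. The learning rate, exploration rate and episode count are arbitrary illustrative choices, and this is nothing like AlphaGo’s actual architecture.

  # Self-play learning sketch for the same toy stick game: the program plays against
  # itself and adjusts its action values from wins and losses (illustrative only).
  import random
  from collections import defaultdict

  Q = defaultdict(float)                 # Q[(sticks_remaining, sticks_to_take)] -> estimated value
  ACTIONS = (1, 2, 3)
  ALPHA, EPSILON, EPISODES = 0.2, 0.1, 50_000   # arbitrary illustrative settings

  def choose(sticks):
      # Epsilon-greedy: usually take the best-valued legal move, occasionally explore.
      legal = [a for a in ACTIONS if a <= sticks]
      if random.random() < EPSILON:
          return random.choice(legal)
      return max(legal, key=lambda a: Q[(sticks, a)])

  for _ in range(EPISODES):
      sticks, history = 21, []
      while sticks > 0:                  # both 'players' share the same value table
          action = choose(sticks)
          history.append((sticks, action))
          sticks -= action
      reward = 1.0                       # the player who made the last move won
      for state, action in reversed(history):
          Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
          reward = -reward               # alternate: the other player's moves lost

  # After training, the greedy policy should largely rediscover the 'leave a multiple of 4' strategy.
  print({s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)]) for s in range(1, 9)})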

Finally, at the start of this year, an AI program called Libratus defeated four of the best poker players in the world.

Poker was believed to be more challenging still. Instead of seeing the entire board, a poker player always has imperfect information at their disposal. Opponents give deliberately misleading information, and bluffing is critical to a successful performance.

Again, Libratus was programmed with the ability to learn. Aside from some serious processing power, it was simply given the rules of poker and left to refine its strategy by playing millions of hands against itself. In the process, it developed a winning strategy. And interestingly, one that was markedly different from the strategies favoured by human professionals.
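
One well-known family of techniques for imperfect-information games works by tracking regret: after each round, how much better would each alternative action have done? The sketch below applies the simplest version of that idea, regret matching, to rock-paper-scissors against an opponent that over-plays rock. It is a toy stand-in of my own, not Libratus’s actual method, which builds on far more elaborate counterfactual-regret techniques.

  # Regret-matching sketch for rock-paper-scissors: after every round, increase the
  # weight of actions that would have done better, then play in proportion to that
  # accumulated regret. A toy illustration only.
  import random

  ACTIONS = ["rock", "paper", "scissors"]
  BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

  def payoff(mine, theirs):
      if mine == theirs:
          return 0
      return 1 if BEATS[mine] == theirs else -1

  def strategy_from(regret):
      positive = [max(r, 0.0) for r in regret]
      total = sum(positive)
      return [p / total for p in positive] if total > 0 else [1/3, 1/3, 1/3]

  regret = [0.0, 0.0, 0.0]
  strategy_sum = [0.0, 0.0, 0.0]

  for _ in range(100_000):
      strategy = strategy_from(regret)
      strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
      my_move = random.choices(ACTIONS, weights=strategy)[0]
      their_move = random.choices(ACTIONS, weights=[0.5, 0.25, 0.25])[0]   # opponent over-plays rock
      gained = payoff(my_move, their_move)
      for i, alternative in enumerate(ACTIONS):
          regret[i] += payoff(alternative, their_move) - gained

  average = [s / sum(strategy_sum) for s in strategy_sum]
  print(dict(zip(ACTIONS, (round(p, 3) for p in average))))   # ends up heavily favouring paper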

The poker analogy is an appropriate one in the context of online extremism. There is often a significant discrepancy between what extremists say online and what they do offline.

This is not unique to extremism – we all know that Facebook or Instagram profiles can be a carefully curated attempt to portray a lifestyle we want, rather than the lifestyle we have.

But in counter-terrorism, bottoming out those discrepancies really matters. Is an individual a ‘keyboard warrior’, sounding off and making threats that they have no capability or intent to carry out? Or does their online activity represent a precursor to something altogether more dangerous?

This is a microcosm of what all counter-terrorism practitioners do on a daily basis. Threat assessment – who do we rule in? And who do we rule out?

All intelligence and law enforcement agencies have existing strategies to do this, based on multiple information feeds and risk matrices. The critical weakness is often the gap between the lead intelligence – such as links to known extremists or the use of extremist rhetoric – and our knowledge of intent and capability.
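
As a purely illustrative sketch of what such a matrix might look like in code, the snippet below scores hypothetical subjects by summing weighted indicators drawn from different feeds and ranks them for attention. The indicators, weights and profiles are entirely invented; real prioritisation frameworks are far richer and more carefully governed.

  # Invented, simplified risk-matrix sketch: score each subject by summing the weights
  # of whichever indicators are present, then rank so the top tier is reviewed first.
  WEIGHTS = {
      "links_to_known_extremists": 3.0,
      "extremist_rhetoric_online": 2.0,
      "attack_planning_indicators": 5.0,
      "access_to_weapons": 4.0,
  }

  def risk_score(profile):
      return sum(weight for indicator, weight in WEIGHTS.items() if profile.get(indicator))

  subjects = {
      "subject_a": {"links_to_known_extremists": True, "extremist_rhetoric_online": True},
      "subject_b": {"extremist_rhetoric_online": True, "attack_planning_indicators": True},
  }

  for name, profile in sorted(subjects.items(), key=lambda item: risk_score(item[1]), reverse=True):
      print(name, risk_score(profile))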

How feasible is it to imagine an AI counter-extremism bot helping to fill this gap by engaging with extremists online?

Anyone who has worked in counter-terrorism or counter-extremism will be aware that there is an extremist vocabulary and set of social norms. And that despite the actions of Twitter, Facebook et al, there is still a lot of publicly available extremist activity online, in addition to a sizeable library of propaganda material produced by AQ and IS.

It’s certainly tempting to imagine the possibility of a self-learning bot, capable of developing a sufficient grasp of this language to trick extremists into believing they were talking to a human.

Suddenly, the issues I highlighted with the US Air Force program earlier fade away. The AI program could operate 24/7, potentially using multiple personas and multiple languages across multiple platforms.

The potential scale and reach of the program would quickly allow new personas to develop credibility within the extremist network – it could corroborate its own credentials.

Each interaction with an extremist would be a learning experience. If an attempt to directly engage with an individual on one platform failed, why? All of these lessons would immediately feed back into the bot’s ongoing strategy.
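
To illustrate what feeding lessons back into strategy could look like, here is a hedged sketch of a simple trial-and-error loop: it tracks which (entirely invented) opening approach draws a reply most often and gradually favours it, while continuing to experiment. The approach names and the simulated reply rates are placeholders of my own, not anything drawn from a real program.

  # Illustrative feedback loop: try different engagement approaches, record whether each
  # attempt drew a reply, and increasingly favour whatever has worked best so far.
  import random

  APPROACHES = ["theological_question", "shared_grievance", "practical_advice"]
  attempts = {a: 0 for a in APPROACHES}
  replies = {a: 0 for a in APPROACHES}
  EPSILON = 0.1                               # fraction of the time we keep experimenting

  def pick_approach():
      if random.random() < EPSILON or all(n == 0 for n in attempts.values()):
          return random.choice(APPROACHES)
      return max(APPROACHES, key=lambda a: replies[a] / attempts[a] if attempts[a] else 0.0)

  def record_outcome(approach, got_reply):
      attempts[approach] += 1
      replies[approach] += int(got_reply)

  # Simulated reply rates stand in for real outcomes; the loop gravitates to whatever works best.
  REPLY_RATES = {"theological_question": 0.10, "shared_grievance": 0.25, "practical_advice": 0.15}
  for _ in range(1000):
      approach = pick_approach()
      record_outcome(approach, got_reply=random.random() < REPLY_RATES[approach])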

The potential scale and reach of the bot and its multiple personas mean that it could also be used in a variety of ways.

Where an individual is identified as a potential threat, its focus could be on intelligence or evidence gathering. For those whose commitment to the cause appears to be wavering, the bot could attempt to de-radicalise, employing the most effective messaging strategies it has identified over time.

Taken to its furthest extent, it isn’t beyond the realms of possibility that the bot could attempt to shape the extremist debate, providing competing views that push extremist ideology in a particular direction.

And this is just the tip of the iceberg. A self-learning bot could do more than just engage with individuals online on a vast scale. As the poker example demonstrates, it’s possible that a bot with a singular mission and the ability to learn from its successes and failures could devise a new and innovative disruption or prioritisation strategy.

Returning to the target matrix or prioritisation example, let’s imagine a country like France with thousands of targets or potential targets. Currently, these are prioritised, with targeted technical or human surveillance directed only against the top tier – those assessed to pose a significant or imminent threat.

But in the background, your AI bot could attempt to interact with all of these individuals on multiple platforms using multiple personas, continually assessing the threat each poses.

In some cases, this might corroborate your initial assessment. But it might also help identify instances where someone towards the bottom of your list actually poses an imminent threat.

And moving forward, as it learns more about why this discrepancy exists, or about the mistakes that have been made, it could help inform a fundamental shift in how we assess the threat posed by extremist individuals.

I’m sure you all agree that this offers exciting possibilities and potential advantages. But before getting ahead of ourselves, it’s worth attempting to ground this in reality.

Firstly and most obviously, developing such a platform would, initially at least, require significant investment and technical expertise.

Let’s imagine for a minute that this isn’t an issue. The next factor, central to the success of the Go and poker bots, was access to big data. And the ability to learn iteratively in a relatively uncontrolled environment.

As I mentioned earlier, we have no shortage of data.

But in the context of learning the language, terminology or arguments used by extremists online, this data is spread across multiple stakeholders. Everyone from law enforcement and intelligence agencies to academics, community groups, journalists and, perhaps most significantly, private sector communications or application providers.

Each group has a different slice of the pie – some might have a narrow, national or regional focus that encompasses multiple platforms or media; whereas in the private sector, a company like Twitter holds a huge amount of data relating to extremist use of its platform.

The inevitable consequence of this diffusion of data is that without a concerted and collaborative effort, any AI program seeking to learn how best to communicate would be doing so through a relatively narrow window.

This doesn’t have to be terminal; there is plenty of value in a national or platform-specific approach.

Perhaps a bigger problem is the potential need to set the bot loose on the internet to facilitate iterative learning. Because it’s fair to say that the track record of this type of approach has so far been mixed at best.

I’m sure many of you saw the media reporting last year about Tay, a Microsoft Twitter bot designed to interact with the 18-24 age group. Within just a few hours of going live, Tay’s Tweets descended into anti-semitism and racism, and she was deactivated within a matter of days.

This was, in part, due to deliberate attempts by Twitter users to lead Tay astray. They were interacting with a known, identifiable bot. This isn’t a like-for-like scenario.

But given the subject matter under discussion within online extremist communities, there is a danger that in learning the language of extremists, the bot could become one. Or at the very least, make significant mistakes as part of its learning process.

What are the legal implications of a Government-created AI bot inadvertently inciting an individual to carry out a terrorist attack? Do we really want to create a program which, initially at least, amplifies the volume of online terrorist discourse? I’m sure any of you that have worked in the public sector will be familiar with Government appetite for risk.

This dichotomy is central to any debate about the use of AI in counter-terrorism.

The real value of this type of bot would be to reduce workloads, doing the work of tens if not hundreds of intelligence analysts, generating focused, unique intelligence.

But given the potential risks associated with the program, there would be an understandable tendency to reduce the bot’s autonomy, increasing the need for ongoing oversight of its activities and the intelligence leads it generates.

This is the worst-case scenario – a program with significant legal, ethical and financial risks that takes up more rather than fewer agency resources.

The future

That’s just a brief insight into the potential use of AI in one aspect of counter-terrorism. I imagine you can all think of other ways in which the technology could be used. And I’m certain that you can think of significant risks associated with its use!

I’m sure to some of you, particularly those battling every day with ageing Government IT systems, this all sounds very unlikely! Certainly, it is unlikely to appear overnight.

We’re already seeing an increasing reliance on the use of big data, algorithms and a greater use of automation to aid our analysis. Rather like the move towards driverless cars – where more and more elements of the driving process have been automated or computer-aided – progress towards true AI in counter-terrorism is likely to be iterative rather than revolutionary.

But as this progress continues, it is critical that in addition to the obvious benefits, we also consider the risks.

David Wells is a former intelligence analyst. This post originally appeared on his personal blog. His Twitter handle is @DavidWellsCT.
