Terrorist attacks are fundamentally designed to ‘terrorise, polarise and mobilise’ different segments of the public. This was tragically underscored by the recent events in New Zealand, where the perpetrator very deliberately and self-consciously prepared a messaging campaign to accompany his acts of violence. Recognising these communicative and performative aspects of political violence is vitally important to terrorism studies in the information age, given that research has attended far more to the causes and consequences of violence than to how it shapes public perceptions, reactions and values.
Since 2013 and the murder of Lee Rigby in London, a research programme at Cardiff University’s Crime and Security Research Institute has been underway to understand patterns of social reaction to terrorist events, and how social media data provides both a mirror of and a motor for how public definitions of the situation are assembled post-attack. The most recent focus of this effort has been four of the attacks that took place in England during 2017, and in particular the role of misinformation and disinformation in shaping public understandings of these events. In early findings from this work, we delineate three techniques used to author and amplify disinforming communications.
The first is ‘spoofing’: where an individual claims to be someone they are not, or to possess a social status or identity they do not really have. Information spoofing involves misrepresenting the content of a message through processes of falsification, suppression or amplification. Sometimes these techniques are blended together.
If spoofing works by falsification or misrepresentation, ‘truthing’ persuades by claiming to furnish the audience with the ‘real’ facts. One version identified across the four 2017 attacks involved invoking statistics, data, quotes and official statements to try to discredit other narratives. A second version concerned how proponents of conspiracy theories framed truth claims as part of their narratives, purporting to convey what really happened in a contentious or contested episode. Importantly, when engaging in truthing behaviours, actors do not just undermine accounts, for example by labelling them ‘fake news’; they proffer a more or less plausible alternative.
Take as an example a statement made by Assistant Commissioner Mark Rowley immediately after the Westminster attack where he said:
…we must recognise now that our Muslim communities will feel anxious at this time given the past behaviour of the extreme right wing and we will continue to work with all community leaders in the coming days.
Serving both as a tacit warning to potential far-right agitators that the police were preparing for any trouble they might cause and as reassurance to Muslim communities, Rowley’s message attracted considerable opprobrium from individuals with extreme right-wing proclivities. In their reactions to Rowley’s message, they sought to manipulate it into ‘true’ evidence that white working-class people were being ‘forgotten’ by the state and its institutions. This is symptomatic of a wider motif of behaviour among extremist groups: they seek to co-opt and reframe ongoing events to illuminate the fundamental ‘truth’ of their wider narrative of grievance.
The third component of the article’s triptych of disinformation techniques we label ‘social proofing’. This works by exploiting the cognitive bias to ‘follow the herd’: people look to the behaviour of others around them to shape and steer their own responses and actions. Online environments are especially susceptible to social proofing effects. It is why many celebrity fan social media accounts ‘purchase’ thousands of followers, to make themselves look more popular and thus attract genuine fans. It is also a key role played by bot-nets, which increase the visibility of messages and accounts.
Wrapped around this focus on how disinformation is constructed and communicated are two additional points. First is the identification of hostile state involvement in authoring and amplifying rumours, conspiracies and fake news, as part of an apparent attempt to induce increased social tensions and conflict in the aftermath of terror. This is significant because, whilst considerable ‘air time’ has been given to discussing the activities of the St Petersburg-based Internet Research Agency, especially in relation to the 2016 US Presidential election, here we have evidence of it conducting an information influence operation in a non-election space. This suggests a wider sphere of interests and activities on the part of the Kremlin’s disinformation assets. As documented in our research, we have empirical evidence of these accounts attempting to amplify social tensions in the wake of all four attacks, ‘spoofing’ identities across the political spectrum, from white male Republicans to black female activists.
The second, more generic finding concerns the comparative neglect in both research and policy of the aftermath of terrorism. Far more attention has been directed towards the ‘upstream’ processes of radicalisation than the ‘downstream’ issues of harm mitigation and management. This is not altogether surprising, since prevention is the preferable objective. It is, however, a significant oversight: the sequence of attacks seen across Europe and a number of other countries over the past five years shows that, despite their best efforts, the authorities cannot interdict every plot. Given that this is the ‘realpolitik’ of contemporary counter-terrorism, it is important to be able to show that social reactions to terror events are patterned, and that it is possible to influence the tenor and tone of any public impact. Salient here is how tracking and tracing social media data affords a high-resolution picture of what happens in the wake of an attack, and of how communicative actions and interventions performed by a range of different actors shape how the event comes to be defined and publicly understood.
Martin Innes is Professor in the School of Social Sciences and Director of the Crime and Security Research Institute and of the Universities’ Police Science Institute, Cardiff University.
Helen Innes is Research Fellow at the Crime and Security Research Institute, Cardiff University.
Diyana Dobreva is Research Assistant at the Crime and Security Research Institute, Cardiff University.
This article was originally posted on The London School of Economics and Political Science website. Published here with permission.