By Joe Whittaker
In the aftermath of a terrorist attack, a predictable series of events tends to unfold. As we mourn, the media frantically gather information about the attacker(s) and, upon learning that they used social media for some part of their activity, journalists, politicians, and pundits suggest that they were radicalised in an “echo chamber”. This is often followed by choruses of disdain for the social media sites, proclaiming that they must do more to halt extremism in future, usually with little understanding of the difficulties those companies face in monitoring such content. Yet despite the term being de rigueur in discussions of extremism and terrorism, we have a very limited understanding of what an extremist echo chamber actually is, and the term is thrown around quasi-intellectually, as if the speaker has deep insight into how actors are radicalised online.
With regards to the media, journalists are quick to offer the idea of an echo chamber as a diagnosis for radicalisation, as can be seen in this typical excerpt:
When an individual’s social network becomes isolated from broader society, it can consequently twist into an extremist echo chamber, draping an aura of plausibility and common sense over those ideas that are morally reprehensible and utterly disconnected from reality.
This type of analysis can be seen across media sources, with the recurrent theme that surrounding oneself with homogeneous voices is conducive to extremism, and that this effect is particularly prevalent on social media. Accounts that actually describe what it is like to be inside an echo chamber are rare, with only a few exceptions.
Similarly, academic research into online radicalisation has offered little more than definitions that are incorporated into analysis. Seminal research by von Behr et al found that, among other things, in the majority of 15 case studies the Internet acted as an echo chamber – it provided ‘a greater opportunity than offline interactions to confirm existing beliefs’ – and similar sentiments are expressed in the works of Hussain & Saltman, Sageman, and Klausen, and are representative of the wider field. The implicit idea is that engagement in extremist echo chambers facilitates radicalisation by fostering confirmation bias until members are unaware of the extent of their polarised beliefs, which can lead to violence. This conclusion seems logically valid – the conclusion follows from the premises – but it relies on a host of auxiliary hypotheses that bring its soundness into question. Such premises are usually simply assumed in radicalisation research. As a result, supplementary questions are ignored, such as:
- What are the effects of being inside an echo chamber?
- Are they more prevalent online or offline?
- Is everyone susceptible to them in the same ways?
- What is the role of personalisation technology?
As it is often assumed that being in an echo chamber can facilitate radicalisation, answers to these questions would help elucidate that relationship.
The principal problem, however, is that such questions are not tackled within the field of online radicalisation. To the best of this author’s knowledge, only one piece of research, by O’Hara & Stevens, expands beyond a superficial explanation of online echo chambers, and it is mostly theoretical in nature. They find that the case has not successfully been made that echo chambers necessarily lead to malign effects, nor that they are difficult to exit, and even that evidence for their existence is based on methodologically unsound research. They argue that it is plausible that echo chambers have some value in certain circumstances: discussing issues with like-minded actors may help mitigate conflict and encourage political participation from those who would otherwise remain silent. As a result, there is good reason to be sceptical of the assumption that being in an echo chamber can facilitate radicalisation. It is important, however, to note that there is no empirical research that tackles anything beyond the basic concept of an echo chamber (i.e. whether the user was “in” or “out” of one).
If, then, there is little empirical research regarding the effects of echo chambers on radicalisation, perhaps other fields may be instructive. Unlike research into terrorism and extremism, whose populations are notoriously difficult to access, academics in Political Science and Internet Studies have been able to conduct empirical research that delves deeper into the causes and effects of echo chambers. Such research generally confirms the existence of echo chambers, but with an interesting caveat: those who are closer to the edges of the political spectrum are most likely to post and share only online content that adheres to their pre-existing beliefs. This effect is found both online and offline. It concurs with a number of findings suggesting that the typical social media user is more, not less, likely to encounter cross-cutting opinions when online.
Such findings could have important ramifications for online radicalisation: those who end up in an echo chamber are often, in some ways, primed for it by their political or social views. This accords with the prevailing wisdom in online radicalisation research that the Internet facilitates, rather than drives, the process – most extremists who use the Internet are already on the path to radicalisation, although further Internet use can exacerbate it. Once inside an echo chamber, several traits are often noticeable, such as biased narratives built on distrust, paranoia, and unsubstantiated rumour. Furthermore, a positive and significant correlation has been found between time spent in an echo chamber and negative emotional posting. That is to say, the more time users spend in the chamber, the more negative their sentiments become.
A question that remains unanswered is the role of personalisation technology in social media, often referred to as the “filter bubble”, and how it affects the formation of echo chambers. It is suggested that the ways in which users interact on social media affect the future content they see. This could significantly affect those on the path to radicalisation by erroneously fostering the notion that a user’s social network is more extreme than it actually is, based on recent interactions. This topic, however, is drastically under-researched, because the “Web 2.0” gatekeepers keep their filtering algorithms a matter of great secrecy. As such, very little academic research has been conducted on the topic, and it is currently close to impossible to isolate whether users enter social media echo chambers of their own accord (by self-selecting their social networks due to confirmation bias) or through the influence of personalisation filters. There is one empirical study, conducted by Facebook, which confirms that users do form echo chambers, but finds that users’ individual choices play a bigger role than filtering. This should, however, be taken with a large pinch of salt due to the obvious vested interests. Furthermore, how it may extrapolate to other social media sites’ filtering algorithms remains unclear.
This perspective has sought to do two things: firstly, to highlight that the phrase “echo chamber”, while popular in discourse surrounding terrorism and radicalisation, is rarely expanded upon beyond its definition; secondly, to suggest that other fields may be instructive in ways that can ground further research on radicalisation, both by offering hypotheses about how users may act within echo chambers and by posing problems that can form the basis of future research. While extremist populations remain difficult to reach, there are some avenues that could be explored. For example, one could conduct a sentiment analysis of extreme groups, similar to the research mentioned above, to assess whether posts become more negative, or users become more isolated, over time. At the very least, those who study how actors radicalise on the Internet should not throw around the term “echo chamber” without further inspection. At best, it can be treated as a significant gap in our knowledge and form the basis of seminal research.
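To illustrate the kind of longitudinal sentiment analysis suggested above, the sketch below groups timestamped posts by month and computes a mean sentiment score per month, so a researcher could test whether sentiment trends downward over time. It is a minimal illustration only: the tiny lexicon and the example posts are hypothetical, invented here to keep the snippet self-contained; real studies would use a validated instrument (e.g. VADER or LIWC) and actual collected data.

```python
from datetime import date
from statistics import mean

# Hypothetical toy lexicon -- a real study would use a validated
# sentiment tool rather than a hand-picked word list.
POSITIVE = {"hope", "peace", "together", "friend"}
NEGATIVE = {"hate", "enemy", "destroy", "traitor"}

def sentiment(text):
    """Crude lexicon score: (#positive - #negative) / #tokens, in [-1, 1]."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def monthly_trend(posts):
    """Group (date, text) pairs by calendar month and return
    [((year, month), mean_sentiment), ...] in chronological order."""
    by_month = {}
    for d, text in posts:
        by_month.setdefault((d.year, d.month), []).append(sentiment(text))
    return [(month, mean(scores)) for month, scores in sorted(by_month.items())]

# Fabricated posts purely for illustration -- not real data.
posts = [
    (date(2017, 1, 5), "hope and peace together"),
    (date(2017, 1, 20), "friend and hope"),
    (date(2017, 6, 3), "the enemy will destroy us"),
    (date(2017, 6, 9), "hate the traitor"),
]
trend = monthly_trend(posts)
```

Under this toy setup, the January average is positive and the June average negative, the sort of downward drift the hypothesis would predict; the analytical work in a real study would lie in validating the sentiment measure and establishing who is actually “inside” the chamber.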
Joe Whittaker is a Research Fellow at ICCT. His research focuses on online radicalisation in the ‘Web 2.0’ era, evaluating whether the increased interactivity offered by social media has led to the Internet playing a greater role than previously thought. Joe is also affiliated with the Cyberterrorism Project in Swansea. You can follow him on Twitter: @CTProject_JW
This article was originally posted on The International Centre for Counter-Terrorism website. Republished here with permission.