By Laura Jakli and Paul Gill
This blog post summarises the preliminary results of a VOX-Pol supported study that estimates the effects of social media echo chambers on political polarisation.
Social Media and Political Polarisation
Countless news articles and studies argue that social media exacerbates political polarisation and distorts the political news landscape. The general argument put forward to explain polarisation is that the media environment has fragmented immensely in recent decades. The Internet has allowed people to live in segregated echo chambers, consuming one-sided news tailored to their political priors. The rise of digital echo chambers corresponds to a surge of extremist parties across Europe and to the rapid growth of ISIS, among other terrorist organisations.
Although this argument is theoretically well supported, there is relatively weak empirical evidence on how social media polarises people. This is because it’s difficult to parse causes from effects in the study of online behavior. Do moderates choose to engage in cross-ideological information networks, or does accidental exposure to cross-ideological ideas actually moderate attitudes? Do extremists choose to remain concentrated in ideologically one-sided news networks, or do echo chambers further polarise their opinions? Are extremists more radicalised by echo chambers than moderates, or are the effects similar across moderate and extremist groups? We employ a unique research design that addresses some of these methodological issues, using a combination of social network analysis and machine learning.
Using Breaking News to Explore Echo Chamber Effects
Our study estimates the effect of news consumption in social media echo chambers on users’ political attitudes. We do this in a few key steps, using data collected from Twitter.
As an initial assumption, we rely on the concept that social media echo chambers become especially salient during major news events. Recent research supports this intuition. Our first task was to define a range of political events appropriate to the study of echo chamber effects. We focused on breaking news events salient in the United States and United Kingdom, so that we could limit our text analysis to an English language framework. Our goal was to select events that were so highly politicised that they reverberated across the range of political media, with each echo chamber taking a distinct spin on how to interpret the event and its political consequences.
For example, highly publicised mass shootings in the United States fit this requirement. Mass shootings tend to generate a large volume of extreme media coverage and “spins” in addition to more mainstream left-leaning and right-leaning coverage. We chose the Las Vegas shooting of October 2017, the deadliest mass shooting in modern U.S. history, which left 58 people dead and 851 injured. We explored Twitter conversations on topics such as the ‘gun debate’, ‘gun control’, and ‘gun rights’, sampling approximately 10,000 tweets on a combination of these search queries within a small window (less than a week) on either side of the event. While left-wing media focused on gun control and limiting access to semi-automatic weapons in the immediate aftermath of the shooting, right-wing media focused on ways to protect and even strengthen gun rights. Meanwhile, extreme left-wing and extreme right-wing echo chambers both took much darker, conspiratorial tones.
The same logic and sampling procedure were applied to three other events: the November 2015 Paris attack at the Bataclan, the June 2016 Brexit referendum, and Donald Trump’s victory in the November 2016 U.S. presidential election. In total, we sampled approximately 40,000 tweets across these four events.
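The sampling procedure described above can be sketched in a few lines. This is a minimal illustration, not the study’s actual pipeline: the tweet records, dates, and the exact six-day window are invented placeholders, and real data would come from the Twitter search API rather than an in-memory list.

```python
from datetime import datetime, timedelta

# Hypothetical tweet records; real data would come from the Twitter search API.
tweets = [
    {"text": "The gun debate reignites after Las Vegas", "created_at": "2017-10-02"},
    {"text": "Gun control now", "created_at": "2017-09-28"},
    {"text": "Unrelated sports news", "created_at": "2017-10-03"},
    {"text": "Gun rights rally planned", "created_at": "2017-10-20"},
]

QUERIES = ("gun debate", "gun control", "gun rights")
EVENT_DATE = datetime(2017, 10, 1)
WINDOW = timedelta(days=6)  # "less than a week" on either side of the event

def in_sample(tweet):
    """Keep tweets that match any query term and fall inside the event window."""
    ts = datetime.strptime(tweet["created_at"], "%Y-%m-%d")
    if abs(ts - EVENT_DATE) > WINDOW:
        return False
    return any(q in tweet["text"].lower() for q in QUERIES)

sample = [t for t in tweets if in_sample(t)]

# Split the sample into the pre-event and post-event groups used for comparison.
pre = [t for t in sample if datetime.strptime(t["created_at"], "%Y-%m-%d") < EVENT_DATE]
post = [t for t in sample if datetime.strptime(t["created_at"], "%Y-%m-%d") >= EVENT_DATE]
```

Holding the query terms constant on both sides of the event date is what makes the pre/post groups comparable later on.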
How can we use these data to distinguish causes from effects? The basic intuition is that our sampled tweets compare like-minded groups’ political attitudes on the same topic (using the same search queries) just days apart. That is, tweets from within the far left echo chamber are only compared to other far left tweets, and so on. Because we hold the search queries constant pre- and post-event, and because we compare the tweets of the same echo chamber just before and just after the event, we can attribute any differences we find in political attitudes to individuals’ news consumption within social media echo chambers during the event, rather than to the longer-term process of self-selection into those echo chambers.
The goal of our project is to test a new method to measure the effect of social media networks on public opinion and attitude change. Our framework allows us to examine heavily concentrated, ideologically one-sided news exposure on social media as a cause, rather than as an effect.
Scoring Users and their Tweets: Methodology and Research Design
Our research was carried out in two major stages. The first stage consisted of politically scoring all sampled Twitter users based on the media elites they follow on Twitter. To do so, we used a Bayesian spatial model developed by Barberá (2015), implemented in an R package called tweetscores, which uses the Twitter REST API to estimate the ideological positions of Twitter users across six Western countries. The tweetscores package includes pre-estimated ideology scores for a wide range of political accounts and media outlets, and uses these to estimate users’ ideologies from their Twitter networks. Using the political scores produced by Barberá’s network analysis method, we categorise all of our sampled Twitter users into four political echo chambers: far and moderate right, and far and moderate left.
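The final step of this stage, binning continuous ideology scores into four chambers, can be sketched as follows. The cutoff value is an illustrative assumption, not the study’s actual threshold; Barberá-style ideal points place left-leaning users at negative values and right-leaning users at positive ones.

```python
def echo_chamber(theta, cut=1.0):
    """Map a Barberá-style ideal point (negative = left, positive = right)
    to one of four echo chambers.

    The cutoff separating "moderate" from "far" is a placeholder chosen for
    illustration; users sitting exactly at zero fall on the right by convention.
    """
    if theta <= -cut:
        return "far_left"
    if theta < 0:
        return "moderate_left"
    if theta < cut:
        return "moderate_right"
    return "far_right"
```

Once every sampled user carries one of these four labels, all later comparisons are made strictly within a label, never across labels.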
The second stage of our project involved using an open-source machine learning tool called fastText to evaluate the content of these users’ tweets. Facebook’s AI Research lab developed this library, which combines features of natural language processing with deep learning algorithms. This machine learning approach is a significant improvement over other types of text analysis: conventional automated text analysis methods suffer from substantial classification biases, particularly when the featured text is very brief, such as a tweet capped at 140 characters or other short social media material.
Since machine learning algorithms have to generalise from labeled training data to the unseen observations comprising the test data, it is best to train on data from the same domain (in our case, Twitter). As such, we sampled a large corpus of tweets from media elites across the political spectrum to construct our training data; each elite’s tweet corpus was labeled with one of four categories of political sentiment: far right, moderate right, moderate left, and far left. These tweets were used to “train” our machine learning classifier, which then processed our test data (the 10,000 randomly sampled tweets per event) and used the training data to label the test set.
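fastText’s supervised mode reads plain-text files with one example per line, each label prefixed by `__label__`. A sketch of preparing training data in that format is below; the tweets are invented placeholders, not actual elite tweets from the study.

```python
def to_fasttext_line(label, text):
    """Format one labeled tweet in fastText's supervised-training format."""
    clean = " ".join(text.split())  # collapse newlines and runs of whitespace
    return f"__label__{label} {clean}"

# Placeholder labeled corpus; the real training data is sampled elite tweets.
labeled_tweets = [
    ("far_left", "placeholder far-left elite tweet"),
    ("moderate_left", "placeholder moderate-left elite tweet"),
    ("moderate_right", "placeholder moderate-right elite tweet"),
    ("far_right", "placeholder far-right elite tweet"),
]
train_lines = [to_fasttext_line(lab, txt) for lab, txt in labeled_tweets]

# Writing train_lines to a file and calling, e.g.,
#   model = fasttext.train_supervised(input="train.txt")
# would then fit the classifier (requires the `fasttext` package).
```

The trained model’s `predict` method then assigns one of the four `__label__` categories to each unseen tweet in the test set.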
Using machine learning, we were thus able to estimate the political composition of each echo chamber’s tweets. For example, 30% of far right ideologues’ tweets could be classified as far right in terms of text content, whereas only 15% of moderate left ideologues’ tweets could be classified as far right in content. For each echo chamber, the algorithm produced an estimate of the proportion of tweets classified as far right, moderate right, moderate left, and far left. These estimates were in turn used to make our inference about the effects of political echo chambers on public opinion. That is, the pre-Vegas far right users comprise one category; their estimated tweet-category proportions are compared to those of the post-Vegas far right users. The same logic applies to the other three comparisons: pre-Vegas far left users are compared directly with post-Vegas far left users, pre-Vegas moderate left users with post-Vegas moderate left users, and pre-Vegas moderate right users with post-Vegas moderate right users.
Ultimately, we interpreted the differences in these matched pairings (i.e., the difference in proportions) as the overall effect of social media echo chambers on political attitudes.
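The pre/post comparison within one echo chamber can be sketched as a difference in label proportions. The classifier outputs below are invented numbers for illustration only, not the study’s estimates.

```python
from collections import Counter

CATEGORIES = ("far_left", "moderate_left", "moderate_right", "far_right")

def label_proportions(labels):
    """Share of an echo chamber's tweets assigned to each content category."""
    counts = Counter(labels)
    total = len(labels)
    return {c: counts[c] / total for c in CATEGORIES}

# Invented classifier output for one echo chamber (far right), pre- and post-event.
pre_labels = (["far_right"] * 30 + ["moderate_right"] * 50
              + ["moderate_left"] * 15 + ["far_left"] * 5)
post_labels = (["far_right"] * 40 + ["moderate_right"] * 45
               + ["moderate_left"] * 10 + ["far_left"] * 5)

pre_p, post_p = label_proportions(pre_labels), label_proportions(post_labels)

# The echo chamber effect is read off the pre/post difference in proportions.
shift = {c: round(post_p[c] - pre_p[c], 3) for c in CATEGORIES}
```

A positive shift in the chamber’s own category (here, far right rising from 30% to 40%) is the kind of within-chamber movement the design is built to detect.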
Results and Conclusion
We set out with the goal of better understanding how echo chambers affect their members. We were interested to see how politically extreme and politically moderate echo chambers functioned, and how (and if) they radicalised people. To accomplish this, we conceptualised explosive political news stories as quasi-treatments.
In three of the four events we examined, we find statistically significant convergence (p < .05) toward the modal opinion of each political echo chamber following the event. We use this evidence to argue that people become more ideologically sorted, rather than more ideologically polarised, vis-à-vis social media echo chambers in the immediate aftermath of major political events. Social media consumers formulate their political opinions within their media environments and converge on the dominant position of their echo chambers, becoming more ideologically homogeneous.
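A standard way to test whether such a pre/post convergence clears the p < .05 bar is a two-proportion z-test on the share of a chamber’s tweets falling in its modal category. The sketch below uses only the Python standard library; the sample sizes and proportions are illustrative assumptions, not the study’s actual estimates, and the source does not state which specific test was used.

```python
from math import erf, sqrt

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions, e.g. the
    share of a chamber's tweets in its modal category pre- vs post-event."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Illustrative numbers: the modal category's share rises from 50% to 58%
# across 1,000 tweets sampled on each side of the event.
z, p = two_proportion_z_test(0.50, 1000, 0.58, 1000)
```

With these placeholder inputs the shift is comfortably significant, which is the shape of result the convergence finding describes.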
These preliminary results add novel empirical evidence to the current literature on social media and political polarisation. Our study suggests that social media echo chambers facilitate ideological sorting rather than radicalisation. This finding also challenges conventional wisdom in political science, which tends to conceptualise ideological sorting as a multi-decade phenomenon. Our data, which span multiple event cases and time periods, suggest that sorting can also happen rapidly, as digital echo chambers respond to sudden jolts to the landscape of public discourse during major news events.
Laura Jakli is a Predoctoral Fellow at the Center on Democracy, Development, and the Rule of Law at Stanford University, and a Ph.D. Candidate in the Department of Political Science at the University of California, Berkeley. You can follow her on Twitter: @laurajakli
Dr. Paul Gill is a Senior Lecturer at University College London’s Security and Crime Science Department. You can follow him on Twitter: @paulgill_ucl