Our Cyber Threats Research Centre colleagues couldn’t host an in-person TASM Conference this year, but instead organised a week of virtual events from 21 to 25 June 2021. This post is the first in a three-part series based on overviews of three of the virtual TASM panels. Read parts two and three. [Ed.]
By August Dichter
How do we begin to define, and then redefine, mis/dis and mal-information to meet what is arguably one of the greatest challenges facing the nation-state today: foreign influence operations? That is precisely what the Legal Innovation Lab roundtable, chaired by Swansea University’s Dr. Kristan Stoddart, set out to explore in the Mis/dis and mal-information: Who-why and its effects and effectiveness online panel, which took place on Tuesday 22 June 2021.
What followed was a conversation with an esteemed panel made up of Dr. Emma Briant of Bard College, Dr. David Gioe of West Point Military Academy, Professor Christian Kaunert of the University of South Wales, and Carl Miller of the Demos think tank. The conversation wove through time and place, in and out of conflict, highlighting the scope and challenge of the new technologies that have enabled manipulated information to undermine trusted democratic institutions, like the free press, and to influence people to make decisions contrary to their best interests.
The distinctions between the three forms of manipulated information are rather straightforward: misinformation involves the dissemination of information believed to be true but which is not; disinformation is the deliberate spreading of false information; and mal-information manipulates truthful information to incite harm.
There seemed to be agreement amongst the panelists that the current conversation around mis/dis and mal-information needs to be redefined to focus on holding to account the powerful institutions sharing manipulated information, rather than getting bogged down in the half-truths and outright lies spread by individuals. Legislating a distinction between ‘truth’ and ‘lie’ could lead down a dangerous path of moderating what people ultimately think. There was even a call for the ‘demilitarization of information’, reflecting a fear many panelists shared about the ethics of military use of influence operations at home and abroad.
Solutions to the manipulated information problem pointed towards independent oversight of online platforms. The roundtable also highlighted the evidence of success in Nordic and Scandinavian countries’ approaches to media literacy and critical thinking skills. But it was warned that education cannot be the sole saving grace: as one panelist pointed out, education as an institution has itself become politicised.
The looming question, to which nobody in the field has yet found an answer beyond ‘a delicate balance’, was how to uphold free speech principles while regulating manipulated information. How to measure the success of such regulation is a further challenge that has yet to be understood.
Although manipulated information is a nuanced subject to work through, the roundtable was filled with engaging cultural references, such as the Voight-Kampff test from Blade Runner. Speaking through a spotty Wi-Fi connection at the close, the roundtable’s chair, Dr. Stoddart, jokingly assured the audience that foreign hackers probably had nothing to do with the connectivity issues. But how could we really know?