Breaking the ISIS Brand Counter-Narratives – Part II: Ethical Considerations in Fighting ISIS Online

This is the second in a two-part series; part one is HERE.

By Anne Speckhard & Ardian Shajkovci

Last week’s blog post described ICSVE’s efforts to direct counter narratives into the ISIS-dominated Internet space. This post discusses the ethics of attempting such interventions. As in all our work, the guiding principle of our research ethics for Internet interventions with ISIS endorsers, supporters, and distributors of ISIS materials is to do no harm.

In our recent activity targeting English- and Albanian-language Facebook accounts, we approached accounts showing serious support for terrorism and serious movement toward it. We treated these accounts as representing actual individuals, although some could simply have been duplicate accounts run by one prolific recruiter, or bots.

Second, we judged these individuals to be exhibiting behaviors dangerous both to themselves and to others, by virtue of endorsing and distributing ISIS materials. While we could be wrong in acting as though these account holders were actual individuals, we felt it was correct to err on the side of caution. As a result, we judged that any intervention that might diminish an actual individual’s support for terrorism was in their best interest as well as society’s. Likewise, we determined that, compared to doing nothing, such an approach was highly unlikely to harm them, provided we were careful in how we reported our results.

Some could argue that creating anonymized Facebook accounts, as in this particular research, to engage with the target group, feeding them video material designed to look like the ISIS content they usually consume, and then using their reactions (e.g., angry, etc.) as an impact evaluation carries negative ethical implications. Moreover, misleading the target group into connecting with us as though we were interested in befriending ISIS enthusiasts, without disclosing our true intent, which in our case was an attempt to dissuade them from potentially engaging in violence by tagging them with counter narratives that appeared to be pro-ISIS videos, could also be argued to carry negative ethical implications.

We, however, judged that intervening in the case of someone moving toward terrorism, even by misleading them into watching our counter narrative videos, was acting in their and society’s best interests. Every person prevented from joining a terrorist group, or from carrying out a terrorist attack, by encountering the words of an ISIS insider denouncing the group may represent a life saved, if not hundreds of lives saved. The ethics were clearly in favor of carefully carrying out this study.

This study had two goals: first, to learn whether we could reach ISIS supporters, endorsers, and distributors on Facebook; and second, to learn whether we could meaningfully engage with them and, ideally, diminish their support for groups like ISIS. In this study, we demonstrated that we can reach our target audience.

As for meaningful engagement, the comments made by the sample participants, although disturbing, demonstrate that they engaged with our materials and thought about them to some extent. This was a short-term study fraught with difficulties in observing effects, so we cannot say that we necessarily dissuaded participants from endorsing or supporting ISIS. However, we do know we reached them, including their followers, and managed to engage many with our counter narrative materials. Given the alternative—that is, doing nothing and allowing ISIS endorsers and distributors to carry on with no intervention other than law enforcement—we judge our activity a success.

We have focus tested, and continue to focus test, our videos with vulnerable populations worldwide. In such offline, face-to-face interactions, our research practice is to first build rapport and a relationship of trust with our research participants (e.g., members of vulnerable populations) before showing them the videos. This also holds true for this study and for our ongoing online focus-testing endeavors. However, given the importance of evidence-based research on CVE, we also felt that using anonymized accounts in some cases was necessary, as highly radicalized individuals are very unlikely to accept friend requests from openly identified CVE practitioners, and may themselves be dangerous persons.

In terms of research ethics, while the videos used during our short intervention could be directly linked to the work of our Center, given that the ICSVE logo is displayed on each of our videos, the decision was made to protect the true identities of those involved in this particular activity. We judged this necessary and important, especially when dealing with highly radicalized individuals who publicly endorse ISIS, as in many cases these account holders are dangerous and may even be currently fighting abroad with ISIS.

Our logo on the materials, however, was kept intact to add authenticity to our intervention attempts: it helps preclude them from being labeled government propaganda and instead reflects our years of research into the psychological motivations for joining terrorist groups. We firmly believed that the benefits of being open in that regard about who we are and what we do outweighed the risks of doing so.

Leveraging Facebook is just one of many ways of generating evidence from social media platforms to test the utility of our video products with vulnerable populations. A more controversial approach, which our study resembled in some respects, is to create anonymized profiles to befriend the target audience. We carefully weighed the risks and benefits of undertaking the study and considered a number of ethical challenges facing our research. In this regard, we opted not to interact directly with our subjects (Elovici et al., 2014), other than to share and tag the counter narratives to their accounts.

Likewise, the research was conducted with careful consideration of ethical concerns related to data collection, protection of research participants’ identities, and data storage. In this regard, our research design was fully rooted in ethical guidelines for academic research, including guidelines on Internet research with vulnerable and extremist individuals online.

To ensure the confidentiality of our research participants, we did not include categories of personal information (e.g., date of birth, alleged commission of a crime by a participant, proceedings for offenses committed, court sentences related to our participants, etc.) that would be considered sensitive personal information under the law (e.g., EU law).

We also excluded personally identifiable information on our research participants’ social networks, as it would make it easier to identify them, even those who used pseudonyms. We focused only on the general picture, the ecology of our research participants’ collective behavior, rather than on individuals. Our goal remained to identify information relevant to our central research questions and to assess the overall impact of our CVE materials on ISIS-endorsing and ISIS-supporting individuals identified on Facebook.

Despite the fact that our intervention took place in an online environment, the research involved a considerable amount of risk, although most of that risk was to the researchers rather than to the subjects. To minimize it, we established pseudonyms and, in some cases, anonymized Facebook accounts to gain access to our target group. That said, because our videos could be linked to our Center, and because the names of the researchers may become known to some of the subjects studied following the publication of the findings, there remains a possibility of being exposed to physical danger in real life.

We also considered the legal ramifications of engaging in online research and the potential to violate counterterrorism laws of the countries where our research participants reside. To address this, we developed research protocols that ensured a significant degree of protection. For example, we only uploaded our videos to the respective Facebook accounts and avoided dialogue with our participants, which could have led to legal issues and implications. We made sure that our presence in these Facebook account spaces did not constitute entrapment.

Moreover, given that law enforcement might have been active on Facebook seeking actionable intelligence, we collected only data deemed critical to our research, in an effort to avoid interfering with any ongoing law enforcement activities and investigations online. Equally important, our research was conducted over a relatively short period of time, and we avoided extended contact with our participants in order to avoid potential legal or additional ethical implications (UNODC, 2012; Stern, 2003; Davis et al., 2010).

Some could argue that if any reaction, including a negative one, can be counted as a successful intervention, then there can also be counterproductive interventions, which might actually foster radicalization by generating negative emotions or further fueling the extremist ideology (e.g., the sense of being infiltrated by spies or “infidels” trying to trick them).

From a psychological standpoint, it is important to acknowledge the potential impact when ideologically committed individuals view videos such as ours (e.g., cognitive dissonance). It is equally important to acknowledge the potential impact on those who are unsure of what to believe and are searching for ISIS propaganda material, and to consider whether our counter-narratives could affect their thinking or actions in positive or negative ways. In this regard, we are cognizant of the fact that while our counter-narrative videos have great potential to make a positive difference in the fight against ISIS, there is a small potential for harm, such as provoking even greater defiance toward existing governments.

The potential for such harm is minimal, however. For instance, our counter-narratives extend [initial] sympathy toward ISIS cadres as human beings rather than as the demonized “monsters” they are often depicted as, given that our defectors were at some point inside ISIS, and they detail the many reasons for wanting to join, without validating the violent means the terrorist group propagates. We highlight the human costs of engaging in terrorism, both for the recruit and for those harmed by the group, which stands in opposition to the slick and deceitful ISIS propaganda videos.

In addition, the direct targets we seek to delegitimize are not potential recruits, but rather the terrorist group and its leadership. In other words, our overall objective is to target the terrorist group and discredit it in the eyes of potential recruits, in order to spare those recruits the costs of engaging in terrorism, including the loss of their own lives and the lives of others.

As the authors of the study, we are cognizant of the fact that certain manipulations online, such as feeding video materials to vulnerable populations consuming and promoting ISIS material, may potentially lead to negative side effects, such as changes in mood or self-esteem. In addition, because this was online research, participants could have withdrawn from Facebook without notice, complicating any prospect for intervention.

That said, given that our video materials “are not expected to create more extreme reactions than those normally encountered in the participants’ lives” (British Psychological Society, 2014, p. 7), given they are already consuming ISIS materials, we have minimized the prospect of causing any psychological harm.

Because this was a small-scale exploratory intervention, we did not coordinate our activity with law enforcement and other agencies, although our videos are being used by such agencies worldwide for intervention purposes. Many intelligence and law enforcement agencies, including the FBI, London Metropolitan Police, Dutch National Police, Belgian intelligence, Kosovo National Police, and many others, are well aware of our Breaking the ISIS Brand Counter-Narrative videos.

We regularly train intelligence and law enforcement personnel worldwide, and our Center’s publications are disseminated to such entities on a regular basis. Likewise, many of these entities are aware of our efforts and supportive of them, so we were not concerned about interfering with ongoing law enforcement activities or about law enforcement being confused by our profiles. Regarding the latter, the profiles might be mistaken for ISIS-endorsing accounts if only the names of the videos we shared in these forums are taken into account.

Because this was a small-scale intervention with a relatively short presence and passive interaction online, our research had little potential to interfere with any ongoing law enforcement or intelligence activities. Further, because it may have actually dissuaded some of the targeted users from progressing further toward violence, it potentially averted additional costly law enforcement activities. While our video materials may initially appear to be ISIS products, a simple click on the videos makes clear that they are in fact counter-narratives.

Although our target audience held extremist ideas and shared ISIS-related violent content, this does not mean that they will all end up engaging in violence or terrorism. To better map the interests and online communications of our target group, and because not all relevant information can be found through open-source research, we will continue to enlist the help of law enforcement and intelligence officials in conducting online social network and lifestyle analyses to better identify our target audience.

Although difficult, future and more prolonged research might include more open participatory research with terrorists and extremists, both online and offline, to test the utility of the videos, similar to what our Center has done with high-profile ISIS cadres in Iraq. It might include reaching out in advance to law enforcement agencies to advise them of our specific plans, to request a set of protocols to potentially follow when we identify accounts distributing ISIS content, and to coordinate efforts where necessary. That said, we have been in contact with police regarding accounts we find that are distributing ISIS materials.

In addition to the aforementioned measures introduced to facilitate data collection and ensure an ethical research process without placing us, as researchers, or our research participants at risk, we also ensured that our data were stored properly and that no one outside the research team had access to them. We will also ensure the long-term secure storage of the collected data (e.g., password-protected files, data stored on secured servers, etc.).

In conclusion, careful and measured approaches to countering ISIS’s and other terrorist groups’ online and face-to-face recruitment must be put in place. Undoubtedly, there will be ethical hurdles to clear in doing so. As the researchers of this study, our view is that if lives can be saved and terrorist crimes prevented by identifying profiles that endorse, support, and distribute ISIS content in the digital space, and if tagging them with effectively crafted counter narratives is possible, then the only ethical thing to do is to do so.

ISIS has carried out its nefarious activities in the digital space for too long without being opposed. Social media companies are now responding. Governments, too, are fighting back. NGOs and other civil society organizations also need to enter the space with initiatives such as the Breaking the ISIS Brand Counter Narrative Project.

Read the full report here.


Anne Speckhard, Ph.D., is an adjunct associate professor of psychiatry at Georgetown University School of Medicine and Director of the International Center for the Study of Violent Extremism (ICSVE). Follow @AnneSpeckhard on Twitter.

Ardian Shajkovci, Ph.D., is the Director of Research and a Senior Research Fellow at the International Center for the Study of Violent Extremism (ICSVE).
