When news of the Facebook contagion study hit, I was presenting a session on research ethics to the VOX-Pol summer school at Dublin City University. I had intended to discuss the Belfast Project as an example of social, behavioural, and educational research gone badly wrong—indeed, this project had international intrigue, raised serious issues related to participant privacy and consent, and pushed research regulations to their limits. But, suddenly, with news of Facebook’s newsfeed manipulations, there was a hot new case in internet research to consider. The first responders were quick to call attention to the “creepiness” of the study (the name of the article itself might be responsible for the creepiness factor: “Experimental evidence of massive-scale emotional contagion through social networks”); those responses were quickly followed by questions about user/participant consent and the ethics of deception research. Initial reactions seemed to center around several points:
- This research was definitely “wrong”—individuals should have been told about the research. Deception research is okay, but there are “rules.”
- Facebook isn’t a regulated entity and doesn’t have to follow “the rules.”
- Facebook should exercise some ethical considerations in its research—some called for it to “follow the rules,” even if they aren’t what we are used to.
- Facebook does have rules; they are called “terms of service.” Did Facebook violate something else, like user trust?
- Facebook does research pervasively, ubiquitously, and continuously. “Everyone” knows that.
- Why is this case different? Because the line into an academic, peer-reviewed journal was crossed with—gasp—industry research?
- Why didn’t an earlier version of the study, in 2012, raise such fuss?
It has been a few months since the initial fallout from the study, and we have seen interesting afterthoughts and nuanced thinking on the study from the academic press, popular media, tech journals, and more. For example, there was Mary Gray’s panel titled “When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities,” and the Proceedings of the National Academy of Sciences published “Learning as We Go: Lessons from the Publication of Facebook’s Social-Computing Research.” There was also a joint reaction from 27 ethicists in Nature, which argued against the backlash in the name of rigorous science. And, to empirically assess whether a “similar” population of users—namely, Amazon Turkers—would respond to research ethics violations in ways similar to the subjects of the contagion study, Microsoft’s Ethical Research Project conducted its own study.
I’ve been studying internet research for a long time—at least a long time in internet years, which are quite similar to dog years. I remember the AOL incident and the “Harvard Privacy Meltdown.” Those, and now the contagion study, are internet research ethics lore. They are perfect case studies.
Recently, I had the pleasure of presenting on the contagion study at the Society of Clinical Research Associates’ Social Media Conference. There were some in the room who were unaware of the controversy. Others were of the mind that we should expect this sort of thing. And, some were aghast (my anecdotal results align, more or less, with what Microsoft’s Ethical Research Project systematically found!). And, recently, I talked with yet another reporter, but this one asked a very pointed question: “Why are people so upset?”
One reason is that we have finally come face to face(book) with the reality of algorithmic manipulation—we are now users and research participants, always and simultaneously. If we stopped to think about every instance of such manipulation on any social media platform, our experiences as users would be dramatically different. But it is happening, and our interactions on the internet are the subject of constant experimentation. As OKCupid reminded us: “…guess what, everybody: if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” Welcome to the era of big data research, composite identities, and the “new” frame of research.
The Facebook study also sheds light on a clash between the “human subject,” as defined in existing research regulations, and the data persona that we develop through our interaction with social media. Traditional research regulations are being challenged by new forms of participant recruitment, engagement, and analyses that defy a strict alignment of regulations to praxis. The current era of internet research will only reveal these clashes more and more, and in many ways, the contagion study is a perfect example of this “new normal” in internet research ethics. I mean a few things by this.
First, we’ve been seeing a morphing of the research space for many years in the face of social media. It is becoming more and more difficult to isolate “research” from everyday activities and experiences, and it is increasingly challenging to distinguish the researcher from other stakeholders in the research enterprise. Similarly, distinguishing between users, or consumers/customers, and research subjects is becoming more complicated. The research spaces of today’s social media are ubiquitous and pervasive.
Second, for years, the computer sciences and, more specifically, computer security research, have been engaged in various forms of research like the contagion study and have been publishing their results widely. However, these researchers have stayed, in general, outside the fray of human subjects research. The dominance of Facebook is obviously a variable in this case, but, as others have stated, this is certainly neither the first nor the last time this kind of research will be conducted.
Third, this case calls into clear view the importance of considering terms of service (and recognizing their inherent limitations vis-à-vis the regulations and the application of the regulations to third-party controlled research) in relation to “consent.” We must acknowledge how differently conceived and understood “consent” is under the framework of human subjects research versus other legal settings. Consider, for instance, that while there are alternatives to research participation, the terms of service acknowledgement is a legal requirement with only one alternative: Don’t use the service. As users agree to the terms of service of various sites, new challenges related to internet research arise. For example, a site may be used as a research venue by a researcher, but the consent conditions are in direct conflict with the site’s terms of service (e.g., research participants are told that their data will be discarded after some time, when the terms of service state otherwise). As our research spaces merge, it is critical to understand this distinction between consent and terms of service and conceptualize a flexible approach that fulfills the letter and spirit of ethics and law.
Fourth, the new normal of internet research is also one of identifiability. From the technical infrastructure to the norms of social media (e.g., the norm of sharing), individuals are intentionally and unintentionally contributing to the sharing and use of data across users, platforms, venues, and domains. Within this framework, we are seeing an increase in non-consensual access to data. Data streams are complex, intermingled, and always in flux, and it is, in IRB lingo, becoming impracticable to seek and/or give consent in this environment (think big data, of course). From these streams, and from these diverse data, we can extrapolate theories, patterns, and correlations to individuals and communities. We, individually and collectively, are identifiable by our data streams, hence the targeted ads, newsfeed content, recommendations, and so on, that determine our online experiences. Our online experiences could be very different, and to this end, researchers are studying the ethics of algorithms very closely now. But, the days of anonymous FTP (file transfer protocol) do seem a thing of the past. Anonymous data is simply not valuable in the new normal of internet research.
The Facebook study also demonstrates the importance of reconsidering group harms, secondary subjects, and research bystanders—the internet of today is not about the individual as much as it is about patterns, groups, connections, relationships, and systems of actors and networks. Within this complex nexus, the notion of consent is changing, as is the notion of “minimal risk.” Our everyday realities now include the risks of data manipulation, data sharing, aggregation, and others. Our consent is more often implicit, and that long-standing notion of practicability is ever more important.
In this nexus, we are finding a space for communication between and among researchers of all walks. But, once again, I am brought back to a most fundamental question in research: “What does the research subject get out of it?”
Where do we, the collective research community, go from here? What do the feds think about this? Facebook issued new research guidelines, but are they enough? Would a joint statement from the Federal Trade Commission and the Office for Human Research Protections be useful? What does this case, and the collision of customers and subjects, mean to them? As we academics scurry for special issues and conference panels on the implications of the contagion study, does anyone else, including industry researchers and the subjects of their research, want to weigh in?
Or, will this be simply cast to the canons of internet research ethics lore? I know that I, for one, am eager to continue the conversation that this study started.
This post originally appeared on Primer’s Ampersand on 22 October 2014. It is republished here with the author’s permission.