Programming Human Beings to Build a Hate-Free Internet

This year’s wild presidential campaign was a test not only of two very different politicians but of humanity in general.

Because social media is now far more entrenched than it was four years ago, we were effectively subjected to a giant, real-world experiment. The research question: how exactly do we treat one another when sensitive matters of race, gender and religion are thrust into a massive, un-moderated global forum? The answer: terribly.

We now know that hordes of anonymous xenophobes, misogynists, religious extremists, homophobes and cultural vigilantes are capable of turning social media feeds into an online version of a seedy Deadwood saloon.

As this “debate” approached its November climax, prominent thinkers like feminist writer Jessica Valenti and Jonathan Weisman, the New York Times deputy Washington editor, quit Twitter in disgust – Ms. Valenti because of rape threats made against her five-year-old daughter, Mr. Weisman because of relentless anti-Semitic slurs. We are “losing the internet to the culture of hate,” a TIME magazine cover declared. Worst of all, the campaign season demonstrated how this cesspool of online public discourse is shaping the crucial decisions society makes in the offline world. We need only mention one Twitter account to emphasize the point: @RealDonaldTrump.

The outlook need not be so ugly. As we argue in our new book, The Social Organism, the fact that people are now free to share their unfiltered thoughts with a mass audience anywhere on the planet gives social media enormous potential to foster collaborative innovation, prosperity, peace and freedom. But to achieve that progressive outcome, we must contain the bigotry, hatred and mob justice that are haunting social media. The question is: how? How do we minimize certain types of speech without also quashing the free flow of ideas and creative collaboration that this radically democratized system of mass communication can unlock?

Image courtesy – Michael Casey via LinkedIn.com

The answer may lie in the same software that delivers the system’s relentless firehose of untethered thoughts. Some social networks are discovering that they can use their code to “program” people to do the right thing, that when their customers are prompted to reflect on antisocial behavior they tend to check themselves. Perhaps people aren’t inherently selfish and rude, after all. They just need a reminder, and perhaps a carrot or two.

When a flood of homophobic and racist slurs began to spread over its network, gaming giant Riot Games worried that it might scare off many of the 67 million users of its flagship League of Legends game. But it knew that forcing players to identify themselves rather than letting them use pseudonyms would breach important standards of privacy, and that suspending players would likely invite an even bigger backlash from its customer base. So it came up with a crowd-sourced, democratic solution that targeted the thing its users most cared about: winning the game.

Riot Games asked users to compile case files, both of offensive posts they’d encountered and of instances in which players adopted positive, conflict-resolving language. These were placed in a massive database on which everyone was asked to cast item-by-item votes, classifying each entry as either negative or positive; the votes were then subjected to sophisticated analyses to screen out sarcasm and collusion. All that information was fed into an algorithm that delivered rewards and penalties to individual players whenever it flagged certain behavior, along with a pop-up window explaining why the chosen words were or weren’t conducive to successful teamwork. The algorithm was also imbued with machine-learning capabilities so that it could improve itself as slang and behavioral norms evolved.
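Riot Games has not published the code behind this system, but the loop it describes (crowd votes label a chat excerpt, an algorithm converts the consensus into a reward or penalty plus an explanation) can be sketched in a few lines of Python. The names, vote thresholds and point values below are our own hypothetical placeholders, not the real system:

```python
from dataclasses import dataclass

# A minimal sketch of the crowd-labeling loop described above. Every name,
# threshold and point value here is a hypothetical illustration.

@dataclass
class CaseFile:
    """A chat excerpt submitted by players, plus the community's votes on it."""
    text: str
    votes_positive: int = 0
    votes_negative: int = 0

    def label(self, min_votes: int = 10, threshold: float = 0.7):
        """Return 'positive' or 'negative' once votes show a clear consensus."""
        total = self.votes_positive + self.votes_negative
        if total < min_votes:
            return None  # too few votes to act on
        if self.votes_positive / total >= threshold:
            return "positive"
        if self.votes_negative / total >= threshold:
            return "negative"
        return None  # the crowd is split; take no action


def apply_feedback(scores: dict, player: str, case: CaseFile) -> str:
    """Reward or penalize a player and return the explanatory pop-up text."""
    label = case.label()
    if label == "positive":
        scores[player] = scores.get(player, 0) + 1
        return "Teammates rated this message as constructive. Reward granted."
    if label == "negative":
        scores[player] = scores.get(player, 0) - 1
        return "Teammates flagged this language as hostile; hostile chat loses games."
    return "No clear consensus yet; no action taken."


scores = {}
case = CaseFile("gg, nice save mid, let's regroup", votes_positive=12, votes_negative=1)
print(apply_feedback(scores, "player42", case))  # reward, with explanation
```

The design point is that the labels come from the players themselves, not from a central moderator; the algorithm merely turns their consensus into timely, explained consequences.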

This artificially intelligent system of incentives and education quickly gave rise to a powerful, self-reinforcing feedback loop of good behavior – a remarkably successful case of Pavlovian conditioning. Reports of verbal abuse in League of Legends immediately dropped by 40%, according to lead game designer Jeffrey Lin. Incidents of homophobia, sexism and racism now occur in just 2% of all games. More than 90% of punished players never commit another offense.

Another success story comes from Nextdoor, which helps real-world neighbors connect with one another. Concerned by the prevalence of racial profiling in its neighborhood-watch service, the company changed the prompts that members see when they describe someone they believe is acting suspiciously. The new prompts encourage users to focus on what the person was doing and wearing rather than on their race. If they do choose race as a descriptor, the software requires them to also describe other, non-racial features before permitting them to post the item. The company says racial profiling has dropped by 75% since these changes were adopted.
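Nextdoor hasn’t released its implementation either, but the rule is simple enough to sketch. In the hypothetical Python below (the field names, word list and checks are all our assumptions), a report that mentions race is held back until behavior and clothing descriptions are filled in:

```python
# A minimal sketch of the posting rule described above; not Nextdoor's code.

RACE_TERMS = {"black", "white", "asian", "hispanic", "latino"}


def can_post(description: dict):
    """Allow a suspicious-activity report only if race is not the sole descriptor."""
    appearance = description.get("appearance", "").lower()
    mentions_race = any(term in appearance.split() for term in RACE_TERMS)
    has_behavior = bool(description.get("doing", "").strip())
    has_clothing = bool(description.get("wearing", "").strip())
    if mentions_race and not (has_behavior and has_clothing):
        return False, ("Please describe what the person was doing and wearing "
                       "before posting a description that mentions race.")
    return True, "OK to post."


ok, message = can_post({"appearance": "tall Black man"})
print(ok, message)  # False, prompts for behavior and clothing details
```

Note that nothing is censored outright: the user can still mention race, but only after supplying the behavioral details that actually help neighbors assess a situation.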

Importantly, the companies in both these examples strived to avoid the impression that Big Brother is in charge — Riot Games’ algorithm is founded on the players’ consensus while Nextdoor’s uses an opt-in function that leaves the final decision up to the user. There are important lessons in this for bigger platforms such as Facebook and Twitter, which until now have tended to rely on centrally controlled censorship as the go-to weapon against abusive behavior.

Companies that set themselves up as thought police face serious backlash. Facebook has discovered this in the reactions to its centrally run censorship model, which has proved incapable of grappling with the nuances of cultural expression. Its anti-nudity filter stirred outrage from news outlets when it banned Nick Ut’s iconic 1972 photo of a naked Vietnamese girl fleeing a napalm attack. Republicans complain that its newsfeed algorithm is biased toward Democrats. Yet The Wall Street Journal recently reported that Facebook employees were angry at Mark Zuckerberg for not removing Donald Trump’s anti-Muslim posts, arguing that he was getting a privileged exemption from rules that bar such speech.

Our argument against censorship is not limited to the classic liberal case for free speech. It’s also that social media’s collective culture – less than two decades old – needs to confront its demons if it is to evolve the kinds of self-policing social mores that offline society has developed over millennia. In The Social Organism, we approach this new, vastly amorphous system of human communication as an evolving, living being. And like the human body, it needs to develop immunities against pathogens. If we simply quarantine the disease of hate, social media’s bigots will quickly figure out how to get around the restrictions and come back stronger – like superbugs developing resistance to antibiotics.

A prime example came after Twitter banned right-wing provocateur Milo Yiannopoulos over the 24-hour barrage of racist, sexist Twitter rants that his 380,000 rabid followers dumped on comic actress Leslie Jones. The “alt right” leader immediately became a martyr, with a #FreeMilo hashtag that surged to the top of the trending list. Somehow, a thuggish band of misfits got to claim they were the victims.

Those who’ve been harmed by online abuse understandably see censorship as just retribution, and they may well disagree with us. But make no mistake: we, too, are calling for a massive public response against hate speech. It’s just that measures that hinder the evolution of a broadly more tolerant society get us nowhere.

Resisting the urge to censor does not mean being passive in the face of hate speech. We should proactively seek to influence cultural evolution, much as humans have always done with biological evolution. Individual users can help by sharing messages of positivity and humanity from the many individuals and sites that generate them. Social media platforms should follow the lead of Riot Games and Nextdoor, as well as organizations like Games for Change, which uses software-based gaming models to produce positive social outcomes. Platform algorithms need the same community-driven incentives for good behavior, an approach that triggers the social organism’s production of hate-repelling “antibodies”: empathy, pathos, acceptance, tolerance and, yes, love.

Like it or not, social media is here to stay. In giving rise to a giant, global pool of shared ideas, it could be the most important change in mass communication since Gutenberg’s Bible. But if we don’t manage it properly, it could set us back centuries.

This essay has been adapted from “The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life,” published on Tuesday, Nov. 15 by Hachette Books.


This post was written by Michael Casey, Senior Advisor for Blockchain Opportunities at the MIT Media Lab and also a consultant, public speaker and author, together with co-author Oliver Luckett, CEO of Revilopark. It was originally published on the authors’ blog at www.LinkedIn.com/pulse and is republished here with their permission.
