The Challenge of Drawing a Line between Objectionable Material and Freedom of Expression Online

By Philippa Smith

In debates about free speech that needs protecting and hate speech that needs regulating, the idiom of “drawing the line” is constantly invoked by politicians, journalists and academics.

It has surfaced again as New Zealanders struggle to comprehend the horror of the Christchurch terror attack and the issues surrounding the online publication of the alleged perpetrator’s manifesto, now declared objectionable by New Zealand’s chief censor.

Legislation in the digital age

When it comes to drawing a line between hate speech and matters of public interest, the laws that apply are country-specific. Some legislation, such as Ireland’s Prohibition of Incitement to Hatred Act 1989, is very specific. In other countries, including Denmark, Germany and New Zealand, hate speech is covered more broadly under criminal or penal codes or human rights laws.

Following the Christchurch terror attack, there have been calls for New Zealand to introduce specific hate speech laws. But in a digital age, with its blurring of national boundaries in virtual spaces, it is proving a challenge to come to any consensus on what is, and what is not, acceptable online behaviour.

The academic world reflects the diversity of negative online behaviours in the many descriptors emerging in scholarly work. Research has focused on topics such as dangerous speech, excitable speech, offensive speech, extremist discourse, cyber bullying, trolling, doxing and flaming.

These behaviours offer potential outlets for online hate and abuse, amplified by the internet’s capacity for wide dissemination. More subtle ways of distributing hateful language have also surfaced, through deliberate disinformation, fake news and information laundering. It is no surprise that the so-called “line” between hate speech and free speech is so hard to locate.

A thin, grey line

The task force on internet hate, under the auspices of the Interparliamentary Coalition for Combating Antisemitism, notes that when it comes to hate speech “the line is very grey and difficult to interpret”. The European Commissioner, Věra Jourová, stated last year that the line between prohibiting hate speech and censorship on the internet was “very thin”.

Tech companies have also taken up the idiom, as YouTube’s harassment and cyber bullying policy at one stage appealed to users to “[r]espect people’s opinions online but know when it crosses the line”.

These examples highlight a key issue in the online hate speech/free speech debate: knowing when this figurative line has been crossed has become increasingly complex to interpret and even more challenging to manage.

Who gets to push delete

Historically, tech companies have been reluctant to take responsibility for user-generated content on their platforms, arguing instead in favour of freedom of expression.

Some governments have clearly had enough of online toxicity and have called on companies such as Facebook, Google and Twitter to intervene and take down offensive material. Germany’s Network Enforcement Act (NetzDG), implemented in 2018, is the most rigorous legislation so far: companies risk fines of up to €50 million if they fail to remove offending material.

The UK government has been preparing a white paper to support its intention to legislate to improve online safety, and in France, President Emmanuel Macron last year called for the larger internet platforms to be held liable for publishing “hate content”. New Zealand is the most recent nation to lambast Facebook for enabling the livestreaming of the Christchurch massacre – though it has little power to enforce any action.

Free speech advocates are wary about any moves that dictate who controls the delete button. Whether it relies on government legislation or the humans and algorithms employed by tech companies, the decision-making process as to what content qualifies as “extremist, hateful and illegal” is problematic and may impinge on free speech.

Other concerns about internet interference persist. These include the potential sanitising of the world once all the “bad stuff” has been taken down, and the “chilling effect”, where people are reluctant to voice their views online for fear that speech-restricting laws may result in their prosecution.

Countering hate

Deciding where to draw the line between internet hate and free speech will be an ongoing exercise because online hate can take so many forms and be interpreted in different ways. Perhaps it is time to broaden our thinking when it comes to responding to hate speech.

It may be that the “drawing the line” idiom itself needs to be reconsidered. After all, it dates back to the late 1700s and relates either to the literal drawing of court boundaries in early tennis or to the lines separating opposing parties in parliament, drawn to prevent sword fights.

We need to pay greater attention to educating the public on how to counter negative online behaviour. Empowering people to take up effective counter-speech initiatives would be a first step in the battle against hate.

A number of academics, myself included, are developing taxonomies of counter-speech initiatives to determine which might be most effective. Susan Benesch, director of the Dangerous Speech Project, says that critiquing someone who posts inflammatory material in a “civil and productive” way can work and, in some cases, even lead the offender to apologise. Presenting counter-arguments through discussion groups or websites can certainly serve to delegitimise hateful ideologies, and when this occurs in a public online space it also exposes those counter-arguments to a wider audience.

Meanwhile, organisations such as the Anti-Defamation League in the US, the Online Hate Prevention Institute in Australia and the No Hate Speech Movement in Europe have developed tools and educational resources, including training sessions for bloggers, journalists and activists. These aim to teach people of all ages the critical skills needed to counteract online hate.


Philippa Smith is a Senior Lecturer in English and New Media at Auckland University of Technology, Auckland, New Zealand.

This article was originally published on The Conversation website. Republished here with permission.

Disclosure statement: Philippa Smith receives funding from an InternetNZ research grant for her study of free speech and counterspeech on the internet.
