Online Actions and Offline Consequences: Moderation and Risk Mitigation, not Censorship
By Abby Reynolds
7th June 2021
“Is the freedom in free speech the same as the freedom to be protected from violence, or are these two different valences of freedom?”
— Judith Butler
When I describe to people what our work at CaliberAI involves, I am inevitably met with assertions of people’s rights to freedom of speech, to say what they like, when they like. When I explain that we build AI tools to monitor for the presence of hate speech online, these protestations often descend into long rants about how ‘PC culture’ has gone too far, how people have become too sensitive and how everything and anything is offensive these days. It can be bewildering.
When I consider what it means to me for speech to be ‘free’, the ability to insult strangers on the internet with hate-filled rhetoric and racist slurs is not the first thing that comes to mind. Freedom of speech is a human right. Every individual should have the right to express their ideas without fear of retaliation. But, as Judith Butler so eloquently described, freedom of speech does not take precedence over another human’s inherent right to safety and respect. The tension between the ability to express one’s opinion and the need to protect marginalised communities from hateful rhetoric has become something of a liminal space in this age of social media.
At CaliberAI, our focus is on risk mitigation in line with governments and legislative bodies around the world. The Digital Services Act, the Online Harms Bill and the Online Safety and Media Regulation Bill (in the EU, UK and Ireland respectively) aim to define the notion of “harm” online, institute a “duty of care” and work towards the reduction of harm proliferated through social media sites. When we speak of “harm” online, we are referring largely to what is more broadly known as hate speech.
While there is currently no internationally accepted definition of what constitutes hate speech, it is defined by the United Nations as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor”. It is rooted in prejudice and feeds division. Hate speech laws are intended not to stifle freedom of expression, but to preserve public order and promote human dignity.
From President Trump’s use of ‘dog-whistle’ language to incite the attempted insurrection of January 6th, to the racist abuse suffered by Premier League footballers such as Marcus Rashford, hate speech has been a hotly debated subject in recent times. The harm that is freely perpetrated and proliferated online may not always have immediately apparent effects, but physical violence can often be traced directly back to its origins in social media.
Take the example of the incel community. For those who remain blissfully unaware, incel is a shortened version of the phrase “involuntarily celibate”, inadvertently coined by a Canadian university student known only by her first name, Alana. Originally intended as a humorous play on her lack of sexual activity in her college years, the term has been hijacked by a mostly online subculture of people who define themselves by their perceived inability to find a romantic or sexual partner despite their desire for one. According to Wikipedia, conversations in incel forums are marked by their “resentment and hatred, misogyny, misanthropy, self-pity and self-loathing, racism, a sense of entitlement to sex and the endorsement of violence against sexually active people”. The Southern Poverty Law Center, an American non-profit organisation specialising in civil rights, has described the subculture as inherent to the “online male supremacist ecosystem” and has deemed it worthy of hate group status. The discussions by these men have often crossed the line from hypothetical violence to real-world actions. In 2014, Elliot Rodger murdered six people and injured fourteen. He had spoken extensively online about his desire and plans to commit such an atrocity. His intent to harm others and extensive disparaging of ethnic groups had not been flagged. Since then, a further six mass murders, resulting in 39 deaths, have been committed by men who either self-identify as incels inspired by Rodger or who had mentioned his name and broad ideology in their internet postings.
But harm does not have to be physical to have an effect. Leslie Jones, Daisy Ridley and Lizzo have all taken leaves of absence from their social media accounts as a result of the damage done to their mental health. In many of these incidents, Twitter was rebuked for its lax approach to harmful content. While the site does claim to ban “hateful content” and “harassment”, it can be slow to remove such material. As former Twitter CEO Dick Costolo once said, the site “sucks at dealing with abuse and trolls on the platform”.
A 2017 survey commissioned by Amnesty International returned shocking statistics. 46% of female users said that they had received harassment that was misogynistic in nature, 58% reported racist content and 61% claimed that the abuse received online had affected their ability to concentrate and perform. Just 18% of these same users believed that social media companies do enough to combat online harassment. The experiences women are having on Twitter are leading them to self-censor, to limit their interactions with others and, ultimately, to leave the platform.
Aggressive online rhetoric towards women has the potential to reverberate into the real world, with the prevalence of this language normalising and even amplifying misogynistic attitudes. The harassment faced by women in online spaces perpetuates what Liz Kelly termed the “continuum of violence” and allows for further proliferation of a global range of abuses committed against women. It not only forces women out of spaces intended for the use of all, but further silences their voices, preventing them from sharing their lived experiences.
It is clear that the current framework of online content moderation has failed to keep pace with these sites as they expand and consume more of our daily lives. At CaliberAI, our ultimate goal is to augment publishing of all kinds and to increase accountability, transparency and diligence when it comes to internet publishing in particular. Through our bespoke tools, based on expertly curated datasets, custom thresholds and explainable algorithms, we aim not to censor, but to work alongside policy makers, social media platforms and others to ensure the online spaces we inhabit become safer and more accessible for all.
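To make the idea of “custom thresholds” and “explainability” a little more concrete, here is a deliberately minimal sketch of how a threshold-based moderation check can surface its evidence. Every name and the scoring logic here are illustrative assumptions for this post, not CaliberAI’s actual tooling or API:

```python
# Illustrative sketch only: a toy threshold-based moderation check.
# None of these names or heuristics represent CaliberAI's real system.

def score_text(text, flagged_terms):
    """Toy scorer: the fraction of words that appear in a flagged-term list."""
    words = text.lower().split()
    if not words:
        return 0.0, []
    hits = [w for w in words if w in flagged_terms]
    return len(hits) / len(words), hits

def moderate(text, flagged_terms, threshold=0.2):
    """Flag text for human review when its score crosses a custom threshold.

    Returning the matched terms alongside the decision is a simple form of
    explainability: a reviewer can see *why* the text was flagged, and the
    threshold can be tuned per platform rather than hard-coded.
    """
    score, hits = score_text(text, flagged_terms)
    return {"flagged": score >= threshold, "score": score, "evidence": hits}

# Example: one flagged word out of four gives a score of 0.25.
result = moderate("you are awful people", {"awful"}, threshold=0.2)
print(result)  # {'flagged': True, 'score': 0.25, 'evidence': ['awful']}
```

In a real system the scorer would be a trained classifier rather than a word list, but the design point survives the swap: the decision boundary (threshold) stays adjustable and the evidence behind each flag stays inspectable.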