Online Harms: Five Things Policymakers Should do Right Now
By Neil Brady
23rd August 2021
“Section 230 should be revoked, immediately.”
— Joe Biden, former Vice President of the United States, The New York Times, January 17th, 2020.
“I think it's important that companies...take their responsibility.”
— Margrethe Vestager, Executive Vice President of the European Commission for A Europe Fit for the Digital Age, The Washington Post, July 12th, 2021.
“Some people say Twitter is a sewer...I think Twitter in particular has a case to answer in terms of their own level of editorial.”
— Leo Varadkar, Tánaiste, National LGBT Federation, July 21st, 2021.
Following the populist wave of 2016, a broad global consensus has emerged that the absence of legal liability for internet publishing is a key (if not the only) driver of the harmful speech, misinformation and disinformation that have fuelled, and continue to fuel, anti-democratic phenomena.
In the European Union, for example, under the current draft terms of the Digital Services Act, safe harbour will become conditional upon mandatory “audited risk assessments”, with especially onerous conditions placed upon “very large online platforms” (VLOPS). In Ireland and the United Kingdom, draft versions of the Online Safety and Media Regulation Bill and the Online Safety Bill, respectively, go further still, making provision for criminal liability for senior social media company managers. Similar draft legislation is at various stages of readiness in other common law jurisdictions. There are also signs that judicial thinking is evolving. In Canada, for example, the courts recently indicated a willingness to link the act of internet harassment to the tort of intimidation, while in February in the United States, the Supreme Court of Texas drew a sharp distinction between “holding internet platforms accountable for the words or actions of their users” and “for their own misdeeds.”
However, as is clear from the gulf between the words and actions of leaders such as Joe Biden and others quoted above, policymakers are navigating the fine lines between freedom of expression, censorship and accountability with difficulty. It is for this reason, as noted in The Financial Times in August, that these laws are the subject of intense lobbying at present.
As policymakers weigh these considerations, they must distinguish between the strengths of the internet itself, the negligence of the companies that have monetised it thus far, and the wave of new companies committed to redressing that negligence. In this context, here are five things policymakers can do right now to address online harm.
1. Impose a Clear Statutory Duty of Care
Internet publishers are currently both legally and financially incentivised to minimise content moderation. This is not normal. Even the most negligent of analogue news publishers are incentivised to do better by defamation law. The creation of a legal duty of care - something we at CaliberAI have publicly declared our support for - would greatly change this, both prompting companies to take steps to mitigate reasonably foreseeable harms and lowering the bar for action when they fail to do so. In the United States in particular, it would provide a proportionate solution to many of the concerns around Section 230.
2. Follow Australia, Ireland and the UK’s Lead With Criminal Liability Provisions for Harm
In 2019, Australia passed legislation that provides for social media executives to be jailed for up to three years for a failure to “expeditiously” remove “abhorrent violent material”. In Ireland, under the Online Safety and Media Regulation Bill, a social media manager can be subject to “a class A fine or imprisonment for a term not exceeding 12 months or both.” These are conditional, restricted provisions and the bar set by the legislation is high but proportionate and reasonable. There is no compelling argument for excluding similar provisions from upcoming E.U. and U.S. reforms.
3. Support Journalist-Led AI Tool Builders
Facebook and others are not short of high-quality moderation tools, but those tools are not without their flaws, and there is no shortage of startups building products that improve upon them. Only a handful of these are led by the people most experienced in managing the public conversation: journalists. Governments should support the development of the next generation of natural language processing-based tools by partnering with and supporting these kinds of startups. Within the E.U., for example, such support could include compelling VLOPS to collaborate and provide access to data, as well as the provision of regulatory sandboxes in which to test experimental ideas.
4. ‘Bake’ Core Values into Mandatory Technology
Policymakers are concerned with a broad gamut of similar but distinct information-related problems, including misinformation, disinformation, defamation, and harmful or hateful speech. Thus far, they have sought to address these largely with the carrot - by way of educational initiatives, for example - and little of the stick. As outlined above, the Digital Services Act and other legislation will change this, but there is no reason legislators should not also be more ambitious when it comes to education, values and ‘the carrot’. They should seek to ‘operationalise’ such things by weaving them into the basic functionality of the web, much as the E.U. has done for privacy through cookie consent mechanisms. Middleware, as it is often termed, and of which CaliberAI’s technology is a current example, is the way forward here, but governments must proactively support this approach.
5. Distinguish Between Disinformation, Misinformation, Defamation and Harm, and Legislate Accordingly
As former European Commission Vice-President Andrus Ansip noted in 2017, “fake news is bad, but the ministry of truth is even worse.” Since 2016, when former United States President Donald Trump popularised the term ‘fake news’ and disinformation may have played a role in that year’s presidential election, there have been increasing efforts to address this nebulous problem. At CaliberAI, we believe that well-thought-out legislation, geared toward addressing measurable, defined speech and underpinned by substantive penalties, is likely to prove a better route to accountability for digital publication generally. The nefarious, opaque nature of most disinformation means it does not lend itself easily to machine learning solutions; it is more effectively addressed through a combination of education and awareness training (e.g. EUvsDisinfo) and the work of national security agencies.