The Torrent, the Tsunami and the Slow Move to Fix Things
By Neil Brady
3rd March 2021
At CaliberAI we are a collaboration of journalists, editors, linguists, computer scientists and others, working to reduce risk and harm in digital publishing. We are constructing a unique database that will give our users pre-publication warning of content that is defamatory or harmful under a range of headings.
In his 2002 book, Media Unlimited: How the Torrent of Images and Sounds Overwhelms Our Lives, sociologist Todd Gitlin reflects on what now seems a quaint media dynamic. Written between the 1998 incorporation of Google and the social networks of Web 2.0 that followed, the book advances a central thesis that is easily relatable today. “The torrent of images, sounds, and stories will widen,” Gitlin wrote, “but neither its volume, speed, nor bandwidth can be counted on to deepen democracy.” Less than twenty years later, in the wake of a hitherto unfathomable, media-fuelled storming of the United States Capitol, that danger to democracy has never seemed greater. If the media of 2002 were best likened to a torrent, that of 2021, when 500 million Tweets are sent every day and 500 hours of video are uploaded to YouTube every minute, is surely best compared to a tsunami.
The tsunami has spared few industries, but its impact has been most acute on news in the true sense of the word: information parsed by those trained to detect bias and untruth and, perhaps most crucially, to determine what is in the public’s interest to know. As Alan Rusbridger opined in his 2018 book, Breaking News, “by early 2017 the world had woken up to a problem...news, the thing that helped people understand their world...news was broken.” For almost two decades now, editors and sub-editors have been under ever-increasing pressure to produce more content, faster, with little time to fact-check or to consider tone and context. In parallel, ad revenues have all but collapsed, stretching existing editorial resources thinner than ever. Faith that ‘digital first’ publishers would find a way has also proved misplaced. Instead, thanks to their unassailable dominance of the online advertising market, internet platforms remain the predominant drivers of public conversation, and of the mass transmission of disinformation, defamation and harmful speech to boot.
As democracies, how did we get here? How did such a dysfunctional information ecosystem come to pass? In truth, through a combination of accident and design. Design insofar as early internet policy was underpinned by a belief that the advantages of immunity from intermediary liability for internet publication outweighed the disadvantages. As former Federal Communications Commission Chairman Reed Hundt noted last year, “we all thought that for people to be able to publish what they want would so enhance democracy...it would lead to a kind of flowering of creativity and...collective discovery of truth.” It was this mindset that produced Section 230 of the Communications Decency Act and the e-Commerce Directive. Design too insofar as the companies that availed of this legal lacuna did so in order to, in the words of Napster founder Sean Parker, “consume as much of your time and conscious attention as possible.”
But we have also arrived here by accident. While humanity understandably marvelled at the benefits of peer-to-peer protocols in the form of Skype, or of web search in the form of Google, sight was lost of what was being broken. “We were naïve,” Hundt also mused. “We were naïve in a way that is even hard to recapture.”
At CaliberAI, we believe an inflection point has been reached in the wake of that naïveté, and that a new, higher-risk digital publishing paradigm is in train. This paradigm will not take root overnight. It took twenty years for data-fuelled, liability-immune internet business models to evolve and erode the Fourth Estate. Reform will be similarly gradual. The drawing of lines is plain to see, however: in the EU in the form of the Digital Services Act, in the UK in the Online Harms Bill, and in Ireland in the Online Safety and Media Regulation Bill, amongst others. To varying degrees, these and other legislative initiatives around the world aim to establish a broad ‘duty of care’, define ‘harm’, and create new obligations to analyse and assess risk and to take “effective mitigation measures.”
The existing information ecosystem cannot be unbuilt entirely, nor should it be. There is a delicate balance to be struck here between freedom of speech and censorship, and care must be taken to preserve the increased transparency and voice that are the ecosystem’s strengths.
CaliberAI’s mission is to work with publishers and policymakers to build the tools to do this and to meet these new obligations. To bolster news editing processes and infuse online platforms with machine learning tools optimised for the proliferation of civil discourse. To avoid the mistakes of the past by ensuring that the construction of tomorrow’s technology is led by those most experienced in “[helping] people understand their world.” To mitigate the risks of large language models through the use of custom data, carefully inspected for bias, and to create models built for explainability and trustworthiness. To publish with diligence.
To find out more please email firstname.lastname@example.org