How AI Can Help Combat Climate Misinformation and Avoid Costly Mistakes


By Hilary White

19th April 2023

If the accuracy of information being exchanged on social media around the climate emergency is to be improved, technology optimised for high-quality discourse will be vital.


As part of ongoing research for a climate-related project I was undertaking recently, I decided to make Twitter a core part of my suite of information resources. Social media would now be used not only to feed me the latest thinking and science on the climate and biodiversity crises, but also the mood music from the coalface.

For the most part, it worked. Each time I went into my feed, the algorithm would top-load Tweets by scientists, academics and activists, presenting me with links to studies and informed opinion, much of which would have bypassed traditional media.

Often, accompanying these were single blocks of text by people - scientists and commentators alike - who were in deep despair. Images of graphs shooting upward, dried-up riverbeds, landscapes ablaze and disintegrating glaciers, captioned with fear and loathing.

Such public displays of “over-sharing” are not uncommon on social media, but every so often, another type of comment would also appear. These comments would attempt to minimise the findings and their implications, perhaps by dismissing a violent weather event as not unusual, or by posting a link to information directly contradicting the overwhelming consensus of global climate research. Misinformation and untruth, in short.

Social media’s role in public discussion of the climate crisis is, as with its role in other areas, complicated. It has been crucial in drawing together a global community of climate activists, as well as providing a platform to increase awareness of climate change, but it can also facilitate climate denial at scale, whether inadvertent or intentional, and whether generated by humans or machines. This dynamic played out starkly during the Covid-19 pandemic, prompting Trust and Safety teams everywhere to ramp up their efforts to mitigate vaccine misinformation.

So, can similar attempts to filter and manage online discussion of the climate crisis, particularly through the use of AI tools, help address climate misinformation?

David Robbins is Director of the Centre for Climate and Society at Dublin City University (DCU). A former journalist, Robbins advises media companies on climate coverage, encouraging critical thinking and teaching climate literacy. I asked him about the challenge of automating solutions to climate denial and related areas like ‘greenwashing’.

“People don’t want to receive information that’s contrary to their worldview or the ‘take’ they have decided themselves,” Robbins says. “I’m not sure what can be done to address it bar putting it back to the individual to limit and block who can see their posts and reporting individuals. But the rules of that are fairly narrow - I don’t think you can block people for disputing climate science.”

The advent of generative AI, however, and its potential to compound the problems of misinformation and toxic speech on an unprecedented scale, has made the imperative to find solutions all the more urgent. As ChatGPT has demonstrated, generative AI trained comparatively indiscriminately has the potential to, as MIT Technology Review put it last December, 'poison the internet'. When machine learning models are carefully trained and tuned on high-quality data, however, the results are qualitatively different.

Last week, for example, a University of Zurich team announced ChatIPCC, a conversational AI tool trained on Intergovernmental Panel on Climate Change (IPCC) reports. As the abstract of the paper accompanying its launch outlines, training with “scientifically accurate, and robust sources” helps enable “the delivery of reliable and accurate information.”
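To make the idea concrete, here is a minimal Python sketch of how a tool might ground a model's answers in report text before generating a response. The `ipcc_reports` folder, the paragraph-based chunking and the `build_prompt` helper are illustrative assumptions, not a description of ChatIPCC's actual implementation.

```python
# A minimal sketch of retrieval-grounded question answering, assuming a
# local folder of IPCC report excerpts saved as plain-text files.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load and chunk the source documents (one paragraph per chunk).
chunks = []
for path in Path("ipcc_reports").glob("*.txt"):
    chunks += [p.strip() for p in path.read_text().split("\n\n") if p.strip()]

# Index the chunks with TF-IDF so questions can be matched to passages.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(chunks)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k report passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Constrain the model to answer only from the retrieved passages."""
    context = "\n\n".join(retrieve(question))
    return (
        "Answer using ONLY the IPCC passages below. If they do not "
        f"contain the answer, say so.\n\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt can be sent to any chat-completion API.
print(build_prompt("How much has global surface temperature risen?"))
```

The key design choice is that the model is never asked to answer from its general training data alone: every question is routed through vetted source text first, which is what makes the answers auditable.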

This broadly mirrors CaliberAI’s approach to fine-tuning Large Language Models (LLMs), which we have been perfecting since spinning out of Trinity College Dublin (TCD) in 2020. Our team is a unique assemblage of computer scientists, linguists and editors, working together to fine-tune LLMs for multiple categories of ‘problematic’ speech, with our first suite of products built around defamation. In addition to ensuring the use of robust sources, we also relied heavily on ‘invented’ data, essentially free of the legal risk currently associated with much generative AI around copyright, data protection and so on.
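By way of illustration, here is a hypothetical sketch of how ‘invented’ data of this kind might be assembled: fictional entities slotted into labelled sentence templates, so that no real person or copyrighted text is ever involved. The names, templates and label scheme below are inventions for this example, not CaliberAI’s actual dataset or schema.

```python
# A hypothetical sketch of generating 'invented' training examples for a
# defamation-risk classifier. All people and statements are fictional.
import csv
import itertools
import random

FICTIONAL_PEOPLE = ["Cllr Jane Murtagh", "Dr Liam Brennan", "Aoife Kinsella TD"]

# Each template pairs a sentence frame with a label:
# 1 = potentially defamatory allegation, 0 = neutral statement.
TEMPLATES = [
    ("{p} accepted bribes to approve the development.", 1),
    ("{p} lied under oath at the tribunal.", 1),
    ("{p} attended the planning meeting on Tuesday.", 0),
    ("{p} published a report on local housing policy.", 0),
]

# Cross every fictional person with every template, then shuffle.
random.seed(7)
rows = [
    {"text": frame.format(p=person), "label": label}
    for person, (frame, label) in itertools.product(FICTIONAL_PEOPLE, TEMPLATES)
]
random.shuffle(rows)

# Write the invented examples out for later fine-tuning or classifier training.
with open("invented_training_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writeheader()
    writer.writerows(rows)
```

Because every example is synthetic by construction, a dataset built this way carries none of the copyright or data-protection exposure that comes with scraping real posts or articles.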

Robbins believes that the values, skills and experience of traditional journalists and editors, in areas such as verification, mean they’re well placed to provide effective solutions when it comes to climate change and moderation in general.

“So much of the online conversation is based around professional media organisations’ coverage of climate breakdown”, he says. “And if we take journalism at its very basic, it’s about professional verification before we publish...if that awareness can be raised in the mainstream, then it might have an effect on the discussion happening online.”


Contact us

Get a closer look at how our solutions work and learn more about how CaliberAI's technology can integrate with your technology stack and editorial workflow.

Get in touch with sales@caliberai.net