Diversity & Inclusion

At CaliberAI, an inclusive work environment that fosters trust, respect and openness, and in which everyone can listen to and learn from each other, is core to our work.

In addition to building digital tools to detect potentially defamatory speech, we also build tools to detect harmful or hateful speech. Diversity of perspective, in terms of demographics (e.g. age, race, faith, gender identity, sexual orientation), experience (e.g. economic or social positioning, educational attainment) and mindset, is particularly crucial to the successful identification of such harmful or hateful speech.

From day one, we have endeavoured to bake this diversity into our products, for example through careful inspection for machine learning bias, and into our hiring processes. We are committed to continuing to hire candidates who bring both much-needed skills and such diverse perspectives, and to engaging with traditionally marginalised groups to help bolster this commitment.

Ultimately, assisted by an Advisory Panel of recognised experts in computing, law, ethics, linguistics and publishing, and guided by foundational international law, including the United Nations Universal Declaration of Human Rights, we strive to further discourse that strikes a healthy balance between respect for diversity of perspective and respect for truth. An environment in which, to paraphrase former Guardian editor C.P. Scott, "comment is free, facts are sacred and both are published with diligence."

What makes us different

Powered by a unique, high-quality dataset of defamation examples, curated by a team of annotation experts, CaliberAI's pioneering tools augment human capability to detect language that carries a high level of legal and defamatory risk.


Unique data

Unique, carefully crafted datasets used to train multiple machine learning models for production deployment.

Expert led

Expert annotation, overseen by a diverse, publisher-led team with deep expertise in news, law, linguistics and computer science.

Explainable outputs

Pre-processing and post-processing pipelines that deliver explainable AI outputs.