
Frequently Asked Questions

How do our tools work?

CaliberAI's machine learning systems analyse a text's syntactic, lexical and semantic features, then output a probability score indicating how likely the text is to belong to one of our categories (Defamatory and/or Harmful, or Neutral).
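
The sketch below is purely illustrative; CaliberAI's production models are not public. It shows, in miniature, the general shape of a probabilistic text classifier: it is fitted on a handful of hypothetical labelled examples, uses only lexical (TF-IDF) features where the real systems also draw on syntactic and semantic signals, and prints a probability per category for a new piece of text.

```python
# Minimal sketch of a probabilistic text classifier.
# Not CaliberAI's model: a toy stand-in using lexical features only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples standing in for expert-annotated data.
texts = [
    "The minister attended the opening ceremony.",
    "The weather forecast predicts rain tomorrow.",
    "That journalist is a corrupt liar who takes bribes.",
    "Everyone from that town is a criminal and a cheat.",
]
labels = ["neutral", "neutral", "defamatory_or_harmful", "defamatory_or_harmful"]

# Fit a simple pipeline: TF-IDF features into logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new sentence: predict_proba returns one probability per category.
candidate = "The councillor is a thief who stole public funds."
for category, p in zip(model.classes_, model.predict_proba([candidate])[0]):
    print(f"{category}: {p:.2f}")
```

In a production setting, a threshold applied to a score like this would typically decide whether a passage is flagged for human review.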

The systems are pre-trained almost entirely on manually labelled data, carefully assembled by domain experts with extensive experience in the management of public debate, so that the models learn to recognise the language patterns of defamatory and harmful content.