
Safe & Trusted AI

The UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI), led by a team of renowned experts from King’s College London and Imperial College London, aims to train a new generation of scientists and engineers who are experts in methods of safe and trusted AI.

An AI system is considered safe when we can provide some assurance about the correctness of its behaviour, and it is considered trusted if the average user can have confidence in the system and its decision making. The CDT focusses particularly on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed to be safe and correct; models can also provide human-understandable explanations and support user collaboration and interaction with AI, which is key to developing trust in a system.
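
To illustrate the idea of checking a property against an explicit model, the sketch below (a hypothetical toy example in Python, not drawn from the CDT's curriculum) represents a system as a small finite transition system and searches its reachable states to confirm that an unsafe state can never be reached.

```python
# Minimal illustrative sketch of model-based safety checking.
# The states, transitions and labels here are hypothetical examples.
from collections import deque

# A toy model of a controller: states and labelled transitions.
initial = "idle"
transitions = {
    "idle": {"start": "heating"},
    "heating": {"stop": "idle", "fault": "shutdown"},
    "shutdown": {},
    "overheat": {},
}
unsafe = {"overheat"}  # safety property: "overheat" is never reached

def verify_safety(initial, transitions, unsafe):
    """Breadth-first search over the model's reachable states.

    Returns (True, None) if no unsafe state is reachable, otherwise
    (False, path) where path is a counterexample trace of actions.
    """
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state in unsafe:
            return False, path
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return True, None

safe, counterexample = verify_safety(initial, transitions, unsafe)
print("Safe" if safe else f"Unsafe, counterexample trace: {counterexample}")
```

Because every reachable state of the model is examined, a "Safe" verdict is a guarantee with respect to the model, and a failure comes with a concrete trace that explains how the unsafe state can arise.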

Find out more about the CDT.


Explore

Postgraduate

Find out about postgraduate study in the Department of Informatics.

Research

Learn about research in the Department of Informatics at King's.