
Centre for Doctoral Training in Safe and Trusted Artificial Intelligence announced

The Department of Informatics will lead a Centre for Doctoral Training (CDT) on Safe and Trusted Artificial Intelligence (STAI) as part of a UK-wide £100 million investment in artificial intelligence by UKRI. The Centre will bring together world-leading experts from King’s and Imperial College to train a new generation of researchers.


The CDT will train scientists and engineers in model-based AI approaches and their use in developing safe and trusted AI systems. Such systems are safe, meaning that some assurance can be provided about system behaviour, and they are trusted, meaning that people can have confidence in the decisions they make and the reasons for making them. CDT researchers will also be trained in the implications of AI for wider society including, for example, the relevance of safe and trusted AI to legislation and regulation, and to different industry sectors and application domains.

Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed as safe and correct; and models can provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system.

King’s and Imperial are renowned for their expertise in model-based AI and host some of the world’s leaders in the area. Core research areas include:

  • Verification & Testing, to provide guarantees about system behaviour;
  • Logic in Artificial Intelligence, for efficient and expressive knowledge representation and reasoning;
  • Planning, which allows the synthesis of solutions to achieving complex tasks that are correct by construction;
  • Argumentation & Dialogue, which supports explanation and transparent reasoning, and can allow joint decision-making between humans and AI systems;
  • Norms & Provenance, to guide behaviour in the context of organisational structures, and track and explain data, allowing identification and mitigation of anomalies;
  • Human-oriented AI, which aims to support collaboration and communication between machines and humans.

This depth and breadth of expertise in model-based AI is complemented by expertise in related technical areas such as cybersecurity and data science, and by expertise in the implications and applications of AI in areas such as security studies & defence, business, law, ethics & philosophy, social sciences & digital humanities, and natural sciences & medicine.

Through engagement with the CDT’s diverse range of industrial partners, students will be exposed to the different experiences, challenges, and technical problems involved in both startups and large corporations.

The CDT will fund five cohorts over five years of entry, with around 65 students joining the programme over its lifetime. The first cohort will join in September 2019.

Director of the Centre and Executive Dean of the Faculty of Natural & Mathematical Sciences, Professor Michael Luck said:

‘It's a real privilege for me at King's to be able to lead this major new strategic initiative in artificial intelligence. We now have the opportunity to make a genuine and lasting impact in safe and trusted AI, to train multiple cohorts of researchers over the next few years, and to support UK industry in this area. Together with our collaborators at Imperial, and our wide range of industrial and academic partners, we will be able to drive forward research and application in safe and trusted AI.’
