
Safe & Trusted AI

The UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers in safe and trusted artificial intelligence (AI).

The STAI CDT offers a unique four-year programme, focussed on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Students will engage in various training activities, alongside their individual PhD project, ensuring that not only are they trained in state-of-the-art AI techniques, but also that they acquire a deep understanding of ethical, societal, and legal implications of AI in a research and industrial setting. Through engagement with the CDT’s diverse range of industrial partners, students will be exposed to the different experiences, challenges, and technical problems involved in both startups and large corporations.

Applications are now open for 2019/20 entry, with a deadline of 11:59am (UK time) on 15 July 2019. Apply now.


What is Safe & Trusted AI?

AI technologies are increasingly ubiquitous in modern society, with the potential to fundamentally change all aspects of our lives. While there is great interest in deploying AI in existing and new applications, serious concerns remain about the safety and trustworthiness of current AI technologies. These concerns are well-founded: there is now ample evidence in several application domains (autonomous vehicles, image recognition, etc.) that AI systems may currently be unsafe because of the lack of assurance over their behaviour. Even in areas where AI methods function to high standards of correctness, there remain challenges. AI decisions are often not explained to users, do not always appear to adhere to social norms and conventions, can be distorted by bias in their data or algorithms and, at times, cannot even be understood by their engineers. An AI system is considered to be safe when we can provide some assurance about the correctness of its behaviour, and it is considered to be trusted if the average user can have confidence in the system and its decision making.


Training programme

The CDT focusses on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed as safe and correct; and models can provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system.
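
To make this idea concrete, the sketch below shows one simple form of model-based analysis: a system described explicitly as states and transitions can be checked exhaustively against a safety property. This is an illustrative Python sketch only, not material from the CDT; the traffic-light model, its transitions and the property being checked are assumptions made purely for the example.

    # A minimal, illustrative sketch of model-based verification:
    # an explicit model (states and transitions) is explored exhaustively
    # to check that a safety property can never be violated.
    # The model and property below are assumptions made for this example.

    from collections import deque

    # Hypothetical traffic-light controller, modelled as a transition system.
    TRANSITIONS = {
        "red": ["red_amber"],
        "red_amber": ["green"],
        "green": ["amber"],
        "amber": ["red"],
    }
    INITIAL = "red"

    def violates_safety(state: str, successor: str) -> bool:
        # Safety property (assumed for illustration): the light must never
        # switch directly from amber to green.
        return state == "amber" and successor == "green"

    def check_safety() -> bool:
        """Breadth-first exploration of every reachable state; returns True
        only if no reachable transition violates the safety property."""
        seen = {INITIAL}
        queue = deque([INITIAL])
        while queue:
            state = queue.popleft()
            for successor in TRANSITIONS[state]:
                if violates_safety(state, successor):
                    return False  # a counterexample exists
                if successor not in seen:
                    seen.add(successor)
                    queue.append(successor)
        return True  # the property holds in every reachable state

    if __name__ == "__main__":
        print("Safety property holds:", check_safety())

Because every reachable transition is examined, a positive answer is a guarantee about the model itself; this is the kind of assurance referred to above, with the remaining question being how faithfully the model reflects the real system.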

King’s College London and Imperial College London are renowned for their expertise in model-based AI and host some of the world’s leaders in the area.

This depth and breadth of expertise in model-based AI is complemented by expertise in related technical areas, such as cybersecurity and data science, and by expertise in the implications and applications of AI in areas such as security studies & defence, business, law, ethics & philosophy, social sciences & digital humanities, and natural sciences & medicine. Find out more about our research.

 

The training programme has been designed with input from the CDT's industrial partners, ensuring that the skills it will develop are relevant and valuable to industry.

Alongside their individual PhD project, students will engage in diverse training activities in three areas:
  • Technical skills training
  • Training in responsible research and innovation for AI
  • Transferable skills training.

Training activities include:
  • Technical training in model-based techniques for safe and trusted AI
  • Interdisciplinary training on responsible research and innovation for AI
  • Training on the philosophy and ethics of AI
  • Public engagement training
  • Entrepreneurial mindset training
  • A group project, run in collaboration with the CDT’s industrial partners
  • Regular seminars and masterclasses on broad-ranging topics relevant to the development of safe and trusted AI
  • A hackathon, framed around challenges co-developed with the CDT’s industrial partners
  • Diversity and inclusion training, including mentoring practices, the impact of diversity and inclusion on group dynamics, and inclusive strategies for good research practice.

In addition, students will have the opportunity to apply for an internship at one of the CDT’s partner organisations and to bid for an enrichment placement at the Alan Turing Institute.

 

Imperial Funding

Student scholarships are available from Imperial College London.

Engagement and partners

Engagement with a broad range of non-academic partners is a key component of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI). This engagement provides assurance that both the research supported by the CDT and the skills developed in our students will be relevant and valuable to industry and society at large, while also informing and supporting UK industry in producing state-of-the-art safe and trusted AI solutions.


  • Amazon Web Services (UK)
  • Association of Commonwealth Universities
  • British Library
  • Bruno Kessler Foundation FBK
  • BT
  • Codeplay Software Ltd
  • ContactEngine
  • Ericsson
  • Ernst & Young
  • Five AI Limited
  • GreenShoot Labs
  • hiveonline
  • IBM
  • Mayor's Office for Policing and Crime
  • Norton Rose LLP
  • Ocado Group
  • Royal Mail
  • Samsung
  • Thales Ltd
  • The National Archives
  • University of New South Wales
  • Vodafone

 

Explore

Research

Learn about research in the Department of Informatics at King's.