28 April 2023

AI at King's

See examples of AI projects across King's, as featured in the Bringing the Human to the Artificial exhibition.

A non-exhaustive video showcase of projects, work and initiatives from across King's, giving a sense of the breadth of AI activity at the university.

Computer Science, AI and Smart Cars

Two videos offer a glimpse into the world of smart cars and the technologies transforming the automotive industry. Learn how programming languages, algorithms, machine learning, data analysis and natural language processing are applied to the design and development of intelligent systems, and to complex problems in smart cars such as predictive maintenance, autonomous driving and real-time route optimisation.

Find out more

Explore the work of the UKRI TAS Node in Verifiability

Visit the UKRI Trustworthy Autonomous Systems Hub website.

Read Trustworthy Autonomous Systems through Verifiability, a paper that describes research carried out by the UKRI TAS Hub to address a central issue in establishing trustworthiness: verifiability.


Data from mobile devices can give a full and continuous picture of a person's health at a level of detail that has not been possible until now. RADAR-CNS collected 62 TB of data from people with epilepsy, depression or multiple sclerosis (MS) and used AI techniques to identify and test possible indicators of health and relapse.

Find out more

Find out more about RADAR-CNS at Bringing the Human to the Artificial.

Visit the RADAR-CNS website.


As artificial intelligence (AI) becomes widely deployed, the need for AI to support interaction with humans becomes ever more acute. Human-machine collaboration can overcome the limits of human and AI capabilities alone: each works alongside the other to achieve a shared goal. Achieving this effectively requires new AI technologies that people can use, understand and trust. The Trust in Human-Machine Partnership (THuMP) project addresses the technical challenges of creating explainable AI (XAI) systems, so that people can understand the rationale behind the AI's suggestions and trust them.

Visit the THuMP project site.


Every day, organisations use computers to make decisions that impact our lives. In response to the growing demand for transparency and accountability around automated decision-making, PLEAD has developed the Explanation Assistant, a software service that generates explanations from application data.
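The idea of pairing an automated decision with an explanation derived from the data that produced it can be sketched in a few lines. The example below is purely illustrative: the loan scenario, field names and rules are invented for this sketch and are not PLEAD's Explanation Assistant or its API.

```python
# Illustrative sketch only: a toy rule-based decision that records which
# rules fired, so every automated outcome carries a human-readable
# explanation. All names and thresholds here are hypothetical.

def decide_loan(application):
    """Return a decision together with the reasons that produced it."""
    reasons = []
    approved = True
    if application["income"] < 20000:
        approved = False
        reasons.append("income below the 20,000 threshold")
    if application["missed_payments"] > 2:
        approved = False
        reasons.append("more than two missed payments on record")
    if approved:
        reasons.append("all eligibility rules satisfied")
    return {"approved": approved, "explanation": reasons}

print(decide_loan({"income": 15000, "missed_payments": 0}))
```

Because the explanation is assembled from the same checks that made the decision, it stays consistent with the outcome; a real service would additionally track the provenance of the input data.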
