The King’s Prize Doctoral Programme in Safe, Trusted and Responsible Artificial Intelligence (STaR‑AI) brings together leading researchers from across King’s College London to train the next generation of experts in responsible AI. The programme equips graduates to understand the technical challenges of building safe and trustworthy AI, to engage critically with its human and societal implications, and to work confidently across disciplines to ensure AI technologies have a positive impact.

King’s longstanding strength in interdisciplinarity provides a distinctive environment for studying AI and its wider consequences. Building on the success of the UKRI Centre for Doctoral Training in Safe and Trusted AI, STaR‑AI is supported by specialists in AI methods, human‑centred approaches, and legal and ethical frameworks from the Departments of Informatics and Digital Humanities, and the Dickson Poon School of Law. Students will gain both technical and non‑technical expertise relevant to responsible AI development across sectors, and will be well prepared for diverse careers, including in academia, research and development, and policy.

The programme welcomes applicants from a wide range of disciplinary backgrounds. Multidisciplinary supervision teams support students working on diverse application areas, enabling cohorts that combine technical, social‑scientific and humanities perspectives. This diversity is central to developing well‑rounded researchers able to meet the demands of a rapidly evolving national and international AI landscape.