AI assistants are now very widely used – from personal products such as Amazon Alexa and Apple Siri in people's homes to corporate deployments aimed at boosting productivity – with seven million people in the UK alone using AI assistants on a daily basis. This growth means we need to better understand the various adverse cybersecurity scenarios that may leave AI assistants vulnerable to security breaches.
The EPSRC-funded Secure AI assistantS (SAIS) project brings together a cross-disciplinary collaboration between King's Cybersecurity Centre, Imperial College London and non-academic partners including Microsoft, Humley, Hospify, policy and regulation experts, and the general public. The group seeks to deepen understanding of the systems through which AI models interact with each other, and the wider ecosystems in which they are embedded. Its interdisciplinary nature allows for a multi-faceted approach to proposing methods that improve the security behaviour of AI assistants.
Commenting on the impact of this research, Dr Jose Such, Reader at King’s Department of Informatics, Director of the King’s Cybersecurity Centre and Principal Investigator of the collaboration, said:
"We are delighted to start working on the Secure AI assistantS (SAIS) project funded by EPSRC. This project is of crucial importance to create secure AI assistants that we can trust, so we can make the most of the exciting functionalities and convenience they bring with them."
King's Cybersecurity Centre is an EPSRC-NCSC Academic Centre of Excellence in Cyber Security Research (ACE-CSR). The centre brings together the diverse community of researchers across King's College London working on the socio-technical aspects of cyber security, including academics in the Department of Informatics, the Department of War Studies, the Department of Defence Studies, the Department of Digital Humanities, and the Policy Institute.