09 May 2025
King's scientists receive government funding to enable smooth transition to an AI-powered future
Up to £400,000 from the AI Security Institute will fund research exploring how humans and AI can work together sustainably and safely.

Two King’s computer scientists have been funded by the UK government to tackle the complex challenges arising from the roll-out of AI in cities and the maritime trade.
Funded by the AI Security Institute and supported by the Department for Science, Innovation and Technology, the Systematic AI Safety grants will enable Drs Caitlin Bentley and Yali Du from the Department of Informatics to explore the challenges around skill retention in an AI-enabled world and empower AI agents to work more cooperatively.

Dr Caitlin Bentley, Senior Lecturer in AI Education – Evolving Human-AI Competencies
AI is increasingly being embedded in maritime operations, whether that be using AI to navigate the seas or autonomous underwater robots performing checks on offshore wind infrastructure while operators check on their progress from the shore.
Part of this implementation is the use of AI to control functions that operators previously attended to, meaning operators will move into roles managing multiple AI agents instead of doing the job themselves.
However, as these are mission-critical functions, a human will always need to be in the loop to supervise in case of emergency. Yet operators’ skills might become eroded if they are overly reliant on AI – posing a risk to systemic safety.
Working with maritime autonomy trainer Seabot and manufacturer Frontier Robotics, Dr Bentley is designing a series of simulated challenges and educational pathways to ensure that experienced operators don’t lose their skills. By training operators to be AI-ready, the team also hope to lower the barrier to entry, potentially showing that qualifications which currently require decades of training or experience could be streamlined.
The group hope this may also enable people who have been excluded from working at sea to join the workforce as remote operators, potentially diversifying the traditionally male-dominated maritime industry.
Dr Bentley said, “We’re at the point now where AI-enabled maritime vehicles are being deployed more widely, affording new opportunities to maintain and monitor offshore windfarms, or to search or survey areas of the sea in a much more sustainable manner. But we still need qualified operators to spot when something is going wrong – especially if AI introduces new risks like failing to recognise an object nearby.”
Dr Martim Brandão, Lecturer in Robotics and Autonomous Systems in the Department of Informatics, and researcher on the project said “By providing an educational on-ramp to new operators we can help address the skills shortage, while also protecting the hard-won skills current workers have. We believe that continuously identifying and preparing for AI-related challenges will equip the maritime sector to navigate future technological shifts, ensuring the security of our energy and shipping infrastructure.”

Dr Yali Du, Senior Lecturer in Artificial Intelligence – Evaluating the Cooperative Behaviour of Systems of Generative Agents
Generative AI models, while often associated with Large Language Models like ChatGPT, are increasingly being used in applications such as driverless cars, smart traffic lights and automated production lines. The growth in these models means they will increasingly come into contact with each other and will need to work together to make ideas like smart cities workable.
Dr Du will evaluate how common generative AI models on the market work together, exploring whether the new behaviour that emerges from the interplay between them is cooperative or exploitative. This will help provide a safety benchmark for how AI agents interact with each other.
She explained, “The dreams of a smart society with AI-enabled street-planning and driverless cars make interaction between two AI agents inevitable, but without a robust understanding of how they interact, deployment could potentially be dangerous.
“AI models are optimised to do one thing, but when they work together there has to be some give and take.”
In an example where two driverless cars are both turning into the same road from opposite sides of an interchange, one must let the other pass. If both agents are too altruistic and cooperative, they will each wait for the other to go, stalling traffic. If they are too self-interested in their own optimised behaviour of getting their passenger from A to B in the shortest time, they will crash into one another.
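The intersection scenario above can be framed as a simple two-agent coordination game. The sketch below is purely illustrative (the payoff values and action names are assumptions for exposition, not data or methods from Dr Du's project), but it shows why both over-cooperation and pure self-interest produce bad outcomes:

```python
# Illustrative sketch of the driverless-car scenario as a two-agent
# coordination game. All payoff values are invented for exposition.

PAYOFFS = {
    # (car_a_action, car_b_action): (car_a_payoff, car_b_payoff)
    ("go", "go"):       (-100, -100),  # collision: worst outcome for both
    ("go", "yield"):    (10, 5),       # A passes first, B briefly waits
    ("yield", "go"):    (5, 10),       # B passes first, A briefly waits
    ("yield", "yield"): (-1, -1),      # mutual deadlock: traffic stalls
}

def joint_outcome(action_a: str, action_b: str) -> tuple:
    """Return the payoff pair for a joint action of the two cars."""
    return PAYOFFS[(action_a, action_b)]

# Two purely self-interested agents both choose "go" and crash.
assert joint_outcome("go", "go") == (-100, -100)
# Two overly cooperative agents both yield and block the junction.
assert joint_outcome("yield", "yield") == (-1, -1)
# Coordination (one goes, one yields) is the only safe, efficient outcome.
assert sum(joint_outcome("go", "yield")) > sum(joint_outcome("yield", "yield"))
```

Evaluating real generative agents means measuring empirically which of these joint outcomes they tend to fall into, rather than assuming a payoff table in advance.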
“We need to test and collect data on the probability of these outcomes to inform recommendations, hopefully enabling a smart society to function safely for all.”
In the future, the team also hope to evaluate how these agents work with human decision making and how they can best respond to a range of different human behaviours.