26 October 2023

Expectations for the AI Safety Summit

What do members of the King’s Institute for Artificial Intelligence community expect from the upcoming summit?

[Illustration: a person at a laptop watches a small AI character, drawn as nodes and edges, step out of the screen towards icons of a hospital, a bus and a flashing siren.]
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries. / CC-BY 4.0

Artificial intelligence (AI) is advancing at an unprecedented pace, and as it continues to integrate into our daily lives, the question of AI safety has become paramount. The upcoming AI Safety Summit on 1 and 2 November, organized by the Foreign, Commonwealth & Development Office and the Department for Science, Innovation and Technology, promises to be a significant event. It will bring together international governments, leading AI companies, civil society groups and experts in research, with the aims of considering the risks of AI, especially at the frontier of development, and discussing how they can be mitigated through internationally coordinated action.

What do members of the King’s Institute for Artificial Intelligence community expect from the upcoming summit?

Professor Carmine Ventre, Director of the King’s Institute for AI: Shaping a Responsible AI Agenda

In light of remarkable advancements in AI capabilities, Carmine Ventre asks ‘Should we delay the technological progress or should we rather leave tech companies the freedom to innovate and introduce new products without stringent restrictions?’ These questions can, he states, be put at the forefront of the global discourse through the AI Safety Summit on 1 and 2 November, with the aim of shaping a (regulatory) agenda that ensures the responsible use of AI.

The ideal outcome, according to Ventre, would be ‘a shared vision of the ethical considerations surrounding the use of AI, encompassing issues such as bias, transparency, and accountability. This pertains not only to what the government refers to as “Frontier AI” but also to the task-specific “Narrow AI”. Narrow AI presents immense opportunities for different sectors of our society, but we must confront unresolved issues for AI-generated decisions, the level of trust we can place in AI, and the skills of AI users and developers.’

Dr Raquel Iniesta, Institute of Psychiatry, Psychology & Neuroscience: Emphasizing Human-Machine Collaboration and Ethics in AI

For Raquel Iniesta, a key outcome should centre on ‘an ethical AI that respects human dignity’. AI has huge potential in some tasks, which we are already seeing today, but it is crucial not to lose sight of the human. There are, according to Iniesta, tasks where humans continue to surpass AI, and this should be acknowledged.

The advancement of AI and its increasing inclusion in all facets of life ‘should not be seen as an opportunity to reduce the investment on resources that involves human capability, but to invest on AI resources that enlighten human ability.’

The AI Safety Summit should, according to Iniesta, advocate for ‘collaborative action rather than machine-only to avoid general risks of dehumanisation and disempowerment’.

Dr Canh Dang, Faculty of Social Science and Public Policy: An Economic and Business Perspective

Businesses are increasingly integrating AI technologies into their operations. Ensuring the safety and reliability of these systems is paramount for fostering trust and sustaining growth, states Dang. As AI continues to evolve, it becomes integral to consider not only the potential economic benefits but also the ethical and safety implications associated with its deployment.

Dang anticipates engaging in discussions on the ethical considerations surrounding AI applications, especially in industries where decision-making is heavily influenced by machine learning algorithms. As businesses strive to leverage AI for competitive advantage, addressing ethical concerns becomes a strategic imperative. Similarly to Ventre, Dang sees the summit as a platform to explore the delicate balance between innovation and ethical responsibility, paving the way for the development of frameworks that promote both economic prosperity and societal well-being.

He expects the summit to delve into the regulatory landscape, as governments and businesses navigate the challenges of overseeing and ensuring the safe implementation of AI technologies. Understanding the economic implications of regulatory frameworks is crucial for businesses to adapt and thrive in an environment where responsible AI practices are increasingly demanded by consumers and stakeholders alike.

The AI Safety Summit, says Dang, ‘holds the promise of illuminating key intersections between economics, business, and AI ethics. As representatives of King's College London, we are eager to contribute to these conversations and collaboratively shape a future where AI not only drives economic growth but does so with a steadfast commitment to safety, ethics, and societal well-being.’

Percy Venegas Obando, King’s Business School: Uncertainty in AI

Percy Venegas Obando hopes that guidelines around uncertainty in AI will be established. For AI systems to be transparent and accountable, it is crucial to measure the uncertainty in AI-generated responses and to establish clear rules for when and how an AI should express that uncertainty.
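
To make the idea concrete, one common way to quantify uncertainty in a model's output is the entropy of its predictive distribution. The sketch below is purely illustrative and assumes a simple classifier; the labels, the 0.5-nat threshold and the function names are hypothetical, not anything proposed by Venegas Obando or on the summit agenda.

    import numpy as np

    def predictive_entropy(probs: np.ndarray) -> float:
        # Shannon entropy (in nats) of a predictive distribution;
        # higher values mean the model is less certain.
        probs = np.clip(probs, 1e-12, 1.0)
        return float(-np.sum(probs * np.log(probs)))

    def answer_with_uncertainty(probs: np.ndarray, labels: list[str],
                                threshold: float = 0.5) -> str:
        # Hypothetical reporting rule: give the top label, but flag the
        # answer whenever the entropy exceeds an agreed threshold.
        top = labels[int(np.argmax(probs))]
        h = predictive_entropy(probs)
        if h > threshold:
            return f"Possibly {top} (uncertain, entropy {h:.2f} nats)"
        return f"{top} (entropy {h:.2f} nats)"

    # A peaked distribution is reported plainly; a flatter one is flagged.
    print(answer_with_uncertainty(np.array([0.90, 0.05, 0.05]), ["A", "B", "C"]))
    print(answer_with_uncertainty(np.array([0.40, 0.35, 0.25]), ["A", "B", "C"]))

Any real guideline would also need to cover how such scores are calibrated and how they are communicated to non-technical users.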

Dr Emanuele De Luca, Institute of Psychiatry, Psychology & Neuroscience: AI's Impact on Higher Education

Emanuele De Luca is keen to explore how different actors, particularly governments, view the positive and negative potential of AI in higher education. He sees this as fertile ground for groundbreaking innovation but acknowledges the potential resistance to AI adoption from those protective of current educational practices. The AI Safety Summit may offer guidance here and help advance discussions around AI in higher education from a multitude of perspectives.

Steven Jiawei Hai, Faculty of Social Science and Public Policy: Globalized Technological Innovations and Inclusive Policy Frameworks

Steven Jiawei Hai wants to see sustainability at the core of developments at the AI Safety Summit. Hai views the summit as an opportunity to reach a shared understanding not only of the risks of AI but also of the ways in which it can develop through internationally coordinated action. He emphasises the need for ‘careful definition of the boundaries of the regulatory landscape and a decisive encouragement of “trial and error” in techno-innovations.’

By bringing together valuable expertise from across international governments, leading AI companies, civil society groups and experts in research, the United Kingdom can place itself at the forefront of coordinating ‘a system in which frontier AI has become an emerging solution and challenge for the future of a sustainable human society’.

‘Shared wisdom, and inclusive collaboration could make the journey full of possibilities’, suggests Hai.

In this story

Professor Carmine Ventre

Professor of Computer Science

Raquel Iniesta

Senior Lecturer in Statistical Learning for Precision Medicine

Dr Canh Dang

Lecturer in Economics