
King's academics share priorities for global AI summit in India

AI Insights
Jessica Keating

Communications and Engagement Manager, King's Institute for Artificial Intelligence

17 February 2026

As the AI Impact Summit begins in New Delhi, King’s academics outline their priorities for global AI debates – including sustainability, inclusion, trustworthy systems, skills and changing workplaces.

The Summit takes place in New Delhi, India from 16 to 20 February and brings together governments, industry, civil society and researchers to discuss artificial intelligence and global challenges. This is the fourth summit of its kind and the first to be hosted in a developing economy.

The meeting in India follows a series of international events on AI, including previous summits in the UK, South Korea and France, and also follows the launch of the International AI Safety Report in early February, the world’s first comprehensive review of the capabilities and risks of general-purpose AI systems.

To mark the summit, the King’s Institute for Artificial Intelligence has gathered reflections from experts across the university on the outcomes they hope to see and the questions they believe should be at the centre of the global conversation.

2026 must be the year when AI proves its worth beyond the hype. – Professor Elena Simperl, Professor of Computer Science

Moving beyond AI hype

Several academics highlight the need to move beyond hype and focus on whether AI systems are genuinely useful, sustainable and trustworthy in people’s lives.

Dr Shan Luo, Reader in Robotics and AI, said: "As AI is increasingly deployed across sectors, a key challenge is how to scale its capabilities beyond current uses such as chat, code debugging, or short-horizon task support." He points to the need for AI systems that can "plan and act over long horizons, manage complex task sequences, and operate robustly in real-world settings", while addressing the energy and power demands of training and deploying large models.

Meaningful outcomes from the summit, he argues, would include concrete priorities for research on AI’s role in physical systems and more sustainable development pathways. Dr Luo’s work illustrates how this can play out in practice: through a new Open Source AI Fellowship, he is helping government teams to build new artificial intelligence tools that improve public services and support national security.

Professor Elena Simperl, Professor of Computer Science, Co-Director of the King’s Institute for Artificial Intelligence, and Director of Research at the Open Data Institute, said: "It’s time to move from seeing what AI can do, to understanding whether it’s genuinely useful for people – in their work, education, and communities." She emphasises that meaningful impact will depend on AI tools being tested on real tasks, affordable to run, and supported by open standards so that tools and data from different providers can work together. Simperl is also involved in major international initiatives that support more trustworthy AI in practice, including Participatory Harm Auditing Workbenches and Methodologies (PHAWM), a £3.5m project that develops participatory harm auditing workbenches to help regulators and end-users without technical backgrounds scrutinise and improve the AI systems that affect them.

The gap between AI capability and AI wisdom is widening. – Professor Oguz Acar, Professor of Marketing & Innovation and Head of Generative AI

Professor Oguz Acar, Professor of Marketing & Innovation and Head of Generative AI at King’s Business School, said: "We're getting better at building models that can do things, but not proportionally better at knowing what they should do, or who decides." He added that the summit should "grapple with the fact that the people building AI and the people affected by it are largely different populations" and consider "what does genuine participation look like when the technology moves much faster than democratic deliberation?"

Spotlight on sustainability, literacy and inclusion

AI innovation and sustainability are not co-benefits in many instances. – Dr Gabrielle Samuel, Lecturer in Environmental Justice and Health

For Dr Gabrielle Samuel, Lecturer in Environmental Justice and Health, discussions at the summit must confront the full social and environmental footprint of AI, not just its carbon emissions. She called for regulation "to move to slow science and prioritise responsible, sustainable AI innovation", and for an honest debate about AI’s role in sustainability that recognises the communities who bear the greatest costs. She warned that current debates about the so‑called ‘twin transition’ – the idea that digital transformation and green goals naturally reinforce each other – can overlook how "AI is being developed with a capitalist agenda that obscures social and environmental harms caused by the arms race countries are so obsessed with being part of."

Samuel’s concerns connect with wider work at King’s on the environmental sustainability of AI. For example, Samuel and King's colleague Dr Georgia Panagiotidou are exploring the effect of carbon trackers in machine learning teams, aiming to bring environmental considerations into the ML pipeline in a sustainability-by-design manner.

This summit should focus on improving AI literacy and ensuring equitable access to resources. – Dr Liane Canas, AI+ Fellow

Two priorities stand out for the summit, according to Dr Liane Canas, AI+ Fellow at King’s: improving people’s understanding of AI and ensuring fair access to the tools needed to develop and use it. "Many people, including policymakers and everyday users, often trust AI outputs without fully understanding their limitations, biases, or potential failure modes."

At the same time, she highlights that "researchers and institutions in low-resource environments often lack the computational infrastructure and datasets needed to develop robust and locally relevant models", making it harder for them to benefit from and shape AI. From the summit, she hopes to see "concrete strategies for regulation, public education on the usage and limitations of AI, strategic planning of AI development, and frameworks that ensure fairness across diverse populations."

Professor Elisabeth Kelan, Professor of Leadership and Organisation, said: "The AI Impact Summit offers an important opportunity to examine how technological innovation can support inclusion rather than reproduce existing inequalities." She emphasised that AI systems have "gendered and intersectional effects, shaping who benefits from automation and augmentation and whose work becomes undervalued or overlooked" and called for "clearer frameworks for assessing differential impacts and stronger commitments to designing AI systems that expand rather than narrow pathways to equitable participation."

These questions are echoed across King’s in research and events that examine how AI is entangled with power and inequality, for example Dr Nessa Keddo’s work on the relationship between race, media and AI technology and Dr Christoffer Guldberg’s work on AI as a decolonial tool for peace and justice.

AI and workforce futures

I worry about what AI means for the future of collective domains of professional expertise if we increasingly use AI to augment and substitute core tasks. – Professor Damian Grimshaw, Professor of Employment Studies

For Professor Damian Grimshaw, Professor of Employment Studies at King’s Business School, the summit should also address what AI means for professional integrity and careers. "One of my major concerns about AI is that many employees with strong professional expertise may feel pressured to use an AI tool to speed up their work (such as to keep up with their team or under direction from a line manager) but at the expense of professional integrity concerning the quality and/or honesty of the service they are providing or the product they are making," he said. "We may also be normalising lower levels of quality in the work we perform."

Professor Grimshaw also raised questions about how AI is reshaping workplace relationships and career paths. "We already see that people are less likely to ask for human advice about how to improve a skill since asking an AI tool is easier," he said. "We also see that senior employees may prefer to use an AI tool to undertake junior-level tasks rather than hire younger people to work with. Our social relationships are changing and career paths are changing with uncoordinated use of AI technologies."

Research from King’s shows that AI exposure is already having an early impact on labour market trends, and that there is an urgent need for coordinated interventions across education, training and workforce development. AI and Workforce Futures is King’s programme responding to this challenge. Acting as a national convenor of debate and policy insight, it brings together the university’s world-leading AI research, policy expertise, employer partnerships and cross-faculty strengths to shape how the UK prepares for the future of work.

Dr Caitlin Bentley, Senior Lecturer in AI Education, brings a workforce lens to these questions as well. "A key challenge I hope the Summit will address regards how to ensure that increasing AI deployment provides equitable benefits across all communities and regions," she said. She argued that meaningful outcomes will require addressing "workforce transformation at scale" and stressed the need for governance frameworks that "emphasise people’s collective agency in navigating these transitions, rather than merely focusing on addressing instrumental 'skills gaps'."

King's representation in India

As Dr Caitlin Bentley joins delegates in New Delhi this week, she does so as a leading voice in responsible AI education and Deputy Chair of the Skills Pillar for UKRI Responsible Artificial Intelligence UK (RAi UK). Caitlin sees the meeting as a test of whether global AI debates can move beyond rhetoric.

The key concern for me is whether the Summit will genuinely redistribute power towards increasing collective agency, or reinforce existing inequalities under the banner of innovation and progress. – Dr Caitlin Bentley, Senior Lecturer in AI Education

Caitlin’s participation, alongside the reflections of King’s colleagues across disciplines, underlines how King’s academics are engaging with debates on the global stage about how AI can be developed and governed in ways that are useful, just and trustworthy.

In this story

Shan Luo

Reader in Engineering

Elena Simperl

Professor of Computer Science

Oguz A. Acar

Professor of Marketing & Innovation

Gabrielle Samuel

Lecturer in Environmental Justice and Health

Liane Canas

AI+ Academic Senior Fellow

Elisabeth Kelan

Professor of Leadership and Organisation

Damian Grimshaw

Professor of Employment Studies

Caitlin Bentley

Senior Lecturer in AI Education

AI Insights

Reflections, commentary and analysis from artificial intelligence researchers and academics at King's College London.
