
Reimagining AI Futures - Sana Khareghani: AI for Everybody

From hidden consequences to worsening inequality, AI brings risks alongside its rewards. Sana Khareghani, former Head of the UK Government’s Office for AI and Professor of Practice, tells us how government, industry, academia and civil society can work together to shape AI governance that benefits all.

Whenever I hear about the creation of a new artificial intelligence (AI) application, I’m reminded of a piece of wisdom from 1993, well before the current ‘Age of AI’.

In Steven Spielberg’s ‘Jurassic Park’, Jeff Goldblum’s character delivers this line while surveying the theme park gone wrong: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should”.

Like the dinosaur creators in question, AI practitioners today need to stop and ask themselves before designing their next model, “Does the world need this?”

Responsible AI

Responsibility is paramount when it comes to AI, and that needs to be an end-to-end consideration. Fairness, accountability and transparency need to be built into the design process of any AI model, including how data is collected and processed. Researchers need to consider whether their models will make a responsible impact on the world around them, and work to limit harms such as damaging bias.

An AI model designed to parse job applications, for example, must be built with active steps to rectify biases such as limited female representation in historical training data. A failure to do so would produce a model that reinforces damaging gender inequalities in the labour market: an irresponsible impact.
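What those active steps look like will vary, but one of them is auditing the historical data before any model is trained on it. Below is a minimal illustrative sketch of such a check; the records, group labels and the 80 per cent threshold (the common ‘four-fifths rule’ heuristic) are all hypothetical, not a method the author prescribes or a complete fairness test.

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, was_hired)
records = [
    ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False),
    ("male", True), ("female", False), ("male", False),
]

# Count applications and successful outcomes per group
applications = Counter(gender for gender, _ in records)
hires = Counter(gender for gender, hired in records if hired)

# Selection rate per group: hires / applications
rates = {g: hires[g] / applications[g] for g in applications}
highest = max(rates.values())

for group, rate in rates.items():
    # "Four-fifths rule" heuristic: flag any group whose selection rate
    # falls below 80% of the most-selected group's rate. This is a rough
    # screening signal for bias in the data, not a legal threshold.
    if rate < 0.8 * highest:
        print(f"Warning: selection rate for '{group}' is {rate:.0%}, "
              f"below 80% of the highest group rate ({highest:.0%})")
```

A check like this flags skew in the training data itself, so designers can rebalance or correct it before the model learns, and repeats, the historical pattern.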

Who is working to make AI infrastructure, adoption and governance responsible?

Three bodies work to ensure that responsible AI is the only AI we see in the UK. The first is academia, where developing responsible AI has always been at the heart of the work, whether at King’s or any other institution.

The second is government, which is both working in partnership with academic institutions to fund initiatives driving responsible AI, and shaping regulation that puts responsible AI guidelines in place.

The last is industry, which is leading the way when it comes to sharing access to data and powerful computing technology. We hear a lot about ethics from industry now because technologists are moving faster than regulators. ChatGPT, for example, learned to outperform other machines at complex tasks like composition in a matter of months. This speed requires industry to keep its ethical and responsibility hats on, remembering that just because we can, doesn’t mean we should.

“Like the scientists of Spielberg's famous 'Jurassic Park', AI practitioners today need to stop and ask themselves before designing their next model - does the world need this?” – Professor Sana Khareghani

AI technologies have the power to transform productivity in the workplace and beyond, but the scale of that impact will depend on how many people adopt them. For that to happen, the public need to feel confident that they understand AI technologies, and that these technologies are (a) safe and responsible, and (b) helping address the challenges they face.

As it stands, we’re not doing a good job of communicating about these technologies, or of including civil society enough in the creation of solutions. To rectify this, the onus is on government, industry and AI experts to do the hard work of clearly communicating what these technologies can do and the impact they could bring to people’s lives, and of better including civil society in the discourse. Without that, adoption of this technology will be limited, and so will its benefits.

This involves a delicate balancing act, as the adoption of AI comes with both risk and reward.

The hidden costs of AI – can it be responsible?

Artificial intelligence technologies could split open the digital divide between the Global North and Global South, with consequences for the global impact of AI.

AI and digital technology cannot be a priority for many countries in the Global South as they grapple with more immediate concerns like security, poverty, health and welfare. However, uneven funding of AI and uneven access to the digital platforms that generate data, which in turn trains AI models, threaten to cut much of the global population out of the conversation.

“By having only a minority of the global population represented in the design process of AI, as well as the data sets used to train them, we miss out on the creative problem solving that’s part of the rich tapestry of different lived experience. We risk having AI solve the problems of the few rather than the many.” – Professor Sana Khareghani

By having only a minority of the global population represented in the design process of AI systems, as well as in the data sets used to train them, we miss out on the creative problem solving that’s part of the rich tapestry of different lived experience. This means the problems AI technologies can solve will be the problems of the few rather than the many, or worse yet, we will have solutions looking for problems rather than ones that directly address the challenges people are experiencing.

Inclusive international collaborations are vital to bridge these gaps and ensure fair representation of the whole global community, and countries like the UK need to take an active role.

AI also has large-scale potential costs for the environment. It’s well documented that large, energy-hungry models and the data needed to train them are adding to worldwide carbon emissions. As we look at the scale of the solutions we’re building, we need to ensure they are scaled appropriately to the problems we’re trying to solve, and that responsible AI is environmentally responsible too.

Ultimately, these are exciting times. Computer scientists, engineers, and everybody in between are excited by the change we can achieve using these tools. But we can’t just rush into a solution.

We need to stop and think about the impact this work might have on the wider world, from entrenching bias to carbon emissions. Responsibility starts with people, and it’s the work of all of us across academia, government, industry and civil society to come together to secure an equitable AI future for all.
