
Reimagining AI Futures - Dr Elizabeth Black: Trustworthy AI

The AI revolution is here, but can it be trusted? Dr Elizabeth Black, Director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, talks us through what’s needed for us to trust AI, why safe and trusted also means diverse and inclusive, and the trade-offs we’ll face in a future governed by AI.

Artificial intelligence (AI) is poised to bring many benefits to society, from improving healthcare, to addressing climate change, to revolutionising workplace productivity.

Yet the way AI has the potential to impact us is complex. As individuals we’re increasingly seeing AI being used to decide things like whether we receive a mortgage or a job interview, and as a society AI has the capacity to shape the kind of work that’s available, the ideas we’re exposed to and how wars play out.

It’s vital that we stop and think about what we’re doing when we put these new technologies out into the world, to make sure that we can trust these systems to be safe and to do good.

What is safe and trusted AI?

When we talk about ‘safe’ AI we mean we have some formal, technical guarantees about its behaviour – assurances that it will behave as intended and that it won’t violate any safety constraints.
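To make the idea of a technical guarantee more concrete, here is a minimal sketch in Python of one simple technique, runtime constraint enforcement: every action the AI proposes is checked against an explicit safety rule before it is executed. The controller, action type and speed-limit constraint are hypothetical stand-ins, not any real system.

```python
# A minimal sketch of enforcing a safety constraint at runtime:
# every action an AI controller proposes is checked against an
# explicit, human-readable rule before it is executed.
# The action type and the constraint are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    speed_kmh: float  # speed the controller wants to command

SPEED_LIMIT_KMH = 30.0  # an explicit, auditable safety constraint

def is_safe(action: Action) -> bool:
    """The safety property we can state (and check) precisely."""
    return action.speed_kmh <= SPEED_LIMIT_KMH

def safe_execute(action: Action) -> Action:
    """Refuse unsafe actions; fall back to a known-safe default."""
    if is_safe(action):
        return action
    return Action(name="slow_down", speed_kmh=SPEED_LIMIT_KMH)

# The AI controller is a black box from the monitor's point of view.
proposed = Action(name="overtake", speed_kmh=45.0)
executed = safe_execute(proposed)
print(executed)  # Action(name='slow_down', speed_kmh=30.0)
```

Real safety assurance goes much further than this – formal verification, for instance, aims to prove properties of a system before deployment – but even a simple monitor like this shows what an explicit, checkable safety constraint looks like.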

For an AI system to be ‘trusted’, we need well-placed confidence in the decisions it makes and the impact it will have. This means the system needs to be able to explain – to people from all walks of life – why it has made a certain decision. It also means that people need to be aware of when and how AI is being used – say, if AI is being used to decide whether they get a mortgage.

Part of that trust means that we need confidence that the AI system is going to have a positive impact on our society – that it won’t cause harm, it won’t exacerbate existing inequalities or introduce new ones, and it won’t negatively impact our lives.

The work we do in the UKRI Centre for Doctoral Training in Safe and Trusted AI (CDT STAI) looks to build that trust through a broad spread of work, which includes looking at how AI explanations can be tailored to different socio-cultural groups, investigating how AI can promote positive interactions on social media, and considering how we can align AI behaviour with human values.

“Trust means confidence that an AI system is going to have a positive impact on our society – that it won’t cause harm, it won’t exacerbate existing inequalities or introduce new ones, and it won’t negatively impact our lives.” – Dr Elizabeth Black

How can we make AI that we can trust?

We need technical solutions to guarantee the safe behaviour of an AI system and to explain the decisions it makes. But we also need to reflect on the potential consequences of AI from a social perspective – how AI will impact the human values we care about.

These challenges can’t be solved in isolation. We need a holistic multi-disciplinary understanding of both the technical challenges around developing safe, trustworthy and responsible AI, as well as the wider human and societal implications.

Mitigating AI risk around the things we care about requires technical solutions to be built on an understanding of humans and their social environment – what makes a good explanation, how the safety of an AI system can be assured, which values are important – meaning we need to understand the wide-ranging ways AI might impact our lives. We need input from people with diverse experiences to ensure that AI innovation is driven by the needs and values of our whole society, so that its benefits are felt by all.

To do this we need a fundamental shift in the way we train AI experts. The UKRI CDT STAI bridges the gap between technical and social to train a generation of experts who have the multi-disciplinary skills to drive responsible AI development.

Challenges around safe and trusted AI

If AI isn’t safe and trustworthy, it can do significant harm in the world. We’ve seen, for example, that autonomous vehicles are more likely to hit pedestrians from certain ethnic groups because of a lack of training data featuring pedestrians with darker skin – a failure that reflects biases both in the way we gather data and in society at large.

One promising approach to delivering safe and trustworthy AI is bringing together the two main approaches to AI: data-driven and symbolic.

“What are the consequences of delegating more and more tasks to AI? Might we lose the ability to construct well-formed arguments, or to think critically about the world? We must think seriously about these questions now, before it’s too late.” – Dr Elizabeth Black

Data-driven machine learning approaches are trained on vast sets of data to learn to identify patterns, make predictions and generate content. These are the types of AI system you typically read about in the press – generative AI tools such as ChatGPT. Such systems are usually ‘black box’, meaning we can’t tell how the model comes to its decisions or verify its behaviour, and they will learn to replicate whatever biases and societal inequalities are captured in the data they’re trained on.
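As a toy illustration of ‘bias in, bias out’, the sketch below uses entirely made-up loan-approval data and the simplest possible data-driven predictor – per-group approval frequencies – to show how a skewed decision history becomes the model’s own policy:

```python
# A toy illustration (hypothetical data) of how a purely data-driven
# model reproduces whatever bias its training data contains. The
# "model" here just learns approval frequencies per group -- far
# simpler than a neural network, but the mechanism is the same.

from collections import defaultdict

# Historical decisions: (group, approved). Group A was approved far
# more often than group B for otherwise identical applicants.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + \
                [("B", 1)] * 30 + [("B", 0)] * 70

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in training_data:
    counts[group][0] += approved
    counts[group][1] += 1

def predicted_approval_rate(group: str) -> float:
    """What the 'model' learned from the biased history."""
    approved, total = counts[group]
    return approved / total

for group in ("A", "B"):
    print(group, predicted_approval_rate(group))
# A 0.8
# B 0.3  -- the historical disparity is now the model's policy
```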

Symbolic AI systems, on the other hand, have explicit representations of what they believe to be true about the world, and explicit reasoning procedures that they apply to those beliefs. This explicit model of reasoning makes it easier to see why a system reaches a decision, which suits it well to safety and trust: it more readily supports explanations and guarantees that a model is operating correctly and safely.
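The sketch below shows this in miniature, using illustrative facts and rules rather than any real system: a forward-chaining reasoner derives conclusions from explicit premises and can report exactly why each conclusion holds.

```python
# A minimal sketch of symbolic reasoning: explicit facts, explicit
# rules, and forward chaining that records *why* each conclusion was
# reached. The facts and rules are illustrative only.

facts = {"applicant_income_verified", "applicant_credit_history_good"}

# Each rule: (set of premises, conclusion).
rules = [
    ({"applicant_income_verified", "applicant_credit_history_good"},
     "applicant_low_risk"),
    ({"applicant_low_risk"}, "approve_mortgage"),
]

explanations = {}  # conclusion -> the premises that justified it

changed = True
while changed:  # forward-chain until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            explanations[conclusion] = premises
            changed = True

# Unlike a black box, the system can answer "why?" for each decision.
for conclusion, premises in explanations.items():
    print(f"{conclusion} because {sorted(premises)}")
```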

We need to find ways to combine data-driven and symbolic approaches in hybrid AI systems, to build scalable AI systems that people can verify and trust. Hybrid AI also has the potential to be more environmentally sustainable, by reducing the need for energy-hungry training on large datasets.
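One possible hybrid pattern – a sketch only, with hypothetical rule names and thresholds – is to let a learned model score each case while an explicit symbolic rule layer can veto the decision and give a human-readable reason:

```python
# A minimal sketch of one hybrid pattern: a (stand-in) data-driven
# model scores a case, and an explicit symbolic rule layer can veto
# or explain the final decision. All names and thresholds here are
# illustrative assumptions, not a real system.

def learned_score(application: dict) -> float:
    """Stand-in for a black-box model's output in [0, 1]."""
    return 0.9 if application["income"] > 30_000 else 0.2

def symbolic_checks(application: dict) -> list[str]:
    """Explicit, auditable rules that can override the learned score."""
    violations = []
    if not application["identity_verified"]:
        violations.append("identity must be verified")
    if application["age"] < 18:
        violations.append("applicant must be an adult")
    return violations

def decide(application: dict) -> tuple[bool, str]:
    violations = symbolic_checks(application)
    if violations:  # the symbolic layer vetoes, with a readable reason
        return False, "rejected: " + "; ".join(violations)
    score = learned_score(application)
    return score >= 0.5, f"model score {score:.2f}"

print(decide({"income": 50_000, "identity_verified": False, "age": 25}))
# (False, 'rejected: identity must be verified')
print(decide({"income": 50_000, "identity_verified": True, "age": 25}))
# (True, 'model score 0.90')
```

Because the veto rules are explicit, they can be audited and explained independently of the learned model – the kind of verifiability that hybrid approaches aim for.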

What we might stand to lose with AI

Large language models like ChatGPT have shown the potential AI has to change the way we do things, with people using them to write emails and essays, and companies replacing customer service functions with them.

But what are the consequences of delegating more and more tasks like this to AI? Might we lose the ability to construct well-formed arguments, or to think critically about the world? How will our relationships with other humans be impacted? What will AI mean for the types of work that we do? We must think seriously about these questions now, before it’s too late.

Undeniably, AI has the potential to significantly benefit society, if we get it right. We need to be able to trust that AI will uphold and promote the values that are important to us, like fairness, safety, accountability and privacy. We need to make sure that AI will empower us to tackle the societal challenges we face, not create new ones.

In this story

Elizabeth Black

Reader in Artificial Intelligence
