
Reimagining AI Futures - Dr Mike Cook: Building the Future of AI

What is AI? Who is it for and who stands to benefit? Senior Lecturer Dr Mike Cook examines the roots of the technology and how its control by corporate giants has shaped its narrative and development – ultimately asking what kind of AI future we want to see.

When most people think about artificial intelligence (AI), they don’t think about copy and paste. However, its creator, Professor Larry Tesler, had some words of wisdom that accurately convey the state of AI today: “AI is whatever hasn’t been done yet.”

He suggests that AI is anything that isn’t quite finished: any solution to a problem we haven’t fully figured out, whose place in society we’re not yet certain of. Twenty years ago, Google Maps would have been considered cutting-edge AI; now it’s simply a part of your phone.

This is important because it shows that AI is social as well as technical, and that its definition can change. It also shows us that AI is a tool, and like all tools it can be used or misused by those in charge.

What is AI today?

AI, as we know it today, is a branch of computer science that brings together techniques for solving problems which would otherwise require human-like intelligence. These technologies can be seen as a toolbox, with different tools for different problems.

The newest, shiniest tool in the box is machine learning. But just as you can’t build a house with only a drill, you can’t apply machine learning to every problem that needs an AI solution. If your problem requires spotting patterns in data and then applying those patterns to new data – language translation, for example – machine learning might be the perfect tool for the job. If, on the other hand, you want to make sure no one will be hurt by a digital medical device, automated theorem proving – using a computer program to mathematically prove how another program behaves – might be best.
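To make the pattern-spotting idea concrete, here is a minimal sketch of a model learning a pattern from past data and then applying it to new data. Python and the scikit-learn library are my own illustrative choices here, not something any particular AI system prescribes:

    # A toy illustration of machine learning's core idea: learn a pattern
    # from past data, then apply it to data the model has never seen.
    from sklearn.linear_model import LogisticRegression

    hours_studied = [[1], [2], [3], [4], [5], [6]]   # past data
    passed_exam   = [0, 0, 0, 1, 1, 1]               # known outcomes

    model = LogisticRegression()
    model.fit(hours_studied, passed_exam)            # spot the pattern
    print(model.predict([[4.5]]))                    # apply it to new data: prints [1]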

Who is AI for? (It’s for everybody)

The past ten years have seen massive improvements in certain kinds of AI that benefit from cash injections. The more servers or data you buy for a Large Language Model like the one behind ChatGPT, the better it becomes.


By generating hype and headlines through this spending power, a relatively small number of companies, like OpenAI and DeepMind, have been able to seize the AI market. Not only does this mean they have been able to control the messaging around AI, but they can also disproportionately influence governments’ approach to it – as at this year’s AI Safety Summit in the UK.

While some of these companies may present themselves as institutions for the public good, like all businesses they ultimately have one goal – making money.

With AI taking a larger role in our society, whether in the NHS, the wider public sector, or even in entertainment, having just a few companies control that technology is not a good thing. It threatens to make an AI future built for the very few. We need as many people as possible to be involved in design, development, and discussions around AI, to make sure that everyone is involved in shaping the future, not just the people who stand to profit.

Why everyone is entitled to an opinion about AI

It’s easy to think that a technology like AI is too complicated to understand, and that we therefore can’t have opinions on it. But that simply isn’t the case.

I may not know how to build a nuclear power station, but I’m still allowed to have an opinion on how it’s run and how it affects my life. Similarly with AI: we should tell governments and companies how we want it to impact our day-to-day lives.

We have a responsibility to educate ourselves, but misinformation and clashing media narratives can be hard to navigate. Universities like King’s have an important part to play in talking to people and addressing their fears, as well as giving them a sense of optimism for the future.

Governments also need to play a more active role. Regulation of AI is key, but it’s also important to educate the population so they feel safe and confident about the future being built around them.


Scientific discovery or company product?

Communication is increasingly important because of the way new AI systems are being presented to us: as scientific discoveries rather than what they really are – products.

If a new drug were to come out tomorrow and it hadn’t been tested to see if it was safe or had long-term side effects, we might ask some critical questions before we started using it. But with AI, we’re not asking those same questions. As it stands, we’re not sure whether the technology we’re increasingly relying on in our legal and health systems is safe, or what its long-term effects might be.

The result of this uncritical adoption is that these systems are making decisions about our lives without us really knowing if they’re safe or if we’re happy having them there.

The Future of AI

So, what can we do? If we think of AI as a single linear path of scientific discovery, then we’ve only got one road. But I believe we have a choice of where to go, and all of us need to feel empowered to say what kind of AI future we want, rather than leaving that to a few individuals in industry or government.

What we really need is transparency, and that will involve talking to one another and to the press honestly about AI and the dangers it poses today. It might also mean pushing back against the massive, difficult-to-understand, data-hungry models that we’re currently using to solve every problem.

If you understand how something works, you feel safer around it, more confident using it, and better able to think about what it means for your future.

All that’s left is to ask yourself: what AI future do you want?

 

In this story

Michael Cook
Senior Lecturer in Computer Science