
AI Pragmatism: The only way to growth (without the AI arms race)

More than two thousand years ago, the philosophers Plato and Aristotle anticipated the modern AI debate between Elon Musk and the EU.

Ever the realist, Plato argued that justice was like maths – that there existed an objective, ‘ideal’ form of every concept that could be worked towards. His student Aristotle argued that ideas like justice were socially constructed and contextually dependent – the ‘ideal’ was a dangerous myth.

These debates of Ancient Athens cast a long shadow. We see this contrast today in the rise of AI (artificial intelligence), which has sparked intense debates. Silicon Valley techno-utopians call for unfettered deployment of AI to solve all of society’s ills, clashing with sceptics and regulators who question the wisdom or even the possibility of ‘optimising’ social problems like inequality.

As these voices rage on, we get distracted from what really matters in this debate. Companies deploy larger and larger models doing more things, hoping to grab a larger slice of society’s pie and the profit margins that come with it – often without much-needed guardrails, ethical direction or consideration for the planet.

So how do we move forward in a polarised world, ensuring that we reap the benefits of AI, like better healthcare, while mitigating the poor decisions AI models can sometimes make?

The answer, unsurprisingly, is pragmatism – but not as we’ve seen before.

Pragmatism isn’t compromise

‘We need to find a way that works’ is something that politicians say a lot about AI – but it’s an unhelpful generalisation.

When solving any problem, you need to define your terms. The question we should be asking is, ‘What is the concrete problem we are trying to solve, and what does success look like? How do we solve it in a way that’s better than what we’re doing now?’

This necessitates setting clear targets and then evaluating how useful those targets are – that’s pragmatism.

Applying this to AI, if you take the example of AI speeding up loan applications, it is socially good if loans are approved more quickly so people can get their money faster. However, if you build an AI that just focusses on speed, it may unfairly reject people based on biases in the training data, which often includes traces of discriminatory policies, for example with respect to racial profiling.
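This dynamic can be sketched in a few lines of code. The data and the model below are entirely hypothetical, chosen only to illustrate the point: a loan model that is optimised simply to reproduce past decisions quickly will also reproduce any disparity baked into those past decisions.

```python
# Illustrative sketch with hypothetical data: if historical loan decisions were
# biased against group "B", a model optimised only to mimic those decisions
# (maximising throughput and agreement with the past) inherits the bias.

# Each record: (applicant_group, historical_decision).
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20
    + [("B", "approve")] * 40 + [("B", "reject")] * 60
)

def approval_rate(records, group):
    """Share of applicants in `group` who were approved."""
    decisions = [d for g, d in records if g == group]
    return sum(d == "approve" for d in decisions) / len(decisions)

# A "fast" model trained to replicate past outcomes carries the gap forward:
print(approval_rate(history, "A"))  # 0.8
print(approval_rate(history, "B"))  # 0.4
```

The point is not that speed is bad, but that speed alone is the wrong optimisation target: nothing in this objective ever asks whether the historical decisions were fair in the first place.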

While acknowledging that AI decision making can be biased and even dangerous, we also need to acknowledge that there are outcomes we can and do want to optimise for. AI makes mistakes because we often optimise for the wrong thing, but that doesn’t always have to be true.

The debate is too often split between two extremes – those who believe AI can change the world and don’t confront what’s at stake with untrammelled AI ‘efficiency’, and those who believe ‘optimisation’ is a dirty word and AI cannot improve any aspect of the human experience. Taken to its logical conclusion – abandoning AI altogether – this latter view risks throwing out the baby with the bathwater.

There is a middle ground, and in our latest paper we call this ‘Sociotechnical Pragmatism’. Pragmatism is effective because it scrutinises the measures of evaluation themselves, alongside the results of AI efficiency, and in the world of AI implementation you need both.

In other words, identifying what is and isn’t worth optimising for is vital in order to get the most benefit from AI. Pragmatism is not a begrudging compromise, but a progressive ideal – the only realistic way forward.

 

The dangers of blind AI scepticism

It’s important to note that AI can exhibit well-documented, harmful behaviour, such as autonomous vehicles misclassifying people with darker skin, increasing the risk of collision.

Yet it is unlikely that AI is going to stop in its tracks anytime soon. People will continue to iterate and deploy the technology. By refusing to acknowledge its promise, AI sceptics can fail to engage with meaningful and useful regulation – leaving power in the hands of potentially reckless technologists.

In the UK, there is currently only a piecemeal framework of official standards for high-risk AI applications such as in medicine. While promising faster diagnosis, these technologies also risk further entrenching inequalities in healthcare by codifying racial biases in treatment, negatively impacting minority groups. In these cases, we need AI sceptics to engage in the conversation about what rigorous improvement and optimisation targets to set for these models, and what guardrails to put in place where deficiencies cannot be overcome.

It’s been an argument thousands of years in the making, but pragmatic approaches do more to promote fair, explainable and useful AI than either industry accelerationists or academic naysayers.

Failing to engage seriously with policy discussions through the prism of pragmatism leaves reckless technologists free to knock down much-needed guardrails in AI development, at the world’s peril. We need pragmatism, and we need to judge it on its own merits.

About the author

Dr David Watson is a Lecturer in Artificial Intelligence in the Department of Informatics.

