
Beyond silos: why AI regulation calls for an interdisciplinary approach

Are we facing an information apocalypse due to unregulated AI? What damage has already been done to society, and what should our next steps be? Dr Sanjay Modgil, Reader in Artificial Intelligence in the Department of Informatics, argues urgently for an interdisciplinary approach to harness the potential power of the technology and mitigate unforeseen risks.

The lid has been prised loose from Pandora’s Box, affording us a glimpse of the coming age of Artificial Intelligence (AI); an age that will dwarf the transformative impact of earlier technological revolutions.

The potential benefits are enormous. From radically improving healthcare to revolutionising workplace productivity and enabling the green energy transition, AI is set to transform our experience of the world and the human condition.

But there are legitimate concerns about AI risks. Indeed, researchers, developers and business leaders alike have called for more focus on regulating AI to mitigate long-term existential threats. These include AI’s role in subverting concepts such as ‘truth’ and the very idea that there are such things as ‘facts’: concepts that are required for societal consensus and for the possibility of collective action to address global threats such as climate change.

Indeed, the recent startling advances in the capabilities of large language models (LLMs) such as ChatGPT, advances that almost no one expected to see so soon, have prompted calls for a pause in the further development of large-scale AI systems.

However, existing regulation is ill-equipped to deal with the unconstrained use of LLMs to generate misinformation that will massively pollute our already contaminated informational ecology. The prospects for reactive regulation are also slim, given the widespread availability of LLMs and the pressure on companies not to fall behind in the race to profit from their commercial potential.

On the other hand, there are many who claim that warnings about long-term existential threats are overstated, that we are taking our eye off more immediate threats, and that unwarranted fear-mongering may result in over-regulation that then hampers AI innovation and the benefits that AI will bring.

But should regulatory proposals, such as those currently put forward by the UK and EU, err on the side of being light-touch?

Perhaps a more nuanced understanding of how AI may amplify existential risks can help answer this question and inform a more imaginative approach to AI regulation: an approach that, in contrast to these current proposals, is centred on an interdisciplinary advisory group (a SAGE AI, if you will, modelled on the UK’s Scientific Advisory Group for Emergencies) that, in addition to AI researchers and technologists, includes anthropologists, philosophers, psychologists, social and cognitive scientists, economists and representatives of civil society.

The role of such a SAGE AI would be to promote and monitor ongoing interdisciplinary research into the short-, medium- and long-term societal impact of AI. The body would review and consolidate this research to advise regulatory authorities, while continually engaging with AI researchers, developers and businesses. The hope would be that a SAGE AI could help shape regulation that anticipates AI’s development and uses before systems are launched and made widely available.

Consider a significant societal challenge we are already facing: the polarisation of societies into rival “tribes” with increasingly entrenched political and cultural beliefs. Could regulation informed by a SAGE AI have helped mitigate the role that social media’s AI filtering and recommendation algorithms have played in exacerbating our contemporary post-truth, polarised predicament?

Contrast this proposal with what is currently on offer. The recent UK white paper on AI regulation proposes empowering existing siloed regulators to devise tailored approaches to regulation for specific sectors. The EU AI Act proposes prescriptive legislation spanning sectors, focussing on existing “prohibited” and “high-risk” AI systems, while the US AI Bill of Rights sets out five high-level principles intended to guide AI development and use. But none of these proposals advocates a centralised mechanism for feeding analysis of AI’s societal impact through to those involved in regulation or working on AI R&D.

That said, the UK Government’s recent AI Safety Summit did facilitate multidisciplinary discussion around the societal impacts of AI, and the resulting ‘Bletchley Declaration’, signed by 28 countries, arguably demonstrates a commitment to a shared understanding of the opportunities and risks posed by frontier AI. Whether this feeds through to a more inclusive, multidisciplinary shaping of regulation remains to be seen.

GPT-4 and other powerful AI systems have been released, unregulated, into the wild. There is no going back. But going forward, we must minimise the risk of unexpected consequences and forearm ourselves against those we can anticipate, while also positioning ourselves to reap the transformative benefits of AI. To do that we need smart regulation. An interdisciplinary understanding of AI and its impact on society must be front and centre in our thinking about AI safety and regulation.
