
Reimagining AI Futures - Dr Sanjay Modgil: A future of AI regulation beyond silos - Part 2

The information apocalypse still looms, but what other long-term effects could AI have? In the second part of his blog for Reimagining AI Futures, Dr Sanjay Modgil looks ahead to the range of issues a SAGE-AI could advise on.

In my last blog post I spoke about how social media was operationalising the evolutionary propensity to favour opinions that validate our own beliefs, and how recommendation algorithms are essentially technological incarnations of our innate confirmation bias. I ended by saying that this dangerous positive feedback loop of cherry-picked information could have dire consequences for our ability, as a society, to work together to solve existential problems like climate change.

We’ve seen how a relatively unsophisticated use of AI has, in combination with other societal developments, led to unexpected consequences. After all, the early internet pioneers harboured utopian dreams of an information superhighway that would erode barriers to a shared global vision, and not strengthen them!

As AI becomes more intelligent, and as we delegate more to it so that it acts with greater independence, we may reap further unexpected consequences. As AI becomes more integrated into human society, these consequences will become more difficult to control and undo (consider our current limited options for addressing the socially divisive impact of social media and the free-for-all use of LLMs).

But anticipating the long-term effects of a multi-faceted issue like AI’s impact on society, being able to “expect the unexpected”, will require an interdisciplinary SAGE-AI to shape regulation that strikes a balance between risky light-touch and overly restrictive heavy-handed approaches.

Contrast this proposal with what is currently on offer. The recent UK white paper on AI regulation proposes the empowerment of existing siloed regulators to come up with tailored approaches to regulation for specific sectors. The EU AI Act proposes prescriptive legislation, spanning sectors and focussing on existing “prohibited” and “high-risk” AI systems.

But neither has a centralised mechanism for feeding analysis of AI’s societal impact through to those involved in regulation or working on AI R&D. That said, the upcoming UK AI safety summit is scheduling multidisciplinary discussion around the societal impacts of AI.

It remains to be seen whether the summit will instigate a redrafting of the regulatory landscape, especially as the government’s Frontier AI taskforce is suggesting an advisory body primarily composed of technologists.

“Pandora’s box is open and ChatGPT4 and other powerful AI systems have been released. To minimise the risk of unexpected consequences, and forearm ourselves against those that we anticipate, while also reaping the transformative benefits of AI, we need smart regulation.” – Dr Sanjay Modgil

Looking to the future, what other issues could a SAGE-AI advise on? Philosophers, psychologists and researchers in Digital Humanities are now raising concerns about AI systems triggering the human ‘anthropomorphic’ instinct to ascribe rich human-like mental lives to entities, and in particular robots, that exhibit human-like behaviours.

While our interactions with robots promise to be transformative, there is also the potential for serious societal disruption without adequate and intelligent regulation. Could the use of robots to support human carers (e.g., in the under-resourced care sector) be more nourishing, and more acceptable to those being cared for, if their humanoid designs suggest that they genuinely care and experience empathy?

But then how would the widespread use of care robots affect the extent to which we humans feel responsible for caring for our elderly? Sex robots, on the other hand, will be designed to simulate arousal and reciprocal attraction. How will treating these humanoid robots as subservient ‘sex slaves’, while simultaneously thinking of them as conscious, affect their owners’ capacity to develop respectful sexual relationships with other humans, possibly leading to tragedy?

Could a SAGE-AI promote and consolidate research that seeks to answer these questions, and advise, for example, that we regulate against humanoid sex robots while advocating the use of humanoid robots in care settings?

Pandora’s box is open and ChatGPT4 and other powerful AI systems have been released. There is no going back. We must minimise the risk of unexpected consequences, and forearm ourselves against those that we anticipate, while also positioning ourselves to reap the transformative benefits of AI. To do that we need smart regulation. An interdisciplinary understanding of AI and its impact on society needs to be front and centre when it comes to thinking about AI safety and regulation.

In this story

Sanjay Modgil

Reader in Artificial Intelligence
