
Reimagining AI Futures - Dr Sanjay Modgil: Why AI Regulation calls for an Interdisciplinary Approach - Part 1

The information apocalypse, a polluted ecology of fact and a technological amplification of cognitive biases. Dr Sanjay Modgil discusses how to thread the needle between over- and under-regulation of AI, and how a SAGE-AI could help mitigate damaging long-term impacts.

The lid has been prised loose from Pandora’s Box, affording us a glimpse of the coming age of Artificial Intelligence (AI); an age that will dwarf the transformative impact of earlier technological revolutions. The potential benefits are enormous. From radically improving healthcare to revolutionising workplace productivity and enabling the green energy transition, AI is set to transform our experience of the world.

AI risks and current legislation

But there are legitimate concerns about the risks of AI. Indeed, researchers, developers and business leaders alike have called for more focus on regulating AI so as to mitigate long-term existential threats. These include AI’s role in subverting concepts such as ‘truth’ that underpin social cohesion and the possibility of collective action to address global threats such as climate change.

Indeed, recent startling advances in the capabilities of large language models (LLMs) such as ChatGPT, which almost no one expected to see on so short a timescale, have prompted calls to pause further development of large-scale AI systems. However, existing regulation is ill-equipped to deal with the unconstrained use of LLMs to generate misinformation that will massively pollute our already contaminated informational ecology. The prospects for reactive regulation are also slim, given the widespread availability of LLMs and the pressure on companies not to fall behind in the race to profit from their commercial potential.

An interdisciplinary approach to AI regulation

But should regulatory proposals, such as those currently put forward by the UK and EU, err on the side of being light-touch? Perhaps a more nuanced understanding of how AI may amplify existential risks can help answer this question and inform a more imaginative approach to AI regulation.

Such an approach would, in contrast to these current proposals, be centred around an interdisciplinary advisory group similar to the body that steered Britain through the recent pandemic – a SAGE-AI, if you will. In addition to AI researchers and technologists, this group would include experts such as anthropologists, philosophers, economists and psychologists, as well as social and cognitive scientists.

The role of such a SAGE-AI would be to promote and monitor ongoing interdisciplinary research into the short-, medium- and long-term societal impact of AI. Such a body would review and consolidate this research to advise regulatory authorities, while continually engaging with AI researchers, developers and businesses. The hope would be that a SAGE-AI helps shape AI regulation that anticipates AI’s development and uses before systems are launched and made widely available.

“The lid has been prised loose from Pandora’s Box, affording us a glimpse of the coming age of Artificial Intelligence (AI); an age that will dwarf the transformative impact of earlier technological revolutions.” – Dr Sanjay Modgil

AI, confirmation bias and poisoning the internet

Consider the significant societal challenges we are already facing: the polarisation of societies into rival “tribes” with increasingly entrenched political and cultural beliefs. Could regulation, informed by a SAGE-AI, have helped mitigate the role that social media’s use of AI filtering and recommendation algorithms has played in exacerbating our contemporary post-truth, polarised predicament? To answer this question, consider the following interdisciplinary understanding of how these algorithms effectively operationalise the confirmation bias – the human instinct to selectively attend to evidence and opinion that support, and so further entrench, our beliefs.

Our distant ancestors lived in small farming communities, in which the confirmation bias might have served to entrench these groups’ shared tribal beliefs; that is, beliefs relating to values, governance, resource allocation, religion, mythology etc. The effect would be to strengthen bonds amongst tribal members, and so promote cooperation and a shared resolve to repel incursions from rival groups. We have thus evolved to experience dopamine-mediated rewarding feelings when our tribal beliefs are confirmed.

Our ancestors relied only on each other to mutually reinforce tribal beliefs. With the internet, however, the available information is not only vastly greater, but has the potential to expose us to misinformation and extremist views on an unparalleled scale. Moreover, the “attention economics” of the internet, and in particular of social media platforms such as Facebook, has incentivised the way this vast repository of online information is filtered for our consumption.

Our search and click histories effectively provide a profile of the tribal beliefs we engage with. Algorithms then selectively feed us more of the same, and the rewarding feelings that accompany confirmation and reinforcement of our cherished tribal beliefs entice us to spend more time online, increasing our exposure to revenue-generating adverts. Thus, AI filtering and recommendation algorithms are technological incarnations of our innate confirmation bias, selectively feeding, confirming and entrenching our existing opinions.
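To make this feedback loop concrete, here is a minimal, illustrative sketch of a recommender that scores content purely by how well it matches a user’s inferred belief profile. It is a toy model, not any platform’s actual system: the item format, topic tags and scoring rule are all invented for illustration.

```python
# Toy sketch of an engagement-driven recommender that operationalises
# confirmation bias. Illustrative only: the item format, topic tags and
# scoring rule are invented, not any real platform's algorithm.
from collections import Counter

def build_profile(click_history):
    """Infer a belief profile as topic frequencies from past clicks."""
    counts = Counter(item["topic"] for item in click_history)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def recommend(candidates, profile, k=3):
    """Rank candidates by match with the existing profile, so
    belief-confirming items surface first; each resulting click then
    skews the profile further -- the feedback loop described above."""
    return sorted(candidates,
                  key=lambda item: profile.get(item["topic"], 0.0),
                  reverse=True)[:k]

if __name__ == "__main__":
    # A user whose click history leans heavily towards one "tribe".
    history = [{"topic": "tribe_A"}] * 8 + [{"topic": "tribe_B"}] * 2
    profile = build_profile(history)
    candidates = [
        {"id": 1, "topic": "tribe_A"},
        {"id": 2, "topic": "tribe_B"},
        {"id": 3, "topic": "tribe_A"},
        {"id": 4, "topic": "neutral"},
    ]
    # Belief-confirming tribe_A items dominate the recommended feed.
    print(recommend(candidates, profile))
```

Because the only objective here is engagement, nothing in the loop rewards accuracy or diversity of viewpoint; run repeatedly, it narrows the profile a little more with every click.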

“Unflinching disagreement about the facts undermines the path forward when dealing with known existential threats like pandemics. Are we confident, for example, that there would be sufficient vaccine uptake to guarantee herd immunity when faced with a far more virulent outbreak than COVID-19?” – Dr Sanjay Modgil

In concert with the increasing amounts of online fake news and misinformation, and a host of other societal developments, these algorithms may then lead us down rabbit holes to ever more extreme versions of these beliefs, polarising societies to the detriment of societal well-being.

In the absence of effective regulation, the content available for reinforcing and radicalising tribal beliefs is set not only to increase by orders of magnitude, but also to become massively more polluted, given the widespread availability of LLMs for generating unlimited amounts of misinformation and fake content that is then posted online.

In short, without effective regulation today, an information apocalypse is nigh.

See part two for what an interdisciplinary approach to regulation could achieve.

In this story

Sanjay Modgil, Reader in Artificial Intelligence
