
28 March 2024

Study examines the framework guiding development and regulation of AI

In a new study, Dr Mehmet Ismail, from the Department of Political Economy at King’s, offers new insights into the theoretical boundaries of Artificial General Intelligence (AGI).


His research, Exploring the constraints on artificial general intelligence: a game-theoretic model of human vs machine interaction, introduces a novel game-theoretic approach to understanding AGI.

The prospect of AGI has sparked intense debate among AI researchers and practitioners, as well as philosophers, ethicists, policymakers, and the general public. Some view superhuman AI as a desirable and inevitable goal, potentially bringing unprecedented benefits to humanity. Others warn of the existential risks and moral dilemmas that superhuman AI could entail.

In 2015, an open letter signed by more than 150 prominent AI experts called for more research on how to maximize the societal benefits of AI systems and ensure the alignment of superhuman AI with human values and interests.

AGI represents the frontier of AI research, aiming to create machines that surpass human intelligence across all domains. “The starting point of my paper is that for an AI system to be called general it should reach ‘superhuman performance’ not only in zero-sum games but also in general-sum games, where the outcome isn't just about winning or losing,” said Dr Ismail.

To date, AI systems have achieved superhuman performance only in zero-sum games such as chess and backgammon, where winning or completing a task has a clear meaning. Dr Ismail adds: “Despite this, most economic and social interactions are not zero-sum, and ‘winning’ is not well-defined in general-sum games.”
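To make the distinction concrete, here is a minimal sketch using two textbook games that are not taken from the paper: matching pennies, a zero-sum game in which one player’s gain is exactly the other’s loss, and the prisoner’s dilemma, a general-sum game in which it is not.

```python
# Illustrative sketch (standard textbook games, not from the paper).
# Each entry maps an action pair to (row player's payoff, column player's payoff).

matching_pennies = {
    ("Heads", "Heads"): (1, -1),
    ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1),
    ("Tails", "Tails"): (1, -1),
}

prisoners_dilemma = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"): (0, 5),
    ("Defect", "Cooperate"): (5, 0),
    ("Defect", "Defect"): (1, 1),
}

def is_zero_sum(game):
    """A game is zero-sum if the two players' payoffs cancel out in every outcome."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(matching_pennies))   # True:  "winning" has a clear meaning
print(is_zero_sum(prisoners_dilemma))  # False: outcomes are not simply win or lose
```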

A central contribution of Dr Ismail’s study is the establishment of a game-theoretical definition of AGI within general-sum contexts, which is crucial for understanding AI-human dynamics beyond mere zero-sum competition.

The central question he poses is: ‘What conditions are necessary for the existence of AGI?’

The research, published in the journal Mathematical Social Sciences, explores this question under four key assumptions about human vs machine interaction: (1) the human player is rational, (2) is strategically unpredictable, (3) has access to the machine’s strategy, and (4) the machine is a superhuman AGI system.

Dr Ismail shows mathematically that at most three of these four assumptions can hold at once: applied simultaneously, all four lead to a logical contradiction. Put differently, if the first three assumptions are satisfied for the human player, an AGI cannot exist.

In simpler terms, if the human player does not know how to choose the best action, does not have access to or understand the machine’s strategy, or if the player’s actions are predictable, then an AGI system may outperform the human player in all possible zero-sum and general-sum games.
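The flavour of the tension can be seen in a toy example (an illustration only, not the paper’s proof): in a zero-sum game such as matching pennies, a rational human who knows the machine’s mixed strategy can always best-respond, so the machine can never expect to come out ahead of that human.

```python
# Illustrative sketch (not from the paper): matching pennies from the human's side.
# The human scores +1 when the coins match, -1 otherwise; the machine gets the negative.

HUMAN_PAYOFF = {
    ("Heads", "Heads"): 1, ("Heads", "Tails"): -1,
    ("Tails", "Heads"): -1, ("Tails", "Tails"): 1,
}

def human_best_response_value(p_machine_heads):
    """Expected payoff of a human who knows the machine plays Heads with
    probability p and picks the pure action that maximises their expectation."""
    expected = {
        human: p_machine_heads * HUMAN_PAYOFF[(human, "Heads")]
        + (1 - p_machine_heads) * HUMAN_PAYOFF[(human, "Tails")]
        for human in ("Heads", "Tails")
    }
    return max(expected.values())

# Whatever mixed strategy the machine commits to, the informed, rational human's
# best-response value is never negative, so the machine cannot strictly outperform
# this human even in a simple zero-sum game.
for p in (0.0, 0.3, 0.5, 0.9):
    print(p, human_best_response_value(p))  # always >= 0
```

This is only a single game, but it shows why assumptions (1) to (3) about the human sit uneasily with a machine that outperforms humans everywhere, which is the kind of contradiction the study formalises.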

It is up to policy-makers to decide to what extent such an overachieving AI system is in the best interests of society, and to what extent an institutional framework should be established for interactions between humans and AGI systems.

Dr Mehmet Ismail

By identifying the key assumptions behind an AGI system and examining their inconsistencies, Dr Ismail’s study contributes to a better understanding of the theoretical framework guiding AGI’s development and regulation.

You can read Dr Ismail’s study in full here: https://www.sciencedirect.com/science/article/pii/S0165489624000350#sec3

In this story

Mehmet Ismail

Lecturer in Economics