02 April 2026

AI could soon run science itself, researchers suggest

A new paper argues that future AI systems may transform science by evolving from a tool into an autonomous discoverer.

Published in the journal Frontiers in Artificial Intelligence, the review suggests that ‘closed-loop’ AI systems could one day carry out the entire scientific method independently, from generating hypotheses to running experiments and refining theories.

“We are moving towards systems that can not only assist science, but will actively do science without a scientist in the loop,” said Dr Hector Zenil, Senior Lecturer/Associate Professor at King’s Institute for AI and the School of Biomedical Engineering & Imaging Sciences, who led the review, produced by a consortium of world leaders in AI for Science.

Traditionally, scientific discovery follows a series of steps: identifying patterns in data, proposing hypotheses, designing and running experiments, analysing results and refining explanations. While AI has already been integrated into many of these steps, generating hypotheses and interpreting results still falls largely to human researchers.

Advanced AI systems could help close this loop, linking these stages into an iterative process where hypotheses, experiments and analysis continually inform one another in an ‘agentic’ manner. For example, AI could propose hypotheses based on data, design potential experiments or simulations to test them, analyse the outcomes and refine the models in repeated cycles.
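To make the idea concrete, here is a minimal sketch of such a closed loop, under toy assumptions: the function names (propose_hypotheses, run_experiment) and the simulated “experiment” are hypothetical illustrations for this article, not the paper’s method.

```python
# A minimal, illustrative closed-loop discovery cycle: hypothesise,
# experiment, analyse, refine. All names and the toy "experiment"
# are hypothetical stand-ins, not taken from the reviewed paper.

import random

def propose_hypotheses(best_guess, spread, n=8):
    """Generate candidate hypotheses (here: candidate parameter values)."""
    return [random.gauss(best_guess, spread) for _ in range(n)]

def run_experiment(hypothesis):
    """Toy 'experiment': score a hypothesis against a hidden ground truth."""
    TRUE_VALUE = 3.14159  # unknown to the loop in a real setting
    return -abs(hypothesis - TRUE_VALUE)  # higher score = better fit

def closed_loop(cycles=20):
    best_guess, spread = 0.0, 5.0
    for cycle in range(cycles):
        candidates = propose_hypotheses(best_guess, spread)     # hypothesise
        results = [(run_experiment(h), h) for h in candidates]  # experiment
        score, best_guess = max(results)                        # analyse
        spread *= 0.8                                           # refine the search
        print(f"cycle {cycle:2d}: best hypothesis {best_guess:.4f}")
    return best_guess

if __name__ == "__main__":
    closed_loop()
```

In a real agentic system, the proposal step would be driven by a learned model rather than random sampling, and the experiment would be a physical assay or simulation rather than a scoring function; the point is only the repeated hypothesise-experiment-analyse-refine structure.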

However, the researchers suggest that when this happens, scientists may no longer be in control.

Such systems could explore vast numbers of hypotheses far beyond human capacity, potentially leading to breakthroughs in many fields. But the authors caution that this shift raises challenges.

Science may soon face a situation similar to discovering extraterrestrial intelligence. Beyond LLMs, future AI systems may explore hypothesis spaces so vast and far removed from human intuition that human scientists may never fully reach or catch up.

Dr Hector Zenil, Senior Lecturer/Associate Professor at King’s Institute for AI and the School of Biomedical Engineering & Imaging Sciences

This has been described as a form of ‘alien science’, where models, explanations or discoveries work in practice but are difficult for humans to understand.

Rather than replacing scientists, the researchers argue that the most realistic (and desirable) scenario is human–machine collaboration. AI systems may help explore large spaces of possible explanations or experimental strategies, while human researchers guide the overall goals and evaluate the significance of the results.

But this raises a question about how far scientists should be willing to rely on systems they may not fully understand. “In the future, we may have to choose between keeping scientific discovery within the limits of human comprehension or allowing machines to push knowledge into territories we cannot fully follow,” commented Dr Zenil.

The paper builds on previous work published in npj Artificial Intelligence.
