05 August 2025

The age of AI Sci: how GenAI and LLMs are reshaping the foundations of science

A newly published paper explores the influence of GenAI on how scientists practice science today and what it means for the scientific method and scientific discovery going forward.

From drafting papers and automating literature reviews to designing experiments and even generating new hypotheses, artificial intelligence (AI) is transforming the foundations of scientific practice, according to a new review published in npj Artificial Intelligence, a Nature Portfolio journal.

The review was led by Dr Hector Zenil, Associate Professor / Senior Lecturer at King’s College London’s School of Biomedical Engineering & Imaging Sciences, together with a team of international researchers. It explores the influence of generative AI (GenAI), and specifically of large language models (LLMs), on how scientists practice science today and what it means for the scientific method and scientific discovery going forward.

In the thought-provoking perspective paper, the researchers explain that, far from being futuristic speculation, GenAI is here now, in every corner of scientific practice, with real implications for the role of human understanding in science.

LLMs can explore vast datasets and use that information to generate hypotheses that would otherwise be impossible for humans to develop. This capability could mean the GenAI-human feedback loop soon closes entirely, with systems that design and carry out experiments with little to no human intervention a likely future development.

Still, while these technologies may generate useful results, they are limited in their capacity to meaningfully describe the mechanistic and causal principles behind those results, as the paper’s authors note through the example of DeepMind’s AlphaFold system.

AlphaFold famously helped solve one of science’s great challenges – the protein folding problem – by predicting a protein’s final shape with remarkable precision. Without the contribution of human expertise, however, it was unable to offer any new insight into why proteins fold as they do.

This makes AlphaFold, and other GenAI technologies like it, powerful predictive tools, but ones that have so far contributed little in the way of new fundamental knowledge or first principles to the textbooks.

Accepting an age of AI Sci and LLMs, then, would seem to mean accepting an age of ‘post-explanatory’ science, in which the predictive power of scientific discovery is valued over its ability to further our understanding of the world, the paper’s authors suggest.

Along with GenAI’s limitations in contributing new knowledge, the paper also explores other risks, including the likelihood of carrying past errors, biases, or blind spots into new science, since the platforms are built upon large existing datasets, as well as concerns about reproducibility, intellectual ownership, and the deskilling of researchers.

The researchers remain optimistic about the opportunities of AI Sci, however, seeing its integration into scientific practice not as a replacement for human thinking but as a powerful extension of it – one that could open new research possibilities that might otherwise have taken years to pursue.

According to the paper, the scientific community’s challenge now is to decide what kind of science we want to preserve in human hands, and what we are willing to delegate to machines.

This is no longer about whether AI can do science. It is about whether we understand the science that AI does — and whether that still counts as science at all.

Dr Hector Zenil, Associate Professor / Senior Lecturer, School of Biomedical Engineering & Imaging Sciences
