"We wanted to include only studies where AI could be used in clinical practice, with the wide variety of pathology that comes to the average hospital's radiology department, as opposed to AI that is only tested on curated datasets of normal scans and one type of abnormality, e.g. brain haemorrhage. Researchers in medical AI should therefore aim to train and validate AI models with data that is representative of the environment in which the AI will be deployed."
Dr Sid Agarwal, PhD Candidate, Cancer Imaging
14 June 2023
Study finds insufficient evidence to recommend AI for abnormality detection
A King’s College London study has found that Artificial Intelligence (AI) detection models are currently adequate only as tools to improve radiologist efficiency, not as replacements for radiologists.
Researchers from the Department of Cancer Imaging within the School of Biomedical Engineering & Imaging Sciences (Siddharth Agarwal, David Wood, Mariusz Grzeda, Marc Modat and Thomas C Booth) were part of the group that conducted the study.
The research group has released an AI model to detect all abnormalities on magnetic resonance imaging (MRI). Before doing so, the group wanted to understand all other AI models used clinically to detect abnormalities on either computed tomography (CT) or MRI, and to see whether implementing AI in clinical settings had resulted in improved patient outcomes. To that end, they performed a comprehensive systematic review.
The aim was to determine the diagnostic test accuracy of these models and to summarise the evidence supporting the use of AI models for first-line, high-volume neuroimaging tasks.
Contributing researcher Dr Sid Agarwal said: "Most studies evaluating AI models that detect abnormalities in neuroimaging are either tested on unrepresentative patient cohorts or are insufficiently well-validated, leading to poor generalisability to real-world tasks. We found it surprising that not many other groups had undertaken similar work; at the time of the review only two other studies had validated AI for MRI in clinical cohorts."
Of 42,870 records screened and 5,734 potentially eligible full texts, only 16 studies were sufficiently rigorous to be eligible for inclusion. Most studies were excluded because of either insufficient validation or unrepresentative clinical cohorts.
"AI models are available on the market; however, many have not been sufficiently well validated in clinical practice, and most have not demonstrated any tangible benefit to patients or healthcare systems. Whilst AI models have great promise, they need to be rigorously evaluated before they can be deployed safely in hospitals."
Dr Thomas Booth, Reader in Neuroimaging
The research paper has been published in Clinical Neuroradiology.