We used an 'understanding-informed' approach in this work, instead of using generic bias mitigation methods, and showed that this led to fairer outcomes without sacrificing accuracy. In other words, if you take the time to understand where bias originates, you can design models that work better for everyone, not just the majority.
Dr Tiarna Lee, PhD graduate, School of Biomedical Engineering & Imaging Sciences
10 February 2026
Study shows how to mitigate racial bias in AI used for heart imaging
King’s College London researchers have found that understanding the root causes of bias could help mitigate it in cardiac segmentation models.

AI is increasingly being used to aid medical diagnosis, prognosis and treatment planning. However, AI models have been shown to exhibit performance bias across demographic groups, potentially leading to inappropriate treatment choices and poorer outcomes.
This bias has been demonstrated in cardiac magnetic resonance (CMR) segmentation models, which partition a scan into distinct regions or segments to facilitate its analysis and subsequent diagnosis for the patient.
Past research led by Dr Tiarna Lee, a recent PhD graduate in the School of Biomedical Engineering & Imaging Sciences, investigated the root cause of racial bias in CMR segmentation models, finding that bias arises due to the influence of areas outside the heart itself, particularly subcutaneous fat and imaging artefacts created by the MRI scanner.
In the current study, Dr Lee and colleagues looked at ways to mitigate racial bias in CMR segmentation tools. They evaluated three established approaches for mitigating bias: oversampling, importance re‑weighting and group distributionally robust optimisation (Group DRO), as well as combinations of these methods. Oversampling, which increases the presence of underrepresented groups during training, was the most effective, significantly improving segmentation accuracy for black patients while maintaining performance for white patients.
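The oversampling idea can be sketched in a few lines: duplicate scans from underrepresented groups until every group appears as often as the largest one during training. This is a minimal illustration only, assuming samples carry a group label; the function and field names are hypothetical and not the study's code.

```python
import random

def oversample(samples, group_key, seed=0):
    """Resample minority groups so every group appears as often as
    the largest group during training (illustrative sketch)."""
    rng = random.Random(seed)
    groups = {}
    for s in samples:
        groups.setdefault(s[group_key], []).append(s)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)  # keep every original sample
        # draw extra copies (with replacement) to reach the target size
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Toy example: 6 scans from one group, 2 from another
data = [{"id": i, "group": "A"} for i in range(6)] + \
       [{"id": i, "group": "B"} for i in range(2)]
balanced = oversample(data, "group")
counts = {}
for s in balanced:
    counts[s["group"]] = counts.get(s["group"], 0) + 1
print(counts)  # both groups now appear 6 times each
```

In practice a deep-learning framework's weighted sampler would do this on the fly, but the balancing logic is the same.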
The team then investigated whether addressing the root cause of racial bias in CMR segmentation tools would help mitigate it. They created a pipeline in which the CMR images were first automatically cropped to focus solely on cardiac structures, eliminating the bias-inducing features outside the heart. They found that this pipeline improved performance of the segmentation tool for both black and white patients, and reduced bias. When cropping was combined with oversampling, the reduction in bias was even greater.
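The cropping step described above could be sketched as follows, assuming a separate heart-localisation step has already produced a bounding box. The function name, margin and box format are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def crop_to_heart(image, bbox, margin=10):
    """Crop a 2D CMR slice to a bounding box around the cardiac
    structures (plus a small safety margin), discarding regions
    outside the heart such as subcutaneous fat and edge artefacts."""
    y0, x0, y1, x1 = bbox
    h, w = image.shape
    # Expand the box by the margin, clamped to the image bounds
    y0 = max(0, y0 - margin)
    x0 = max(0, x0 - margin)
    y1 = min(h, y1 + margin)
    x1 = min(w, x1 + margin)
    return image[y0:y1, x0:x1]

# Toy example: a 256x256 slice with the heart localised near the centre
img = np.zeros((256, 256), dtype=np.float32)
cropped = crop_to_heart(img, bbox=(100, 110, 160, 170), margin=10)
print(cropped.shape)  # (80, 80)
```

The segmentation model then sees only the cropped region, so features outside the heart cannot influence its predictions.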
To test how these techniques would perform in real‑world clinical settings, the team evaluated the models on an external clinical validation dataset. All approaches demonstrated high segmentation performance and no statistically significant racial bias, underscoring the potential of relatively simple training adjustments to improve fairness in AI‑assisted cardiac care.

