
28 April 2023

Ethics of AI-Based Medical Tools: In Search of Autonomy, Beneficence, Non-Maleficence & Justice

Read about King's work on the ethical implications of integrating AI-based medical tools into diagnosis and treatment, as featured in the Bringing the Human to the Artificial exhibition.


There are many examples of AI-based medical tools that can improve the understanding and management of health outcomes, supporting better patient diagnosis and treatment.

Yet as AI-based tools become a more common part of daily routine in the medical domain, practitioners and researchers must consider the ethical implications of integrating AI recommendations into treatment choices and the diagnosis of disease.

From the development of AI tools to their potential deployment in clinical care, we identify several ethical challenges that connect closely with the four ethical principles that should guide any medical professional's conduct. These are as follows.

Maximising patient autonomy in informed treatment decisions:
Respect for Autonomy

Acting in a patient’s best interests:
Beneficence

Avoiding harm by treating patients as ends in themselves:
Non-Maleficence

Distributing medical benefits fairly:
Justice

Among other issues, these principles raise questions around authority and ethical responsibility when physician and machine collaborate, around avoiding dehumanisation so that patients are not regarded as mere mechanical systems, and around transparency so that people can understand the rationale behind diagnoses and decisions.

Project Lead

Raquel Iniesta
Senior Lecturer in Statistical Learning for Precision Medicine, Department of Biostatistics & Health Informatics, Institute of Psychiatry, Psychology & Neuroscience, King’s College London

Funder

The National Institute for Health Research (NIHR) Maudsley Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London
