28 April 2023
Ethics of AI-Based Medical Tools: In Search of Autonomy, Beneficence, Non-Maleficence & Justice
Read about King's work on the ethical implications of integrating AI-based medical tools into diagnosis and treatment, as featured in the Bringing the Human to the Artificial exhibition.
There are many examples of AI-based medical tools that can improve the understanding and management of health outcomes, supporting better patient diagnosis and treatment.
Yet as AI-based tools become a more common part of daily routine in the medical domain, practitioners and researchers must consider the ethical implications of integrating AI recommendations into treatment choices and the diagnosis of disease.
From the development of AI tools to their potential deployment into clinical care, we identify several ethical challenges that closely connect with the four ethical principles on which any medical professional should base their conduct. These are as follows.
Respect for Autonomy: maximising patient autonomy in informed treatment decisions.
Beneficence: acting in a patient's best interests.
Non-Maleficence: treating patients as ends in themselves.
Justice: distributing medical benefits fairly.
Among others, these principles raise questions around authority and ethical responsibility when physician and machine collaborate; around avoiding dehumanisation, so that patients are not regarded as mere mechanical systems; and around transparency, so that people can understand the rationale behind diagnoses and decisions.
Department of Biostatistics & Health Informatics, Institute of Psychiatry, Psychology & Neuroscience, King’s College London
The National Institute for Health Research (NIHR) Maudsley Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London