04 November 2021

Autonomous systems in healthcare: why trust matters

Rachel Hesketh and Mark Kleinman

There is huge potential for autonomous systems to transform healthcare services, but there needs to be trust in the technology for it to succeed

Read the research: Trusted autonomous systems in healthcare: A policy landscape review

Rachel Hesketh is a Research Associate at the Policy Institute, King's College London, and Mark Kleinman is Professor of Public Policy at the Policy Institute, King's College London.

Autonomous systems – those that can take actions with little or no human supervision – are believed to hold huge promise for transforming health and care systems, improving patient outcomes, reducing costs and enabling new medical discoveries.

Despite their very wide range of potential applications, and high levels of development activity, these technologies are as yet little used in health and care settings, and early deployments are likely to involve only the simplest of them. This presents policymakers with both questions and opportunities: questions about what barriers may be impeding adoption, and opportunities to consider fully, while there is still time to do so, the potential risks and drawbacks of these systems.

In both cases, issues of trust are central. Are there features of autonomous systems in health and care that undermine their trustworthiness in the eyes of the medical profession, patients and the public, and how can these be addressed (for example, through design or regulation)? Are there other reasons why, in practice, trust in these systems may be limited?

We’ve explored these issues in a new report for the UKRI Trusted Autonomous Systems (TAS) Hub – part of a £33 million TAS Programme, which aims to develop socially beneficial autonomous systems that are both trustworthy in principle and trusted in practice by individuals, society and government.

Building on the Nuffield Council on Bioethics’ work on AI in healthcare, we identified eight policy-related issues raised by the application of autonomous systems in health:

Reliability and safety: Systems can make mistakes and algorithms can contain errors, which may be difficult to spot and could be replicated at scale. The risk of automation bias, where busy healthcare professionals do not critically assess the outputs of autonomous systems, has also been raised.

Transparency and accountability: Some autonomous systems produce their outputs in opaque ways that cannot be interpreted by humans. These so-called “black box” systems pose questions around how to ascribe accountability and liability for errors.

Data bias, fairness and equity: AI models may embody biases that mean they do not deliver accurate predictions for some groups, which highlights the necessity of developing algorithms with high-quality, representative datasets. The introduction of autonomous systems in healthcare could also affect inequalities in access to care.

Public acceptance: Polling points to mixed public opinion on the use of autonomous systems in healthcare. What seems to matter most to the public is that AI technologies do not fully replace the clinician-patient relationship.

Effects on patients: There are concerns that, if autonomous systems begin to replace some patient-clinician interactions, some insights into patient health and wellbeing could be missed. The use of robots in care settings also raises a variety of questions for patient wellbeing.

Effects on healthcare professionals: Clinicians’ roles could be changed in undesirable ways if the use of autonomous systems leads to the de-skilling or sidelining of professionals.

Data privacy and security: Access to relevant data is clearly essential to develop AI technologies for use in health and care settings, but past incidents, such as the collaboration between the Royal Free NHS Trust and Google DeepMind, and the Care.data experience, may have dented public and professional confidence.

Malicious uses of AI: There is a risk that AI technologies could be used for surveillance or to gather information on people’s health without their knowledge, and concerns have also been raised about the vulnerability of autonomous systems to adversarial attacks and data breaches.

The understanding of “autonomy” is somewhat different in health compared with other settings. While an autonomous vehicle clearly has wide scope to make its own decisions and act on them, AI technologies in health are seen more as one input among many into clinicians’ decision-making, with responsibility remaining firmly in human hands. Fully autonomous systems, in which humans are removed from the decision-making process, are likely to materialise in healthcare contexts only in the distant future.

For autonomous systems to be trusted, and worthy of trust, in the sensitive context of health and care, policymakers and researchers will first need to consider a range of knotty issues. Some – such as questions of safety and bias, data privacy and cyber security – relate to systems that are likely to be deployed in the near term. Others, such as accountability for “black box” systems and humans’ interactions with robot carers, are more relevant over the longer term. Thinking carefully about these issues today will help ensure that the benefits of autonomous systems in health are shared by all, while the risks associated with them are managed.