
Trustworthiness by design: Developing specifications for Autonomous Systems

Professor Luc Moreau and Professor Mohammad Reza Mousavi

01 February 2024

Autonomous Systems are computer-controlled technologies that take decisions with little or no human intervention. They have the potential to be adopted across numerous domains, including transport and logistics through delivery robots and driverless cars, as well as healthcare through robot-assisted care. Given their huge growth in recent years, we are entering a future where they are no longer confined to structured industrial settings and will increasingly become part of our daily lives. As such, it is crucial that these systems are trusted and trustworthy.

Different research disciplines define trustworthiness in different ways, and nowhere is this more apparent than in understanding the trust relationship between humans and Autonomous Systems (AS). We consider AS to be trustworthy when the design, engineering, and operation of these systems generates positive outcomes and mitigates potential harms. This trustworthiness is built on many important factors, including:

  • explainability, accountability, and understandability to different users;
  • the robustness of AS in dynamic and uncertain environments;
  • the assurance of their design and operation through verification and validation (V&V) activities;
  • confidence in their ability to adapt functionality;
  • security against attacks on the systems, users, and deployed environment;
  • governance and regulation of their design and operation; and
  • consideration of ethics and human values in their deployment and use.

As such, engineering trustworthy and trusted AS involves different processes, technology, and skills than those required for traditional software solutions. Yet adaptation to these new requirements has been slow, with many practitioners in the AS and artificial intelligence (AI) domains learning through experience and failure rather than through rigorous and mathematically-founded techniques. Best practices have started to emerge, but there is increasing evidence of the need for rigorous specification techniques for developing and deploying AI applications.

Engineering trustworthy and trusted AS involves different processes, technology, and skills than those required for traditional software solutions. Yet adaptation...has been slow with many practitioners...learning through experience and failure rather than through rigorous and mathematically-founded techniques. – Luc Moreau and Mohammad Reza Mousavi

Even when not life-critical, actions and decisions made by AS may have serious consequences. If we are to use them in our businesses, at doctors' surgeries, on our roads, or in our homes, we must build AS that work for the people and domains in which they are deployed. However, specifying requirements for AS and AI remains more a craft than a science. For example, machine-learning (ML) applications are often specified based on optimisation and efficiency measures, rather than well-specified quality requirements that relate to stakeholder needs, and further research is needed.

Through the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme, as part of a major consortium of university and sector partners, we conducted cross-disciplinary fundamental research to develop recommendations to ensure that AS are safe, reliable, resilient, ethical, and trusted. Organised around six research projects, the programme focused on different aspects of trust in AS: resilience, trust, functionality, verifiability, security, and governance and regulation.

The King’s team involved many researchers across several departments. In an exposition published in the Communications of the ACM, Professor Luc Moreau and Professor Mohammad Reza Mousavi provide an overview of the role of explainability and verifiability in developing trustworthy systems.

Specifications must represent the different aspects of the overall system in a way that is natural to domain experts, and that also facilitates modelling and analysis, provides transparency of how the AS works, and offers insights into the reasons that motivate its decisions. – Luc Moreau and Mohammad Reza Mousavi

How to specify AS for verifiability

For a system to be verifiable, a person or a tool needs to be able to check its correctness with respect to its requirements and specification. The main challenge is in specifying and designing the system in a way that makes this process as easy and intuitive as possible. For AS in particular, specific challenges include capturing and formalising requirements (covering functionality, safety, security, performance and, beyond these, any additional non-functional requirements needed purely to demonstrate trustworthiness); handling flexibility, adaptation, and learning; and managing the inherent complexity and heterogeneity of both the AS and the environment it operates in.

Specifications must represent the different aspects of the overall system in a way that is natural to domain experts, and that also facilitates modelling and analysis, provides transparency of how the AS works, and offers insights into the reasons that motivate its decisions. To specify for verifiability, a specification framework needs to offer a variety of domain abstractions to represent the diverse, flexible, and possibly evolving requirements AS are expected to satisfy. Furthermore, the underlying verification framework should connect all these domain abstractions to allow an analysis of their interaction. This is a key challenge in specifying for verifiability in AS.
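To make this concrete, the short sketch below (our illustration, not taken from the CACM exposition) shows how a domain-level requirement for a hypothetical delivery robot, such as "never exceed 1.5 m/s in a pedestrian zone", might be captured as a machine-checkable predicate rather than as informal prose; all names and thresholds are assumptions chosen for illustration.

    from dataclasses import dataclass

    # Hypothetical snapshot of the robot's state; the fields are illustrative only.
    @dataclass
    class RobotState:
        speed_mps: float          # current speed in metres per second
        in_pedestrian_zone: bool  # whether the robot is inside a pedestrian zone

    # Assumed speed limit for the sketch.
    PEDESTRIAN_SPEED_LIMIT_MPS = 1.5

    def respects_pedestrian_speed_limit(state: RobotState) -> bool:
        """Safety requirement: inside a pedestrian zone, speed stays at or below the limit."""
        return (not state.in_pedestrian_zone) or state.speed_mps <= PEDESTRIAN_SPEED_LIMIT_MPS

    # Example checks of the requirement against concrete states.
    assert respects_pedestrian_speed_limit(RobotState(speed_mps=1.2, in_pedestrian_zone=True))
    assert not respects_pedestrian_speed_limit(RobotState(speed_mps=2.0, in_pedestrian_zone=True))

Written this way, the same requirement can be reviewed by domain experts, exercised in tests and simulation, and reused by verification tools, which is what connecting domain abstractions to an underlying verification framework requires.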

There, a significant challenge is finding rigorous techniques for the specification, verification, and validation of safety-critical AS, where requirements are often vague, flexible, and may contain uncertainty and fuzziness. – Luc Moreau and Mohammad Reza Mousavi

AS can be distinguished using two criteria: the degree of autonomy and adaptation, and the criticality of the application, which can range from harmless to safety-critical. Different techniques are required for robust verification and validation (V&V) at different stages of the system life cycle. The need for runtime V&V emerges when AS operate in uncontrolled environments, where autonomy, learning, and adaptation are needed. There, a significant challenge is finding rigorous techniques for the specification and V&V of safety-critical AS, where requirements are often vague, flexible, and may contain uncertainty and fuzziness.

V&V at design time can only provide a partial solution, and more research is needed to understand how best to specify and verify learning and adaptive systems by combining design-time with runtime techniques. Finally, identifying the design principles that enable V&V of AS is a key prerequisite to promoting verifiability to a first-class design goal alongside functionality, safety, security, and performance.
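As a sketch of how runtime techniques can complement design-time V&V (an illustration under assumed names, not a method prescribed in the article), the monitor below evaluates declared requirement predicates, such as the pedestrian-speed rule sketched earlier, on every observed state and triggers a fallback when one is violated.

    from typing import Callable, Iterable

    class RuntimeMonitor:
        """Minimal runtime monitor: evaluates requirement predicates on each observed state."""

        def __init__(self, requirements: dict[str, Callable[[object], bool]]):
            self.requirements = requirements
            self.violations: list[tuple[int, str]] = []

        def step(self, step_index: int, state: object) -> bool:
            """Check all requirements for one state; return False if any of them is violated."""
            ok = True
            for name, predicate in self.requirements.items():
                if not predicate(state):
                    self.violations.append((step_index, name))
                    ok = False
            return ok

    def run_with_monitor(states: Iterable[object], monitor: RuntimeMonitor) -> None:
        for i, state in enumerate(states):
            if not monitor.step(i, state):
                # Hypothetical fallback: hand control to a safe controller or a human operator.
                print(f"Requirement violated at step {i}; switching to safe fallback.")
                break

The same monitor could be run against simulation traces at design time and against live telemetry after deployment, one way of combining design-time and runtime V&V for learning and adaptive systems.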

How can explainability by design contribute to AS specifications?

There are increasing calls for explainability in AS, with emerging frameworks and guidance pointing to the need for AI to provide explanations about decision making. A challenge with specifying such explainability is that existing frameworks and guidance are not prescriptive: what is an actual explanation and how should one be constructed? Furthermore, frameworks and guidance tend to be concerned with AI in general, not AS.

To better understand this, we used the case study of automated decision-making in loan applications as the foundation for a systematic approach. Within this context, explanations can act as external detective controls, as they provide specific information to justify the decision reached and help the user take corrective actions. But explanations can also act as internal detective controls: that is, a mechanism for organisations to demonstrate compliance with the regulatory frameworks they must implement.
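As a purely illustrative sketch (the field names and the policy reference below are assumptions, not details drawn from the case study), an explanation designed in from the start might be recorded as a structured object alongside each automated decision, so that it can serve both roles: informing the applicant and evidencing compliance for the organisation.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LoanDecisionExplanation:
        """Structured explanation recorded with every automated loan decision (illustrative)."""
        decision: str                  # e.g. "declined" or "approved"
        decisive_factors: list[str]    # the factors that most influenced the outcome
        corrective_actions: list[str]  # steps the applicant could take to change the outcome
        policy_reference: str          # the internal policy or regulatory clause relied upon
        decided_at: datetime = field(default_factory=datetime.now)

    # External detective control: the applicant sees why, and what they can do about it.
    # Internal detective control: auditors can check decisions against the cited policy.
    example = LoanDecisionExplanation(
        decision="declined",
        decisive_factors=["debt-to-income ratio above the assumed threshold"],
        corrective_actions=["reduce outstanding credit balance before reapplying"],
        policy_reference="hypothetical lending policy, section 4.2",
    )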

The study and design of AS includes many facets: not only black-box or grey-box AI systems, but also the system's various software and hardware components, the curation and cleansing of datasets used for training and validation, the governance of such systems, their user interface, and, crucially, the users of such systems, with a view to ensuring that they do not harm but benefit these users and society in general.

It no longer suffices to focus on the explainability of a black-box decision system. Its behaviour must be explained...[and] explainability should not be seen as an afterthought but as an integral part of the specification and design of a system, leading to explainability requirements being given the same level of importance as all other aspects of a system. – Luc Moreau and Mohammad Reza Mousavi

There are typically a range of stakeholders involved, from the system designers to their hosts and/or owners, their users (consumers and operators), third-parties, and, increasingly, regulators. In this context, many questions related to trustworthy AS must be addressed holistically, including:

  • What is an actual explanation and how should one be constructed?
  • What is the purpose of an explanation?
  • Who is the audience of an explanation?
  • What information should it contain?

It no longer suffices to focus on the explainability of a black-box decision system. Its behaviour must be explained, in more or less detail, in the context of the overall AS. However, to adequately address these questions, explainability should not be seen as an afterthought but as an integral part of the specification and design of a system, leading to explainability requirements being given the same level of importance as all other aspects of a system.

As autonomous systems play greater roles in our daily lives and interact more closely with humans, we need to build systems worthy of trust regarding safety, security, and other non-functional properties. – Luc Moreau and Mohammad Reza Mousavi

In the context of trustworthy AS, emerging AS regulations could be used to drive the socio-technical analysis of explainability. A particular emphasis would have to be on autonomy and on the handoff between systems and humans that characterises trustworthy AS. Tailoring explanations to the audience is also critical, from users and consumers to businesses, organisations, and regulators. Finally, we should be encouraging a culture where post-mortem explanations, in cases of crash or disaster situations involving AS, lead to improvements in architectural design for explainability.

A future of trustworthiness by design

As autonomous systems play greater roles in our daily lives and interact more closely with humans, we need to build systems worthy of trust regarding safety, security, and other non-functional properties. One of the challenges is in formalising knowledge so that it can be easily grasped by humans and becomes interpretable by machines. Prominent examples include the specification of driving regulations for autonomous vehicles (AVs), and the specification of human expert knowledge in the context of AI-based medical diagnostics.
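As an illustration of what formalising such knowledge might look like (the rule and the threshold below are assumptions made for the sketch, not a statement of any actual regulation), a plain-language driving rule such as "keep at least a two-second gap to the vehicle in front" can be written as a predicate that domain experts can read and machines can check:

    # Assumed "two-second rule", for illustration only.
    MIN_FOLLOWING_GAP_S = 2.0

    def keeps_safe_following_gap(own_speed_mps: float, gap_to_lead_vehicle_m: float) -> bool:
        """A driving rule as a checkable predicate: time gap to the lead vehicle of at least 2 s."""
        if own_speed_mps <= 0:
            return True  # standing still; handled by other rules in a fuller specification
        time_gap_s = gap_to_lead_vehicle_m / own_speed_mps
        return time_gap_s >= MIN_FOLLOWING_GAP_S

    # Example: at 20 m/s (72 km/h), a 30 m gap is a 1.5 s time gap and violates the rule.
    assert not keeps_safe_following_gap(own_speed_mps=20.0, gap_to_lead_vehicle_m=30.0)
    assert keeps_safe_following_gap(own_speed_mps=20.0, gap_to_lead_vehicle_m=50.0)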

How to specify and model human behaviour, intent, and mental state is a further challenge common to all domains where humans interact closely with AS, such as human-robot collaborative environments in smart manufacturing. Whilst there are technical aspects to overcoming these challenges, there are also research challenges that require a holistic, human-centred approach, focused on responsibility and accountability, and enabling explainability from the outset. Fundamental to this is a sound understanding of human behaviour and expectations, as well as the social and ethical norms applicable when humans directly interact with AS.

We conclude that specifying for trustworthiness requires advances on the technical and engineering side, informed by new insights from social sciences and humanities research. Thus, tackling this specification challenge necessitates tight collaboration of engineers, roboticists, and computer scientists with experts from psychology, sociology, law, politics, economics, ethics, and philosophy. Most importantly, continuous engagement with regulators and the general public will be key to trustworthy AS.

In this story

Luc Moreau

Head of Department of Informatics

Mohammad Reza Mousavi

Professor of Software Engineering
