AS can be distinguished using two criteria: the degree of autonomy and adaptation, and the criticality of the application, which can range from harmless to safety-critical. Different stages of the system life cycle require different techniques for robust verification and validation (V&V). The need for runtime V&V emerges when AS operate in uncontrolled environments that demand autonomy, learning, and adaptation. A significant challenge there is finding rigorous techniques for the specification and V&V of safety-critical AS, whose requirements are often vague, flexible, and subject to uncertainty and fuzziness.
V&V at design time can only provide a partial solution; more research is needed into how best to specify and verify learning and adaptive systems by combining design-time with runtime techniques. Finally, identifying the design principles that enable V&V of AS is a key prerequisite for promoting verifiability to a first-class design goal alongside functionality, safety, security, and performance.
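To make the idea of combining design-time with runtime techniques concrete, the sketch below shows a minimal runtime monitor that checks a safety invariant over an execution trace. The state fields, the invariant ("the vehicle keeps a minimum gap from obstacles unless stopped"), and all names are illustrative assumptions, not a technique prescribed by this text.

```python
# Minimal sketch of a runtime verification monitor (illustrative only).
# The State fields and the safety invariant are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class State:
    speed: float              # current speed of the system
    obstacle_distance: float  # distance to the nearest obstacle

class SafetyMonitor:
    """Checks a safety invariant at every step of execution and records violations."""

    def __init__(self, min_gap: float):
        self.min_gap = min_gap
        self.violations: list[int] = []

    def observe(self, step: int, state: State) -> bool:
        # Invariant: either the system is stopped, or it keeps the minimum gap.
        ok = state.speed == 0 or state.obstacle_distance >= self.min_gap
        if not ok:
            self.violations.append(step)
        return ok

monitor = SafetyMonitor(min_gap=2.0)
trace = [State(1.0, 5.0), State(1.0, 1.5), State(0.0, 1.5)]
results = [monitor.observe(i, s) for i, s in enumerate(trace)]
# results == [True, False, True]; step 1 is recorded as a violation
```

A monitor like this complements design-time V&V: properties that cannot be discharged exhaustively before deployment are instead checked continuously against the running system.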
How can explainability by design contribute to AS specifications?
There are increasing calls for explainability in AS, with emerging frameworks and guidance pointing to the need for AI to provide explanations about its decision-making. A challenge in specifying such explainability is that existing frameworks and guidance are not prescriptive: what constitutes an explanation, and how should one be constructed? Furthermore, frameworks and guidance tend to be concerned with AI in general, not with AS.
To better understand this, we used the case study of automated decision-making in loan applications as the foundation for a systematic approach. In this context, explanations can act as external detective controls: they provide specific information to justify the decision reached and help the user take corrective action. But explanations can also act as internal detective controls, that is, a mechanism for organisations to demonstrate compliance with the regulatory frameworks they must implement.
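The dual role of explanations can be sketched in code: a decision procedure that returns its reasons alongside the decision, so the reasons serve both the applicant (corrective action) and the organisation (an audit trail). The features, thresholds, and function names below are illustrative assumptions, not the rules of any real lending system or of the case study itself.

```python
# Hedged sketch: a rule-based loan decision that explains itself.
# Feature names and thresholds are hypothetical, chosen only for illustration.

RULES = {
    "income": 30000,        # minimum annual income assumed for approval
    "credit_score": 650,    # minimum credit score assumed for approval
}

def decide_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons); each reason doubles as corrective guidance."""
    reasons = []
    for feature, threshold in RULES.items():
        if applicant[feature] < threshold:
            reasons.append(
                f"{feature} is {applicant[feature]}, below the required {threshold}"
            )
    # Approved only if no rule was violated; the reasons list is the audit record.
    return not reasons, reasons

approved, reasons = decide_and_explain({"income": 25000, "credit_score": 700})
# approved == False; the single reason points at income as the factor to correct
```

Externally, the returned reasons tell the applicant what to change; internally, logging the same reasons lets the organisation demonstrate which rules were applied to each decision.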
The study and design of AS has many facets: not only black-box or grey-box AI components, but also the system's various software and hardware components, the curation and cleansing of datasets used for training and validation, the governance of such systems, their user interfaces, and, crucially, the users themselves, with a view to ensuring that AS do not harm but benefit these users and society in general.