
Challenges of AI - Explainability

The very capability we seek through Machine Learning (ML) and Artificial Intelligence (AI), the ability to react to large volumes of diverse inputs beyond the reach of human cognition, is also ML’s Achilles heel: such complexity often makes the decision-making process opaque. The answer to this challenge could lie with ‘Explainability’. If the ML can explain what it is doing and how it is working at every stage, and if explainability is intrinsic to the algorithmic approach, our confidence and therefore our trust should be satisfied even when the algorithm is operating at a level of complexity well beyond straightforward human comprehension.

In general, ML systems are extremely good at extracting patterns from data, enabling them to learn to make predictions and classifications on new data in the future. For this reason, they have been applied successfully to a wide range of real-world applications, including image classification, machine translation and automated decision making. However, many of the leading machine learning techniques have several major drawbacks: a lack of explainability, an inability to generalise from small amounts of data, and difficulty incorporating existing knowledge.


Three of the key concepts of Logic-based Machine Learning that address these drawbacks are:


  1. Interpretability: The learned knowledge can be translated into plain English, making it inherently explainable.

  2. Generalisation: Logic-based systems can generalise from very few examples, making it possible to learn complex knowledge without needing large datasets.

  3. An ability to build on previous knowledge: Logic-based systems do not need to learn everything from scratch, and can instead start from an existing knowledge base, containing anything known before the learning starts or even previously learned knowledge.


Many ML systems learn models that are not easily interpretable, even by experts in the field, meaning that it is often difficult to generate a meaningful explanation for a model's decisions. Using Black Box ML techniques to make decisions which cannot be explained can cause legal, ethical and operational issues. Black Box models cannot be verified or audited before deployment, meaning that no guarantees can be made about their behaviour. Furthermore, if a Black Box model makes a sub-optimal decision, it is extremely difficult to analyse why the mistake was made, or to determine what needs to be done to correct the model. Both of these factors are a bar to being taken into the military inventory. Nations that have ratified Additional Protocol I to the Geneva Conventions are required to subject all new means and methods of warfare to a weapons review, superimposing the framework of international humanitarian law onto the capability in question and asking, inter alia, whether it can be employed in a way that distinguishes between combatant and non-combatant, and between lawful military target and civilian object.


Any rules learned by ML must be easily and accurately translatable into plain English: the learned model can then be explained to users, and verified and audited prior to deployment. If the learned model appears to make a mistake, it is easy to generate an explanation of the decision. Such explanations not only provide a route to accountability, but also give an insight into the gaps in the learned knowledge that led to the mistake, gaps which can then be filled by providing further examples for learning.


One technique that could offer this level of explainability uses the language of Answer Set Programming (ASP). ASP is a purely declarative language for knowledge representation and reasoning, which is particularly well suited to common-sense reasoning. ASP is in widespread use within decision support systems and enables ML to learn a variety of declarative, non-monotonic, common-sense theories, including, for instance, Event Calculus theories and user preference models learned from real user data.
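
As a minimal sketch of how such a declarative, non-monotonic theory can be written and queried, the fragment below encodes the classic ‘birds fly unless they are penguins’ default and solves it with the clingo ASP solver’s Python API. The example, its predicate names and the use of clingo are illustrative assumptions rather than a description of any particular system:

    # Minimal sketch: a non-monotonic, common-sense ASP theory solved with clingo.
    # Requires the clingo Python package (pip install clingo).
    from clingo import Control

    program = """
    bird(tweety).  bird(pingu).
    penguin(pingu).

    % Plain-English reading: "a bird flies unless it is known to be a penguin".
    % 'not' is negation as failure: the default applies in the absence of
    % evidence to the contrary, which is the essence of common-sense reasoning.
    flies(X) :- bird(X), not penguin(X).
    """

    ctl = Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda model: print("Answer set:", model))
    # Expected: flies(tweety) is derived, flies(pingu) is not.

The point of the sketch is that each rule reads directly as a plain-English sentence, which is precisely what allows a learned model of this kind to be explained, verified and audited.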


In addition to the simple deterministic rules learned by other systems, ML should be able to learn rules which model non-deterministic behaviour, choices and preferences. ASP-enabled ML can be applied to many different domains and has opened up a variety of new applications that were previously out of scope for Logic-based Machine Learning systems. In particular, it has been applied to Event Detection, Natural Language Understanding, Learning Game Rules and Reinforcement Learning; these exemplars can support areas of military capability that stand to benefit from ASP-enabled ML.
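
To make the Event Detection exemplar concrete, the sketch below encodes a much-simplified Event Calculus fragment in ASP and runs it through the same clingo Python API. The events, fluents and time range are invented for illustration, and in an ASP-enabled ML system the initiates/terminates rules could be learned from example traces rather than written by hand:

    # Sketch: event detection with a simplified Event Calculus theory in ASP.
    # Requires the clingo Python package (pip install clingo).
    from clingo import Control

    program = """
    time(0..3).

    % A fluent holds just after an event initiates it, and persists by inertia
    % until some event terminates (clips) it.
    holdsAt(F, T+1) :- happens(E, T), initiates(E, F, T), time(T).
    holdsAt(F, T+1) :- holdsAt(F, T), not clipped(F, T), time(T).
    clipped(F, T)   :- happens(E, T), terminates(E, F, T).

    % Domain rules (candidates for learning from example traces).
    initiates(switch_on, light_on, T)   :- time(T).
    terminates(switch_off, light_on, T) :- time(T).

    % Observed narrative of events.
    happens(switch_on, 0).
    happens(switch_off, 2).

    #show holdsAt/2.
    """

    ctl = Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda model: print(model))
    # Expected: holdsAt(light_on,1) holdsAt(light_on,2) -- the light stays on
    # from just after switch_on until switch_off clips it.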


There are two ways in which ML can be applied to choosing decisions or actions. In the simplest setting, we assume that in each possible scenario there is a single ‘correct’ decision. In this case, ASP ML can be applied to learn deterministic rules which, given a scenario, predict this ‘correct’ decision. It is more realistic, however, that in a given scenario some decisions/actions will be invalid and others valid, and that amongst the valid decisions some are likely to be preferred to others. Here the generality of ASP ML is a huge advantage, as it allows a set of constraints to be learned which rule out invalid decisions/actions, while simultaneously learning a set of preferences which rank the valid ones. Given a new scenario, the learned rules would then be able to predict not only the set of valid decisions/actions, but also the most appropriate decision/action within that set, even under the time constraints such decisions are most likely to face.
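
A minimal sketch of this second setting, again assuming the clingo Python API: a hard constraint rules out an invalid action, and weak constraints encode learned preferences that rank the valid ones. The scenario, action names and costs are invented for illustration:

    # Sketch: choosing an action with ASP. A hard constraint rules out invalid
    # actions and weak constraints rank the valid ones by learned preference.
    # Requires the clingo Python package (pip install clingo).
    from clingo import Control

    program = """
    % Candidate actions in the current scenario (illustrative names).
    action(hold; reroute; advance).

    % Exactly one action must be selected.
    1 { select(A) : action(A) } 1.

    % Scenario fact.
    low_fuel.

    % Hard constraint: advancing on low fuel is not a valid choice.
    :- select(advance), low_fuel.

    % Weak constraints: each valid action carries a cost, and the solver
    % minimises total cost, so lower-cost actions are preferred.
    :~ select(hold).    [2@1]
    :~ select(reroute). [1@1]

    #show select/1.
    """

    ctl = Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    # clingo reports each improving model; the last one printed is optimal.
    ctl.solve(on_model=lambda model: print(model, "cost:", model.cost))
    # Optimal answer set: select(reroute) -- advance is ruled out by the hard
    # constraint, and reroute is cheaper than hold under the preferences.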


A final word on Probabilistic Logic-based ML: the rules learned by ASP ML are non-deterministic, in that they can capture the possible outcomes of a set of actions; for example, ASP ML can learn that when a coin is flipped it lands either heads or tails, but not both. It does not, however, attempt to learn the probabilities of these non-deterministic possibilities, and ML is more effective when probabilistic rules are learnt. An example of this in action: by observing the actions an adversary has taken, it is possible to learn the policies, heuristics and strategies being used to make their decisions, allowing us to predict the actions they will take in future scenarios.
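
As a loose illustration of the step from non-deterministic to probabilistic rules, the sketch below estimates an adversary’s policy by simple frequency counting over hypothetical observations; it stands in for the general idea rather than any specific probabilistic logic-learning algorithm:

    # Sketch: attaching probabilities to non-deterministic outcomes by counting
    # observed behaviour. The scenario/action data below is hypothetical.
    from collections import Counter, defaultdict

    # (scenario, action_taken) pairs observed in past encounters.
    observations = [
        ("convoy_detected", "ambush"),
        ("convoy_detected", "ambush"),
        ("convoy_detected", "withdraw"),
        ("patrol_detected", "withdraw"),
        ("patrol_detected", "withdraw"),
    ]

    # Count actions per scenario, then normalise to estimate the conditional
    # probabilities P(action | scenario) that would annotate the learned rules.
    counts = defaultdict(Counter)
    for scenario, action in observations:
        counts[scenario][action] += 1

    policy = {
        scenario: {action: n / sum(actions.values()) for action, n in actions.items()}
        for scenario, actions in counts.items()
    }

    print(policy)
    # Roughly: {'convoy_detected': {'ambush': 0.67, 'withdraw': 0.33},
    #           'patrol_detected': {'withdraw': 1.0}}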


In real-world scenarios, there is unlikely to be a large dataset of examples for each opponent we encounter. Using a Logic-based ML system makes it possible to learn to predict an opponent's actions from relatively few examples. The myriad variables could be somewhat clouded by the spontaneity of human psychology; in the ML-versus-ML conflicts of the future, however, this spontaneity may well be negated.

Explainability is the key to trust and therefore an enabler for the deployment of ML as an advantage in future air and space capabilities.


Read blog 5 here

