We Need to Talk About Trust

In our last blog post, we discussed how Artificial Intelligence (AI) and the challenges presented by Black Box Machine Learning pose questions around trusting technology enabled solutions.

The promise and threat of AI in Defence cannot be ignored, whether in applications that provide a pilot with a targeting solution or in the deployment of swarms of drones to saturate an enemy's formation or air defence capability. AI enabled capabilities are increasingly relevant as force multipliers in an age of great-power competition set against the reality of declining western military mass. AI enabled nanotechnologies, control of public works, offensive cyber and data mining from the Internet of Things will all become increasingly relevant to both sub- and above-threshold competition strategies. However, whatever the context, trust is fundamental to such technological development. Without trust in identifiable and repeatable outcomes, novel weapons and methods of warfare will not survive contact with the prevailing framework of international humanitarian law, much less the court of public opinion.

As humans, we tend to base our trust in people on how similar they appear to be to us and on the depth of our shared knowledge and experiences. How, then, can we form the same bonds of trust with AI and, given the challenges posed by confirmation bias and similar effects, how can we objectively evaluate its performance?

The need for AI enabled technology to be trustworthy is implicit in all aspects of its development: when, how and what it learns. In the non-military context, driverless cars have been developed on the premise that their suite of sensors will prevent accidents, cut congestion and never breach road traffic rules whilst taking passengers safely to their destinations. Driverless cars are, therefore, fundamentally advanced on the premise that they are a safer alternative to allowing humans to control cars. Yet as soon as a driverless car is involved in an incident, there is an immediate clamour to interrogate the technology to understand how, in the absence of a malfunction, the incident could have happened, without conceding the possibility that accidents cannot be totally eradicated even in the field of non-human intelligence. Underpinning the demand to understand how the incident occurred appears to be the premise that being as good as a human is not good enough and that anything short of perfection is a failure.

Translating this to the military context, the question of what is good enough is amplified, especially where AI enabled capabilities may be used in the kill chain. The object of the projection of force is often to take life, but international humanitarian law demands that a number of parameters be adhered to when doing so, to ensure that civilians and civilian property are not disproportionately harmed by the consequences of the action taken.

Therefore, it could be argued that for AI enabled capabilities to be acceptably good, they need to produce outcomes that are superior to those achieved through human decision making and action in any given circumstances, particularly when it comes to mitigating the effects of armed conflict on those touched by them but taking no active part in the conflict. However, it is not realistic to demand that any technology provide a 100% positive outcome every time it is employed – so what is the measure of acceptability, and how does it impact on concepts of trust? Will society ever trust a technology in the absence of a recognition that there is a margin of error inherent in every decision, whether taken as the result of human or other-than-human intelligence, and that the more complex the task and decision-making process, the more likely unforeseen and unintended consequences may result?

Before we can attempt to answer these questions, we first need to examine our relationship with technology. AI does not exist in a vacuum; it is not an end but a means that only has meaning in the context of enabling the performance of a function to deliver an output. Therefore, I argue that the level of trust that humans need to invest in an AI enabled activity is directly proportional to the visibility and acknowledged functionality of the technology in context, and to its perceived relevance to our lives and value systems.

It is therefore perhaps unsurprising that lethal autonomous weapons – where the decision to apply lethal force is delegated to be carried out without further reference to a human operator – represent the most contentious potential use of AI. The question of the future of technologies that may lead to lethal autonomous weapons is being examined by the UN Human Rights Council. Within the framework of the Convention on Certain Conventional Weapons, those seeking a ban or moratorium on the development of lethal autonomous weapons have advanced arguments centring on the inability of current technology to process, understand and abide by the international laws and norms we currently operate under, on the basis that the nuances of the legal and ethical frameworks cannot be reduced to ones and zeros. Furthermore, it has been argued that, whether or not an AI enabled capability could operate within the current legal framework of the law of armed conflict, it would be unethical to allow a machine to make the decision to take human life, as the machine cannot understand the value of human life and therefore the moral cost of taking it.

However, it could be argued that modern warfare began crossing that particular Rubicon when mechanized warfare became the norm: whilst a human nominally remains responsible for the decision to take life, the distance created between the actor and the act obscures the consequences of pulling the trigger, dropping a bomb or initiating a mine, to the point that the humanity of the enemy fades from conscious thought. From the time man moved from hitting another with a rock to throwing the rock from a distance, one of the primary objects of innovation in military hardware has been to facilitate killing the enemy whilst removing the need to place one's own forces at risk.

Modern technologies are just the latest in a long lineage of developments that seek to make the projection of force, and the consequent taking of life, less dangerous and morally challenging to the friendly protagonist. This imperative has taken a quantum leap forward with the introduction of computing and sensors that hoover up data to present to the operator, as embodied in fourth and fifth generation fighter aircraft, loitering munitions and smart mines, to name but a few, all of which have made humans progressively incidental to the decision to take life in any meaningful way. How much meaningful human control is exercised by a pilot acting on targeting information produced by a suite of sensors that has identified threats and synthesized and prioritized the information before presenting a targeting solution is questionable. Whilst the pilot can be said to be making the ultimate decision to prosecute a target, in reality what can they add by way of assurance when they are unable to interrogate the data, and its sources, that ultimately led to the generated targeting solution upon which their go/no-go decision is premised?

This clearly raises the issue of how we conceptualize our relationship with the technology we employ. We could centre our relationship with AI on the concepts of accuracy, reliability and the explainability of the outputs of AI enabled capabilities. Such metrics are observable, and the consistent, reliable execution of tasks by AI may begin to offset our social discomfort both with AI enabled capabilities undertaking actions that were previously the preserve of human beings and with directing AI enabled capabilities to perform tasks that we cannot. Of course, a level of social comfort will only be achievable if the metrics chosen, and the observable outputs themselves, align with the prevalent legal frameworks and ethical norms.

Repeatability of outcomes will allow familiarity and generate the conditions required to begin developing higher-level societal and organisational trust, leading the way to normalising the use of ever more AI enabled, highly automated and, eventually, autonomous capabilities. In this way, the expanding use of AI should make the case for ever greater automation and autonomy as a central pillar of future air and space strategy.

The changing relationship between AI enabled capabilities and humans is likely to shape the future narrative beyond assertions of how such capabilities will replace humans, engaging instead with the question of how they will augment the entire spectrum of human activity. This will require a re-evaluation of the nature of our relationship with AI enabled capabilities to determine whether they will ultimately be considered team members rather than purely tools at our disposal. This is a crucial point: whilst we can simply trust in a tool's functional performance, we need to understand what motivates our team members to act, and this is never more the case than in the projection of military power.

The human need to understand motivation is illustrated in the portrayal of police officers on screen, where a maverick cop breaking the rules solely to bring villains to justice is perceived very differently from someone breaking the rules for personal gain. Thus the motivation for action is key to understanding the level of trust one should have in an individual (human or AI) and how far one can rely on them to 'do the right thing' as we perceive it. Trust is more than having confidence that the mission will be accomplished; it is having insight into the motivating factors that lie behind all the decision making that shapes how the mission is accomplished, and laying oneself open to the possibility that one's trust is misplaced.

Looking ahead, trust requires a baseline at which the function of a capability is deemed acceptable for use. As AI enabled systems continue to be developed, or continue to learn, baseline levels of trust will need to be re-evaluated and managed within a construct of continuing trustworthiness, similar in concept to continuing airworthiness. Similarly, it will be essential to understand the context around the use of an AI enabled system, including where humans are involved in that system and the consequences of the AI enabled weapon system making legally appropriate decisions within the framework of the law of armed conflict.

The continued importance of the current legal framework is clear: in the absence of new law created by treaty or state practice, adherence to international humanitarian law provides the red line against which AI enabled military systems can be developed. However, developing AI enabled systems to meet international humanitarian law is not without significant difficulty, assuming it is possible at all. Within the current limitations of the technology, we have to acknowledge and engage with the questions and challenges that are raised, particularly in respect of the ability of such weapon systems to operate within a legal framework in which absolutes are not the norm and application of the law requires interpretation bounded by an appreciation of context. Such matters should not preclude, but rather drive forward, the development of such technology within a legal and ethical framework.

The technologies that will support military applications are already being developed in the form of multi-use capabilities drawn from both the military and civilian sectors. It may appear trite, but today's driverless car is tomorrow's driverless Vehicle-Borne Improvised Explosive Device, and whilst amateur drones can be purchased on the internet at little cost, they will continue to be repurposed to deliver munitions by those motivated to do so outside the application of State sanctioned force. In the face of adversary innovation, States need to agree the baselines for the use of increasingly autonomous technologies in the projection of force, to ensure that increasingly automated and autonomous weapons systems can be judged by an objective metric and that those who pursue applications inconsistent with the law of armed conflict can be held to account.

Read blog 4 here