
Machine Learning and Black Boxes

What can Artificial Intelligence achieve today, and what will be delivered in the next decade or so? The answer is further development of Machine Learning (ML) or, as it is sometimes termed, Narrow Artificial Intelligence. ML is already present in a high proportion of today's applications, using the capabilities of Autonomy to find suitable data, analyse that data into useful information, and sense and control platforms, all at a speed and scale beyond even the most capable human.

However, there is a catch. Humans are spontaneous, conscious and think with bias. For most complex questions, the answer and the inputs that shape a decision are largely experience driven, not just a trained response to a given situation. For the human, relevant training sets the sequence of reaction; however, the deeper subconscious also has a part to play in decision-making, and in high-stress situations this subconscious Threat or Reward (Fight or Flight) reaction is often the driving force behind a decision.


This could be viewed as a limiting factor of ML, although it could also be viewed as a positive, since emotional bias is removed from the decision-making process. The caveat, of course, is that in the case of ML this is a learned reaction, and only time will tell whether ML will develop a pseudo-personality. Such a personality could emerge from bias a human has inadvertently introduced into the coding of the original algorithm during initial production, or from the addition of biased training data.


Perceived limitations aside, ML is changing the way we think, procure, support and deliver in all aspects of life and, therefore, in all aspects of air and space power. The UK is not the only nation to have spotted this as both a threat and an opportunity. Friendly nations and potential adversaries, state and non-state alike, are pressing forward at pace to achieve AI (ML) delivered capability.


More disconcerting for nation states is that an industrial base in traditional air and space is not needed to engage with the technology, which is allowing non-state actors to gain an effective foothold in this game. Once obtained, ML-driven autonomy has the potential for parasitic fit to extant platforms and solutions, enhancing and weaponising previously benign platforms and systems. Off-the-shelf solutions using ML, such as commercially available drones or augmented reality training packages, can also support military capability needs.


Recent announcements highlight concerns that an AI arms race is spinning up. This is further reinforced as growing shares of national defence budgets are diverted from traditional physical capability procurement into the development of AI and Autonomy.


Machine Learning Today - how can we build effective, evergreen, and extendable ML, and employ it within UK Air and Space Power to enable, gain and maintain advantage?


The most expeditious route for system development is the procurement of ‘Black Box’ ML. Black Box refers to the machine learning algorithm itself, the piece that makes sense of the inputs. The operator can identify and control the inputs, but once these have entered the black box the operator has no visibility of what is happening inside.
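As a minimal illustration of that opacity, here is a sketch assuming scikit-learn is available; the data, task and model are purely hypothetical. The operator controls the inputs and can read the outputs, but the trained parameters inside the box offer no human-readable account of how one becomes the other.

```python
# Sketch of the operator's view of 'Black Box' ML (illustrative data/model).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # inputs: identifiable and controllable
y = (X[:, 0] * X[:, 1] > 0).astype(int)    # behaviour the model is trained to reproduce

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(X, y)

print(model.predict(X[:5]))   # outputs: observable, and over time predictable...
print(model.coefs_[0])        # ...but the learned weights inside the box carry
                              # no human-readable explanation of 'why'
```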


Operators might, over time, be able to predict expected outputs, but with a high likelihood that they will be unable to explain ‘why’. This loss of understanding is magnified in the context of AI-enabled capabilities procured from overseas, as the developing nation is often unwilling to allow the procuring nation to fully understand the technology and the functional decision-making frameworks at play. This has particular implications for military applications, or indeed any application where life is, or could be, threatened.


A number of external tools can be employed to reverse engineer the capability, working out how the ML derived its outputs from its inputs. However, this could be viewed as closing the gate after the horse has bolted: we are trying to establish whether the tool is somewhat skewed only after we are committed to procuring it.
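One family of such tools fits a global surrogate: an interpretable model trained not on the ground truth but on the black box's own predictions, to approximate the logic the black box has actually learned. The sketch below assumes scikit-learn, with a hypothetical model and dataset standing in for a procured capability.

```python
# Probing a black box with a global surrogate model (illustrative only).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# The opaque capability: inputs and outputs visible, internals not.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)

# Reverse engineering: fit a shallow, readable tree to the black box's
# *predictions*, approximating the decision logic it has actually learned.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate))  # human-readable decision rules
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agreement with black box: {fidelity:.1%}")
```

The agreement score matters: a low value means the simple surrogate captures little of the black box's behaviour, so the ‘explanation’ recovered after procurement may itself be misleading.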


Interestingly, the Black Box approach, while less expensive and readily available, is not considered suitable for high-impact and more trust-challenged uses of AI. This is particularly relevant if and when commanders and leaders are called upon to defend a decision or action taken by an AI-enabled capability. How would they go about explaining a decision made by ML? They would probably need to be able to answer the following:

  1. What has our ML learned?
  2. How can we test what we have taught our ML, and what does our ML know?
  3. What decision-making framework has our ML been given?
  4. What decision is our ML likely to make?
  5. What decision did our ML make?
  6. How do we know what decisions our ML will make if it learns more?
  7. What will our ML learn?
  8. Do we know what our ML knows?
  9. How do we explain what our ML did?
  10. Do we need to explain why our ML made the decision it did?
  11. How much detail is required in any explanation?
  12. Who’s responsible for what our ML does?
  13. Does responsibility change as our ML learns more?


Once understood, the capability derived from black box solutions could also be thought of as more secure, as the algorithm sits beyond the reach of human influence. However, recent developments have shown that Black Box algorithms can be interfered with through the manipulation of inputs, which can then of course affect the outputs.
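As a sketch of how such input manipulation works, again assuming scikit-learn and an illustrative linear model rather than any fielded system: a linear classifier's decision function is monotone along its weight vector, so small, repeated nudges to the input can walk the output across the decision boundary without ever touching the algorithm itself.

```python
# Steering a model's output purely by manipulating its inputs (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]

# Nudge the input against the model's weight vector until the output flips.
direction = np.sign(model.coef_[0]) * (-1 if original == 1 else 1)
x_adv = x.copy()
while model.predict([x_adv])[0] == original:
    x_adv += 0.1 * direction

print("original prediction:   ", original)
print("manipulated prediction:", model.predict([x_adv])[0])
print("max change to any input feature:", np.abs(x_adv - x).max())
```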


Read blog 3 here

