Military and Strategic Journal
Issued by the Directorate of Morale Guidance at the General Command of the Armed Forces
United Arab Emirates
Founded in August 1971

2022-08-01

AI in Armies: Trust Holds the Key

Artificial intelligence (AI) is now changing how humans think and make decisions. Looking ahead, it will increasingly affect how humans prioritise cognitive processes, adapt their learning, behaviour and training, and, more broadly, transform their institutions. These changes are not yet fully evident across militaries.
 
Today’s armed forces do not differ dramatically in organisational structure from the professional armies of post-Napoleonic Europe. Too many people are still engaged in military tasks that technology can perform better and faster, and not enough attention is paid to rethinking humans’ cognitive contribution to the human–machine teams that will be needed to address future questions of command and control (C2).
 
 
The insights of the QinetiQ and RUSI report, ‘Trust in AI: Rethinking Future Command’, were drawn from the broader literature on AI, human cognition, military decision-making and theories of trust. The research, conducted between September 2021 and February 2022, benefited substantially from interviews with a wide range of experts and users from across defence, academia and industry.
 
Turing Test
The concept of AI originates with the famous Turing test of 1950, proposed a few years before the term itself was coined. It is easier to conceptualise AI by focusing on what it does rather than what it is: AI ‘seeks to make computers do the sorts of things that minds can do’.
 
It can be understood as the capacity for virtual information processing in pursuit of a specific task. Just as ‘intelligence’ (or ‘the mind’) has many dimensions and varied uses, so does AI. 
 
There are three levels of AI: artificial narrow intelligence, typically referred to as ‘narrow AI’; artificial general intelligence, sometimes referred to as human-level AI; and the more powerful artificial superintelligence, which exceeds human levels of intelligence. 
 
Three Types
Within narrow AI, there are further categories, although the techniques are not wholly discrete and are often used in combination. The most common distinction is between symbolic AI, often described as being based on logic, and sub-symbolic or non-symbolic AI, based on adaptation or learning. 
 
Symbolic AI relies on sequential instructions and top-down control, making it particularly well suited to defined problems and rules-based processes. Non-symbolic AI, within which neural networks are a common approach, involves parallel, bottom-up processing and approximate reasoning; this is most relevant to dynamic conditions and situations in which data is incomplete. 
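To make this distinction concrete, the short sketch below (a hypothetical Python illustration, not drawn from the report) contrasts a hand-written rule, applied top-down, with a simple perceptron that learns the same decision boundary bottom-up from examples; the toy task, data and parameters are all invented.

# Illustrative contrast between symbolic (rule-based, top-down) and
# sub-symbolic (learned, bottom-up) approaches. Task and data are invented.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                    # toy sensor readings
y = (2 * X[:, 0] - X[:, 1] > 0.5).astype(int)    # ground truth for the toy task

# Symbolic AI: an explicit rule written down by a human expert.
def rule_based(sample):
    return int(2 * sample[0] - sample[1] > 0.5)

# Sub-symbolic AI: a single perceptron adjusts its weights from examples.
w = np.zeros(3)                                  # bias plus two input weights
for _ in range(50):
    for xi, target in zip(X, y):
        pred = int(w @ np.array([1.0, *xi]) > 0)
        w += 0.1 * (target - pred) * np.array([1.0, *xi])

sample = np.array([0.8, -0.2])
learned = int(w @ np.array([1.0, *sample]) > 0)
print('Rule-based decision:', rule_based(sample), '| learned decision:', learned)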
 
There are three common types of machine learning, differentiated by the type of feedback that contributes to the agent’s learning process: supervised learning; unsupervised learning; and reinforcement learning. 
 
In supervised learning, the system is trained to generate hypotheses or take specific actions in pursuit of target values or outputs (referred to as labels) based on specific inputs (for example, image recognition). Unsupervised learning has no set specifications or labels and there is no explicit feedback; rather, the system learns by finding patterns in the data (for example, DNA sequence clustering). Reinforcement learning depends on a feedback loop that steadily reinforces the system’s learned behaviour through a trial-and-error or reward-and-punishment mechanism (for example, advanced robotics or driverless cars).
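The sketch below (again an illustrative Python example rather than anything from the report) shows the three feedback regimes side by side: labelled data for supervised learning, unlabelled data for clustering, and a reward signal for a simple trial-and-error learner. The datasets, labels and payoff probabilities are invented for demonstration.

# Illustrative sketch of the three machine-learning paradigms described above.
# All data and parameters are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1. Supervised learning: inputs are paired with target labels (explicit feedback).
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # labels supplied by a 'teacher'
classifier = LogisticRegression().fit(X, y)        # learns a mapping from inputs to labels

# 2. Unsupervised learning: no labels; the system finds structure in the data itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 3. Reinforcement learning: trial and error guided by a reward signal
#    (here, a two-armed bandit with unknown payoff probabilities).
true_payoffs = [0.3, 0.7]
estimates, counts = [0.0, 0.0], [0, 0]
for step in range(1000):
    arm = int(np.argmax(estimates)) if rng.random() > 0.1 else int(rng.integers(2))
    reward = float(rng.random() < true_payoffs[arm])           # feedback from the environment
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental value update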
 
Fundamental Aspect
Trust as a term is widely used in computer science. More importantly, trust remains a fundamental aspect of public and user acceptance of AI. National policies, regulations and expert advice on AI today routinely underscore the need for ‘trustworthy AI’. 
 
Various factors determine (human) trust in technology, including but not limited to the trustor’s level of competence and disposition to trust, and the overall environment or context (including broader cultural and institutional dynamics). Beyond such human- and environment-specific considerations, what defines the level of trust a person or organisation has in AI are the technology’s performance, process (how it generates specific outputs) and purpose. All three shape the design and deployment of AI-enabled systems. 
 
Finance Sector
Today, AI shapes the content delivered by network platforms such as Google and Facebook, and determines what content gets removed or blocked. AI-enabled decision-support systems that retain a human element are also proliferating, in use for everything from medical diagnoses to improving manufacturing processes. 
 
In few places has AI so fundamentally changed the human–machine relationship as in finance. AI is now responsible for the vast majority of high-frequency trading. Thousands of micro-decisions performed in milliseconds have the power to transform entire fortunes, sometimes with ruinous consequences, as the Flash Crash of 2010 demonstrated. Human decisions are no longer necessary for the efficiency of financial markets and, indeed, may even be counterproductive. The invisible algorithm would seem to have overtaken the invisible hand. 
 
‘On the Loop’
Military applications of AI, particularly in relation to mission support and operational uses, differ in some fundamental aspects from day-to-day civilian activities. In civilian life, AI has the opportunity to train and learn against real-life examples constantly, drawing on vast amounts of easily accessible data.
 
There is a general belief, reflected in the current policies of the U.S., the UK and NATO, among others, that humans will retain a critical role in decisions. The U.S. Department of Defense’s AI strategy directs the use of AI ‘in a human-centred manner’ that has the potential to ‘shift human attention to higher-level reasoning and judgment’. 
 
Weapon systems design incorporating AI should ‘allow commanders and operators to exercise appropriate levels of human judgment over the use of force’ and ensure ‘clear human-machine interface’.
 
References to humans always being ‘in the loop’ and ‘fully in charge of options development, solution choice, and execution’ – a common refrain in previous assessments of our increasingly automated future – have been replaced by a more nuanced view. 
 
Only in the case of fully autonomous systems is human intervention removed entirely. Ultimately, however, attempts to define levels of autonomy can be misleading as they assume a simple separation of cognitive activities between humans and machines. 
 
A 2012 U.S. Defense Science Board report describes how: 
‘There exist no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen or marines. Perhaps the most important message for commanders is that all systems are supervised by humans to some degree, and the best capabilities result from the coordination and collaboration of humans and machines.’
 
Most defensive weapon systems, from short-range point defence to anti-ballistic missile systems, operate with advanced automation that allows them to detect and destroy incoming missiles without human intervention. Algorithms are literally calling, as well as taking, the shots. Such systems, in which the human is said to be ‘on the loop’, operate within a limited design space following rigorous prior human testing, so their span of control is constrained. 
 
Though mistakes can never be fully eliminated, the risk of not responding or of responding late in most cases may exceed the risk of occasional accidents. Defence against ever-faster, particularly hypersonic, missiles will continue to drive adoption of AI in missile defence. 
 
Cyber warfare presents another area in which AI has clear advantages over humans, advantages that often require the human to remain out of the loop. 
 
Heuristic Approach
When there is uncertainty or a lack of knowledge, humans apply a heuristic approach to approximate solutions to complex problems. Heuristics drive intuitive thinking; they rely on rules of thumb, typically informed by experience and experimentation. As such, they can suffer from biases and blind spots, but they can also serve as a very powerful and effective form of rapid cognition. Machines lack human-like intuition, but they too rely on heuristics to solve problems. 
 
The key difference with human reasoning is that machines do not need memory or ‘personal’ experience to be able to ‘intuit’ or infer. They draw on huge databases and a superior probabilistic capacity to inform decision-making. Powerful simulations, combined with advanced computing power, offer an opportunity to test and ‘train’ algorithms at levels of repetition unimaginable for humans. ARTUµ, the AI agent that flew as a crew member aboard a U.S. Air Force U-2 reconnaissance aircraft in December 2020, had undergone more than a million training simulations in just over a month before it was declared mission ready.
 
Even with significant advances in the field of Explainable AI (XAI), there will still be reasons for caution, particularly in situations requiring complex decision-making. AI is generally not good at seeing the ‘big picture’ or making decisions based on what is relevant. Like humans, it can mistake correlation or chance events for causation. Both humans and machines are bound to experience ‘normal accidents’ when dealing with complexity.
 
Five Trust Points 
Building and sustaining trust involves five main ‘trust points’ – points at which the question of having an appropriate level of trust is crucial. These are: 
Deployment trust: The purpose for which AI is used; 
Data trust: The data inputs being used;  
Process trust: How the data is processed; 
Output trust: The outputs generated by the AI; and 
Organisational system trust: The overall ecosystem for optimising use of AI. 
Collectively, the trust points define an overall level of trust, and are multiplicative: if trust in one is ‘zero’, the whole will be ‘zero’. The trust level for each can vary – at different times and over time – as long as overall trust is positive.
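A simple numerical illustration of that multiplicative logic is sketched below in Python; the five trust values are hypothetical placeholders chosen for this article, not figures from the report.

# Hypothetical illustration of the multiplicative relationship between the
# five trust points. All trust values below are invented placeholders.
trust_points = {
    'deployment': 0.9,
    'data': 0.8,
    'process': 0.7,
    'output': 0.85,
    'organisational_system': 0.6,
}

overall_trust = 1.0
for name, level in trust_points.items():
    overall_trust *= level          # any single zero drives overall trust to zero

print(f'Overall trust: {overall_trust:.2f}')    # roughly 0.26 with these values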
 
Deployment Trust 
Deployment trust is complicated because most Western defence activity at scale assumes coalition operations and not every ally or partner may have a common view of what is an acceptable military use of AI. Defence ministries and governments need to become better at communicating their approaches, uses and safeguards in relation to the use of AI, including to allies, without giving too much away to adversaries who can develop strategies to neutralise (or worse) the advantages of AI-enabled capabilities. 
 
Data Trust 
Armed forces need the ability to define, structure, cleanse and analyse data, as well as develop and maintain the underlying infrastructure (such as connectivity, security and storage capacity). This is a multi-disciplinary team effort requiring ‘full-stack’ data scientists who can work on all stages of the data science lifecycle. 
 
The modern battlefield will require even greater diversity of skills, including psychologists, lawyers and communications experts. Attracting and retaining these specialists in the numbers required will be difficult given the demand for such skills in the commercial world. 
 
Process Trust 
Process trust refers to how the AI system operates, including how data is processed (aggregated, analysed and interpreted). 
Process trust must extend beyond the technology itself. It requires trust in the human processes that feed, work alongside and receive technology’s outputs. Equal importance must be placed on those other activities that together constitute the overall process. This includes the processes by which people are trained and developed, and how the teams are formed. 
 
Output Trust
Trust in AI outputs is critical if decision-makers are to act on the information they receive. Even with human-provided intelligence, it is not unknown for commanders to demand new intelligence to support their preconceptions if the original information points in a different direction (a kind of ‘decision-based evidence making’). With the proliferation of data, different interpretations will be possible, whether legitimately or to fit preconceptions. 
 
Interacting with the technology and seeing its outputs in action will generate trust if the experience is positive. In the operational environment, verification will be easiest when describing things that can be known and checked (for example, data on one’s own forces and, potentially, the laydown of adversary forces). It is more difficult to approximate the adversary’s intent, hence higher levels of output trust will be needed. This would include greater accuracy in descriptions and more testing of inferences drawn from big-data processing. 
 
Organisational Ecosystem Trust 
Ecosystem trust concerns the trust needed to adapt the wider organisational system to maximise the value of AI. The C2 system as a whole must be configured to exploit the benefits of AI-enabled decision-making, with appropriate checks and balances to operate within acceptable levels of risk. 
 
Ecosystem trust is needed to ensure that the structures – including the organisation of military headquarters, the role of the commander and the balance of centralised versus more diffuse or distributive powers of decision-making – are ready to harness AI’s opportunities. Without it, incremental approaches to AI adoption tend to encourage a passive or reactive approach to changes in structures and the overall ecosystem. By contrast, a dedicated strategy to realise the transformative power of AI would force an early rethink of the organisation needed to underpin such a strategy. This requires rethinking the traditional military structures, but there is no consensus on how far to go. 
 
Balanced Team
Advanced AI, if it can be said to have a motivation or bias, is likely to be logical and task-oriented (in Strength Deployment Inventory terms, Green and Red). A balanced team will increasingly need humans who can sustain team relationships, both internally and across teams. 
 
Human–machine teams, therefore, will be different, although they may have some analogies with purely human teams that include neurodiverse colleagues for whom empathy or the reading of emotional cues is difficult. As with neurodiverse teams, human–machine teams will benefit from the value that the diversity of team members brings to the whole, but will also need adjustments to be made to maximise the opportunities for team performance. 
 
Enhancing use of enterprise AI in business support activity will offer opportunities for exploring how human–machine teams can work together most effectively, as well as potentially delivering the hoped-for reduction in running costs and moving humans up the value chain to undertake more meaningful work. 
 
Career Management
The new styles of leadership, new skills and enhanced understanding of technology, data and risk that are needed will also require new approaches to career management. 
Military career management systems move people (too) frequently, yet it takes time to form effective teams with the requisite levels of trust. Militaries might slow down movement of key people, and perhaps even teams, so that a senior headquarters team is managed as a collective entity rather than as individuals. However, current HR practices make it unlikely that either the armed forces, or indeed industry, will be willing to hold people in positions indefinitely in anticipation of future requirements. 
 
Need for Action
We need to reassess the conditions for and implications of trust in human–machine decision-making. Without that trust, effective adoption of AI will continue to advance more slowly than the technology and, importantly, behind the rate at which it is being adopted by some of our adversaries. 
 
In all but the rarest cases, trust in AI will never be total; in some cases, users may consciously consent to lower levels of trust. That trust needs to be considered across five different elements, which the authors call ‘trust points’. We should never rely on any single point to generate overall trust. 
 
Most often overlooked is the need for trust at the level of the organisational ecosystem. This requires rethinking the organisation of armed forces and their C2 structures. 
 
In seeking to answer how trust affects the evolving human–AI relationship in military decision-making, this paper has exposed several key issues requiring further research: 
How we build the trust necessary to reconfigure the organisation of command headquarters – their size, structure, location and composition – at tactical, operational and strategic levels; 
How we adapt military education to better prepare commanders for the age of AI; 
How we optimise and transform collective training across all domains to improve command involving greater collaboration with artificial agents; and 
How we define the needs and objectives of AI and humans within human–machine teams. 
 
Absent fundamental changes in how we access, train and grow people in leadership positions, and how we reform the institutions and teams within which they operate, we risk getting the trust balance in the human–machine relationship wrong and will fail to harness AI’s full transformative potential.
 
Reference Text/Photo:
www.qinetiq.com, Trust in AI: Rethinking Future Command by QinetiQ and RUSI
 
