Military and Strategic Journal
Issued by the Directorate of Morale Guidance at the General Command of the Armed Forces
United Arab Emirates
Founded in August 1971

2022-01-01

NATO Bolsters Tech Advantage

Big data and bytes are joining bombs and bullets, adding a new dimension to future war scenarios and transforming the international security environment into one more complex and dire than previously imaginable. 
 
To preserve peace, prevent coercion and deter aggression, the North Atlantic Treaty Organization (NATO) has been sharpening its technological edge. The drafting of its first-ever strategy on artificial intelligence (AI) is a prime example.
 
A series of initiatives hints at a future path where NATO’s adoption of technology will be more profound and innovative. 
 
NATO Defence Ministers, at their October 2021 meeting, agreed on the Alliance’s first AI strategy, which includes standards for the responsible use of AI, in accordance with international law.
 
Allies signed an agreement to establish a NATO Innovation Fund to invest in cutting-edge technologies. The fund is expected to invest 1 billion euros with innovators across the Alliance working on emerging and disruptive technologies.
 
NATO is also creating a Defence Innovation Accelerator for the North Atlantic, DIANA, which will provide a network of technology test centres and accelerator sites to better harness civilian innovation for security.   
Here is a sneak peek at NATO’s AI Strategy, culled from a series of articles by current and former NATO staff: Zoe Stanley-Lockman, Edward Hunter Christie, and Dr Ulf Ehlert.
 
Focus on Interoperability
Due to its cross-cutting nature, AI – the ability of machines to perform tasks that typically require human intelligence – will pose a broad set of international security challenges, affecting both traditional military capabilities and the realm of hybrid threats, and will likewise provide new opportunities to respond to them. 
 
AI will have an impact on all of NATO’s core tasks of collective defence, crisis management, and cooperative security. With new opportunities, risks, and threats to prosperity and security at stake, the promise and peril associated with this technology are too vast for any single actor to manage alone. Cooperation is inherently needed to mitigate international security risks, as well as to capitalise on the technology’s potential to transform enterprise functions, mission support, and operations.
 
Militarily, future-proofing the comparative advantage of Allied forces will depend on a common policy basis and digital backbone to ensure interoperability and accordance with international law. 
 
With the fusion of human, information, and physical elements increasingly determining decisive advantage in the battlespace, interoperability becomes all the more essential. The aim of NATO’s AI Strategy is to accelerate AI adoption by enhancing key AI enablers and adapting policy, including by adopting Principles of Responsible Use for AI and by safeguarding against threats from malicious use of AI by state and non-state actors.
 
By acting collectively through NATO, Allied governments ensure a continued focus on interoperability and the development of common standards. 
 
Responsible Use
Adopting AI in the defence and security context calls for effective and responsible governance, in line with the common values and international commitments of Allied nations.  Allied governments have committed to Principles of Responsible Use as a key component of NATO’s AI Strategy.
 
Allies and NATO commit to ensuring that the AI applications they develop and consider for deployment will be in accordance with the following six principles:
Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
 
Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
 
Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. 
 
Reliability: AI applications will have explicit, well-defined use cases. 
 
Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.
 
Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
 
Building the principles of responsible use into the front end of AI development is important because, the later they are considered, the harder it may be to ensure they are upheld. 
 
Sticking to Principles
These enduring principles are foundational to the adoption of detailed best practices and standards. Allies and NATO can leverage NATO’s consultative mechanisms and NATO’s specialised staff and facilities to work actively towards that goal. 
 
NATO’s own standardisation and certification efforts can be bolstered by coherence with relevant international standard-setting bodies, including for civilian AI standards.
 
These principles can also be operationalised via other mechanisms including review methodologies, risk and impact assessments, and security certification requirements like threat analysis frameworks and audits, among others. 
Further, NATO’s cooperative activities provide the basis to test, evaluate, validate, and verify (TEVV) AI-enabled capabilities in a variety of contexts. 
 
NATO’s experience not only in operations, but also in trials, exercises, and experimentation provides several avenues through which Allies and NATO can test principles against intended use cases. 
 
This is reinforced by NATO’s scientific and technical communities, which have worked on issues such as trust, human-machine and machine-machine interactions, and human-systems integration, among many others.
 
In addition to these existing activities, the implementation of the AI Strategy will also benefit from connections with NATO’s forthcoming DIANA programme. 
 
Allied Test Centres affiliated with DIANA could be used to fulfil the aims set out in the definitions of the principles. In the future, use of these Test Centres can help ensure that AI adoption and integration are tested for robustness and resilience. 
 
Flexible Approach
With the ethical aspects of adoption that the principles underscore, NATO has the chance to signal – and follow through on – responsibility at the core of its outreach efforts. This includes engagement with start-ups, innovative small and medium enterprises, and academic researchers that either have not considered working on defence and security solutions, or simply find the adoption pathways too slow or restrictive for their business models. 
 
In contrast to the development of traditional military platforms, AI integration entails fast refresh cycles and requires constant upgrading. 
 
With hostile state and non-state actors increasing their investments in Emerging and Disruptive Technologies including AI, this more flexible approach to adoption is all the more urgent. With its focus on TEVV and collaborative activities, the AI Strategy sets the framework for technological enablers to out-adapt competitors and adversaries. Focussing on agility and adaptation, NATO can make defence and security a more attractive sector for civilian innovators to partner with, while allowing them to maintain other commercial opportunities. 
 
Way Forward
To be sure, the implementation of accelerated, principled, and interoperable AI adoption depends not just on technology, but equally on the talented and empowered people who drive the technological state-of-the-art and integration forward. 
 
NATO has dedicated attention to other AI inputs, notably through the development of a NATO Data Exploitation Framework Policy. 
With actions to treat data as a strategic asset, develop analytical tools, and store and manage data in the appropriate infrastructure, the Data Exploitation Framework Policy sets the conditions for the AI Strategy’s success.
 
As Allies and NATO seek to fulfil the aim of this AI Strategy, the linkages between responsible use, accelerated adoption, interoperability, and safeguarding against threats are critical. These linkages will also apply to NATO’s follow-on work on other Emerging and Disruptive Technologies, including the development of principles of responsible use. 
This entails further coherence between the work strands on these technologies, understanding that NATO’s future technological edge – and threats the Alliance will face – may depend on their convergence.
 
Social Media
Any attempt at shaping the trajectory of a given technology faces a dilemma between today’s knowledge about the future and the available means to affect or change that future. 
David Collingridge was the first to frame the major challenge of policy making on emerging technologies: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”
 
We are caught between a rock and a hard place. For a nascent technology, we cannot know all its future applications, nor can we anticipate all its future impacts. Still, at this time, we can exert some control over its development path. 
 
In the future, when that technology is mature, we see its full impact and can thus define what we would like to change. But because the technology is by then already in the market, broadly distributed and widely used, our means of control are limited.
 
We are struggling with a fundamental characteristic of technology development: the uncertainty inherent in an open-ended process without a knowable end-state. We cannot know in advance the future target for today’s policy intervention. But what can we actually do? Would it not be a fair choice to accept the limits of our knowledge, to simply “let things run their due course”?
 
Think about social media. These services promised connectivity across the planet, facilitating new forms of meaningful information-sharing and enabling global communities of unprecedented scale and scope. Their free-of-charge operation is naturally attractive to users, but “behind the scenes” they rely on an advertising-backed business model. 
 
For that to work, users should ideally stay connected 24/7 in order to feed the evermore-sophisticated micro-targeting algorithms. Such addictive behaviour and the increasing manipulation facilitated by it are not in the users’ interest. Nor are echo chambers, hate speech, and the tampering with democratic elections in the interest of our societies.
 
While the promise of social media is compelling, we made two cardinal mistakes. First, we accepted proprietary platforms operated by business enterprises. Second, we forgot that the purpose of business is profit, not philanthropy.
 
The case of social media demonstrates that the users’ immediate choices can counteract their longer-term interests. 
 
Growing Challenges
Today, we face multiple emerging technologies that promise to disrupt our established ways, including AI, bio- and quantum technologies. 
 
Take one specific area: the combination of AI, Big Data (as input to AI), and autonomy (as one of the main applications of AI). This technology area promises to disrupt the information sphere and “change everything”, from maintaining situational awareness to supporting decision-making, from predictive maintenance to cyber defence.
 
Amidst the euphoria about opportunities, we must conduct a sober reality check and ask ourselves critical questions about how we want to develop, feed, and use such systems: would we consider the Chinese Social Credit System a role model for collecting data? Should we accept black-box algorithms for data processing when they present results but cannot explain their plausibility? Should we apply AI in critical decision-making, where we seek to maintain human oversight?
 
Most of the key technologies operate in the information domain. Given their superior connectivity and speed, their development is particularly challenging to follow, let alone anticipate. Moreover, developers focus on civilian applications with global consumer markets in mind, and the Big Tech companies pushing these developments have become the most influential non-state actors on the planet.
 
All of these factors increase the complexity of the problem space, while at the same time accelerating the speed of technological evolution. In short: our challenges keep growing, while our response time shrinks.
 
Baseline for Consultations
The Alliance’s success with AI will depend on new and well-designed principles and practices for good governance and responsible use. Several Allied governments have already made public commitments in the area of responsible use, addressing concepts such as lawfulness, responsibility, reliability, and governability, among others.
 
In parallel, Allies have taken part in the Group of Governmental Experts on Lethal Autonomous Weapon Systems under the auspices of the United Nations, leading to the formulation of 11 guiding principles.
There is a good case for viewing work on adopting AI and work on principles of responsible use as complementary and synergistic. 
 
The technical characteristics required to ensure that these and other objectives are met will necessarily be part of the design and testing phases of relevant systems. The relevant engineering work will be an opportunity to refine understanding, leading to more granular and more mature principles. 
 
Further work in the area of TEVV will be essential, as will support from relevant Modelling and Simulation efforts. NATO’s well-established strengths in the area of standardisation will help frame these lines of effort, while also ensuring interoperability between Allied forces.
 
Overarching principles such as those developed in some national cases, as well as under UN auspices, offer a baseline for further consultations among Allies, as well as points of reference concerning existing national positions.
 