Strategic competition in the age of AI

What is the issue?

Artificial intelligence (AI) has the potential to bring about transformative change across society, the economy and policy, including defence and security. The UK aims to be a leading player in the adoption of AI for civil and commercial purposes, as well as in the responsible development of defence AI. This requires a clear and nuanced understanding of the emerging risks and opportunities arising from the military use of AI, and of how the UK can work effectively with others to mitigate those risks and seize those opportunities.

In March 2024, the Defence AI & Autonomy Unit (DAU) of the UK Ministry of Defence (MOD) and the Foreign, Commonwealth and Development Office (FCDO) jointly commissioned a brief scoping study from RAND Europe on this issue.

How did we help?

The aim of this study was to conduct an initial exploration of how the military application of AI could present strategic-level risks and opportunities, recognising that existing research has focused predominantly on the tactical level or on non-military areas such as AI safety. Specifically, this one-month study sought to identify elements of a conceptual framework that could improve understanding of the strategic impacts of AI and support an informed response from the UK Government, including Defence.

The research team used a mixed-methods approach, combining a review of the existing literature with interviews of government officials, industry experts, think tank researchers and academics. To underpin this investigative study, the team drew on:

  • A narrative literature review of approximately 200 academic and 'grey' sources, selected from a longlist of 1,500.
  • Semi-structured interviews with more than 50 stakeholders and experts from government, the UN, NATO, the defence industry, AI firms, academia, think tanks and non-governmental organisations.
  • Insights from seven external workshops or webinars, as well as two parliamentary inquiries conducted concurrently with the study.
  • Iterative refinement of a conceptual framework in collaboration with the MOD and FCDO.

What did we find?

AI is best understood as a dual-use collection of versatile technologies: hardware-driven yet software-centric. In contrast to conventional military technologies, these technologies are widely accessible and spreading rapidly, with innovation driven primarily by the private sector for commercial purposes rather than by government or defence. While collective understanding of the military applications and ramifications of AI is improving, it is starting from a low base. Discussion tends to prioritise a few high-profile issues, such as lethal autonomous weapon systems (LAWS) or artificial general intelligence (AGI), to the detriment of other subjects; to focus on the tactical rather than the strategic, and on risks rather than opportunities; and to dwell on the immediate effects of military AI rather than the potentially more significant second- and third-order consequences over the longer term.

Among the numerous risks and opportunities scrutinised in depth in this report, the most critical ones include:

  • Information manipulation, such as AI-generated deepfakes, which can not only exacerbate political, economic and social challenges but also distort military decision-making in a crisis.
  • Empowerment of non-state actors with asymmetric capabilities that can undermine the supremacy of state militaries or, in a worst-case scenario, introduce new forms of mass destruction (e.g. bioweapons).
  • The interconnected effects of AI on the offence-defence balance between adversaries, on the dynamics of escalation towards military conflict, and on the stability of nuclear deterrence. These concerns are particularly acute amid intensifying great-power competition and in a world already grappling with other sources of insecurity (e.g. Ukraine, Israel-Iran, Taiwan, migration, climate change).
  • The potentially catastrophic safety and security risks linked to any future emergence of AGI.

What can be done?

To tackle these challenges, countries must promptly formulate a thorough action plan that accounts for the complex interplay of technological progress in AI, geopolitical rivalry and the evolving norms surrounding AI in the global order. Such a plan should draw on the full set of diplomatic, informational, military and economic (DIME) levers to influence the behaviour of a range of stakeholders, establishing a proactive set of initiatives to:

  • Enhance the responsible integration of AI and maximise its advantages for Defence.
  • Restrict the deployment of military AI by non-state actors, terrorist groups and hostile or rogue states, and impose consequences on them to shape their behaviour.
  • Influence global, minilateral and bilateral governance frameworks for military AI.

This action plan should also draw on insights from other domains, as explored in this report, and on the momentum of recent high-level initiatives on AI. Notable examples include the AI Safety Summit at Bletchley Park, the Responsible AI in the Military Domain (REAIM) summit and the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.