The greater use of artificial intelligence (AI) and autonomous systems by the militaries of the world has the potential to affect deterrence strategies and escalation dynamics in crises and conflicts. Until now, deterrence has involved humans trying to dissuade other humans from taking particular courses of action. What happens when the thinking and decision processes involved are no longer purely human? How might dynamics change when decisions and actions can be taken at machine speeds? How might AI and autonomy affect the ways that countries signal one another about the potential use of force? What are the potential areas for miscalculation, unintended consequences, and, in particular, unwanted escalation?

This exploratory report provides an initial examination of how AI and autonomous systems could affect deterrence and escalation in conventional crises and conflicts. Findings suggest that machine decisionmaking can result in inadvertent escalation or altered deterrence dynamics, owing to the speed of machine decisionmaking, the ways in which it differs from human understanding, the willingness of many countries to use autonomous systems, our relative inexperience with them, and the continued development of these capabilities. Current planning and development efforts have not kept pace with the potentially destabilizing or escalatory issues associated with these new technologies, and it is essential that planners and decisionmakers begin to think about these issues before fielded systems are engaged in conflict.

Key Findings

Insights from a wargame involving AI and autonomous systems

  • Manned systems may be better for deterrence than unmanned ones.
  • Replacing manned systems with unmanned ones may be seen as a reduced security commitment.
  • Players put their systems on different autonomous settings to signal resolve and commitment during the conflict.
  • The speed of autonomous systems did lead to inadvertent escalation in the wargame.

Implications for deterrence

  • Autonomous and unmanned systems could affect extended deterrence and our ability to assure our allies of U.S. commitment.
  • Widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.
  • Different mixes of human and artificial agents could affect the escalatory dynamics between two sides.
  • Machines will likely be worse at understanding the human signaling involved in deterrence, especially de-escalation.
  • Whereas traditional deterrence has largely been about humans attempting to understand other humans, deterrence in this new age involves understanding along a number of additional pathways.
  • Past cases of inadvertent engagement of friendly or civilian targets by autonomous systems may offer insights into the kinds of technical accidents or failures that could occur with more-advanced systems.

Recommendations

  • Conduct further work on deterrence theory and other frameworks to explicitly consider the potential effects of AI and autonomous systems.
  • Evaluate the escalatory potential of new systems.
  • Evaluate the escalatory potential of new operating concepts.
  • Wargame additional scenarios at the operational and strategic levels.

Document Details

  • Availability: Available
  • Year: 2020
  • Print Format: Paperback
  • Paperback Pages: 122
  • Paperback Price: $22.00
  • Paperback ISBN/EAN: 978-1-9774-0406-0
  • DOI: https://doi.org/10.7249/RR2797
  • Document Number: RR-2797-RC

Citation

RAND Style Manual
Wong, Yuna Huh, John Yurchak, Robert W. Button, Aaron B. Frank, Burgess Laird, Osonde A. Osoba, Randall Steeb, Benjamin N. Harris, and Sebastian Joon Bae, Deterrence in the Age of Thinking Machines, RAND Corporation, RR-2797-RC, 2020. As of September 11, 2024: https://www.rand.org/pubs/research_reports/RR2797.html
Chicago Manual of Style
Wong, Yuna Huh, John Yurchak, Robert W. Button, Aaron B. Frank, Burgess Laird, Osonde A. Osoba, Randall Steeb, Benjamin N. Harris, and Sebastian Joon Bae, Deterrence in the Age of Thinking Machines. Santa Monica, CA: RAND Corporation, 2020. https://www.rand.org/pubs/research_reports/RR2797.html. Also available in print form.

Funding for this research was provided by gifts from RAND supporters and income from operations. The research was conducted within the International Security and Defense Policy Center of the RAND National Defense Research Institute.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.