Military Applications of Artificial Intelligence

Ethical Concerns in an Uncertain World

Forrest E. Morgan, Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, Derek Grossman

Research Report
Published Apr 28, 2020

In this report, the authors examine military applications of artificial intelligence (AI) and their ethical implications. They survey the kinds of technologies broadly classified as AI, consider their potential benefits in military applications, and assess the ethical, operational, and strategic risks that these technologies entail. After comparing military AI development efforts in the United States, China, and Russia, the authors examine those states' policy positions on proposals to ban or regulate the development and employment of autonomous weapons, a military application of AI that arms control advocates find particularly troubling.

Finding that potential adversaries are increasingly integrating AI into a range of military applications in pursuit of warfighting advantages, the authors recommend that the U.S. Air Force organize, train, and equip to prevail in a world in which military systems empowered by AI are prominent in all domains. Although efforts to ban autonomous weapons are unlikely to succeed, there is growing recognition among states that the risks associated with military AI will require human operators to maintain positive control of its employment. The authors therefore recommend that Air Force, Joint Staff, and other Department of Defense leaders work with the State Department to seek greater technical cooperation and policy alignment with allies and partners, while also exploring confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI.

The research in this report was conducted in 2017 and 2018. The report was delivered to the sponsor in October 2018 and was approved for distribution in March 2020.

Key Findings

A steady increase in the integration of AI in military systems is likely

  • The various forms of AI have serious ramifications for warfighting applications.
  • AI will present new ethical questions in war, and deliberate attention can potentially mitigate the most-extreme risks.
  • Despite ongoing United Nations discussions, an international ban or other regulation on military AI is not likely in the near term.

The United States faces significant international competition in military AI

  • Both China and Russia are pursuing militarized AI technologies.
  • The potential proliferation of military AI to other state and nonstate actors is another area of concern.

The development of military AI presents a range of risks that need to be addressed

  • Ethical risks are important from a humanitarian standpoint.
  • Operational risks arise from questions about the reliability, fragility, and security of AI systems.
  • Strategic risks include the possibility that AI will increase the likelihood of war, escalate ongoing conflicts, and proliferate to malicious actors.

The U.S. public generally supports continued investment in military AI

  • Support depends in part on contextual factors, such as whether the adversary is using autonomous weapons and whether the system is necessary for self-defense.
  • Although perceptions of ethical risks can vary according to the threat landscape, there is broad consensus regarding the need for human accountability.
  • The locus of responsibility should rest with commanders.
  • Human involvement needs to take place across the entire life cycle of each system, including its development and regulation.

Recommendations

  • Organize, train, and equip forces to prevail in a world in which military systems empowered by AI are prominent in all domains.
  • Understand how to address the ethical concerns expressed by technologists, the private sector, and the American public.
  • Conduct public outreach to inform stakeholders of the U.S. military's commitment to mitigating ethical risks associated with AI to avoid a public backlash and any resulting policy limitations for Title 10 action.
  • Follow discussions of the Group of Governmental Experts involved in the UN Convention on Certain Conventional Weapons and track the evolving positions held by stakeholders in the international community.
  • Seek greater technical cooperation and policy alignment with allies and partners regarding the development and employment of military AI.
  • Explore confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI.

Document Details

  • Availability: Available
  • Year: 2020
  • Print Format: Paperback
  • Paperback Pages: 223
  • Paperback Price: $49.00
  • Paperback ISBN/EAN: 978-1-9774-0492-3
  • DOI: https://doi.org/10.7249/RR3139-1
  • Document Number: RR-3139-1-AF

Citation

RAND Style Manual
Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World, RAND Corporation, RR-3139-1-AF, 2020. As of September 11, 2024: https://www.rand.org/pubs/research_reports/RR3139-1.html
Chicago Manual of Style
Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. Santa Monica, CA: RAND Corporation, 2020. https://www.rand.org/pubs/research_reports/RR3139-1.html. Also available in print form.

Research conducted by

This research was commissioned by the United States Air Force and conducted within the Strategy and Doctrine Program of RAND Project AIR FORCE.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.