Download eBook for Free

Format: PDF file
File Size: 3.7 MB
Notes: Use Adobe Acrobat Reader version 10 or higher for the best experience.


Purchase Print Copy

Format: Paperback, 223 pages
Price: $49.00

Research Questions

  1. Is the United States constrained in its development or employment of military AI in ways that China and Russia are not?
  2. What does the Air Force need to do to maximize the benefits potentially available from AI-enabled systems while mitigating the risks they entail?
  3. What are the U.S. public's attitudes toward military AI and the ethical questions its applications raise?

The authors of this report examine military applications of artificial intelligence (AI) and consider the ethical implications. The authors survey the kinds of technologies broadly classified as AI, consider their potential benefits in military applications, and assess the ethical, operational, and strategic risks that these technologies entail. After comparing military AI development efforts in the United States, China, and Russia, the authors examine those states' policy positions regarding proposals to ban or regulate the development and employment of autonomous weapons, a military application of AI that arms control advocates find particularly troubling.

Finding that potential adversaries are increasingly integrating AI into a range of military applications in pursuit of warfighting advantages, they recommend that the U.S. Air Force organize, train, and equip to prevail in a world in which military systems empowered by AI are prominent in all domains. Although efforts to ban autonomous weapons are unlikely to succeed, there is growing recognition among states that the risks associated with military AI will require human operators to maintain positive control in its employment. Thus, the authors recommend that Air Force, Joint Staff, and other Department of Defense leaders work with the State Department to seek greater technical cooperation and policy alignment with allies and partners, while also exploring confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI.

The research in this report was conducted in 2017 and 2018. The report was delivered to the sponsor in October 2018 and was approved for distribution in March 2020.

Key Findings

A steady increase in the integration of AI in military systems is likely

  • The various forms of AI carry significant ramifications for warfighting.
  • AI will present new ethical questions in war, and deliberate attention can potentially mitigate the most-extreme risks.
  • Despite ongoing United Nations discussions, an international ban or other regulation on military AI is not likely in the near term.

The United States faces significant international competition in military AI

  • Both China and Russia are pursuing militarized AI technologies.
  • The potential proliferation of military AI to other state and nonstate actors is another area of concern.

The development of military AI presents a range of risks that need to be addressed

  • Ethical risks are important from a humanitarian standpoint.
  • Operational risks arise from questions about the reliability, fragility, and security of AI systems.
  • Strategic risks include the possibility that AI will increase the likelihood of war, escalate ongoing conflicts, and proliferate to malicious actors.

The U.S. public generally supports continued investment in military AI

  • Support depends in part on whether the adversary is using autonomous weapons, whether the system is necessary for self-defense, and other contextual factors.
  • Although perceptions of ethical risks can vary according to the threat landscape, there is broad consensus regarding the need for human accountability.
  • The public believes that the locus of responsibility should rest with commanders.
  • Human involvement needs to take place across the entire life cycle of each system, including its development and regulation.

Recommendations

  • Organize, train, and equip forces to prevail in a world in which military systems empowered by AI are prominent in all domains.
  • Understand how to address the ethical concerns expressed by technologists, the private sector, and the American public.
  • Conduct public outreach to inform stakeholders of the U.S. military's commitment to mitigating ethical risks associated with AI to avoid a public backlash and any resulting policy limitations for Title 10 action.
  • Follow discussions of the Group of Governmental Experts involved in the UN Convention on Certain Conventional Weapons and track the evolving positions held by stakeholders in the international community.
  • Seek greater technical cooperation and policy alignment with allies and partners regarding the development and employment of military AI.
  • Explore confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI.

This research was commissioned by the United States Air Force and conducted within the Strategy and Doctrine Program of RAND Project AIR FORCE.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.