Exploring the Civil-Military Divide over Artificial Intelligence

James Ryseff, Eric Landree, Noah Johnson, Bonnie Ghosh-Dastidar, Max Izenberg, Sydne J. Newberry, Christopher Ferris, Melissa A. Bradley

Research published May 11, 2022

Artificial intelligence (AI) is anticipated to be a key capability for enabling the U.S. military to maintain its dominance. The U.S. Department of Defense's (DoD's) engagement with leading high-tech private-sector corporations, for which the military represents a relatively small share of the customer base, provides a valuable conduit to cutting-edge AI-enabled capabilities and access to leading AI software developers and engineers. To assess the views of software engineers and other technical staff in the private sector about potential DoD applications of AI, a research team conducted a survey that presented a variety of scenarios describing how the U.S. military might employ AI and asked respondents to describe their comfort level with each use. The scenarios varied several factors, including the distance from the battlefield, the destructiveness of the action, and the degree of human oversight of the AI algorithm. The survey results indicate that most U.S. AI experts do not oppose the basic mission of DoD or the use of AI for many military applications.

Key Findings

An unbridgeable divide between Silicon Valley and DoD does not appear to exist

  • Respondents from Silicon Valley technology firms and alumni of universities with top-ranking computer science departments are comfortable with a variety of military applications for AI.

There is a meaningful difference in the comfort level for AI applications that involve the use of lethal force

  • About one-third of respondents from the three surveyed Silicon Valley technology corporations were uncomfortable with lethal use cases for AI.

Tech workers have low levels of trust in leaders—even their own

  • Software engineers and other technology workers have low levels of trust in individuals who hold leadership positions.
  • Technology workers trust CEOs of technology companies almost as little as they trust elected officials or the heads of federal agencies.

Tech workers are most concerned about cyber threats to the United States

  • More than 75 percent of respondents from all three surveyed populations regarded China and Russia as serious threats to the United States.

Tech workers support the use of military force to defend against foreign aggression

  • Survey respondents strongly supported using military force to defend the United States and its NATO allies from foreign aggression, with nearly 90 percent of participants finding the use of military force to be justified under these circumstances.

Silicon Valley tech workers have little personal connection to the military

  • Less than 2 percent of Silicon Valley respondents had served in the U.S. armed forces.
  • Almost 20 percent of software engineers working at defense contractors had previously served in the U.S. military.

Recommendations

  • Mechanisms should be explored to expand collaboration between DoD and Silicon Valley companies on defending against cyberattacks, a potential application for AI that addresses what Silicon Valley engineers see as a critical global threat.
  • Engagements among personnel involved with military operations, DoD technical experts, and Silicon Valley individual contributors (nonmanagerial employees in technical roles) should be expanded to assess possible conduits for building greater trust between the organizations.
  • The potential benefits of DoD engaging Silicon Valley engineers on the details of how it would use AI should be explored, including a review of how the military weighs the nuanced and complex situations in which AI would be used.
  • The value of establishing opportunities for DoD and Silicon Valley employees to engage over shared values and principles should be investigated. The recently published DoD ethical principles for AI demonstrate that DoD itself is uncomfortable with some potential uses of AI; these principles could serve as the foundation for a conversation with Silicon Valley engineers about what AI should and should not be used for.
  • Another potentially fruitful area for investigation is assessing and adapting various types of engagements to help the most innovative and experienced U.S. AI experts learn how DoD accomplishes its mission and discover how their talents and expertise can contribute to solving DoD's and the nation's problems.

Document Details

  • Availability: Available
  • Year: 2022
  • Print Format: Paperback
  • Paperback Pages: 106
  • Paperback Price: $34.00
  • Paperback ISBN/EAN: 978-1-9774-0902-7
  • DOI: https://doi.org/10.7249/RRA1498-1
  • Document Number: RR-A1498-1

Citation

RAND Style Manual
Ryseff, James, Eric Landree, Noah Johnson, Bonnie Ghosh-Dastidar, Max Izenberg, Sydne J. Newberry, Christopher Ferris, and Melissa A. Bradley, Exploring the Civil-Military Divide over Artificial Intelligence, RAND Corporation, RR-A1498-1, 2022. As of September 9, 2024: https://www.rand.org/pubs/research_reports/RRA1498-1.html
Chicago Manual of Style
Ryseff, James, Eric Landree, Noah Johnson, Bonnie Ghosh-Dastidar, Max Izenberg, Sydne J. Newberry, Christopher Ferris, and Melissa A. Bradley, Exploring the Civil-Military Divide over Artificial Intelligence. Santa Monica, CA: RAND Corporation, 2022. https://www.rand.org/pubs/research_reports/RRA1498-1.html. Also available in print form.

This research was sponsored by the U.S. Department of Defense's Office of Net Assessment and conducted within the Acquisition and Technology Policy Center and the Forces and Resources Policy Center of the RAND National Security Research Division (NSRD).

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.