Research Questions

  1. What factors influence how comfortable or uncomfortable software engineers feel with potential applications of AI for the U.S. military?
  2. Is there a correlation between the degree of trust that software engineers have in societal institutions—specifically, in DoD—and their perception of the acceptability of building AI applications for DoD?
  3. Do software engineers perceive the countries that DoD has identified as strategic competitors as a meaningful threat to the United States?
  4. What types of news media and other sources of information are software engineers relying on to inform them about events related to DoD?

Artificial intelligence (AI) is anticipated to be a key capability for enabling the U.S. military to maintain its dominance. The U.S. Department of Defense's (DoD's) engagement with leading high-tech private sector corporations, for which the military represents a relatively small share of the customer base, provides a valuable conduit to cutting-edge AI-enabled capabilities and access to leading AI software developers and engineers. To assess the views of software engineers and other technical staff in the private sector about potential DoD applications of AI, a research team conducted a survey that presented a variety of scenarios describing how the U.S. military might employ AI and asked respondents to describe their comfort level with using AI in these ways. The scenarios varied several factors, including the degree of distance from the battlefield, the destructiveness of the action, and the degree of human oversight over the AI algorithm. The survey results indicate that most U.S. AI experts do not oppose the basic mission of DoD or the use of AI for many military applications.

Key Findings

An unbridgeable divide between Silicon Valley and DoD does not appear to exist

  • Respondents from Silicon Valley technology firms and alumni of universities with top-ranking computer science departments are comfortable with a variety of military applications for AI.

There is a meaningful difference in the comfort level for AI applications that involve the use of lethal force

  • About one-third of respondents from the three surveyed Silicon Valley technology corporations were uncomfortable with lethal use cases for AI.

Tech workers have low levels of trust in leaders—even their own

  • Software engineers and other technology workers have low levels of trust in individuals who hold leadership positions.
  • Technology workers trust CEOs of technology companies almost as little as they trust elected officials or the heads of federal agencies.

Tech workers are most concerned about cyber threats to the United States

  • More than 75 percent of respondents from all three populations also regarded China and Russia as serious threats to the United States.

Tech workers support the use of military force to defend against foreign aggression

  • Survey respondents strongly supported using military force to defend the United States and its NATO allies from foreign aggression, with nearly 90 percent of participants finding the use of military force to be justified under these circumstances.

Silicon Valley tech workers have little personal connection to the military

  • Less than 2 percent of Silicon Valley respondents had served in the U.S. armed forces.
  • Almost 20 percent of software engineers working at defense contractors had previously served in the U.S. military.

Recommendations

  • Mechanisms should be explored for expanding collaboration between DoD and Silicon Valley companies on the threats posed by cyberattacks, a potential application for AI that addresses what Silicon Valley engineers see as a critical global threat.
  • Expansion of engagements among personnel involved with military operations, DoD technical experts, and Silicon Valley individual contributors (nonmanagerial employees) working in technical roles should be explored to assess possible conduits for developing greater trust between the organizations.
  • The potential benefits of DoD engaging Silicon Valley engineers on some of the details of how DoD would use AI should be explored, along with how the military weighs the nuanced and complex situations in which AI would be used.
  • The value of establishing opportunities for DoD and Silicon Valley employees to engage over shared values and principles and the potential benefits of doing so should be investigated. The recently published DoD ethical principles for AI demonstrate that DoD itself is uncomfortable with some potential uses for AI: This could serve as the foundation for a conversation with Silicon Valley engineers about what AI should and should not be used for.
  • Another potentially fruitful area for investigation would be assessing the benefits of, and adapting, various types of engagements that help the most innovative and experienced U.S. AI experts learn how DoD accomplishes its mission and discover how their talents and expertise can contribute to solving DoD's and the nation's problems.

Table of Contents

  • Chapter One: Background
  • Chapter Two: Survey Design and Survey Populations
  • Chapter Three: Survey Execution
  • Chapter Four: Survey Results and Analysis
  • Chapter Five: Key Findings and Conclusions
  • Chapter Six: Future Opportunities and Areas for Further Investigation
  • Appendix A: Survey Methodology
  • Appendix B: Survey Instrument
  • Appendix C: Aggregate Survey Results

This research was sponsored by the U.S. Department of Defense's Office of Net Assessment and conducted within the Acquisition and Technology Policy Center and the Forces and Resources Policy Center of the RAND National Security Research Division (NSRD).

This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.


The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.