Artificial Intelligence and the Military


Sep 7, 2017

U.S. ground troops patrol while robots carry their equipment and drones serve as spotters

Illustration by U.S. Army

This commentary originally appeared on RealClearDefense on September 7, 2017.

The Department of Defense (DoD) is increasingly interested in Artificial Intelligence (AI). During a recent trip to Amazon, Google, and other Silicon Valley companies, Secretary of Defense James Mattis remarked that AI has “got to be better integrated by the DoD.” What do we mean by the term AI? In particular, what does “deep learning” mean? What are the advantages, disadvantages, and risks of using AI? What are potential additional military applications for AI?

What Is AI?

AI is poorly understood in part because its definition is constantly evolving. As computers master tasks previously thought possible only for humans, the bar for what counts as “intelligent” rises. Recently, one of the most productive areas in the field of AI has been technologies that train software to learn and think on its own. This area is moving swiftly and appears to be accelerating. At the same time, “old school” rule-based AI is being abandoned. In the coming decades, AI systems that can be trained, learn, and think independently will likely dominate the field. This brings us to deep learning, an approach that has made tremendous strides in recent years and generated considerable excitement.

What Is Deep Learning?

Deep learning is a powerful set of techniques for training Artificial Neural Networks (ANNs). ANNs are software loosely modeled on the neuronal structure of the mammalian cerebral cortex, though currently far simpler; systems built on ANNs, such as AlphaGo, are powerful in part because of their laser-like focus on a single task. Processing units (referred to as nodes) are organized into layers: input, hidden, and output. The input layer roughly corresponds to photoreceptors in the retina; hidden layers are like the neurons that process signals from the retina and pass them along; and the output layer corresponds to the visual cortex. Simple ANNs have a single hidden layer. ANNs with two or more hidden layers are capable of deep learning and can process more complex data sets than single-hidden-layer ANNs. Deep learning currently provides the best solutions to problems in image recognition, speech recognition, and natural language processing (NLP). The key to deep learning is access to large, high-quality datasets for training ANNs. No data, no (deep) learning.
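The layered structure described above can be sketched in a few lines of code. The following is a minimal, illustrative example (not drawn from this commentary): a tiny feed-forward network with two hidden layers, making it nominally “deep.” Its weights are randomly initialized rather than trained, so it only shows how data flows through the layers.

```python
import numpy as np

def relu(x):
    # A common activation function: pass positive values, zero out negatives.
    return np.maximum(0.0, x)

# Layer sizes: 3 input nodes -> two hidden layers of 4 nodes -> 2 output nodes.
layers = [3, 4, 4, 2]
rng = np.random.default_rng(0)
# One weight matrix connects each layer to the next.
weights = [rng.normal(size=(m, n)) for m, n in zip(layers, layers[1:])]

def forward(x, weights):
    """Pass an input vector through each layer in turn."""
    for w in weights:
        x = relu(x @ w)
    return x

out = forward(np.array([1.0, 0.5, -0.2]), weights)
print(out.shape)  # (2,)
```

In a real deep learning system, the interesting work is in *training*: the weights are adjusted, over many passes through a large dataset, until the outputs become useful.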

What Are the Advantages, Disadvantages, and Risks of Using AI?

The phrase “software is eating the world” was coined in 2011 to reflect the growing use of software to take over mundane or well-structured tasks. In 2016, the phrase “AI is eating software” emerged. Recognizing that AI systems are themselves software, we prefer the clarifying concept of “frozen software” — software that cannot learn and so can only be improved via updates. Tax preparation software is a classic example of frozen software; its performance does not improve with use. AI is now eating frozen tax preparation software.

A clear advantage of AI is its ability to learn and evolve in ways that frozen software cannot. Rule-based frozen software is limited by the human knowledge used to develop it. For example, an early chess program was developed using the great chess player Garry Kasparov as a subject matter expert. The program was good, but not as good as Kasparov; he could not transfer everything that made him great. In contrast, AlphaGo learned by playing countless games of Go against versions of itself and by playing against skilled human players. In so doing, AlphaGo became the world's premier Go player and surpassed the human knowledge that went into it. As a side note, humans who played regularly against AlphaGo improved their own skills considerably, which has implications for training.
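The self-play idea behind AlphaGo can be illustrated on a much smaller scale. The sketch below is purely illustrative (AlphaGo's actual method combines deep neural networks with Monte Carlo tree search): it uses simple tabular learning and self-play on a toy game of Nim, so the program improves with no human knowledge beyond the rules.

```python
import random

# Toy game of Nim: players alternate removing 1-3 stones from a pile;
# whoever takes the last stone wins. The agent plays against itself and
# updates a value table from each game's outcome.
PILE, ALPHA, EPSILON = 12, 0.5, 0.2
Q = {}  # (stones_remaining, move) -> estimated value for the mover

def moves(stones):
    return range(1, min(3, stones) + 1)

def choose(stones, explore=True):
    # Epsilon-greedy: mostly pick the best-known move, sometimes explore.
    if explore and random.random() < EPSILON:
        return random.choice(list(moves(stones)))
    return max(moves(stones), key=lambda m: Q.get((stones, m), 0.0))

def play_and_learn():
    """One game of self-play; update the table for every move afterward."""
    history, stones = [], PILE
    while stones > 0:
        m = choose(stones)
        history.append((stones, m))
        stones -= m
    # The player who made the last move won; rewards alternate in sign
    # going backward through the history, since the players alternate.
    reward = 1.0
    for state_move in reversed(history):
        old = Q.get(state_move, 0.0)
        Q[state_move] = old + ALPHA * (reward - old)
        reward = -reward

random.seed(0)
for _ in range(20000):
    play_and_learn()

# With 2 stones left, the trained agent takes both and wins immediately.
print(choose(2, explore=False))  # 2
```

The point of the sketch is the feedback loop: the program generates its own training data by playing itself, which is what lets a learning system exceed the knowledge of its developers.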

A clear disadvantage of learning AI is that it is only as good as the data it gets. A number of chatbots have developed undesirable sexist, racist, or even Mein Kampf-quoting behavior after interacting with humans or other chatbots that fed them “poisoned” inputs. Another disadvantage is that AI is still not ready for many tasks that require deep contextual knowledge. Lastly, a key risk is that AI is opaque, making people hesitant to use it in certain domains such as criminal justice.

What Are Potential Implications for the Military?

There are several possible AI applications for the military. Replacing frozen software with systems that learn, rather than requiring periodic updates, could yield more nimble systems, possibly at lower cost. AI could also be used in training systems; for example, it could provide unpredictable and adaptive adversaries for training fighter pilots. Computer vision, the ability of software to understand photos and videos, could greatly help in processing the mountains of data from surveillance systems or in “pattern-of-life” surveillance. Facial recognition AIs are developing rapidly (including in China). Augmented reality, now used by international airlines, can help close “skill gaps” in complex maintenance. NLP, used by systems such as Amazon's Alexa, enables systems to interact with humans in natural language; it could let systems take orders without keyboards, translate documents, and one day serve as a real-time translator.

Other suggested applications include using AI to solve logistics challenges, support war games, automate combat in so-called manned-unmanned operations, speed weapon development and optimization, and identify targets (as well as non-combatants).

However, there are also implications from AI adoption by the military. The military's current verification and validation process is designed for frozen software and is not suited to AIs that learn. Tainted data, possibly planted by adversaries, could have fatal consequences. It is also hard to trust a system that cannot be understood. Lastly, data will be paramount, since the success of learning AI depends on it.

Robert Button is a senior operations research analyst at the nonprofit, nonpartisan RAND Corporation.
