
Commentary

(War on the Rocks)

How to Train Your AI Soldier Robots (and the Humans Who Command Them)

U.S. Air Force Airman Gevoyd Little operates his remote explosive detection robot during Operation Falcon Sweep in the Village of Shakaria, Iraq, Jan. 11, 2006

Photo by Kevin L. Moses Sr./U.S. Air Force

by Thomas Hamilton

February 21, 2020

This article was submitted in response to a call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Robert Work and Eric Schmidt. You can find all of RAND's submissions here.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Frequently the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (“2001: A Space Odyssey”), reason with it (“Wargames”), blow it up (“Star Wars: The Phantom Menace”), or be defeated by it (“Dr. Strangelove”). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in “Star Wars”).

These science fiction tropes are legitimate models for military discussion, and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really “artificial” if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning, they have been programmed by their own experiences. We speculate that some of the characteristics humans have evolved over millennia will also evolve in future AI, purely because those characteristics succeed across a wide range of situations that are real for humans or simulated for robots.…

The remainder of this commentary is available at warontherocks.com.


Thomas Hamilton is a senior physical scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

This commentary originally appeared on War on the Rocks on February 21, 2020. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.