
Commentary

(Bulletin of the Atomic Scientists)

May 1, 2018

Will Artificial Intelligence Undermine Nuclear Stability?


by Andrew J. Lohn and Edward Geist

Artificial intelligence and nuclear war have been science-fiction clichés for decades. Today's AI is impressive, to be sure, but it is specialized and remains a far cry from computers that become self-aware and turn against their creators. At the same time, popular culture does not do justice to the threats that modern AI does present, such as its potential to make nuclear war more likely even if it never exerts direct control over nuclear weapons.

Russian President Vladimir Putin recognized the military significance of AI when he declared in September 2017 that the country that leads in artificial intelligence will eventually rule the world. He may be the only leader to have put it so bluntly, but other world powers appear to be thinking along similar lines. Both China and the United States have announced ambitious efforts to harness AI for military applications, stoking fears of an incipient arms race.

In the same speech, Putin said that AI comes with “colossal opportunities” as well as “threats that are difficult to predict.” The gravest of those threats may involve nuclear stability—as we describe in a new RAND publication that outlines a few of the ways in which stability could be strained....

The remainder of this commentary is available on thebulletin.org.


Andrew J. Lohn is an engineer and Edward Geist is an associate policy analyst at the nonprofit, nonpartisan RAND Corporation.

This commentary originally appeared on Bulletin of the Atomic Scientists on April 30, 2018. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.