Image: AI robot pressing a nuclear launch button


How Artificial Intelligence Could Increase the Risk of Nuclear War

April 24, 2018

Photos by kyryloff, Jorge, Scanrail/Adobe Stock; MakaronProduktion/Getty Images. Design by Chara Williams/RAND Corporation


Could artificial intelligence upend concepts of nuclear deterrence that have helped spare the world from nuclear war since 1945? Stunning advances in AI—coupled with a proliferation of drones, satellites, and other sensors—raise the possibility that countries could find and threaten each other's nuclear forces, escalating tensions.

Lt. Col. Stanislav Petrov settled into the commander's chair in a secret bunker outside Moscow. His job that night was simple: Monitor the computers that were sifting through satellite data, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983.

A siren clanged off the bunker walls. A single word flashed on the screen in front of him.

"Launch."

The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world's major nuclear powers. It's not the killer robots of Hollywood blockbusters that we need to worry about; it's how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That's the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It's part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

"This isn't just a movie scenario," said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. "Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful."


Glitch, or Armageddon?

Petrov would say later that his chair felt like a frying pan. He knew the computer system had glitches. The Soviets, worried that they were falling behind in the arms race with the United States, had rushed it into service only months earlier. Its screen now read “high reliability,” but Petrov's gut said otherwise.

He picked up the phone to his duty officer. “False alarm,” he said. Suddenly, the system flashed with new warnings: another launch, and then another, and then another. The words on the screen glowed red:

"Missile attack."

To understand how intelligent computers could raise the risk of nuclear war, you have to understand a little about why the Cold War never went nuclear hot. There are many theories, but “assured retaliation” has always been one of the cornerstones. In the simplest terms, it means: If you punch me, I'll punch you back. With nuclear weapons in play, that counterpunch could wipe out whole cities, a loss neither side was ever willing to risk.


That theory leads to some seemingly counterintuitive conclusions. If both sides have weapons that can survive a first strike and hit back, then the situation is stable. Neither side will risk throwing that first punch. The situation gets more dangerous and uncertain if one side loses its ability to strike back or even just thinks it might lose that ability. It might respond by creating new weapons to regain its edge. Or it might decide it needs to throw its punches early, before it gets hit first.

That's where the real danger of AI might lie. Computers can already scan thousands of surveillance photos, looking for patterns that a human eye would never see. It doesn't take much imagination to envision a more advanced system taking in drone feeds, satellite data, and even social media posts to develop a complete picture of an adversary's weapons and defenses.

A system that can be everywhere and see everything might convince an adversary that it is vulnerable to a disarming first strike—that it might lose its counterpunch. That adversary would scramble to find new ways to level the field again, by whatever means necessary. That road leads closer to nuclear war.

"Autonomous systems don't need to kill people to undermine stability and make catastrophic war more likely," said Edward Geist, an associate policy researcher at RAND, a specialist in nuclear security, and co-author of the new paper. "New AI capabilities might make people think they're going to lose if they hesitate. That could give them itchier trigger fingers. At that point, AI will be making war more likely even though the humans are still quote-unquote in control."

A Gut Feeling

Petrov's computer screen now showed five missiles rocketing toward the Soviet Union. Sirens wailed. Petrov held the phone to the duty officer in one hand, an intercom to the computer room in the other. The technicians there were telling him they could not find the missiles on their radar screens or telescopes.

It didn't make any sense. Why would the United States start a nuclear war with only five missiles? Petrov raised the phone and said again:

False alarm.

Computers can now teach themselves to walk—stumbling, falling, but learning until they get it right. Their neural networks mimic the architecture of the brain. A computer recently beat one of the world's best players at the ancient strategy game of Go with a move that was so alien, yet so effective, that the human player stood up, left the room, and then needed 15 minutes to make his next move.


The military potential of such superintelligence has not gone unnoticed by the world's major nuclear powers. The United States has experimented with autonomous boats that could track an enemy submarine for thousands of miles. China has demonstrated “swarm intelligence” algorithms that can enable drones to hunt in packs. And Russia recently announced plans for an underwater doomsday drone that could guide itself across oceans to deliver a nuclear warhead powerful enough to vaporize a major city.

Whoever wins the race for AI superiority, Russian President Vladimir Putin has said, "will become the ruler of the world." Tesla founder Elon Musk had a different take: The race for AI superiority, he warned, is the most likely cause of World War III.

The Moment of Truth
