This past summer, many titans of technology and science, including Stephen Hawking, Elon Musk, and Steve Wozniak, signed an open letter calling for a ban on the application of artificial intelligence (AI) to advanced weapons systems.
The call for a ban is directly at odds with the Pentagon's plans for future warfare, which place increased emphasis on AI and unmanned systems, especially in cyberspace and in environments where communications are slow, such as undersea. Deputy Defense Secretary Robert Work has said, “We believe strongly that humans should be the only ones to decide when to use lethal force. But when you're under attack, especially at machine speeds, we want to have a machine that can protect us.”
Unlike earlier autonomous weapons such as landmines, which were indiscriminate in their targeting, smart AI weapons could limit deaths among soldiers and civilians alike. The letter's authors acknowledge both the benefits and the risks. “Replacing human soldiers by machines is good by reducing casualties for the owner,” they write, “but bad by thereby lowering the threshold for going to battle.” But is a ban really the best option?
The remainder of this commentary is available on defenseone.com.
Andrew Lohn is an associate engineer, Andrew Parasiliti is director of the Center for Global Risk and Security, and William Welser IV is director of Engineering and Applied Sciences, all at the nonprofit, nonpartisan RAND Corporation.
This commentary originally appeared on Defense One on February 8, 2016. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.