Our Future Lies in Making AI Robust and Verifiable


Oct 22, 2019

Digital concept of a brain, photo by Vertigo3d/Getty Images


This commentary originally appeared on War on the Rocks on October 22, 2019.

This article was submitted in response to a call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Robert Work and Eric Schmidt. You can find all of RAND's submissions here.

We are hurtling towards a future in which AI is omnipresent—Siris will turn our iPhones into personal assistants and Alexas will automate our homes and provide companionship to our elderly. Digital ad engines will feed our deepest retail dreams, and drones will deliver them to us in record time. In the longer term, autonomous cars will zip us around our smart cities where the traffic is fluid and where every resource, from parking spaces to energy and water, is optimized. Algorithms will manage our airspace, critical infrastructure, healthcare, and financial systems. Some technologies promise to detect illnesses earlier and others to develop drugs faster and cheaper. Still other algorithms will be dedicated to protecting our nation and our way of life.

This AI-enabled future is blinding in its possibilities for prosperity, security, and well-being. Yet it is also crippling in its fragility, and it can easily come to a screeching halt. All it might take is for a safety-critical AI system to fail spectacularly in the public eye—an AI analog to the Three Mile Island accident, or worse, a series of cascading incidents leading to mass casualties (e.g., AI-enabled traffic lights that malfunction and set in motion a mass pileup of autonomous vehicles at a busy intersection)—to stop the advancement and adoption of these technologies, and public support for them, in their tracks.…

The remainder of this commentary is available at warontherocks.com.

Danielle C. Tarraf is a senior information scientist at the nonprofit, nonpartisan RAND Corporation, where her work focuses on technology strategy, informed by quantitative and data-driven analyses. She began her career as an electrical and computer engineering faculty member at Johns Hopkins University, where she established and directed a research lab focused on advancing control theory, particularly as it interfaces with theoretical computer science and reinforcement learning.