With Driverless Cars, How Safe Is Safe Enough?


Feb 1, 2016

A line of Lexus SUVs equipped with Google self-driving sensors awaits test riders in Mountain View, California, September 29, 2015


Photo by Elijah Nouvelage/Reuters

This commentary originally appeared on USA Today on January 31, 2016.

Ride-sharing app Lyft and auto giant General Motors heated up the driverless car debate in January, announcing plans to build a fleet of autonomous vehicles to meet America's growing appetite for on-demand ride services. They join Google, Ford and Uber on the driverless bandwagon.

But before driverless cars can be deployed to satisfy this growing demand, a fundamental question remains: How safe is safe enough?

Inevitably, some will insist that anything short of totally eliminating risk is a safety compromise. They might accept mistakes from humans, but not from machines. But waiting for autonomous vehicles to operate perfectly misses opportunities to save lives by keeping far-from-perfect human drivers behind the wheel. In the United States alone, about 30,000 people are killed and more than 2 million injured in crashes every year, and the vast majority of that carnage is caused by human error. Moreover, perfection could be a standard that is unattainable, or one that is not economically viable for developers, putting the kibosh on the industry. This would be a classic case of the perfect being the enemy of the good.

It seems sensible that autonomous vehicles should be allowed on America's roads once they are judged safer than the average human driver, saving more lives sooner without creating new risks. But this standard has its drawbacks, too. Makers of driverless cars might have incentives to make them only as safe as required rather than as safe as possible. And this "at least as good" standard could also miss important opportunities.

What if driverless cars were permitted in some capacity even if they were not quite as safe as human drivers? The technology could be deployed sooner, but at the expense of more crashes, at least initially. The non-safety benefits might outweigh these drawbacks, for example by allowing people to do more productive things instead of sitting behind the wheel. Indeed, people make this trade-off every time they get in a car: They believe the benefit of traveling outweighs the very real safety risk of being on the road.

And, counterintuitively, more lives might actually be saved under this "not quite there" standard if developers can use early deployment to rapidly improve their autonomous vehicles. The cars might become at least as good as the average human driver faster than they otherwise would, saving more lives overall. On the other hand, public backlash from the inevitable crashes of a not-quite-there technology could be great enough to halt the industry.

Whichever standard of safety is pursued — through legislation, the courts or both — it will shape the arc of autonomous vehicle development in ways more complicated than many assume.

And yet, it could be impossible to accurately gauge safety until many, many autonomous vehicles hit the roads. In the U.S., approximately one fatality occurs for every 100 million miles driven. To demonstrate with 95% confidence that driverless cars at least match this rate simply by driving them and observing, they would need to log 275 million miles without a fatality. With a fleet of 100 autonomous vehicles (larger than any known existing fleet) driving 24/7, it would take more than 12 years to cover those miles. With 10,000 such vehicles, it would take only about six weeks. Regulators will have to find other ways of estimating safety, but widespread deployment will be the true test. If safety standards are too strict, that deployment might never happen.
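The arithmetic behind these figures can be sketched briefly. The sketch below assumes fatalities follow a Poisson process, uses the more precise U.S. rate of about 1.09 fatalities per 100 million vehicle miles (the figure behind "approximately one per 100 million"), and assumes a hypothetical fleet average speed of 25 mph; none of these parameters are stated in the text above.

```python
import math

# Assumed parameters (not stated in the commentary):
FATALITIES_PER_MILE = 1.09 / 100e6  # ~U.S. rate: 1.09 per 100 million miles
CONFIDENCE = 0.95
AVG_SPEED_MPH = 25                  # hypothetical fleet average speed

# Under a Poisson model, the chance of zero fatalities in N miles is
# exp(-rate * N). For 95% confidence that the true rate is no worse than
# the human rate, solve exp(-rate * N) <= 0.05 for N.
miles_needed = math.log(1 / (1 - CONFIDENCE)) / FATALITIES_PER_MILE
print(f"Miles needed: {miles_needed / 1e6:.0f} million")  # ~275 million

def years_to_drive(fleet_size: int) -> float:
    """Years for a fleet driving 24/7 at the assumed speed to log the miles."""
    hours = miles_needed / (fleet_size * AVG_SPEED_MPH)
    return hours / (24 * 365)

print(f"100 vehicles:    {years_to_drive(100):.1f} years")        # ~12.6 years
print(f"10,000 vehicles: {years_to_drive(10_000) * 52:.1f} weeks")  # ~6.5 weeks
```

The fleet comparison falls out directly: driving time scales inversely with fleet size, so a 100-fold larger fleet cuts more than a decade down to a matter of weeks.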

Managing the safety opportunities and risks of driverless cars is complex, and policymakers face tough decisions ahead. With the looming prospect of driverless chauffeurs plying the roads at a rate envisioned by Lyft, GM, Uber and others, these questions need to be asked and answered.

Nidhi Kalra, a senior information scientist at RAND Corporation and director of RAND's Center for Decision Making Under Uncertainty, is a co-author of the report "Autonomous Vehicle Technology: A Guide for Policymakers."
