Texas is at the bleeding edge of the coming autonomous vehicle revolution. Austin is one of only four cities in the United States chosen by Google to test its fleet of robot-driven Lexus SUVs and new prototype vehicles. When California proposed regulations that would limit the deployment of autonomous vehicles without human drivers behind the wheel, some hailed it as a boon for Texas.
One day, it is hoped, autonomous vehicles will eliminate crashes, reduce congestion, and offer major environmental and land use benefits. But before this potential can be realized, people and policymakers in Texas and across the country face some very important questions. First, how safe should autonomous vehicles be before they are allowed on the roads for consumer and commercial use?
Waiting until autonomous vehicles operate perfectly would squander the opportunity to reduce the risks posed by far-from-perfect human drivers. There are even arguments for permitting driverless cars in some capacity even if they are not quite as safe as human drivers: they offer many non-safety benefits, and early deployment may enable developers to improve them faster than they otherwise would, and thus save more lives overall.
Even as Texans debate this first question, they must also be able to answer a second: How can the safety of autonomous vehicles be adequately demonstrated? The most logical way is to test-drive autonomous vehicles in real traffic and observe their performance. Developers of autonomous vehicles rely on this approach to evaluate and improve their systems, as Google is doing in Texas. But how many miles of test-driving would be enough to satisfy policymakers and the public that driverless cars don't pose undue safety risks?
The safety of human drivers is a critical benchmark for comparing the safety of autonomous vehicles. And, even though the number of injuries and fatalities from human drivers is high, the rate of these failures is low relative to the number of miles that people drive. Americans drive nearly 3 trillion miles every year. The 2.3 million reported injuries in 2013 correspond to 77 reported injuries per 100 million miles. The 32,719 fatalities in 2013 correspond to 1.09 fatalities per 100 million miles.
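The per-100-million-mile rates follow directly from the cited 2013 totals; a quick arithmetic check:

```python
# Reported U.S. figures for 2013, as cited in the text.
MILES_DRIVEN = 3e12          # nearly 3 trillion vehicle-miles per year
INJURIES = 2.3e6             # reported injuries
FATALITIES = 32_719          # fatalities

PER = 100e6                  # express rates per 100 million miles
injury_rate = INJURIES / MILES_DRIVEN * PER      # about 77
fatality_rate = FATALITIES / MILES_DRIVEN * PER  # about 1.09
print(f"{injury_rate:.0f} injuries, {fatality_rate:.2f} fatalities per 100 million miles")
```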
For comparison, Google's autonomous vehicle fleet, which currently has about 55 vehicles, was test-driven approximately 1.3 million miles in autonomous mode and was involved in 11 crashes from 2009 to 2015, none of them fatal. A study by the Virginia Tech Transportation Institute found that Google's fleet might be safer than human drivers in terms of crashes with only property damage, but could not draw conclusions about the relative performance in terms of two critical metrics: injuries and fatalities. Given the fatality and injury rates, a million autonomously driven miles was simply not enough to make statistically significant comparisons.
We asked the next logical question: How many miles of driving would be enough to make statistically significant safety comparisons between autonomous vehicles and human drivers? We determined that autonomous vehicles would have to be test-driven hundreds of millions of miles and sometimes hundreds of billions of miles to confidently demonstrate their safety.
For example, suppose an autonomous vehicle fleet had a 20 percent lower fatality rate than human drivers. Proving this with 95 percent confidence would require driving 5 billion miles. That is literally astronomical — like driving to Neptune and back. Or, put another way, driving every mile of road in Texas nearly 16,000 times over. It would take a fleet of 100 vehicles driving 24/7 around 225 years to drive these miles. Test-driving to prove safety is an impossible proposition.
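The scale of the numbers above can be sketched with a back-of-the-envelope calculation. The sketch below uses a normal approximation to a one-sided Poisson rate comparison, which is my own illustrative formulation, not necessarily the method behind the published figures; the fleet's average speed of 25 mph is likewise an assumed parameter. It lands in the same ballpark as the article's 5 billion miles and roughly two centuries of fleet driving.

```python
from math import sqrt

# Illustrative approximation only: how many autonomously driven miles it takes
# before a true fatality-rate improvement becomes statistically
# distinguishable from the human baseline.

HUMAN_FATALITY_RATE = 1.09 / 100e6  # fatalities per mile (1.09 per 100M miles)

def miles_needed(improvement, z_alpha=1.645, z_beta=0.0):
    """Miles at which a fleet whose true fatality rate is `improvement`
    (e.g. 0.20 = 20 percent) below the human rate would, on average, show a
    significant difference at the one-sided 95% level (z_alpha = 1.645);
    setting z_beta > 0 would add statistical power."""
    lam0 = HUMAN_FATALITY_RATE
    lam1 = (1.0 - improvement) * lam0
    return (z_alpha * sqrt(lam0) + z_beta * sqrt(lam1)) ** 2 / (lam0 - lam1) ** 2

miles = miles_needed(0.20)  # fleet with a 20 percent lower fatality rate
# Assumed fleet for illustration: 100 vehicles running nonstop at 25 mph.
years = miles / (100 * 25 * 24 * 365)
print(f"~{miles / 1e9:.0f} billion miles, ~{years:.0f} years of fleet driving")
```

Exact figures depend on how the statistical test is framed, but any reasonable formulation yields miles in the billions and fleet-driving times measured in centuries.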
That doesn't mean we must wait decades (or longer) while developers test-drive these miles. Nor does it mean putting the kibosh on the industry. Instead, it means technology developers, third-party testers, and policymakers need to quickly come up with alternative and innovative methods of demonstrating safety. These could include virtual testing, mathematical analysis, and accelerated testing. Just as there is a race to develop autonomous vehicles, there must be a parallel race to develop methods for testing them.
Yet, even with these methods, it may not be possible to establish with certainty the level of safety of autonomous vehicles. If uncertainty cannot be eliminated, can it be managed? Carefully planned pilot studies in controlled, well-designed environments could help. So could adaptive regulations designed from the outset to generate new knowledge by facilitating pilot studies and other forms of alternative testing. Establishing safety review boards could aid in evaluating these test results and guiding revised rulemaking.
Texans will be among the first to grapple with these new questions. And the answers will need to combine the science of reliability testing with the art of understanding and responding to the priorities of Texans. The country is watching.
Nidhi Kalra is a senior information scientist at the RAND Corporation, a codirector of RAND's Center for Decision Making under Uncertainty, and a professor at the Pardee RAND Graduate School.
This commentary originally appeared in the Dallas Morning News on April 28, 2016. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.