After we released a report in April demonstrating that test-driving autonomous vehicles isn't a feasible method of determining when they will be safe enough for consumer use, people started peppering us with questions. Our findings also generated much discussion in government and automobile industry circles, and in the public arena.
The amount of time and the number of miles required to properly test autonomous vehicles appeared to boggle minds. According to our research, given current traffic fatality rates, fully autonomous vehicles would have to be driven hundreds of millions of miles, and in some cases hundreds of billions of miles, to demonstrate their safety in terms of fatalities; for any plausible test fleet, that means many decades and possibly hundreds of years of test-driving. We showed that injury and property-damage-only crash rates are more feasible to demonstrate, because those crashes occur far more often, but doing so would still take several years.
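To give a sense of where such figures come from, here is a back-of-the-envelope sketch, not the full analysis in our report. It assumes crashes occur as a Poisson process per mile and asks how many failure-free miles are needed to bound an event rate at a chosen benchmark with a chosen confidence; the benchmark rates and the test fleet below are illustrative placeholders, roughly in line with recent U.S. statistics.

```python
import math

# Back-of-the-envelope sketch, assuming crashes follow a Poisson process
# per mile. If a fleet drives n miles with zero events, then showing with
# confidence C that its true event rate is at or below a benchmark rate r
# requires:  exp(-r * n) <= 1 - C   =>   n >= -ln(1 - C) / r

def miles_to_bound_rate(rate_per_mile: float, confidence: float) -> float:
    """Failure-free miles needed to show, at the given confidence level,
    that the true event rate does not exceed the benchmark."""
    return -math.log(1.0 - confidence) / rate_per_mile

# Illustrative benchmark rates per 100 million miles (placeholder values,
# roughly in line with recent U.S. crash statistics):
benchmarks_per_100m = {
    "fatalities": 1.09,
    "injuries": 77.0,
    "property-damage-only crashes": 190.0,
}

# Hypothetical test fleet: 100 vehicles averaging 25 mph around the clock.
fleet_miles_per_year = 100 * 25 * 24 * 365

for event, rate in benchmarks_per_100m.items():
    miles = miles_to_bound_rate(rate / 1e8, confidence=0.95)
    print(f"{event}: {miles / 1e6:,.1f} million miles "
          f"(~{miles / fleet_miles_per_year:.1f} years for this fleet)")
```

The contrast explains why lower-severity crash rates are more feasible to demonstrate: the more common the event, the fewer miles are needed to pin down its rate. These zero-failure bounds are also best cases; demonstrating that a fleet actually performs better than human drivers, with crashes inevitably occurring during testing, can require billions of miles rather than millions.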
Here are two examples of the many intriguing questions we received about autonomous vehicle safety, and our replies.
If it were feasible to demonstrate through test-driving that an autonomous vehicle fleet has fewer property-damage-only or injury-only crashes than human drivers, what could we learn about more serious, fatal crashes? In other words, if we can show fewer “little” mistakes, does that imply there would be fewer “big” mistakes?
No. Unfortunately, drawing that conclusion would require assuming that autonomous vehicles (AVs) affect the chances of making “little” and “big” mistakes in the same ways. It is entirely conceivable that AVs could reduce low-severity crash rates yet have no effect on high-severity crashes, or even increase them, if AVs make new types of serious mistakes that human drivers usually don't. Given that there is currently no feasible way of proving AV safety in terms of fatalities, whether through test-driving or other methods, property-damage-only and injury-only crash rates might be the best evidence we can hope for.
Crashes are not distributed uniformly across miles or maneuvers. Could the number of miles required for safety testing be reduced by focusing on the kinds of driving that disproportionately contribute to fatalities?
While that sounds logical, it can't prove safety either. We might be able to show that AVs are less likely than humans to make specific mistakes, such as running a red light. However, that result would apply only to the narrow driving condition being tested. If we want to know how well AVs do in everyday conditions, they have to be driven in everyday conditions; only then would we learn whether AVs make mistakes that humans usually do not.
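One way to see why is to decompose the overall fatality rate by driving condition: a targeted test constrains only one term of the sum. The sketch below uses invented numbers purely for illustration.

```python
# Hypothetical illustration with invented numbers. Decompose the human
# fatality rate by driving condition; a targeted test pins down the AV's
# relative performance in one stratum and says nothing about the rest.

# Invented share of human-driver fatalities attributable to each condition:
fatality_share = {"running red lights": 0.05, "all other conditions": 0.95}

# Suppose a targeted test shows AVs run red lights at one-tenth the human
# rate. The other strata remain unconstrained, so we can only bracket the
# overall AV-to-human fatality ratio:
tested_ratio = {"running red lights": 0.1}

best_case = sum(tested_ratio.get(cond, 0.0) * share  # flawless elsewhere
                for cond, share in fatality_share.items())
no_worse = sum(tested_ratio.get(cond, 1.0) * share   # human-level elsewhere
               for cond, share in fatality_share.items())

print(f"Overall AV fatality rate: between {best_case:.1%} and {no_worse:.1%} "
      "of the human rate, and with no upper bound at all if AVs make new "
      "kinds of mistakes in untested conditions.")
```

Even a tenfold improvement in the tested condition leaves the overall rate almost entirely determined by conditions the test never touched.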
Nidhi Kalra is a senior information scientist at the nonprofit, nonpartisan RAND Corporation, a co-director of the RAND Center for Decision Making Under Uncertainty, and a professor at the Pardee RAND Graduate School. Susan Paddock is a senior statistician at RAND and a professor at the Pardee RAND Graduate School.