- How should safety be defined in the context of automated vehicles (AVs)?
- How can AV safety be tested and measured?
This report presents a framework for measuring safety in automated vehicles (AVs) that could be used broadly by companies, policymakers, and the public. In it, the authors consider how to define safety for AVs, how to measure it, and how to communicate what is learned or understood about AV safety. Given AVs' limited total on-road exposure compared with conventional, human-driven vehicles, the authors also consider options for proxy measurements — i.e., factors that might be correlated with safety — and explore how safety measurements could be made in simulation and on closed courses. The report identifies key concepts and illuminates the kinds of measurements that might be made and communicated. It presents a structured way of thinking about how to measure safety at different stages of an AV's evolution, and it proposes a new kind of measurement. While acknowledging that the closely held nature of AV data limits the amount of data that are made public or shared between companies and with the government, the report highlights the kinds of information that could be presented in consistent ways in support of public understanding of AV safety.
No standard definition of safety exists for AVs
- This report defines safety as the elimination, minimization, or management of harm to the public (with an emphasis on people, although it can include animals and property).
- The public and the policymaking community have an important interest in comparing AV safety with the safety of conventional vehicles, but there are limitations on the breadth and depth of comparable data collected for each type of vehicle.
This report presents a framework to discuss how safety can be measured in a technology- and company-neutral way
- The framework shows how measurement is possible in different settings (simulation, closed courses, and public roads with and without a safety driver) and at different stages (development, demonstration, and deployment).
- The methods of measuring safety must be valid, feasible, reliable, and resistant to manipulation. They can be leading (i.e., proxy measures of driving behaviors correlated with safety outcomes) or lagging (i.e., actual safety outcomes involving harm).
- Clearer communication about safety between the industry and the public will be critical for public acceptance of AVs. The more consistent the communication about AV safety from industry, the more cohesive and comprehensible the message will be.
- During AV development, regulators and the public should focus their concerns on the safety of the public, not on how development is progressing per se (which is the developer's concern).
- The opportunity to use the demonstration stage for communication about safety outside a company (e.g., to policymakers or the public) should be pursued. Doing so requires recognizing the limits of what can be shown absent hundreds of millions or more miles driven, and that there is currently no accepted, industrywide approach to demonstration because of variation among companies and the technologies they use.
- Safety events arising before the accumulation of exposure sufficient for statistically meaningful comparisons should be treated as case studies. Information from case studies can contribute to broad learning across the industry and by policymakers and the public.
- Given the potential for broader learning across industry and government, a protocol for information-sharing should be encouraged. It would have to precisely specify measures, format, context, frequency, governance, data security, and other factors.
- A taxonomy for common use that facilitates understanding of and communication about operational design domains is needed. A common approach to specifying where, when, and under what circumstances an AV can operate would enable, in particular, inter- and intraorganizational communication and communication with consumers and regulators. It would also facilitate tracking for a given AV of its progress through development and into deployment. Minimal-risk conditions should also be included.
- Research is needed on how to measure and communicate AV system safety in an environment in which the system evolves through frequent updates. AV safety measures must balance reflecting the current system's safety level against drawing on recent (and perhaps older) safety records.
The research described in this report was prepared for the Uber Advanced Technologies Group and conducted by the Science, Technology, and Policy Program and the Justice Policy Program within RAND Justice, Infrastructure, and Environment (JIE).
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.