Tesla Fatal Crash Reminds That Human Interface Remains Important


Jul 30, 2016

A Tesla Model S with version 7.0 software update containing Autopilot features is seen during a Tesla event in Palo Alto, California, October 14, 2015

Photo by Beck Diefenbach/Reuters

This commentary originally appeared on The Mercury News on July 27, 2016.

News of the inevitable came late last month—the first reported fatality in a self-driving vehicle. It is a chilling reminder that our evolving relationship with our increasingly robotic motor vehicles needs to be a partnership, an undertaking with humans and machines managing the risks.

With many companies making significant investments in automated vehicle technology, your next car is likely to relieve you of many everyday driving tasks while traveling on the freeway and, eventually, while navigating city streets. But your vehicle is still a machine with a childlike understanding of its surroundings, observing and reacting to the world in a very simple way. So you will need to be the adult in the relationship.

Even though the notion of a car driving itself still seems fantastical, a line can be drawn between self-driving cars and the advent of the automotive age. More than a hundred years ago, my great-great-great grandfather, George Freestone, owned and operated an automobile repair shop on San Julian Street in Los Angeles that provided driving lessons for new automobile owners. According to a published history of his life, he gave at least one driving lesson that went awry:

After starting down the street at a very slow speed, “everything seemed to be going well until suddenly I realized everything had gone quiet,” he said. “I looked back and saw that the motor had dislodged and dropped off and lay in the middle of the road. I then showed the owner how to stop the car, and pointed out to him what had happened. The owner was very upset, and ran back to pick up the motor, not realizing that it would be red hot. It burned his hand, causing him to fly into a rage.”

Two things went terribly wrong: There was an uncharacteristic mechanical failure (the engine fell out!), and the operator responded ignorantly (unlike horses, engines get very hot). The fatality involving the Tesla Model S bears similarities to this early 20th century tale: There was an uncharacteristic technology flaw, and the modern driver was perhaps overconfident in the vehicle's capabilities, underestimating the limitations of Autopilot. (Previously, the driver had posted a video lauding the Tesla's ability to avoid an accident, and there are suggestions that the driver might have been watching a DVD when the crash occurred.)

Automated vehicles will continue to misinterpret the world in which they operate. Both operators and bystanders will need to understand the characteristics and limitations of the automated vehicles of the future. Knowledge will help prevent ignorant behavior like picking up the modern equivalent of a hot engine.

Unfortunately, the mechanisms that Tesla uses to keep the driver engaged, including sensing hands on the steering wheel and specific warnings in the user manual, were not enough to prevent this accident. Proper human engagement with the machine, also known as the human-to-machine interface, will continue to be a critical area of development for automated vehicles.

To manage this partnership, humans must remember that these highly technical vehicles are still only machines, and that they will invariably struggle to understand the context of the world they operate in. With that in mind, the partnership becomes more sophisticated and perhaps safer.

Sure, the machine will relieve humans of mundane tasks, but it will also require humans to be more vigilant as we exercise higher levels of cognition in interpreting and reacting to the decisions our vehicles will be making.

Shawn McKay is an engineer at the nonprofit, nonpartisan RAND Corporation. He wrote this for the Mercury News.