
essay

October 23, 2019

Who's Responsible When Your Car Gets Hacked?

Illustration by Chris Philpot

In the future, when cars can drive themselves, grand theft auto might involve a few keystrokes and a well-placed patch of bad computer code. At that point, who will be liable for the damages caused by a hacker with remote control of a 3,000-pound vehicle?

Cars are becoming “fast, heavy artificial intelligences on wheels,” a recent RAND report cautioned—and that means they're becoming vulnerable. Potentially billions of dollars ride on that question of who has the legal responsibility to keep hackers from grabbing the wheel or cutting the brakes.

“These are not likely events, and there are lots of engineers working to make them even less likely,” said James Anderson, the director of the RAND Institute for Civil Justice and a coauthor of the study. “But they're not impossible. They will occur. It's at least worth some serious thought about what the legal consequences will be.”

Systemwide Vulnerabilities

Reality here is catching up to science fiction. In 2015, hackers showed that they could take control of a Jeep Cherokee through a hidden flaw in the entertainment system. They blasted the air conditioning, cranked up the radio, and switched on the windshield wipers. Then they cut the transmission. Chrysler had to rush software updates to 1.4 million owners, the first cybersecurity-related vehicle recall in U.S. history.

The FBI has since warned that hackers could exploit many of the electronic selling points of modern cars, from their internet radios to their critical control systems. Mercury Insurance even has a tool on its website: “How hackable is your car?”

Anderson has been studying the legal challenges posed by autonomous vehicles for more than a decade. He's an attorney by training; his last job was as an assistant federal public defender representing death-sentenced prisoners after their convictions. He hadn't given vehicle technology much thought—until he was standing in line at RAND to get his picture taken for his employee badge. He started chatting with another new hire, an information scientist who was studying the emerging science of driverless cars.

Anderson's new study looks at how courts might assign blame if a hacker taps into an autonomous vehicle and causes trouble. That could be the nightmare scenario: a hacker cutting the brakes, commandeering the wheel, and steering the car into a collision. But it also could be a hacker swiping personal information from a car's driver logs, or threatening to disable a car's electronics unless its owner pays a ransom.

If the hacker gets away, who else might expect a lawsuit?

Fewer than half the states have any laws on the books governing autonomous vehicles. And the technology is too new for courts to have much experience with it. But that doesn't mean these questions haven't come up before.

In 1947, for example, an interior decorator stepped out to buy some wallpaper and forgot to lock the door of a house he was working on. A thief snuck in and stole a diamond bracelet. The court found the decorator liable for the loss, in a case that is still taught in law schools today. In modern terms, he had created a vulnerability that the thief was able to exploit.

It's not a perfect precedent, of course. For one thing, locking down the millions of lines of code in an autonomous vehicle will not be as simple as turning a deadbolt. But it gives some idea of the legal thinking a court might apply in a claim arising from a hacked vehicle. Could someone have foreseen the problem and taken reasonable steps to fix it? The tougher question might be, Who's the someone?

If it's your car, that someone might be you. In a future of autonomous vehicles, software updates might be as routine a part of car care as oil changes. Miss one, and you might have just left the door unlocked.

“Think about the car of the future as, essentially, a laptop with an engine, wheels, and windshield wipers,” said Nahom Beyene, an engineer at RAND and coauthor of the study. “It's going to be continually redesigned, revised, and updated. It opens up a whole new dimension of vulnerability when the final product is almost to-be-determined.”

Local governments could also face claims. Most visions of autonomous vehicles imagine them communicating in real time with their surroundings, the streets and traffic signals. If a hacker can exploit that connected infrastructure, government officials might have to explain how it happened to a court.

Any lawsuit involving a car will almost certainly name the car maker and the software provider as well. For them, one challenge will be staying on top of any potential vulnerabilities as they arise, possibly even years after the car comes off the assembly line. Courts have come down hard in negligence and product-liability cases when a manufacturer knew—or should have known—of a potentially dangerous defect.

Several years ago, for example, the Supreme Court of Alabama ordered General Motors to pay $15 million in punitive damages to the family of a young boy killed in a crash. The boy had been riding in a new pickup truck that stalled just as it drove into an intersection. A logging truck coming from the side couldn't stop in time to avoid it. The court found that a defective computer chip had killed the engine.

Existing laws and legal precedents like that should be enough to address most claims arising from hacked vehicles, the researchers concluded. “The legal system has been coping with new technologies for many, many, many years,” Anderson said. “Everything doesn't just come crashing to a halt any time there's a new technology.”

But there is one scenario that policymakers might want to consider. Call it the Rhode Island exception.

Tesla founder Elon Musk once mused that—in principle—hackers could someday tunnel into an entire fleet of connected cars and route every one of them to Rhode Island. The damage caused by such a fleet-wide hack might be so large that no single insurance policy or class-action lawsuit could cover it. In a case like that, the researchers wrote, policymakers might want to have a legal backstop to cover the flood of claims, much like one they established after the 9/11 attacks.

“We have no way of knowing the probability of hackers exploiting autonomous vehicles,” Anderson said. “I'll make the claim that it's not zero. That's about as strong a claim as I'm willing to make. Hopefully this will help advance the conversation about these issues, to bring that risk closer to zero.”

If all else fails, the owners of hacked vehicles might have one other line of recourse. Every state has a law requiring manufacturers to replace any car shown to have such a serious defect that it can't be fixed. For all their high technology, autonomous vehicles will still be subject to those Lemon Laws.

Doug Irving