Last week brought news of two tragic deaths that represented unfortunate, but predictable, firsts in transportation. They are also reminders that despite the very real potential benefits of new technology, operating large metal objects at high speed is an inherently dangerous activity, and public safety is best served by reducing people’s exposure to the risk, which means designing urban spaces that minimize necessary driving and keep most vehicular traffic moving at low speeds.

On May 7, Joshua Brown was killed when his Tesla sedan, operating in Autopilot mode, crashed into a semi truck that turned across his path on a four-lane Florida highway. Neither the driver nor the car reacted to the truck; the car’s self-driving system was apparently fooled by the low contrast between the truck and a bright sky, or by its own programming, which led it to dismiss large metal rectangles as overhead highway signs. As Tesla put it: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” (The passive voice here suggests autonomous vehicles are well on their way to developing the key driving skill of minimizing responsibility when crashes occur.)

A Tesla. Credit: Lummi Photography, Flickr

Then on July 1, bike sharing recorded its first fatality in the United States. In Chicago, 25-year-old Virginia Murray, riding a Divvy bike, was struck by a flatbed truck as they both turned right from Sacramento onto Belmont on the city’s Northwest Side.

Until these crashes, both technologies had enviable safety records. But as their use grew, it was a statistical certainty that each would eventually be involved in a fatal crash. What lessons can we draw from these tragedies?

Gizmodo hastened to point out that the Tesla, while using its lane-keeping, object-detection and speed-maintaining functions, was not truly a fully autonomous vehicle. (Tesla’s system requires the driver to keep his or her hands on the wheel, or the car starts slowing down.) Arguably, the Tesla’s systems don’t have all of the functionality or redundancy that might be built into such vehicles in the future. To Gizmodo, the Tesla crash is just a strong argument for fully autonomous vehicles: humans can’t be counted on to intervene correctly at critical moments, and in theory, vehicle-to-vehicle communication between the Tesla and the truck could have avoided the crash entirely.

In its press release on the crash, Tesla pointed out that its vehicles have now collectively recorded one fatality in 130 million miles of Autopilot driving, which compares favorably with the US average of one fatality per 94 million miles driven.

In a subsequent Twitter exchange responding to press coverage of the Florida crash, Tesla CEO Elon Musk used that disparity to claim that if all cars worldwide were equipped with the Autopilot function, it would save half a million lives per year. Musk upbraided a Fortune reporter, insisting that he “take 5 mins and do the bloody math before you write an article that misleads the public.”
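As a purely illustrative sketch of what that “bloody math” might look like, the snippet below takes both per-mile rates at face value and assumes roughly 1.25 million road deaths worldwide per year (the WHO’s estimate, a figure not cited in Tesla’s statement); the proportional-reduction logic is a simplification for illustration, not necessarily the calculation Musk had in mind.

```python
# Illustrative back-of-the-envelope only. Assumes ~1.25 million road deaths
# worldwide per year (WHO estimate) and takes both per-mile rates at face value.

TESLA_MILES_PER_FATALITY = 130e6      # Tesla's reported Autopilot record
US_MILES_PER_FATALITY = 94e6          # US average cited by Tesla
WORLD_ROAD_DEATHS_PER_YEAR = 1.25e6   # assumption, not from Tesla's statement

# If every mile worldwide were driven at the Autopilot rate rather than the
# US-average rate, the implied proportional reduction in deaths would be:
relative_reduction = 1 - US_MILES_PER_FATALITY / TESLA_MILES_PER_FATALITY
implied_lives_saved = WORLD_ROAD_DEATHS_PER_YEAR * relative_reduction

print(f"Implied reduction: {relative_reduction:.0%}")               # about 28%
print(f"Implied lives saved per year: {implied_lives_saved:,.0f}")  # about 350,000
```

Even this generous reading lands closer to 350,000 than half a million, and, as the next paragraph explains, the inputs themselves can’t bear much weight.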

But the math on safety statistics hardly supports Musk’s view, for a variety of reasons, as pointed out in Technology Review. First, the sample size is very small: until Tesla has racked up several tens of billions of miles of Autopilot driving, it will be hard to say with any statistical validity whether its actual fatality rate is higher or lower than one in 94 million miles. Second, it’s pretty clear that current Tesla owners use the Autopilot function only in selected, and largely non-random, driving situations, chiefly travel on freeways and highways. Limited-access freeways, like Interstates, are far safer than the average road: in 2007, the fatality rate on Interstates was about 0.70 per 100 million miles driven, or one fatality for every 143 million miles (100 million divided by 0.70). The deadliest roads are collectors and local streets, where Autopilot is less likely to be used.
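To put a number on that small-sample problem, here is a quick sketch (ours, not from the Technology Review piece) that uses SciPy to compute an exact Poisson confidence interval around a record of one fatality in 130 million miles:

```python
# Exact (Garwood) 95% confidence interval for a Poisson count of one fatality
# observed over 130 million miles of Autopilot driving. Illustrative only.
from scipy.stats import chi2

fatalities = 1
miles = 130e6
alpha = 0.05

# Exact bounds on the expected number of fatalities over those miles.
lower = chi2.ppf(alpha / 2, 2 * fatalities) / 2            # about 0.025
upper = chi2.ppf(1 - alpha / 2, 2 * (fatalities + 1)) / 2  # about 5.57

print(f"Point estimate: one fatality per {miles / fatalities:,.0f} miles")
print(f"95% CI: one per {miles / upper:,.0f} to one per {miles / lower:,.0f} miles")
# Roughly one fatality per 23 million miles at the pessimistic end and one per
# 5 billion miles at the optimistic end: a range that easily contains the
# 94-million-mile national benchmark.
```

With a single event, the plausible range spans more than two orders of magnitude, which is why comparisons with the national average carry so little statistical weight at this point.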

Fatality Rates per 100 Million Miles Traveled, by Road Type, 2007 [table not reproduced]

Statistically, it’s far too early to make any reasonable comparisons between this emerging technology and human drivers. But our experience with managing risk and safety in other technologies suggests that the problem will be daunting. As Maggie Koerth-Baker pointed out at FiveThirtyEight, the complexity of driving—of coping with every possible source of risk and selecting the safest action—is mind-boggling. And computers may not make the same mistakes as humans, but that doesn’t mean they won’t sometimes act in ways that lead to crashes.

Part of the problem is that the very presence of safety systems may lull drivers into a false sense of security. Crashes, especially serious ones, are low-probability events. Humans may be very leery of trusting a machine to drive a car the first few times they use it, but after hundreds or thousands of repetitions, they’ll gradually come to believe the car is infallible. (This logic underlies Gizmodo’s argument for full autonomy or nothing.)

This very process of coming to believe in the efficacy of the safety system can itself lead to catastrophe. Maggie Koerth-Baker describes the partial meltdown of the Three Mile Island nuclear reactor. It had highly automated safety systems, including ones designed to deal with exactly the abnormalities that triggered its accident. But those systems interacted in unanticipated ways, and operators, trusting the system, refused to believe that it was failing.

While some kinds of technology—like vehicle-to-vehicle communication—might work well in avoiding highway crashes, there’s still a real question of whether autonomous vehicles can work well in an environment full of pedestrians and cyclists: exactly the kind of complex interaction with un-instrumented, vulnerable users that resulted in Virginia Murray’s death in Chicago.

Increasingly, safety problems fall on these vulnerable users. Streetsblog reported that the latest NHTSA statistics show driver deaths up 6 percent in the past year, pedestrian deaths up 10 percent, and cyclist deaths up 13 percent, reversing a long trend of declining deaths and making 2015 the deadliest year on US roads since 2008.

For the time being, it’s at best speculative to suggest that all of these deaths could be avoided simply by greater adoption of technology. And as many observers have noted, today’s technology, while impressive and developing quickly, is far from achieving the vision of full vehicle autonomy; some robotics experts predict fully self-driving cars may be 30 years away. With present technology, as we’ve noted, more driving means more deaths. The most reliable way to reduce crash-related deaths is to build environments where people don’t have to drive so much, and where cyclists and pedestrians aren’t constantly exposed to larger, fast-moving, and potentially lethal vehicles even when making the shortest trips. That’s something we can actually do with the technology that exists today.