Fatal crash shouldn't kill self-driving cars
by Nidhi Kalra
It has happened. The first known fatality in an autonomous vehicle occurred on
May 7 in Florida, when a man driving a Tesla Model S with Autopilot collided
with a tractor-trailer as it made a left turn across his lane. Neither
the Autopilot system nor the driver reacted in time to avoid the fatal crash.
This incident raises important questions. First, does it mean that autonomous
vehicles are less safe than human drivers? No. Tesla reports that this fatality
occurred after 130 million miles of Autopilot driving. Human drivers
experience about one fatality per 100 million miles. This would seem to imply
that Tesla's systems are safer, but the Autopilot miles are simply too few to
make statistically meaningful comparisons.
These statistics do not tell us whether Tesla's Autopilot systems are more or
less safe than human drivers; they only tell us that they are not perfect.
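To see why 130 million miles is too little data, consider a back-of-the-envelope
calculation. This is a minimal sketch, not from the article: it assumes
fatalities follow a Poisson process (a common modeling simplification) and uses
the standard Garwood exact confidence interval for a Poisson count.

```python
# Hypothetical illustration: how wide is the uncertainty around a
# fatality rate estimated from a single event in 130 million miles?
from scipy.stats import chi2

fatalities = 1
miles = 130e6  # Tesla-reported Autopilot miles at the time

# Garwood exact 95% CI for a Poisson count k:
#   lower = chi2.ppf(alpha/2, 2k) / 2,  upper = chi2.ppf(1 - alpha/2, 2k + 2) / 2
lo = chi2.ppf(0.025, 2 * fatalities) / 2
hi = chi2.ppf(0.975, 2 * (fatalities + 1)) / 2

# Convert to fatalities per 100 million miles for comparison with human drivers
scale = 100e6 / miles
print(f"95% CI: {lo * scale:.3f} to {hi * scale:.2f} fatalities per 100M miles")
# -> roughly 0.02 to 4.3 fatalities per 100 million miles
```

With only one observed event, the plausible rate spans more than two orders of
magnitude, and the human-driver rate of about 1 per 100 million miles sits
comfortably inside that range. That is the article's point: the data cannot yet
distinguish Autopilot from human drivers in either direction.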
Second, will it or should it stop development of self-driving and driverless
cars? No. There are nearly 90 traffic fatalities each day in the United States
and many more worldwide. Research by the National Highway Traffic Safety
Administration shows that about 94% of these are caused by human errors that
autonomous systems could reduce or eliminate.
So far, the public and media responses have been responsibly tempered: don't
throw the baby out with the bathwater.
Third, what can be learned from this? Hopefully a great deal. This fatality
occurred in a fairly common traffic situation: one vehicle turning across
another vehicle's path. Some reports suggest the Tesla driver may have been
distracted and speeding, and the truck may have turned dangerously. The
ongoing government investigation may find that the human-created conditions
that led to the imminent crash were unavoidable: that the Tesla Autopilot
simply could not have stopped in time.
Such an outcome would suggest that human drivers can create crash antecedents
that automated systems cannot overcome, no matter how sophisticated they are.
Alternatively, the investigation may find that there was opportunity to detect
and avoid the crash (for instance, by using other types of sensors), but the
Tesla's hardware or software was not equipped to do so. If this occurred, the
question arises: Should automated systems be allowed that are not as
sophisticated as they could be, because they depend on a distractible human
driver to fill in the gap?
Research suggests that while partially automated systems may address some forms
of human error, they may actually create unique safety risks. That's because
drivers may believe that, since the car is at least partially capable of
driving itself, they don't need to pay full attention to the road.
Perhaps more than anything, this is an occasion to better understand the risks
of autonomous vehicle technologies. The investigation into the Autopilot crash
should identify the human and technology factors that led to this fatality. If
those factors are part of a growing pattern, it should identify changes to the
technology or its deployment that could reduce those risks. For example,
systems like Tesla's that require human intervention may need to actively
assess whether the driver is indeed paying attention. Simultaneously, drivers
should keep
their eyes on the road, literally. Drivers should take heed when their car asks
them to pay attention.
Above all, the investigation and its findings should contribute to a reasoned
public assessment of an uncertain but promising technology.
Nidhi Kalra is a senior information
scientist at the nonprofit, nonpartisan RAND Corporation, a co-director of
RAND's Center for Decision Making under Uncertainty, and a professor at the
Pardee RAND Graduate School.