ROBOT CAR SAFETY
Raymond Paul Johnson
Eliot A. Bennion
Raymond Paul Johnson, A Law Corporation
2101 Rosecrans Avenue, Suite 3290
South Bay Los Angeles
El Segundo, California 90245
Welcome to yesterday’s tomorrow, where robot vehicles are
here to stay. The goal, however, is to make them safe and defect-free. That
begins and ends with the trinity of safety: analysis, testing, and quality
control, all matters in progress as you read this.
So where are we? Some auto manufacturers plan to put fully
autonomous vehicles on the market by the end of this decade. Nissan, Audi,
Mercedes, BMW, Tesla, and others are all working on their versions of the robot car.
Audi, for example, got the first permit from the state of
California in 2014 to test self-driving cars on the open road.
It promptly used the permit to drive autonomously from Silicon Valley to Las
Vegas in a prototype A7 with hands-off “safety observers” in the driver and
passenger seats. Audi reported that, despite driving at night in heavy rain at
speeds up to 70 mph, the 560-mile trip went without a hitch.
Does anyone sense a race is on?
If so, the lead horse has to be Google. Its engineers began
secretly developing driverless cars in 2009.
Google now operates a test fleet of specially modified robotic Toyota Priuses
that have logged hundreds of thousands of miles on city streets, freeways,
highways and mountain roads. See Figure 1: Google’s self-driving Prius on the road.
Although the marketing of a Google self-driving car is
probably years away, vehicles with semi-autonomous features are already on the
road. Look around. You’ll see cars that park themselves, automatically steer
to stay in a lane, swerve to get out of danger, and/or spontaneously slam on
brakes to avoid a crash.
Below, we’ll look at the cutting edge of these
semi-autonomous driving systems, focus more on Google’s self-driving car,
describe related government-sponsored testing and legislation, and explore
on-going research and the safety challenges posed by the robot car.
The Mercedes Intelligent Drive system — an option introduced
in the S-class and now spreading to other Mercedes models — is considered one
of the most advanced semi-autonomous systems on the market.
Mercedes Intelligent Drive is actually a bundle of
semi-autonomous subsystems that use stereo cameras, radar, and ultrasonic
sensors to detect the vehicle’s surroundings. Those subsystems have been
labeled with somewhat self-descriptive names, and include “Distronic Plus”
proximity control with “Steer Control”; “BAS Plus” with “Cross-Traffic Assist”;
“Adaptive Highbeam Assist Plus”; “Night View Assist Plus”; “Pre-Safe Plus”;
“Attention Assist”; “Parking Package” with optional 360° camera; “Magic Body Control”; “Active Lane Keeping Assist”;
“Active Blind Spot Assist”; “Crosswind Assist”; and “Traffic Sign Assist” with a
wrong-way warning function.
This arsenal of semi-autonomous functions can steer, brake
and accelerate the vehicle up to 124 mph, maneuver the car through city traffic
(at speeds below 37 mph), park the vehicle, brake for pedestrians or
cross-traffic, and adjust seats and apply brakes in anticipation of a
collision. The vehicle can even perform functions that a human driver could
not, such as recognizing a bump or a pothole and adjusting the suspension to
drive smoothly over an uneven surface.
Similar semi-autonomous systems by other manufacturers will
soon be on the market. For example, GM expects its SuperCruise (to be
available in Cadillac models within a few years) to be capable of autonomous
freeway driving. And Toyota has a similar system called “Automated Highway
Driving Assist,” which will autonomously drive a vehicle on a highway, but also
uses infrared eye detection to make sure the driver is paying attention.
The Google Self-Driving Car project leads the pack.
Currently, Google’s test cars include Toyota Prius, Audi TT, and Lexus RX450h
models with special equipment added at a cost of about $150,000 per vehicle.
The Google test cars use a roof-mounted Lidar (laser radar)
unit, cameras, four radar sensors, GPS, an inertial measurement unit, and wheel
sensors. See Figure 2: A fleet of Google self-driving Lexus RX450h models. The Lidar
subsystem generates a 3D map of the environment that the vehicle’s software
couples with high-resolution maps of the terrain to determine position. The
combination of sensors and cameras allows the vehicle to locate itself with
greater accuracy than GPS alone allows;
recognize other vehicles, pedestrians, and cyclists on the road; and react to
situations according to its programming, which is constantly being modified as
the test vehicles encounter new situations and engineers tweak the vehicle’s software.
Google has logged hundreds of thousands of miles with its
autonomous vehicles, mostly near Google headquarters in Mountain View, California.
Although the vehicles have been involved in a few minor collisions, none have
been reported to be the fault of the autonomous driving technology.
In May 2014, Google revealed its prototype self-driving car
–– a two-person vehicle with no steering wheel, gas pedal, or brake pedal.
Google wants to get 100 of these fully autonomous prototypes on the streets over
the next two years, once it clears legal hurdles
and safety issues. Google plans to get volunteers to use the prototype vehicles
–– so it can continue to log autonomous driving miles and accumulate data.
The prototypes will not be for sale, but Google hopes to
release driverless cars to the public between 2017 and 2020 by partnering with
an automotive manufacturer, rather than manufacturing vehicles itself. Just
what the final product will look like –– whether it be a modification of an
existing car model or a new model similar to Google’s prototype –– is still up
in the air.
Even Google has a way to go, however, before its vehicles
are truly autonomous. For
example, it has yet to master autonomous driving in heavy rain or snow. Nor is
it able to distinguish easily between objects such as a plastic bag and a rock
–– causing the vehicle at times to unnecessarily swerve out of the way of
harmless debris. The current technology also has difficulty with potholes, and
it cannot spot humans, such as police officers, signaling the car to stop. And
though it has little trouble navigating freeways, Google’s car cannot yet
handle many parking lots or garages.
So challenges certainly await, yet none seem impossible to overcome.
Other autonomous vehicle projects reflect thinking that is
even more outside the box – or perhaps “outside the car” is more accurate. The
National Highway Traffic Safety Administration (NHTSA) sponsored a year-long
project in Ann Arbor, Michigan, studying the benefits of vehicle-to-vehicle
(V2V) and vehicle-to-infrastructure (V2I) communications systems in vehicles
from different manufacturers. Part of
the idea is to create a ground transportation network that will allow
communications and interactions between vehicles, and among vehicles, highway
operators, safety personnel and others.
NHTSA Acting Administrator David Friedman said that V2V
technology has “game changing potential.”
Using warning systems to alert drivers to dangers, the study found a reduction
in the number and severity of collisions. Though the V2V systems did not
incorporate autonomous features (steering, braking, etc.), the communication
system could and likely will be assimilated into present semi-autonomous
systems, and future autonomous vehicles.
Which brings us to connectivity. As any teenager with a
smartphone will tell you, staying connected is important. But to the
autonomous vehicle of the future, it could be essential.
First, the “autonomous” in autonomous vehicle is something
of a misnomer. No man is an island; no car alone –– especially in today’s
world. And the next step, in the not too distant future, may be that
high-occupancy-vehicle (HOV) lanes on the freeway will be transformed into
conga lines for fully autonomous vehicles with internet connectivity to highway
operators (like Caltrans) and other vehicles (for “see and be seen” collision
avoidance). See Figure 3/HOV lane conga lines.
Essentially, your autonomous vehicle will become a car in a
mile-long commuter train formed during certain hours in the HOV lane. Road
safety, of course, will become a critical issue, and command, control and
communication between vehicles and highway operators will have to be fail-safe.
This is not science fiction. Honda, for example, soon plans
to introduce a feature on its vehicles that could assess a road hazard and send
information to vehicles behind it –– which then would perform automatic lane
changes to avoid the hazard. If connectivity can enable automatic lane changes,
then braking and accelerating to maintain a safe conga line of vehicles in the
HOV lane would certainly be within reach.
In addition, Google, SpaceX, Facebook and others are hard at
work creating systems that will enable world-wide internet coverage, constant
connectivity, back-up systems and fail-safe operations. They are presently developing
and testing satellite networks, solar-powered high altitude aircraft, and even
balloon systems that will eliminate gaps, reduce outages, and provide
internet connectivity around the globe.
Not coincidentally, the NHTSA-sponsored testing in Michigan
of vehicle-to-vehicle and vehicle-to-infrastructure communication systems
(described in the last section) fits hand-in-glove with this connectivity and
autonomous vehicles. There will of course be challenges such as hackers who
could create collisions, the effects of random or engineered data drops,
privacy issues related to tracking the comings and goings of people, and the
like. But the same is true of automated trains and subways, and solutions and
protections have been proven attainable through technology, legislation and regulation.
Just as the beginning of the twentieth century marked the
transition from horse to horseless carriage, the early twenty-first century
starts the switch from drivers to driverless cars around the world. Now we
need but to meet the challenge.
Legislating the Future Today
Unfortunately, most existing legislation and regulations
fail to consider autonomous vehicles. Yet, there is a general consensus that
autonomous vehicles are probably legal under existing law, even if states do
not enact specific legislation or regulations to deal with them. Beginning
with Nevada in 2011, however, four states (Nevada, Florida, California, and
Michigan) and the District of Columbia adopted legislation specifically
allowing autonomous vehicle testing on public roads.
Most of this legislation requires that autonomous test
vehicles on public roads have a human driver available to take over at any
time; however, Nevada allows autonomous driving without any available human driver,
if the particular autonomous vehicle has received a certificate of compliance
from the state. Some states
have also exempted autonomous vehicle drivers from their normal prohibition
against using cell phones.
The “operators” of autonomous vehicles are generally defined
in state legislation as the persons who engage the autonomous driving systems,
or in some cases, the person in the driver’s seat – assuming there is one.
Some state legislators have also established significant insurance requirements
for operators of autonomous vehicles.
Perhaps not surprisingly, Michigan’s laws explicitly protect
manufacturers of vehicles. The law specifically addresses third parties who add
(or attempt to add) autonomous systems into vehicles, and generally exempts the
original manufacturers from liability, unless the defect existed previously.
California has been a leader in the evolution of the
autonomous vehicle. For example, California’s Senate Bill No. 1298, signed
into law by the Governor on September 25, 2012, specifically expresses its
purpose, stating in part: “The State of California, which presently does not
prohibit or specifically regulate the operation of autonomous vehicles, desires
to encourage the current and future development, testing, and operation of
autonomous vehicles on the public roads of the state.”
This is of course noteworthy and laudable, and California
has since become a favorite test bed for autonomous vehicles. But further
legislation in California and other states is needed to allow safe testing of
future experimental and fully autonomous vehicles on public roads. More
importantly, effective legislation is needed now to deal with the many
challenges that emanate from robot vehicle technology that already exists.
What a shame, for example, if the geyser of current robot
technology is shut down by the absence of legislation and regulation needed to
allow safe expansion of the science. That is, unfortunately, quite possible if
manufacturers don’t get busy drafting prototype legislation, or if legislators
fail to legislate robot vehicles boldly but prudently in California and across the country.
To build an autonomous-vehicle transportation system,
however, legislation, regulation and research must stretch beyond the vehicle
itself to roads and other infrastructure. For example, Volvo and researchers
at UC Berkeley, in separate projects, have already tested the use of magnetic
strips in roads to help steer vehicles.
The first step to autonomous steering is providing the vehicle with very
accurate and reliable spatial information so it can know its position. GPS
data alone is too inaccurate; positions can be off by several yards. That’s
why Google’s self-driving cars use its Lidar (laser range finder) system in
conjunction with detailed 3D mapping to determine position. Volvo and others
think that this critical job can be done more cheaply and more reliably with a
road magnet system.
A UC Berkeley group, back in 2008,
tested a research bus on a one-mile stretch of road embedded with magnetic
markers about every three and one-half feet down the center of the lane. The
driver still had control over acceleration and braking, but the magnetic
sensors on the research vehicle took over steering. Remarkably, the bus was able
to pull within one-half inch of the curb during the test.
In a project financed in part by the Swedish Transport
Administration, Volvo built a track in Sweden to test the ability of its
magnetic roadway system to guide vehicles. It implanted neodymium disk magnets
(approximately three-quarters of an inch in diameter and half an inch thick) and
ferrite magnets (about one inch in diameter and one-quarter inch thick), using
100 magnets in total along 110 yards of road.
Volvo engineers calculated that a car with magnetic sensors
traveling at 90 mph would need at least 400 readings per second to determine
its position. Using a modified Volvo-model S60 with five sensor modules, each
containing fifteen Honeywell magnetoresistive sensors, the vehicle was able to
make 500 readings per second of the magnets.
This allowed the system to calculate the vehicle’s position
to within 4 inches at 45 mph. Additional testing on a road with magnets glued
to the surface (instead of implanted) resulted in the same accuracy when
traveling 90 mph. Perhaps even more remarkable is the projected cost: the
vehicle’s sensor system will cost only $109 per car if 50,000 units are made.
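Volvo’s 400-readings-per-second figure is consistent with a simple rule of thumb: the sensors must sample at least once per desired unit of positional resolution as the road passes beneath the car. The short sketch below is our reconstruction of that arithmetic, not Volvo’s published method; the function name and the assumed 4-inch resolution target are illustrative only.

```python
def required_sampling_rate(speed_mph: float, resolution_in: float) -> float:
    """Samples per second needed so that consecutive magnetic-sensor
    readings are no more than `resolution_in` inches apart on the road."""
    speed_in_per_sec = speed_mph * 63360 / 3600  # 1 mile = 63,360 inches
    return speed_in_per_sec / resolution_in

# At 90 mph, holding position to roughly 4 inches requires about
# 400 samples per second -- in line with the figure Volvo reported.
print(round(required_sampling_rate(90, 4)))  # 396
```

On this back-of-the-envelope view, the modified S60’s 500 readings per second carried a comfortable margin above the calculated minimum.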
Magnetic strip technology, however, would require a substantial
investment in updating current roads by embedding magnets. But this technology
has significant advantages over radar or GPS technology; in particular, it
functions robustly in bad weather and with other obstructions.
Further research with all three systems (GPS, radar and
magnets) is on-going. And no one would be particularly surprised if the result
was an infrastructure using all three technologies in different applications or
combined to guide autonomous vehicles in the future.
Autonomous and semi-autonomous vehicle technology is often
touted for its safety benefits. But the flip side is that the current
technology is imperfect, even when functioning properly, and glitches in
autonomous systems can cause significant risks.
One of the earliest semi-autonomous systems in vehicles was
cruise control. Even this basic system has had well-documented problems that
have led to malfunction, injury, and death. In the 1990s, for example, many
Ford models had problems with sudden acceleration.
The culprit in some cases was a design flaw that could damage the speed control
cable conduit, part of the cruise control system. Damage to the conduit and/or
cable caused wide-open or surging throttle –– i.e., sudden acceleration.
Additionally, some experts believed that electromagnetic
interference caused the cruise control system on some Ford vehicles to signal
the throttle open without accelerator input.
As autonomous systems become more complicated and incorporate more electronics,
transient electromagnetic interference will be even more prevalent –– and
resulting malfunctions could become more common unless manufacturers take
additional precautions including proper testing and shielding of components.
Then, lest we forget, there were Toyota’s more recent and notorious
runaway-acceleration problems. A significant fact: the complaints
of unintended acceleration in Toyota and Lexus vehicles increased dramatically
after a switch to electronic throttles in the year 2000. These electronic
throttles were merely a type of semi-autonomous control of engine
acceleration. They replaced the mechanical links (usually steel cables)
between the driver’s foot and engine acceleration with a series of sensors,
microprocessors, electronic motors and wiring.
In some models, the reports of sudden acceleration increased five-fold after
introduction of the electronic throttles.
Following stupendous fines and settlements, the publicity
has died down, but the root cause of many runaway accelerations is still
believed by numerous experts to have been a combination of software glitches
and electromagnetic interference/electromagnetic compatibility (EMI/EMC)
problems. A major tenet of safety design is: The more electronics you stuff
into a small package (like a car’s engine compartment), the more prevalent and
potentially lethal the EMI/EMC issues.
As the aerospace industry learned decades ago, manufacturers
cannot simply continue to jam processors and other electronic devices into
small areas without rigorous testing and designing away EMI dangers. If they
give short shrift to the analysis and testing, spurious signals that inadvertently
and randomly excite nearby electronics are inevitable.
Eliminating EMI/EMC dangers is a system design, integration
and test issue that can affect every electronic component and computer-driven
subsystem in a vehicle. And this potentially deadly issue takes center stage
with the advent of more and more semi-autonomous systems and autonomous vehicles.
The answer is that manufacturers must test for EMI/EMC
dangers at every step of the design process. In addition, careful safety
analyses must be conducted from concept through final design. The most
important of these is the Failure Modes and Effects Analysis (FMEA) which, if
done properly, can identify the potential causes and results of packing
electronic devices such as microprocessors, sensors and radar equipment into
tight quarters.
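As a concrete illustration of the FMEA discipline just described, such analyses are commonly organized as worksheets that score each failure mode for severity, occurrence, and detectability, then rank them by a Risk Priority Number (RPN). The sketch below follows that convention; the components, failure modes, and scores are hypothetical, not drawn from any actual vehicle program.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    effect: str
    severity: int    # 1 (negligible) to 10 (catastrophic)
    occurrence: int  # 1 (remote) to 10 (frequent)
    detection: int   # 1 (certain to be caught) to 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the conventional S x O x D ranking score
        return self.severity * self.occurrence * self.detection

# Hypothetical worksheet entries for a semi-autonomous vehicle
worksheet = [
    FailureMode("throttle control module", "EMI-induced spurious signal",
                "unintended acceleration", severity=10, occurrence=3, detection=7),
    FailureMode("lidar unit", "attenuated returns in heavy rain",
                "degraded obstacle detection", severity=8, occurrence=6, detection=3),
]

# The highest-RPN failure modes get design and test attention first.
for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.component}: RPN {fm.rpn}")
```

The point of the exercise is priority, not precision: a catastrophic effect that is hard to detect (such as an EMI-induced throttle signal) rises to the top of the engineering queue even when its occurrence score is low.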
Manufacturers who put their heads in the sand and ignore
these safety needs are destined to produce defective products. On the other
hand, with careful analyses, testing and quality control, the many electronic
devices of the autonomous vehicle and semi-autonomous subsystems can be safely
integrated, insulated and if need be isolated, and all associated algorithms
can be verified and validated to create –– what we all want –– safe robot vehicles.
Safety Issues and Challenges
Humans have an intuitive sense of safety and ethics that
factors into everyday decisions we make, often without realizing it. For
example, when driving a car on a single-lane road, if a person encounters an
obstacle in the road, a likely response is to steer the vehicle across the
double yellow line (assuming a clear path) to get around the obstacle –– even
though it is a violation of traffic law.
Consider now how a robotic car would handle the same
situation. Should a robotic car be programmed to violate the law? If so, under
what circumstances? Are some laws –– like crossing a double yellow line with no
oncoming traffic –– less important to follow than others –– like speed limits
or traffic signals? Although humans intuitively determine these questions while
driving, an autonomous vehicle would have to be programmed ahead of time to
weigh these decisions.
Now consider the trolley problem, originally posed by
Philippa Foot, and its
many variations. Imagine that you are the driver of a trolley car approaching a
fork where you can take two different sets of tracks. As it happens, down one
set, there are five men working on the tracks, who would be killed if you went
down that route. Unfortunately, on the other side, there is one man working on
the tracks. Most people faced with the two choices would avoid the five men ––
essentially sacrificing one life to save five.
Now what if the trolley car is headed toward five men on the
tracks, but it can be stopped by pushing a nearby fat man onto the tracks? Even
though the situation still boils down to trading one life for five, this
situation is more difficult for people to deal with. There is a difference
between killing one person as an unintended consequence (as in the original
scenario) and actively intending to kill one person (the fat man) to save the five.
The programming of an autonomous vehicle will involve
similar safety and ethical decisions. It may
very well be that an autonomous vehicle, facing a certain crash, may have to
decide which of two vehicles to collide with. Should the programming send it
toward the vehicle likely to be least damaged? Or the vehicle with the fewest
occupants? Would it matter if the need for the collision was created by, say,
the vehicle with more occupants? Whereas human drivers may have to make these
decisions in a split second, computers will have to be programmed in advance,
and human programmers will be faced ahead of time and out-of-context with these
difficult decisions, and their consequences.
What about choosing between hitting a helmeted motorcyclist
or an unhelmeted motorcyclist? Obviously, less injury would be expected if the
vehicle chose to hit the helmeted motorcyclist, but the resulting incentives
from that calculation would encourage motorcyclists not to wear helmets –– in
order to exploit the autonomous vehicle programming and make themselves safer.
Discouraging the use of helmets would be an unintended result, no doubt, but
it’s the type of consequence that must be considered in programming autonomous vehicles.
Should an autonomous vehicle be programmed to save its own
occupants at the expense of others –– even if it leads to a greater number of
lives lost? Manufacturers and insurance companies would surely want to limit
the total damage in a collision, but manufacturers may also want to promise
that their vehicles protect their owners at all costs.
No doubt, a new realm of safety issues will open when robot
vehicles hit the road in force. And many related questions, without obvious
answers, must be answered ahead of time –– with human lives in the balance.
When, then, is the right time to anticipate, weigh and decide those issues? That
time is now. Autonomous vehicles are here to stay.
We live today with robot vehicles and pervasive
semi-autonomous car systems that park, steer, brake, accelerate, cruise and correct
on their own. Tomorrow will bring more. History tells us, however, that we
must properly prepare now to make this future ground transportation system safe
and defect-free. We need bold but prudent legislation and regulation to let
current concepts flourish. In addition, robot vehicle manufacturers must
continue to use rigorous analysis, testing and quality control to ensure
safety. We can’t stop the future. Why should we? But we can make it safe, as
well as exciting.
Notes

Others shun the concept, or think it further away. In September 2014, for example, Toyota confirmed it has no plans to build a fully autonomous vehicle. See “Toyota – of All Companies – Defends Drivers, Says It Won’t Build a Fully Autonomous Car,” Clifford Atiyeh, Car and Driver Blog, September 10, 2014, blog.caranddriver.com. In 2013, Will Knight, in MIT Technology Review, October 22, 2013, at www.technologyreview.com, suggested that fully autonomous cars may still be decades away.

Undercoffler, Smart Wheels, Los Angeles Times, January 7, 2015.

“What we’re driving at,” Sebastian Thrun, Google Blog, October 9, 2010, googleblog.blogspot.com.

See “Mercedes-Benz’s autonomous driving features dominate the industry – and will for years,” Diana T. Kurylko, Automotive News, August 4, 2014, www.autonews.com; Mercedes-Benz Intelligent Drive, techcenter.mercedes-benz.com/en/intelligent_drive/.

“Google discloses costs of its driverless car tests,” USA Today Drive On, June 14.

“How Google’s Self-Driving Car Works,” Eric Guizzo, IEEE Spectrum, October 18, 2011, spectrum.ieee.org.

“The latest chapter of the self-driving car: mastering city street driving,” Google Blog, April 28, 2014, google.blogspot.com.

Of the minor collisions, one Google vehicle was rear-ended while it was stopped. Another Google vehicle rear-ended another car while the Google driver was driving it manually.

Such laws, for example, require autonomous vehicles tested on public roads to have a human driver available to take control at any time – a standard which is impossible for the Google prototype, since it has no manual controls.

“Hidden Obstacles for Google’s Self-Driving Cars,” Lee Gomes, MIT Technology Review, August 28, 2014, www.technologyreview.com.

“U.S. Department of Transportation Announces Decision to Move Forward with Vehicle-to-Vehicle Communication Technology for Light Vehicles,” NHTSA Press Release, February 3, 2014.

Flemins, Who’s in Control?, Los Angeles Times, November 21, 2014.

See, e.g., Bloomberg News, Google, Fidelity Buy Stake in SpaceX, Los Angeles Times, January.

Cal. Veh. Code § 38750; § 319.145, Florida Statutes (2012); Mich. Veh. Code § 257.665; D.C. Code § 50-2352 (2012).

See, e.g., Nev. Admin. Code 482A.030.

See, e.g., Mich. Veh. Code §§ 257.602b, 257.817; § 316.305, Florida Statutes (2013).

See, e.g., Cal. Veh. Code § 38750, subd. (a)(4).

For example, California Vehicle Code section 38750, subdivision (b)(3) requires insurance in the amount of $5 million to cover any test vehicles.

See, e.g., Mich. Veh. Code §§ 257.817, 600.2949b.

“Look Ma, No Hands! Automated Bus Steers Itself,” Dave Demerjian, Wired, September 9.

“Volvo Thinks Magnetic Roads Will Guide Tomorrow’s Autonomous Cars,” Alexander George, Wired, March 17, 2014, www.wired.com.

See, e.g., Aerostar (NHTSA Recall ID No. 00V425000); Contour (NHTSA Recall ID No. 99V194000); Escape (NHTSA Recall ID No. 00V210001); Explorer (NHTSA Recall ID No. 03V280000); F-Series Truck (NHTSA Recall ID No. 99V062001); Focus Hatchback (NHTSA Recall ID Nos. 99V346000, 00V302000); Taurus (NHTSA Recall ID No. 97V025000); Mercury Mystique (NHTSA Recall ID No. 99V194000); and Mercury Sable (NHTSA Recall ID No. 97V025000).

See, e.g., NHTSA Recall ID Nos. 97V025000, 03V280000, and 99V062001.

See, e.g., Friedl v. Ford Motor Company, 2005 WL 2044552 (D.S.D.).

Philippa Foot (1920-2010) was a British philosopher who introduced the trolley problem in 1967, originally comparing it to a judge faced with a choice between letting an innocent person go while knowing that a lethal riot would inevitably result, and condemning the innocent person to death, thus averting the riot. The problem has since been analyzed by many others and in numerous contexts.

Vartabedian and Bensinger, Data Points to Toyota’s Throttles, Not Floor Mats, Los Angeles Times, November 29, 2009.

A more-detailed description of these issues can be found in R.P. Johnson and C.G. Lee, Deadly Runaway Acceleration, Forum, April 2010.

See, e.g., “The Robot Car of Tomorrow May Just Be Programmed to Hit You,” Patrick Lin, Wired, May 6.