Self-driving fatalities are the clearest warning sign of problems with autonomous vehicles. The fatalities are the saddest, most tragic evidence, but behind them stand many non-fatal accidents that could just as easily have turned deadly.
We have previously discussed barriers to autonomous vehicle development: the challenge of inclement weather, how difficult it is for a human to take control, and the impossible reliability standards we humans demand of a robotic chauffeur. We have even considered the ethical rules that should be built into such systems.
However, we no longer need to speculate about these problems. More and more autonomous cars are on the road every year. These include “driver assist” systems as in Cadillac, Tesla, Nissan and Volvo; and licensed test vehicles, some of which operate on public roads all around us. The latter are proliferating, with 62 companies in California alone now licensed to conduct vehicle tests. And all these vehicle miles are leading to many accidents that call for study.
NTSB as Objective Judge
The National Transportation Safety Board is the US government’s official investigator of transportation mishaps. We tend to think of them when a passenger airline crashes. However, these days more and more of their work addresses self-driving accidents, especially self-driving fatalities.
Whenever there’s a new technology, it will offer downsides as well as upsides. As the technology matures, we learn how to counter the downsides and take advantage of the upsides. So of course we hope that the early tech problems will gradually go away as the technology evolves. Thus far, this has not happened.
A Maturity Scale for Self-Driving Technology
To have an accurate perspective on the problems of self-driving vehicles, we need to place them in the context of evolving technologies. Fortunately, the Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that calibrate where we stand on the technical calendar:
- Level 0. Vehicle systems may provide information but do not control the vehicle.
- Level 1. Systems may control speed or steering. Examples are adaptive cruise control, lane-keeping and emergency braking. However, the driver can override the automatic systems.
- Level 2. The system can carry out some functions such as automatic parking and highway steering. However, the driver must stay alert, and take control if the system does not react properly to road situations.
- Level 3. The system will alert the driver when it encounters a situation that it does not know how to handle. When notified, the driver must immediately take control of the vehicle.
- Level 4. The system is in complete control, but is limited to only certain operating conditions or locations such as a controlled-access roadway.
- Level 5. The system is as competent as a well-trained fully capable human driver. The humans in the car are essentially cargo.
We can use these levels to rate the maturity of the systems in our own cars. All modern cars offer Level 1 driver assists as extra-cost options. A few cars offer some Level 2 functions; however, they are so far from autonomous that a human driver has to watch them all the time! When we are barely at Level 2 out of 5 levels, it’s not surprising that not all is well…
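To make the distinctions concrete, here is a minimal sketch of the SAE scale in code. This is Python, and the class and field names are my own shorthand, not official SAE terminology. The key question at each level is whether a human must still watch the road:

```python
from dataclasses import dataclass

# Illustrative summary of the SAE automation levels described above.
# Field names and one-line summaries are my own shorthand.
@dataclass(frozen=True)
class SaeLevel:
    level: int
    summary: str
    human_monitors_road: bool  # must a human watch the road at all times?

SAE_LEVELS = [
    SaeLevel(0, "Information only; no vehicle control", True),
    SaeLevel(1, "Speed or steering assist (e.g. adaptive cruise)", True),
    SaeLevel(2, "Combined functions (parking, highway steering)", True),
    SaeLevel(3, "Drives itself, but alerts driver to take over", False),
    SaeLevel(4, "Full control within limited conditions or locations", False),
    SaeLevel(5, "As capable as a well-trained human driver", False),
]

def human_must_monitor(level: int) -> bool:
    """Through Level 2, the human driver must watch the road continuously."""
    return SAE_LEVELS[level].human_monitors_road

print(human_must_monitor(2))  # True: Level 2 still needs a watchful driver
```

The answer stays True all the way through Level 2, which is exactly why a marketing name that implies hands-off driving is so dangerous at that level.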
Tesla’s Bad Karma
One particularly notorious driver assist system is the one that Tesla calls “Autopilot.” Autopilot is nominally a Level 2 driving system, but its name is tragically misleading.
Tesla owners discover that Autopilot can handle routine highway situations. Many of them erroneously conclude that it is far more capable than it is! As a result, owners use Autopilot improperly. The list of self-driving fatalities currently has five Level 2 examples. In each case, the driver died, and every example on that list is a Tesla.
NTSB has been blunt in its criticism of the Tesla Autopilot. Following one of the self-driving fatalities, they commented:
Level 2 automated systems like this Tesla… [need] …requirements or guidelines that limit the use of Level 2 automated systems to roadways for which they are designed. An automated system that does not automatically restrict its operation to conditions for which it was designed allows a driver an opportunity to inappropriately use the system, which in this case, ended tragically. This was not a safe systems approach.
Autopilot functioned as designed; but the system was operating outside the domain that its limited capabilities can handle. Again, not a safe systems approach!
Why does Tesla seem to be jinxed in this way? Here’s my best guesswork:
- The Autopilot is smart enough to give the Tesla owner high confidence, but not smart enough to know when it’s in trouble.
- No matter how many cautions Tesla puts into the owner’s manual (and, there are many), the name “Autopilot” implies a very capable self-driving system. Besides, who reads the manual?
- The kind of person who can afford a Tesla and chooses to buy one may be predisposed to push it to its limits, even at his or her personal risk.
New Level, New Problems
What about the higher levels of technology, levels 3, 4 and 5?
You can buy some cars that claim to have Level 2 autonomy. However, the Tesla history should tell us to be very skeptical of those claims. You cannot buy any car today that claims Level 3 and above. Vehicle research groups believe that they are operating at Levels 3 and 4, but no one is so brash as to claim Level 5 autonomy.
Level 2, such as it is, has caused five self-driving fatalities, as mentioned above, and in every one of them the driver died. However, the only fatal accident to date involving Level 3 revealed a new kind of problem. The victim in this case was not a driver disobeying the manual and pushing the system beyond its design parameters. The victim was a pedestrian.
Uber Contributes to the Self-Driving Fatalities
The Level 3 fatal accident occurred on March 18, 2018, when a modified Volvo operated by the Uber Advanced Technology Group (ATG) struck a pedestrian in Tempe, Arizona.
Uber’s Automated Driving System (ADS) sensors use radar, laser ranging, multiple cameras, ultrasonic sensors and GPS location sensing. For safety, the ADS operates only when the vehicle is on a select pre-mapped route, at speeds below 45 mph. A 16-page NTSB report describes the accident in detail and does not mince words about the system’s shortcomings.
Since the Volvo was a Level 3 self-driving vehicle, a human operator was present behind the wheel. On a Sunday evening at 10:00 pm the Volvo was heading north on Mill Avenue, a multilane divided highway, toward Curry Road. At that time Elaine Herzberg walked a bicycle across the highway, far from any intersection or pedestrian crosswalk. The photo below shows the area and the accident location.
A Tragic Timeline
The driving system detected an object in the lane ahead 5.6 seconds before the crash. However, the ADS could not identify what that object was. The ADS was programmed to expect pedestrians to always be in crosswalks or at intersections.
During the next 4.4 seconds the ADS changed its classification of the object several times, shuffling between vehicle and bicycle. Unfortunately, on every re-classification the system deleted its record of the object’s previous positions. Thus the ADS could not conclude that the object was moving slowly across the Volvo’s path, on a course that would lead to a collision.
Finally, 1.2 seconds before the collision, the ADS decided that the object ahead was a bicycle and that the car would collide with it. Uber had deactivated the Volvo’s emergency braking system, relying instead on the human operator. Thus the ADS had no options available to avert the collision.
One second later, a mere 0.2 seconds before impact, the ADS alerted the operator, who grabbed the steering wheel just after the car hit the pedestrian. It took an additional 0.7 seconds before the operator was able to engage the brakes.
A Technology Fix, Too Late to Save a Life
Following the tragic 2018 accident, Uber ATG made some critical changes in the Volvo’s programming:
- Assume that pedestrians may jaywalk.
- Keep track history even when an object is re-classified, allowing estimation of its speed and direction of motion.
- Activate strong braking when needed to avoid a collision.
These changes corrected serious errors in the system programming, but too late to save a life.
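The second fix is the crucial one, and a toy example shows why. This is purely my own illustration, not Uber’s actual code: when position history survives re-labeling, two timestamped observations are enough to estimate an object’s speed and direction; delete the history on every re-classification, and the object always looks like it just appeared, standing still.

```python
# Minimal illustration (not Uber's code) of why retaining track history
# across re-classification matters: two timestamped positions are enough
# to estimate an object's velocity and predict its future path.

class Track:
    def __init__(self):
        self.history = []  # list of (time_s, x_m): position across the road
        self.label = None

    def update(self, time_s, x_m, label):
        # The key fix: keep prior positions even when `label` changes
        # (e.g. from "vehicle" to "bicycle").
        self.history.append((time_s, x_m))
        self.label = label

    def velocity(self):
        """Estimated lateral speed in m/s, or None with fewer than 2 points."""
        if len(self.history) < 2:
            return None
        (t0, x0), (t1, x1) = self.history[0], self.history[-1]
        return (x1 - x0) / (t1 - t0)

track = Track()
track.update(0.0, 0.0, "vehicle")  # first detection
track.update(1.0, 1.5, "bicycle")  # re-classified, but history is retained
track.update(2.0, 3.0, "bicycle")

print(track.velocity())  # 1.5 m/s across the lane: a crossing object
```

Had the original software kept `history` instead of wiping it, `velocity()` would have returned a crossing speed rather than `None`, and the collision prediction could have come seconds earlier.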
ATG ran simulations to re-create the accident conditions. They concluded that if this revised programming had been in place, the Volvo would have detected the pedestrian 4.5 seconds before the collision, and braking would have begun 4 seconds before the collision, when the Volvo was 264 feet from the pedestrian.
A nominal braking distance at 45 mph is just over 100 feet. Thus these revised design rules would have completely prevented this self-driving fatality.
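The arithmetic is easy to verify. At 45 mph a car covers 66 feet per second, so 4 seconds of warning corresponds to the 264 feet quoted above. The stopping distance follows from the standard formula v²/2a; the deceleration of about 0.67 g is my assumption for firm braking on dry pavement, not a figure from the NTSB report:

```python
# Sanity-check of the braking numbers in the text.
# The 0.67 g deceleration is an assumed figure for firm braking on dry
# pavement; the 4-second warning and 264 feet come from the simulations.

MPH_TO_FPS = 5280 / 3600          # feet per second, per mph
G_FPS2 = 32.17                    # gravity, ft/s^2

speed_fps = 45 * MPH_TO_FPS       # 66 ft/s at 45 mph
warning_distance = speed_fps * 4  # distance covered in 4 seconds of warning

decel = 0.67 * G_FPS2             # assumed braking deceleration
braking_distance = speed_fps**2 / (2 * decel)

print(round(warning_distance))    # 264 feet, matching the text
print(round(braking_distance))    # ~101 feet needed to stop
```

Roughly 100 feet needed to stop, with 264 feet available: the revised system would have had distance to spare.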
Uber Adds a Human Systems Fix
Following the Uber accident, NTSB was called to testify before various Congressional committees. This gave both NTSB and Uber the opportunity to think further about the problems revealed by this and other self-driving fatalities.
For Uber’s part, they have implemented better training of human backup drivers, added a second driver to each test vehicle, and hired a safety director.
For NTSB’s part, they concluded that two types of human error directly contributed to the crash:
- The backup driver did not watch the road because she was watching a TV show on her mobile phone.
- The pedestrian was impaired by methamphetamines and chose to cross the road away from an intersection.
NTSB also criticized the National Highway Traffic Safety Administration, the government’s road safety agency, for not regulating self-driving tests on public roads. It also said that states need to adopt their own regulations.
People Are the Problem
It doesn’t take much thought to find the common denominator in the self-driving fatalities discussed here. And it is not a fault of the technology, either!
The fault is with the people behind the technology. This fault makes itself felt at many levels:
- The design of the Tesla Autopilot, like that of the Uber ADS, was careless: neither system was simulated across all the situations it might encounter in real life.
- The testing plan was inadequate to shake out shortcomings and reveal the system’s true limitations.
- The responsible companies (Uber and Tesla) did not provide adequate oversight of the development and use of their products.
- Federal and state government agencies have not imposed strict regulations for self-driving tests.
Tesla Says: Trust Me!
Obviously, the tasty pie of self-driving luxury contains some bitter pills: repeated accidents, some of them fatal. So what are we to make of Tesla’s holiday-season announcement that it expects to release “Full Self-Driving” before the end of 2019?
Tesla has a strong financial motivation to release this feature, because they have pre-sold half a billion dollars’ worth of these options. They cannot record the sales and profits until they deliver the software. However, it’s also true that Tesla is predisposed to push the boundaries of technology; as some analysts say, “it’s in the company’s DNA.”
Tesla claims that Full Self-Driving will be “feature complete,” meaning that your car can drive itself from home to work “most likely” without driver intervention. Ah, the wiggle room contained in those words “most likely”! Even this highly lauded addition to Tesla’s smarts will still require drivers to keep their hands on the steering wheel and be ready to take over driving whenever something unexpected happens.
As Fortune says, the key question is how much the public trusts Tesla to decide when its technology is not a public safety risk. Here’s a quote from Jason K. Levine, Executive Director of the Center for Auto Safety:
Based on Tesla’s cavalier attitude up until now when it comes to quality control,… There is nothing that should lead anyone to believe ‘entirely safe fully self-driving’ vehicle software is being released by Tesla anytime soon. All car manufacturers have a conflict of interest when it comes to putting safety before revenue.
Walk Before We Run
It’s ironic that so much ink has been expended on the more advanced aspects of self-driving vehicles. By more advanced, I mean human interface, societal resistance, ethical rules and the fact that robots won’t gain acceptance until they are much better drivers than people. And I have helped spread that ink myself!
But the NTSB analysis and reports make it clear that we’re in no position to seriously address those complex topics. It seems as if we the technologists are not even applying common-sense principles of systems engineering and project management.
Before any autonomous system is allowed to operate a vehicle, it must be safe for human (and animal) life in every scenario it may encounter. It’s not good enough to wait for self-driving fatalities to belatedly guide our technical development. Moreover, the executives at companies and in government who bear oversight responsibility must get serious about their job of managing.
Self-driving fatalities underscore how primitive autonomous vehicle technology and its regulation remain. I recommend caution before you get behind the wheel, or even walk across the street, if anyone is testing self-driving cars in your area!
– Robot driving adapted from j4p4n on openclipart.org
– Tesla from Free-Photos on pixabay.com
– Uber’s Volvo, and aerial view of Uber self-driving fatality courtesy of National Transportation Safety Board