Self-Driving Fatalities – Warning Signs!

robot driving red car

Self-driving fatalities are a warning sign of problems with autonomous vehicles. The fatalities are the saddest, most tragic evidence. However, there are many non-fatal accidents that could have turned fatal as well.

We have previously discussed barriers to autonomous vehicle development: the challenge of inclement weather, how difficult it is for a human to take control, and the impossible reliability standards we humans demand of a robotic chauffeur. We have even considered the ethical rules that ought to be built into such systems.

However, we no longer need to speculate about these problems. More and more autonomous cars are on the road every year. These include “driver assist” systems such as those from Cadillac, Tesla, Nissan and Volvo, as well as licensed test vehicles, some of which operate on public roads all around us. The latter are proliferating: 62 companies in California alone are now licensed to conduct vehicle tests. And all these vehicle miles are producing many accidents that call for study.

NTSB as Objective Judge

The National Transportation Safety Board (NTSB) is the US government’s official investigator of transportation mishaps. We tend to think of them when a passenger airliner crashes. However, these days more and more of their work addresses self-driving accidents, especially self-driving fatalities.

Whenever there’s a new technology, it will offer downsides as well as upsides. As the technology matures, we learn how to counter the downsides and take advantage of the upsides. So of course we hope that the early tech problems will gradually go away as the technology evolves. Thus far, this has not happened.

A Maturity Scale for Self-Driving Technology

To have an accurate perspective on the problems of self-driving vehicles, we need to place them in the context of evolving technologies. Fortunately, the Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that calibrate where we stand on the technical calendar:

  • Level 0. Vehicle systems may provide information but do not control the vehicle.
  • Level 1. Systems may control speed and steering. Examples are adaptive cruise control, lane-keeping and emergency braking. However, the driver can override the automatic systems.
  • Level 2. The system can carry out some functions such as automatic parking and highway steering. However, the driver must stay alert, and take control if the system does not react properly to road situations.
  • Level 3. The system will alert the driver if it encounters a situation that it does not know how to handle. When notified, the driver must immediately take control of the vehicle.
  • Level 4. The system is in complete control, but is limited to only certain operating conditions or locations such as a controlled-access roadway.
  • Level 5. The system is as competent as a well-trained fully capable human driver. The humans in the car are essentially cargo.
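
For readers who like to see things in code, the scale can also be captured as a simple data structure. This is only my own illustrative sketch of the SAE levels, not an official SAE artifact:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels, paraphrased for illustration."""
    NO_AUTOMATION = 0           # information only; no vehicle control
    DRIVER_ASSISTANCE = 1       # speed or steering assist; driver can override
    PARTIAL_AUTOMATION = 2      # some combined functions; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives, but hands control back when confused
    HIGH_AUTOMATION = 4         # full control, but only in limited conditions or locations
    FULL_AUTOMATION = 5         # as capable as a well-trained human driver

def human_must_watch_the_road(level: SAELevel) -> bool:
    """At Level 2 and below, a human driver must supervise at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(human_must_watch_the_road(SAELevel.PARTIAL_AUTOMATION))  # True
print(human_must_watch_the_road(SAELevel.HIGH_AUTOMATION))     # False
```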

We can use these levels to rate the maturity of the systems in our own cars. All modern cars offer Level 1 driver assists as extra-cost options. A few cars offer some Level 2 functions; however, they are so far from autonomous that a human driver has to watch them all the time! When we are barely at Level 2 on a scale that tops out at Level 5, it’s not surprising that not all is well…

Tesla’s Bad Karma

Tesla vehicle

One particularly notorious driver assist system is the one that Tesla calls “Autopilot.” Autopilot is nominally a Level 2 driving system, but its name is tragically misleading.

Tesla owners discover that Autopilot can handle routine highway situations. Many of them erroneously conclude that it is far more capable than it is! As a result, owners use Autopilot improperly. The list of self-driving fatalities currently has five Level 2 examples. In each case, the driver died, and every example on that list is a Tesla.

NTSB has been blunt in its criticism of the Tesla Autopilot. Following one of the self-driving fatalities, they commented:

Level 2 automated systems like this Tesla… [need] …requirements or guidelines that limit the use of Level 2 automated systems to roadways for which they are designed. An automated system that does not automatically restrict its operation to conditions for which it was designed allows a driver an opportunity to inappropriately use the system, which in this case, ended tragically. This was not a safe systems approach.

They continue:

Autopilot functioned as designed; but the system was operating outside the domain afforded by its limited capabilities. Again, not a safe systems approach!

Why does Tesla seem to be jinxed in this way? Here’s my best guesswork:

  • The Autopilot is smart enough to give the Tesla owner high confidence, but not smart enough to know when it’s in trouble.
  • No matter how many cautions Tesla puts into the owner’s manual (and there are many), the name “Autopilot” implies a very capable self-driving system. Besides, who reads the manual?
  • The kind of person who can afford a Tesla and chooses to buy one may be predisposed to push it to its limits, even at his or her personal risk.

New Level, New Problems

What about the higher levels of technology, Levels 3, 4 and 5?

You can buy some cars that claim to have Level 2 autonomy. However, the Tesla history should tell us to be very skeptical of those claims. You cannot buy any car today that claims Level 3 or above. Vehicle research groups believe that they are operating at Levels 3 and 4, but no one is so brash as to claim Level 5 autonomy.

Level 2, such as it is, has caused five self-driving fatalities, as mentioned above, and in every one of them the driver died. However, the only fatal accident to date involving Level 3 revealed new problems. The fatality in this case was not a driver who disobeyed the manual and pushed the system beyond its design parameters. The fatality was a pedestrian.

Uber Contributes to the Self-Driving Fatalities

Uber’s Volvo self-driving vehicle

The Level 3 fatal accident occurred March 18, 2018. It was caused by a modified Volvo operated by the Uber Advanced Technology Group (ATG) in Tempe, Arizona.

Uber’s Automated Driving System (ADS) senses its surroundings with radar, laser ranging, multiple cameras, ultrasonic sensors and GPS location sensing. For safety, the ADS operates only when the vehicle is on a select pre-mapped route, at speeds below 45 mph. A 16-page NTSB report describes the accident in detail and does not mince words about the system’s shortcomings.
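
To illustrate the kind of design-domain restriction the report describes, here is a minimal sketch of a gate that engages the automated system only on a pre-mapped route below the speed ceiling. The names, types and thresholds are my own assumptions for illustration, not Uber ATG’s actual software:

```python
# Illustrative sketch only; names, types and thresholds are assumptions,
# not Uber ATG's actual software.
from dataclasses import dataclass

MAX_ADS_SPEED_MPH = 45.0  # the ADS operates only below this speed

@dataclass
class RouteSegment:
    segment_id: str
    premapped: bool  # True if this stretch of road has been mapped for the ADS

def ads_may_engage(segment: RouteSegment, speed_mph: float) -> bool:
    """Engage the automated driving system only inside its design domain:
    a pre-mapped route segment below the speed ceiling."""
    return segment.premapped and speed_mph < MAX_ADS_SPEED_MPH

# A mapped segment at 40 mph is inside the domain; the same segment at 50 mph is not.
print(ads_may_engage(RouteSegment("mill-ave-north", True), 40.0))   # True
print(ads_may_engage(RouteSegment("mill-ave-north", True), 50.0))   # False
```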

Since the Volvo was a Level 3 self-driving vehicle, a human operator was present behind the wheel. On a Sunday evening at 10:00 pm the Volvo was heading north on Mill Avenue, a multilane divided highway, toward Curry Road. At that moment Elaine Herzberg was walking a bicycle across the highway, far from any intersection or pedestrian crosswalk. The photo below shows the area and the accident location.

Aerial view of Uber self-driving fatality

            A Tragic Timeline

The driving system detected an object in the lane ahead 5.6 seconds before the crash. However, the ADS could not identify what that object was. The ADS was programmed to expect pedestrians to always be in crosswalks or at intersections.

During the next 4.4 seconds the ADS changed its identification of the object several times, shuffling between “vehicle” and “bicycle.” Unfortunately, on every re-classification the system deleted its knowledge of the object’s previous position. Thus the ADS could not conclude that the object was moving slowly across the Volvo’s path on a collision course.

Finally, 1.2 seconds before the collision, the ADS decided that the object ahead was a bicycle and that the car would collide with it. Uber had deactivated the Volvo’s emergency braking system, relying instead on the human operator. Thus the ADS had no options available to avert the collision.

One second later, a mere 0.2 seconds before impact, the ADS alerted the operator, who grabbed the steering wheel just after the car hit the pedestrian. It took an additional 0.7 seconds before the operator was able to engage the brakes.

            A Technology Fix, Too Late to Save a Life

Following the tragic 2018 accident, Uber ATG made some critical changes in the Volvo’s programming:

  • Assume that pedestrians may jaywalk.
  • Keep track history even when an object is re-classified, allowing estimation of its speed and direction of motion.
  • Activate strong braking when needed to avoid a collision.

These changes corrected serious errors in the system programming, but too late to save a life.
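
To see why the second change matters, here is a minimal sketch of a tracker that keeps an object’s position history across re-classification, so that its speed and direction of motion can still be estimated. The class and method names are invented for illustration and are not ATG’s code:

```python
# Minimal sketch of the track-history idea; names and structure are assumptions.

class TrackedObject:
    def __init__(self, object_id):
        self.object_id = object_id
        self.label = "unknown"
        self.history = []  # list of (time_s, x_m, y_m) observations

    def observe(self, time_s, x_m, y_m):
        self.history.append((time_s, x_m, y_m))

    def reclassify(self, new_label):
        # The key fix: change the label but KEEP the position history,
        # so velocity can still be estimated after re-classification.
        self.label = new_label

    def velocity(self):
        """Estimate (vx, vy) in m/s from the first and last observations."""
        if len(self.history) < 2:
            return None  # cannot estimate motion from a single point
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

# A pedestrian walking a bicycle across the lane keeps her track history
# even as the classifier flips between "vehicle" and "bicycle".
obj = TrackedObject("obj-1")
obj.observe(0.0, 0.0, 3.5)
obj.reclassify("vehicle")
obj.observe(2.0, 0.0, 1.5)
obj.reclassify("bicycle")
print(obj.velocity())  # (0.0, -1.0): crossing the lane at about 1 m/s
```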

ATG ran simulations to re-create the accident conditions. They concluded that if this revised programming had been in place, the Volvo would have detected the pedestrian 4.5 seconds before the collision. Braking would begin 4 seconds before the collision, when the Volvo was 264 feet from the pedestrian.

A nominal braking distance at 45 mph is just over 100 feet. Thus these revised design rules would have completely prevented this self-driving fatality.
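
Here is a quick back-of-the-envelope check of those numbers. The 0.7 g hard-braking deceleration is my own assumption; the article quotes only the distances:

```python
# Rough sanity check of the simulation figures; the 0.7 g deceleration is an assumption.
speed_mph = 45.0
speed_fps = speed_mph * 5280 / 3600        # 45 mph = 66 ft/s
distance_at_4s = speed_fps * 4.0           # ground covered in the 4 s before impact
decel_fps2 = 0.7 * 32.2                    # ~0.7 g hard braking, in ft/s^2
braking_distance = speed_fps ** 2 / (2 * decel_fps2)

print(round(distance_at_4s))    # 264 ft from the pedestrian when braking would begin
print(round(braking_distance))  # ~97 ft to stop at this deceleration: ample margin
```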

            Uber Adds a Human Systems Fix

Following the Uber accident, NTSB was called to testify before various Congressional committees. This gave both NTSB and Uber the opportunity to think further about the problems revealed by this and other self-driving fatalities.

For Uber’s part, they have implemented better training of human backup drivers, added a second driver, and hired a safety director.

For NTSB’s part, they concluded that two types of human error directly contributed to the crash:

  • The backup driver did not watch the road because she was watching a TV show on her mobile phone.
  • The pedestrian was impaired by methamphetamines and chose to cross the road away from an intersection.

NTSB also criticized the National Highway Traffic Safety Administration, the government’s road safety agency, for not regulating self-driving tests on public roads. It said that states, too, need to adopt their own regulations.

People Are the Problem

It doesn’t take much thought to find the common denominator in the self-driving fatalities discussed here. And it is not a fault of the technology, either!

The fault is with the people behind the technology. This fault makes itself felt at many levels:

  • The design of the Tesla Autopilot, like that of the Uber ADS, was careless. The system was not simulated in all the situations that it might encounter in real life.
  • The testing plan was inadequate to shake out shortcomings and reveal the system’s true limitations.
  • The responsible companies (Uber and Tesla, in these cases) did not provide adequate oversight of the development and use of their products.
  • Federal and state government agencies have not imposed strict regulations for self-driving tests.

Tesla Says: Trust Me!

Obviously, the tasty pie of self-driving luxury contains bitter pills of repeated accidents, some of them fatal. So what are we to make of Tesla’s holiday-season announcement that it expects to release “Full Self-Driving” before the end of 2019?

Tesla has a strong financial motivation to release this feature, because they have pre-sold half a billion dollars’ worth of these options. They can’t start recording the sales and profit until they deliver the software. However, it’s also true that Tesla is predisposed to push the boundaries of technology – “it’s in the company’s DNA,” as some analysts say.

Tesla claims that Full Self-Driving will be “feature complete,” meaning that your car can drive itself from home to work “most likely” without driver intervention. Ah, the wiggle room contained in those words “most likely”! Even this highly lauded addition to Tesla’s smarts will still require drivers to keep their hands on the steering wheel and be ready to take over driving whenever something unexpected happens.

As Fortune says, the key question is how much the public trusts Tesla to decide when its technology is not a public safety risk. Here’s a quote from Jason K. Levine, Executive Director of the Center for Auto Safety:

Based on Tesla’s cavalier attitude up until now when it comes to quality control,… There is nothing that should lead anyone to believe ‘entirely safe fully self-driving’ vehicle software is being released by Tesla anytime soon. All car manufacturers have a conflict of interest when it comes to putting safety before revenue.

Caveat emptor!

Walk Before We Run

It’s ironic that so much ink has been expended on the more advanced aspects of self-driving vehicles. By more advanced, I mean human interface, societal resistance, ethical rules and the fact that robots won’t gain acceptance until they are much better drivers than people. And I have helped spread that ink myself!

But the NTSB analysis and reports make it clear that we’re in no position to seriously address those complex topics. It seems as if we the technologists are not even applying common-sense principles of systems engineering and project management.

Before any autonomous system is allowed to operate a vehicle, it must be shown to be safe for human (and animal) life under every possible scenario. It’s not good enough to wait for self-driving fatalities to belatedly guide our technical development. Moreover, the executives at companies and in government who bear oversight responsibility must get serious about their job of managing.

Self-driving fatalities underscore how primitive autonomous vehicle technology and its regulation remain. I recommend caution before you get behind the wheel, or even walk across the street, if anyone is testing self-driving cars in your area!

Image Credits:
– Robot driving adapted from j4p4n on openclipart.org
– Tesla from Free-Photos on pixabay.com
– Uber’s Volvo, and aerial view of Uber self-driving fatality courtesy of National Transportation Safety Board

Comments

Self-Driving Fatalities – Warning Signs! — 6 Comments

  1. All well and good, but you haven’t brought up what I consider to be a major threat to self-driving vehicles: computer hacking. I don’t know if self-driving cars require continuous communication with an off-vehicle host computer. If so, then I think the risk of hacking would be high. Even if the car operates completely autonomously, never communicating with a host computer at any time, the car’s computer code could be hacked on installation or during servicing. I don’t know how this risk can ever be mitigated.

    • Hi Marvin, hacking is certainly a threat. And completely autonomous cars are not in anyone’s plan today. The present driver assist systems rely on software updates, and malware could be smuggled into an update, whether it’s delivered wirelessly or in a garage. Even worse, researchers have demonstrated that they can spoof GPS signals and take over control of an autopiloted ship simply by feeding it false signals from the outside. No system hack required!

      By analogy, we worry about an enemy disabling our internet, or hacking our electric power distribution system, which is networked and software dependent. If our economy becomes dependent on driverless systems, that’s one additional way we could suffer an attack.

      A common theme: a large threat to the advance of technology is our lack of trust in our fellow humans. (For background, listen to the Kingston Trio’s “Merry Little Minuet”: https://www.youtube.com/watch?v=MCTdfo6T-u8)

  2. The future belongs to self-driving cars, period. There will come a day when there will be NO human-driven vehicles on specific high-speed or high-volume roads. When that day comes a vehicle will ONLY be allowed onto such a road if the human yields control of the vehicle (if they ever had it in the first place … it isn’t a much-further leap into technology futures before some cars will be produced with no controls for humans other than selecting the destination and route, and perhaps the priority of their journey).

    That paragraph is where we start. All the other issues revolve around when and how we get there. And those decisions of how and when involve humans, with all our flawed capabilities and priorities, selfish and unselfish motives, misdirected and inadequate knowledge of technology, plus our flawed and fluctuating moment-by-moment assessment of risk and reward. And that doesn’t even count the motivations of powerful or wealthy individuals with their control of the fire-hose of media influence on our society.

    There is no smooth path to get to that future. It WILL occur, because the forces are too powerful for it not to happen, but it’s going to be a bumpy ride getting there. Yes, people will die or be maimed as we move to that future, but it’s important to accept that people are dying and being maimed every day across the country because of poor driver decisions, fatigue, substance abuse, teenage hormones, abnormal driving conditions, inadequate driving skill for those conditions, or just plain carelessness or inattention while driving.

    The issue is not whether the best self-driving vehicle can surpass the best human driver. It is instead whether the best robot driver can surpass the AVERAGE human driver. And the range of human driving competency is SO wide that I consider it actually a low bar in terms of standards. We will get to that point sooner rather than later.

    If you need an analogy, how comfortable are any of us giving up control of a vehicle and riding in the back seat where we have zero influence over the vehicle (other than being a back-seat driver, with all its clichéd meaning)? If it’s our own 16-year-old behind the wheel we’re going to want to restrict them to only a few familiar roadways and in the best lighting and weather possible. If it’s a driver under the influence of alcohol or drugs we won’t let them have the steering wheel at all. We all have to make that decision of whether to give up control of the vehicle, and it has to be a case-by-case decision. With self-driving vehicles our problem is that we don’t KNOW how handicapped that driver would be for the specific day and route we intend to take.

    So the problem at present is twofold — the driving public doesn’t have enough information to adequately decide whether to turn over control to any advertised self-driving system, AND I don’t have confidence that car manufacturers are sufficiently well-managed to not prematurely release self-driving vehicles onto the roadway, potentially endangering everyone on the road with them.

    Fine, those are the problems. Do I have any (possibly-useful) suggestions?

    First, someone must prevent car manufacturers from providing systems that haven’t been fully tested. But there we have two problems. I don’t trust governmental entities to have that control because increasingly in our society such agencies are subject to meddling by other government agencies or the White House itself. And yet it is dangerous to rely on the car manufacturers to do that testing themselves because of conflict of interest as has been pointed out in the article above for Tesla. To my way of thinking, neither solution makes sense.

    I think the best way to proceed is if the government sets the standards AS WELL AS the penalties for prematurely releasing technology to the public, and then the car manufacturers gauge the risk vs reward of releasing such technology to the public. Further, those penalties have to be high enough that car manufacturers will not cut corners on their own quality testing.

    Second, for the universe of specific situations specified by the lawmakers (traffic density, nighttime driving, bad weather conditions, road conditions, etc.), every self-driving system must have the autonomy to decide whether it is being asked to operate within its own competency zone, and the vehicle must have the authority to “just say no” to what the human requests if the vehicle isn’t cleared for that combination of circumstances.

    But that in turn means more up-to-the-moment information must be available to every vehicle (not the driver) at every moment. Whether it’s weather conditions all along the route, road closures, traffic density or other parameters — that information has to be available to the vehicle at all times so it can make the decision whether it is approved to operate on the specified route. The information must also be provided by a reputable standard and approved source, and at every point along the route. This is an infrastructure issue which may require funding as well as its own quality controls.

    The key is that this is a cooperative venture, to support self-driving capabilities in our vehicles. The driver has some control but not the final say in whether the self-driving system is engaged for any journey. The auto manufacturers have the responsibility of designing and improving self-driving systems but they are held to a safety standard which is set by and controlled by our government (which will change over time as the capabilities of such systems improve).

    But while the government sets those safety standards, it is not directly designing or controlling the technology — only the performance-envelope expectations of that technology at any time. Our court systems will then evaluate any self-driving situation which results in harm to anyone, and make the judgment about whether the vehicle was operating in its zone of competency for the conditions; if it was not, they will set the financial penalties on the vehicle manufacturer for not meeting the standards.

    • Hi Charles, many thanks for your always-thoughtful comments. I think we can all agree with your first paragraph for the future situation. Of course, before the future arrives we are called on to make many decisions and as you point out, we don’t have the data to intelligently evaluate whether a given self-driving vehicle can safely carry us along a particular route on a particular day.

      Perhaps the professional societies can do a better job of developing and proposing engineering standards to guide vehicle development. They have some conflicts of interest, but in general they can do a pretty good job with standards.

  3. Hi Art,
    Your comments about self-driving cars are consistent with my experience and biases. We should think of good tests for the qualification process. One of my favorites would be to have a vehicle begin in the middle of Jersey City — and then, without help, find its way to Brooklyn!
    All the best and fond memories of past good things–including your letting me drive your Corvette!
    Dick Dixon

    • Hi Dick, and thanks for your comment! I like your challenge test – but hope the autonomous vehicle of the future is smart enough to avoid dicey neighborhoods on its way…

      Ah yes, my lovely Vette! My post-grad-school splurge. It had sleek-looking headlights that rotated to disappear after use. After I drove it into the roadside ditches a few times during icy weather the headlights did not want to line up properly so it was always touch-and-go getting through the New Jersey annual vehicle inspections. I eventually cashed it in for a VW bug that was just as functional tho not as glamorous.