Insights Into the Future of Driverless Vehicles…
Self-Driving Risk is gaining new attention as a counterweight to companies bragging about autonomous vehicle progress. But don’t rush out to invest your life savings on the strength of glowing tech promises!
Today’s blog cools off the driverless hype with a look at the future. We take off from an article in Discover and thoughtful comments on that article by subscriber Charles South.
Fantasies Ignore Self-Driving Risk
It seems that every car manufacturer is betting that autonomous vehicles will become important to their business. Daimler, Ford, Hyundai and Honda have joint programs with internet giant Baidu. BMW, Nissan, Ford, GM, Tesla and Mercedes-Benz are running tests of self-driving vehicles.
The promises are wonderful. Who wouldn’t love to relax in their car while Robby the Robot chauffeurs them to work and back?
But now let’s talk about Self-Driving Risk – the hazards to life and limb of abdicating the role of driver to a bucket of circuits and software.
A Thoughtful Article on Self-Driving Risk
Charles South called my attention to a new article by Hannah Fry in the November Discover Magazine: Baby, Can You Drive My Car. Discover published the article online as The Road to Self-driving Cars Is Full of Speed Bumps.
We have previously discussed the ethics of driverless cars, their future and the challenges of inclement weather.
Fry’s article covers some of these topics, but is especially worthwhile for the time it spends on two issues involving self-driving risk:
- Self-driving risk in robot-to-human handoff.
- Self-driving risk from declining human competency.
Charles provided a critique in each of these areas. The following sections rely heavily on his commentary.
Self-Driving Risk in Human Takeover
This risk involves human takeover in an unsafe or challenging driving situation.
Some traffic conditions are so unexpected and complex that autonomous vehicle designers doubt that a self-driving vehicle can handle them. They would like the car to recognize when things get out of hand and hand the problem over to a human being.
However, what if the human being isn’t paying attention to the road? After all, that is almost guaranteed if the human regularly and safely uses that autonomous vehicle. And now the car suddenly wants the human to take control.
There’s going to be a minimum time before the human can reengage with the environment around them so as to be able to take control safely. And time is probably what you won’t have, if you’re in a weird situation that the driving software doesn’t know how to handle.
A Deadly Time Delay
Fry’s article cites one study of handover time. It found that about 40 seconds passed before a human could retake control at their personal level of competency and become sufficiently aware of conditions to decide on the optimal response. It’s easy to think of situations where 40 seconds is enormously longer than the time available before a response is required.
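To put that 40-second figure in perspective, here is a back-of-envelope calculation of how far a car travels while its occupant reorients. The 40-second handover time comes from the study cited above; the speed and the function name are illustrative assumptions, and the rest is just unit conversion.

```python
# How far does a car travel during a 40-second handover?
# The 40 s figure is from the study cited in the article; the 65 mph
# example speed is an assumption for illustration.

HANDOVER_SECONDS = 40

def distance_traveled_m(speed_mph: float, seconds: float = HANDOVER_SECONDS) -> float:
    """Distance covered in `seconds` at a constant speed given in mph."""
    meters_per_second = speed_mph * 1609.344 / 3600  # convert mph to m/s
    return meters_per_second * seconds

# At a typical 65 mph freeway speed, the handover window covers
# well over a kilometer of roadway:
print(round(distance_traveled_m(65)))  # roughly 1162 meters
```

In other words, by the time a distracted human is fully back in the loop, the car has traveled more than a kilometer, which is far beyond the distance in which most emergencies play out.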
For example, suppose your engine suddenly quits while you’re on a freeway at normal speed. The best course of action is to steer to a safe area and stop.
If you are the one driving the car when that happens, then you are already aware of your surroundings. You’ll know that you need to act quickly, with awareness of the traffic around you, before you lose so much speed that you can’t maneuver safely.
The ideal approach is to quickly pick an area you want to steer toward and then come to a stop off the freeway. Some humans would succeed in doing that, some would panic and cause an accident, some would coast to a stop in the middle of the freeway and initiate a traffic jam until help arrives.
However, suppose that control is suddenly thrown at you while your attention is somewhere else. You need to act quickly with NO knowledge of the current situation. Even a highly competent driver would be hard-pressed to make good decisions in this circumstance.
Consumer Reports Scores Today’s “Self-Driving” Systems
Last month Consumer Reports weighed in with its first-ever report on today’s driver assist systems. CR compares four systems, then ranks them in this order:
- Cadillac Super Cruise
- Tesla Autopilot
- Nissan/Infiniti ProPilot Assist
- Volvo Pilot Assist
CR does not want car makers to advertise these systems as “self-driving” or “autonomous.” They worry that drivers will rely on them too much and that errors will lead to fatal crashes. We know of two such accidents involving Tesla vehicles.
CR awards Cadillac the top spot partly because it uses a camera to watch the driver’s eyes. If the car determines that the driver isn’t looking at the road, it will demand the driver’s attention. If that doesn’t work, it will apply the brakes.
These driver assists are so far from autonomous that CR only considers them safe if they insist on your full-time participation. They avoid self-driving risk only by not being self-driving at all!
Self-Driving Risk from Declining Human Competency
The other self-driving risk arises from declining competency.
Once a car routinely drives you various places, it is human nature to backslide on the skills required to be a competent driver. What’s worse, how will people gain that competency in the first place? After all, a teenager who is driven everywhere by a driverless car will never acquire enough experience to skillfully pilot a vehicle.
It’s easy to see how this problem can develop. How many of us can competently handle all aspects of using a horse for transportation? Horse-handling is no longer a required skill, and therefore very few Americans acquire it. As Fry’s article points out, we are already seeing a decline in skills in other areas of driving, such as getting somewhere without a GPS navigation app.
Self-Driving Risk Must Temper Our Expectations
Here’s the bottom line: Taking self-driving risk into consideration, we need to be realistic about the rate at which driverless vehicles will develop.
Fry’s article mentions two approaches that may occur in parallel:
- Fully autonomous vehicles can easily negotiate highly controlled roadways. Fry points out that this has already occurred with the use of the automobile. Thus we “don’t see bicycles, horses, carts, carriages or pedestrians on an expressway.”
- “Mostly” self-driving cars can also perform well in slow-moving traffic. The Audi Traffic Jam Pilot performs reliably, so long as a human driver stays alert to step in when needed.
More generally, the public must come to accept that self-driving errors will inevitably occur. And that realization will limit when and where society allows autonomous vehicles on the public roadways.
Self-driving risk means that driverless cars are not going to be in every garage any time soon. The road into the future is slow and it is bumpy!
Special Acknowledgement: To Charles South for his timely and thoughtful critique of Discover Magazine’s newest article on self-driving cars.
Drawing Credit: Robot Driving by j4p4n on openclipart.org
Comment from Will: A couple of deaths because of self-driving cars is nothing compared to the hundreds of thousands of deaths caused by human drivers! I guess if people are going to die in a car crash, they want it to be because they screwed up, not because of a machine or computer. The funny thing is, I hate to fly in airplanes even though flying is statistically a thousand times safer than driving. The simple reason is that I trust my own driving skills a ton, and I want to be in charge of my own destiny. Lol. Regardless, that fact still doesn’t stop me from flying when the need arises.

I completely understand the legal risk the car companies take in being sued, but the first 10 years or so of self-driving cars should come with some kind of waiver, where people sign off on the car company not taking responsibility for accidents where the driver wasn’t paying attention. You just can’t expect new technology to be perfect right out of the gate; it takes many years to get all the kinks out. There have been airplanes in service with fatal flaws that didn’t come to light until 10+ years after release, because of age and wear, where the fail-safes behaved differently in simulations on the ground than in the actual plane in the air. The thing is, once that first accident occurs, they ground every single one of those plane models and fix that specific problem so that particular accident doesn’t happen again. In both the short term and the long term, I feel self-driving cars are going to be much safer and cause far fewer fatalities than their human counterparts.
Will, you’re right, and safety experts agree with you, that many lives would be saved by taking the human out of the loop. Just as you prefer to control your own destiny (drive a car rather than ride in an airplane), surveys show that people prefer to drive their own car rather than turn it over to a machine. However, they would like OTHER people’s cars to be self-driving. Similarly, people would like other people to take public transportation, but not themselves. Perhaps these dilemmas will sort themselves out in time.
Comment from William Stanchina (Emeritus Professor) via LinkedIn: This article certainly brings up some very understandable potential problems associated with this technology. It raises my concern, also, as a possible investor (especially an older one…).
Invest with great caution, Bill! I have read that 100+ years ago there were many companies offering electrical service and electrical appliances. Most of them failed, but the few that survived were incredibly successful (GE, Westinghouse,…). If you chose the right ones to invest in, you made a lot of money. But if you spread your money among all the companies entering the electricity market, you were likely to do no better than the stock market as a whole.
Cadillac’s approach, of “demanding” that the driver keep his or her eyes on the road in order for the self-driving system to remain engaged is another example of The Law of Unintended Consequences in action. Think about it — you leave the car’s driving system “engaged” only if you are looking at the roadway, but you do absolutely NOTHING to “interest” the driver in what he or she is looking at. If the self-driving system is good enough the human will never have a reason to re-focus their full attention on the road.
What will be the human response? The human will “learn” to keep their eyes on the road (otherwise the car won’t drive itself) but the human will learn to disengage from their surroundings once they turn on self-driving mode (because there is never any “reward” for doing otherwise). All you’ve done is teach the human where to look if they want to get somewhere.
We’ve all been in situations where we drive “on automatic” for a period of time and then think back and realize our minds were far, far away for those miles. That is semi-safe in a few situations. One is if the driver is a good driver and is encountering only free-flowing conditions on freeways, just following their lane in light traffic and matching their speed to cars around them. That works if there are few speed changes or people merging into your lane. Another is in stop-and-go traffic where everyone is staying in their lane and you only have to follow the car in front of you at very low speed.
But what will the human do if they are completely disengaged physically, so they aren’t even steering the car? They will daydream in a deeper state than when driving on automatic. For those of you who have used PCs, it’s the difference between “sleep” mode and “hibernate” mode: it takes more time to come back to reality from hibernation, and that’s what Cadillac will promote with their system.
Because humans are devious, Cadillac’s system will only work well for NEW customers, and every day that a customer uses the self-driving system, they will begin to perform more and more like customers using other manufacturers’ systems that lack such eye-tracking capabilities. In the end, nothing will have been accomplished.
Given human tendencies, I believe the only solution is to continue to improve a self-driving car’s capabilities so they EXCEED something like 95% of the human population’s abilities, to the point that the car’s autopilot would never ask humans to take control to handle ANY situation other than from a non-moving (parked) position. The penalty is simply too high given the amount of time it takes for a human to reengage with their environment at any speed.
It’s just one step from there to removing the steering wheel and foot pedals from the car, because the human will be in more danger WITH them than WITHOUT them. And it’s just one further step to outlawing human-driven cars completely from a variety of roadways.
To see that, just think about how many streets are still legal for a human riding a horse. Yes, you can still encounter a horse and rider on dirt roads in the country, but for most non-rural paved roads in this country you are not allowed to ride a horse because it’s too dangerous for horse, rider, and all cars that might encounter them.
This follows a sequence. New technologies are highly regulated at first, less-regulated later as they become more common. Eventually those newer technologies are regarded as standard and the older technologies are increasingly restricted, until eventually the older ones become isolated to a few special situations.
Thanks, Charles, you have a great way of looking ahead and anticipating the unexpected! Following your logic, Cadillac (and other car makers) will have to keep evolving their “management” of the driver, in a continual battle for the driver’s attention. It makes me wonder where, oh where, will we find self-driving autonomy? Guess it will take some time…
This is a great post. Since our last lunch I have been working at Michigan State University, and part of my involvement is in an autonomous vehicle program. I have been trying to make these points to faculty, and I think they get it. There is a lot of room for research, and as you point out, not just in engineering, before Level 5 cars become a reality.
Thanks, Linos, it’s certainly a difficult problem! I fear that autonomous devices may perform many tasks in the home before we trust them with traffic navigation.