When a driverless car crashes, should it be programmed to save the greatest number of people, even if that means its passengers don’t make it?
That’s the question participants were asked during a recent study published in the journal Science. And while the majority of people said they would rather see fewer fatalities, they also wouldn’t buy a car that’s programmed to sacrifice its passengers for the greater good.
Manufacturers of autonomous vehicles are still far from developing this type of technology. Nonetheless, the question of whom to save in an accident has come up, and it will likely be asked many more times as driverless cars become more popular.
Last year, 90 percent of accidents in the U.S. were caused by human error. It’s been argued that driverless cars are safer because they take human error out of driving. But even with fewer collisions, tough choices will have to be made when crashes do happen. In the future, that may mean regulating cars to preserve the most lives. Would that keep consumers from buying autonomous vehicles altogether?
Do you think driverless cars should save more people, even at the expense of passengers’ lives? Would you buy an autonomous car that was programmed to do so?
One of the authors of the study, Azim Shariff, and a reporter for Automotive News, David Undercoffler, joined Patt Morrison to talk about the implications of autonomous cars and whether they should make these tough calls in a crash.
How much of a consideration is this question going to be for carmakers?
David Undercoffler: Certainly carmakers are aware of this “trolley problem.” By and large, a lot of them see it as a red herring. This is not what is keeping them up at night. They would point out that we’re talking about these improbable scenarios — I would ask how many of our listeners have been in a situation where there’s a bus full of kittens on one side of the street and a bus full of nuns on the other and it’s up to you to choose — when the brakes go out — where you steer your car. That’s not a likely scenario, and to talk about it really is a distraction.
What the automakers point out is that [with] autonomous cars — if you’re mitigating car accidents [caused by human error] — you’re saving potentially 40,000 lives a year and that’s a net gain by a huge magnitude.
Let’s compare this to an airplane scenario. We know 40,000 people a year die in car crashes, but when you get on an airplane, you freak out because you’re not in the cockpit.
Undercoffler: That’s true and I think there definitely is some trepidation by people about this whole idea of, ‘I’m giving up control and the car may choose to kill me.’ But again, these are sort of improbable areas.
Another thing automakers are wrestling with is this: If you are directing a car to choose what to do when an accident is coming your way, it’s almost impossible to have an automobile that, in real time, has the computational power and the sensors to say, ‘O.K., I know everything that is going on around me right now, and no matter what happens, I am able to put this car in the best scenario.’
Machines and artificial intelligence are not perfect, just as humans are not. And when you talk to the people leading this research, they express a little frustration that we nonetheless expect machines to make the perfect decision.
How will fear come into play with this question of who to save and the adoption of autonomous cars?
Azim Shariff: I think this plays into a fear that is going to delay the adoption of autonomous cars, and that delay is going to cost us lives. . . I think one problem with these types of fears is that people are going to weigh the idea of using an autonomous car and it potentially getting into one of these accidents much more heavily than the human-on-human accidents we’re used to. As a result, that’s going to be powerful enough to dissuade them from adopting the cars as early as they could, or probably should.
Azim F. Shariff, Ph.D., Assistant Professor of Psychology and Social Behavior, University of California, Irvine, and co-author of the study, “The Social Dilemma of Autonomous Vehicles”