Why self-driving cars must be programmed to kill

Car companies will have to decide who their self-driving vehicles are going to kill in the event of a crash, philosophers have warned.

Self-driving vehicles are now being widely adopted, and are likely soon to become the norm, partly because they will lead to fewer crashes. The advantage for drivers is clear, too: they can switch off and let the car get them to their destination without expending any thought or effort.

But their manufacturers will have to tell the cars which sets of people they are going to kill when the cars do crash, according to a new paper. The car that will generally let you sit back in leisurely comfort might one day have to drive you into a wall and kill you.

Some accidents will be “inevitable”, the authors note, and “some situations will require AVs to choose the lesser of two evils”, according to the paper, by Jean-François Bonnefon at the Toulouse School of Economics in France and his two co-authors.

“For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall,” the paper notes.

In those kinds of situations, the car would have to make a choice. The three researchers set out to explore how that choice should be made, by asking members of the public how they think that cars should decide who to kill.

The researchers asked people on Amazon’s Mechanical Turk — a marketplace that allows people to pay others to do tasks — who they thought should die in a range of different situations.

In general, people were happy to take a utilitarian approach to deciding who should be killed, the researchers found. That meant that cars should generally minimise the death toll, regardless of who would die in the crash as a result.
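
To make that rule concrete, here is a minimal Python sketch of a purely utilitarian choice. It is not from the paper: the Outcome structure, the utilitarian_choice function and the fatality estimates are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        expected_fatalities: float  # estimated deaths if this action is taken

    def utilitarian_choice(outcomes):
        # Pick whichever action minimises the expected death toll,
        # regardless of whether the dead are passengers or pedestrians.
        return min(outcomes, key=lambda o: o.expected_fatalities)

    # One of the paper's example dilemmas, with made-up numbers:
    options = [
        Outcome("continue ahead into a group of ten pedestrians", 10.0),
        Outcome("swerve into a wall, sacrificing the passenger", 1.0),
    ]
    print(utilitarian_choice(options).description)

The hard part, as the survey results suggest, is not the comparison itself but whether buyers will accept a car programmed to make it against them.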

But that mostly applied to other people’s cars — the respondents were less keen on buying cars that would sacrifice themselves. People “were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: They actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves”, the team write.

And the team aren’t sure that the question is that simple. They pose a series of further dilemmas:

“Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle?

“Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place?

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

The paper ponders similar questions to an article published earlier this year by a US bioethicist, which also proposed that cars will end up having to kill their owners.

The research is published this month in an article titled ‘Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?’.

Source: http://goo.gl/f3D8VI
