Autonomous vehicles are on the World Economic Forum’s list of top ten emerging technologies for 2016. The WEF says these vehicles will help save more lives in car accidents, making roads safer overall. Wanting better insight into the cars’ morality, a team of international researchers conducted an online survey to gauge people’s opinions.
In a recent six-month study, researchers found that people might not want to get into a self-driving car that could deliberately kill them in an accident. The survey, published in the journal Science, shows how complicated participants’ moral view of self-driving cars is. The volunteers’ reluctance is easy to understand, considering that people getting inside a self-driving vehicle are putting their trust in a machine navigating traffic.
Six online surveys were conducted during the six-month research. Each questionnaire tested a different aspect of the cars’ public image. In total, 1,928 people participated across the polls. Participants were asked to evaluate the morality of a self-driving vehicle’s behavior in various scenarios.
Study finds the public is inconsistent about who should be protected in #driverless cars. https://t.co/dwJqvRHOrG pic.twitter.com/X9jo1b7L9L
— MIT (@MIT) June 24, 2016
‘A social dilemma’
In the case of an accident, self-driving cars would be programmed with an algorithm that weighs the number of lives at risk inside the vehicle against the number of lives at risk if the car hits pedestrians.
If it is impossible to save both passengers and pedestrians, the algorithm would choose to sacrifice the fewest number of people, even if that means sacrificing the passengers inside the vehicle. For example, a self-driving car with this algorithm would veer into a concrete wall instead of hitting a group of bystanders, and if it cannot avoid hitting them, it would try to hit the fewest people possible.
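To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python of the harm-minimizing rule described above. The Maneuver class and choose_maneuver function are hypothetical names invented for this example; they are not code from the study or from any real vehicle.

```python
# A minimal, purely illustrative sketch of the "fewest lives at risk" rule
# described in the article. Names and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class Maneuver:
    """One possible action the car can take, with the lives it would endanger."""
    name: str
    passengers_at_risk: int
    pedestrians_at_risk: int

    @property
    def total_at_risk(self) -> int:
        return self.passengers_at_risk + self.pedestrians_at_risk


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick whichever option endangers the fewest people overall,
    # even if those people are the car's own passengers.
    return min(options, key=lambda m: m.total_at_risk)


if __name__ == "__main__":
    options = [
        Maneuver("stay on course", passengers_at_risk=0, pedestrians_at_risk=10),
        Maneuver("swerve into wall", passengers_at_risk=1, pedestrians_at_risk=0),
    ]
    print(choose_maneuver(options).name)  # -> swerve into wall
```

Under this rule the car swerves into the wall, sacrificing its single passenger rather than the ten bystanders, which is exactly the scenario participants were asked to judge.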
The findings show that 76 percent of participants agreed it would be more moral to sacrifice one passenger than to kill ten or more pedestrians in an accident. Surprisingly enough, the results also showed participants would still lean toward saving the pedestrians even if their own family members were inside the self-driving car.
Nevertheless, when researchers asked participants whether they would want to purchase a car following this principle, the answer on many occasions was no, with participants preferring a vehicle that would be more protective of themselves and their families.
A general feeling of hesitation in the polls
Not only were participants reluctant to buy a self-driving car with this feature, they also disliked the idea of government regulation that could require such vehicles. The general hesitation toward the latter could be a response to the possibility of a law mandating the use of such vehicles in the near future.
Google has submitted a patent for a button that activates #driverless mode under certain conditions: https://t.co/SiQez5zb9I
— TechFreedom (@TechFreedom) June 24, 2016
In the study, the researchers said the findings carry the classic signature of a social dilemma: everyone is tempted to free-ride rather than adopt the behavior that would lead to the best global outcome.
“You can recognize the feeling; the feeling that I want other people to do something, but it would be great not to do it myself,” said co-author Jean-Francois Bonnefon, in a teleconference with reporters.
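The free-rider logic can be made concrete with an assumed payoff table. The numbers below are invented purely for illustration and do not come from the study; they simply show the structure the researchers describe.

```python
# Illustrative numbers (invented, not from the study) showing the social-dilemma
# structure: whatever everyone else buys, an individual is safer with a
# self-protective car, yet everyone ends up worse off when all drivers free-ride
# than when all accept the harm-minimizing car.

# Hypothetical risk to me, depending on my car and on what everyone else drives.
risk = {
    ("utilitarian", "utilitarian"): 1.0,
    ("self-protective", "utilitarian"): 0.8,      # I free-ride on others' sacrifice
    ("utilitarian", "self-protective"): 1.4,
    ("self-protective", "self-protective"): 1.2,  # everyone free-rides
}

for my_car in ("utilitarian", "self-protective"):
    for others in ("utilitarian", "self-protective"):
        print(f"I buy {my_car:15} | others buy {others:15} -> my risk {risk[(my_car, others)]}")

# Whatever the others do, my own risk is lower with the self-protective car
# (0.8 < 1.0 and 1.2 < 1.4), so free-riding is individually rational.
# But when everyone free-rides (1.2), the outcome is worse for all than when
# everyone cooperates (1.0) -- the classic signature of a social dilemma.
```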
Source: WTOP
I would suggest they need to go back to the basic algorithm and design better defensive driving and collision avoidance techniques into it, as well as incorporating redundant braking and airbag/restraint systems. Or maybe autonomous cars should not be permitted in any area where they will encounter pedestrians, only on long-distance, limited-access highways. Any algorithm designed by a non-involved third party making the decision of who lives and who dies is creating a robotic god. Not good! Watch the lawsuits against the deep-pocketed auto and computer corporations in these scenarios. I can visualize some hotshot plaintiff’s attorney painting a picture of “the defendant intentionally designed a car that was programmed to kill my client’s child; they must pay for their wanton disregard of innocent life.”
Maybe they need to concentrate on killing the project if that is the type of decision they are programming into the machine.