Here’s the Math Self-Driving Cars Will Use to Decide if They Should Sacrifice Their Passengers
Jerry Kaplan has some ethical concerns when it comes to handing over control to autonomous vehicles. He asks us to consider whether we would ride in a vehicle that would kill us if presented with the right scenario.
Handing over control also means handing over a number of social interactions and ethical decisions to a robot. In one video, Kaplan points to the “Trolley Problem” as one of his main concerns.
The Trolley Problem is a thought experiment in ethics. The setup is simple: There’s a runaway trolley barreling down the tracks, about to hit five people who are tied up and can’t move. You, a bystander, have the option to pull a lever, which will divert the trolley, saving the five people but killing one person on the other track. What do you want your car to do?
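To make the “math” in the headline concrete, here is a minimal, purely illustrative sketch of what a utilitarian decision rule could look like in code. The `Action` class, the harm estimates, and the simple expected-casualty comparison are assumptions made for the sake of the example, not anything proposed by Kaplan or the researchers discussed below.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One maneuver the car could take (hypothetical model)."""
    name: str
    pedestrians_at_risk: int    # people outside the car likely to be harmed
    passengers_at_risk: int     # people inside the car likely to be harmed
    probability_of_harm: float  # chance the maneuver actually causes harm

def expected_casualties(action: Action) -> float:
    """Naive utilitarian cost: expected number of people harmed."""
    return action.probability_of_harm * (
        action.pedestrians_at_risk + action.passengers_at_risk
    )

def choose(actions: list[Action]) -> Action:
    """Pick the action with the lowest expected harm."""
    return min(actions, key=expected_casualties)

# The trolley-style scenario above: stay the course and hit five
# pedestrians, or swerve and sacrifice the single passenger.
stay = Action("stay on course", pedestrians_at_risk=5,
              passengers_at_risk=0, probability_of_harm=0.9)
swerve = Action("swerve", pedestrians_at_risk=0,
                passengers_at_risk=1, probability_of_harm=0.9)

print(choose([stay, swerve]).name)  # prints "swerve"
```

A rule this simple treats every life identically and ignores who is in the car, which is exactly the kind of trade-off the survey described next was designed to probe.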
A group of researchers led by Jean-Francois Bonnefon of the Toulouse School of Economics has posed similar ethical scenarios to participants recruited through Amazon’s Mechanical Turk crowdsourcing tool.
“Our data-driven approach highlights how the field of experimental ethics can give us key insights into the moral and legal standards that people expect from autonomous driving algorithms,” the researchers write.
The researchers wanted to know how willing people would be to ride in a car programmed with self-sacrificing ethics: for example, would people rather the car swerve to avoid hitting a group of pedestrians, even if doing so endangers the driver? The scenarios also varied the number of people in the car and the ages of the passengers, asking participants to weigh a number of ethical gray areas.
“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” the researchers write. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
The results showed that 75 percent of participants thought it would be moral for the car to swerve, while only 65 percent believed manufacturers would actually program cars to do so. That gap raises another interesting question concerning liability.
“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?” the researchers ask.
Indeed, how should robots be treated under the law? As AI becomes more and more autonomous, we’re going to need new legal standards to assign responsibility for an autonomous accident and to keep it from happening again.
Kaplan suggests “rehabilitation and modification of robot behavior” would be the most logical step.
Brad Templeton, a consultant on Google’s autonomous vehicles, has grown tired of these discussions. He’s well aware of the Trolley Problem and the other ethical dilemmas autonomous cars raise, and he asks people to come back to real life. “What I reject is the suggestion that this is anywhere high on the list of important issues and questions. I think it’s high on the list of questions that are interesting for philosophical class debate, but that’s not the same as reality.”
“In reality, such choices are extremely rare,” he writes in his blog. “How often have you had to make such a decision, or heard of somebody making one? Ideal handling of such situations is difficult to decide, but there are many other issues to decide as well.”
***
Natalie has been writing professionally for about 6 years. After graduating from Ithaca College with a degree in Feature Writing, she snagged a job at PCMag.com where she had the opportunity to review all the latest consumer gadgets. Since then she has become a writer for hire, freelancing for various websites. In her spare time, you may find her riding her motorcycle, reading YA novels, hiking, or playing video games. Follow her on Twitter: @nat_schumaker
Photo Credit: WPA Pool / Getty