Science & Technology

Self-Driving Cars Have to Decide Whether Passengers or Pedestrians Are More Important

You may have heard that self-driving cars are safer than cars with human drivers, and that's probably true. Still, as driverless vehicles inch closer to the country's freeways and side streets, we're likely to hear more about AI-controlled vehicles getting into deadly accidents. Even a perfect driver can't avoid every crash. So if a self-driving car finds itself in a situation where a deadly accident is inevitable, how should it decide in a way that minimizes the damage?

An Old Problem, a New Take

Remember the trolley problem? It's one of the most firmly entrenched questions in moral philosophy (recently made popular by NBC's "The Good Place"), and it goes something like this: You're a train engineer, and an out-of-control engine is barreling down the tracks. Five people are tied to the tracks right in its path, certain to be killed if the train hits them. The good news is that you're standing by the lever that will send the train onto a different track. The bad news? There's one person tied to that track. Do you pull the lever? If you do, are you guilty of murder? There are countless variations on the problem: versions where there are more people on the other track, versions where you know some or all of the victims, and even versions where the person you'd choose to sacrifice turns out to be the villain who caused the whole mess in the first place.

When Philippa Foot came up with this moral quandary in 1967, she probably conceived of it as a thought experiment to shine a light on how we reason about ethics, not as a dilemma anybody was likely to face in real life. But now, thanks to the rapid development of self-driving cars, engineers are finding that an issue very much like the trolley problem looms large. Driverless vehicles are almost certainly going to find themselves in situations where people will die no matter what decision they make. Instead of a trolley, imagine a driverless car that's about to hit five people in a crosswalk. It doesn't have time to brake, but it does have time to swerve into a barricade, killing its sole passenger. What should it do?

AI-Way to the Danger Zone

A new study published in Risk Analysis, the journal of the Society for Risk Analysis, set out to answer this question by posing several scenarios to participants recruited online. Participants had to choose whether a vehicle should stay in its lane or swerve in a situation where staying would endanger a pedestrian in the street and swerving would threaten a bystander on the sidewalk. The scenarios varied how certain each collision was: the pedestrian (threatened if the car stayed in its lane) faced a 20 percent, 50 percent, or 80 percent chance of being hit, while the bystander (threatened if the car swerved) faced either a 50 percent chance or an unknown one.

When the bystander had a 50 percent chance of being hit and the pedestrian's chances were 20 percent, almost nobody said the car should swerve, which makes sense. When both the pedestrian and the bystander had a 50 percent chance of being hit, about 13 percent of people said the car should swerve to avoid the pedestrian. And when there was an 80 percent chance that the pedestrian would be hit, more than 60 percent of people said the car should swerve.

If people didn't know what the chances were of hitting the bystander, however, the picture looked a bit different. As you'd expect, the higher the odds of hitting the pedestrian, the more people chose "swerve." But in this case, the increase was basically a straight line, from about 15 percent saying "swerve" when the pedestrian faced a 20 percent chance of being hit to about 45 percent when the pedestrian's odds were 80 percent. There was no big spike at the 80 percent mark like there was when people knew the odds of the bystander being hit. That suggests people generally prefer the car to stay in its lane, even if the odds are pretty high that doing so will injure somebody. Things really got interesting when the researchers posed the same question under the assumption that a human was driving, not a computer. In that case, the figure did spike when the odds of hitting a pedestrian were 80 percent and the odds of hitting a bystander were unknown: about 60 percent of people would have crossed their fingers and swerved, not knowing how likely they were to hit a bystander on the sidewalk.
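
To see the arithmetic at play in the known-odds scenarios, here's a minimal sketch of an expected-harm decision rule. The probabilities are the ones described above; the rule itself (minimize expected casualties, breaking ties by staying in the lane) is an illustrative assumption, not the study's model of how participants actually reasoned.

```python
# A minimal sketch: pick the action with the lower expected number of
# casualties. The tie-breaking preference for "stay" is an assumption
# meant to mirror the default bias the survey observed.

def expected_casualties(p_hit: float, victims: int = 1) -> float:
    """Expected number of people struck, given a collision probability."""
    return p_hit * victims

def choose_action(p_pedestrian: float, p_bystander: float) -> str:
    """Stay in the lane or swerve, minimizing expected casualties."""
    stay = expected_casualties(p_pedestrian)   # risk of staying in lane
    swerve = expected_casualties(p_bystander)  # risk of swerving
    return "stay" if stay <= swerve else "swerve"

# The study's known-odds scenarios: bystander risk fixed at 50 percent.
for p_ped in (0.2, 0.5, 0.8):
    print(f"pedestrian risk {p_ped:.0%}: {choose_action(p_ped, 0.5)}")
# -> stay, stay, swerve
```

Under this simple rule, the car only swerves in the 80 percent scenario, which roughly tracks the survey responses: almost nobody swerved at 20 percent, few swerved at the 50/50 tie, and a majority swerved at 80 percent.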

Answer for Yourself

While the Society for Risk Analysis certainly has some skin in the game when it comes to figuring out the most acceptable actions for an autonomous vehicle to take, it isn't the only organization looking into this question. Meet MIT's Moral Machine, which gives anyone the chance to make a multitude of hard decisions, each guaranteed to leave at least one person dead. With this test, we're in full-on trolley problem territory, and your answers aren't just about the odds of hurting someone: They're about who you choose to hurt, how, and why.

You really don't get the full experience unless you take the test yourself, but here are a few of the truly brutal choices you might face. Say you're riding in an autonomous car with your spouse and your two children, and a family exactly like yours is crossing the street. You can have the car crash itself, killing you and your spouse, injuring your daughter, and leaving your son with an unknown fate, or have it go through the intersection, killing the other set of parents and leaving both of the other family's children with an unknown fate. Or what if you have to choose between killing two adult passengers in the car or three jaywalking pedestrians: one criminal, one innocent woman, and one baby? Yikes. Okay, one more: Is it better for your car to run over five jaywalking babies or one law-abiding dog? Um, wow. There's really no way to top that. Just take the test and start feeling horrible about your decisions right now.

We're not done plumbing the depths of the human conscience. Thomas Cathcart's "The Trolley Problem, or Would You Throw the Fat Guy Off a Bridge?" explores this classic conundrum with humor, wit, and a healthy dose of the history of philosophy.

Written by Reuben Westmaas, November 20, 2018