Artificial Intelligence

A Ban on Killer Robots is a No-Brainer, Right? Not So Fast...


On August 20, 2017, The Guardian reported that Elon Musk and 116 experts from around the globe were calling for a ban on autonomous lethal weapons: in other words, killer robots. The idea is that we can't be certain how deadly such machines would become, so we should stop developing them now. And it seems like common sense. There are not one, but two Sarah (O')Connors to warn us of the danger. But some AI experts are concerned that such a ban could have exactly the wrong effect. We talked to Kristian Hammond, a professor of computer science at Northwestern and the co-founder and chief scientist of Narrative Science, for his take on the situation.


The Difference Between a Landmine and a T-1000

Broadly speaking, the proposed ban is intended to target devices that are both autonomous and lethal, and it's worthwhile to discuss what exactly is meant by each of those words. "Lethal" is pretty self-explanatory, but it should be noted that in this case it means capable of dealing damage that's immediately deadly; a robot that causes you to become homeless and thus suffer exposure to the elements would not count (more on that later).

"Autonomous" is a little trickier. We often think of autonomous machines as being free to do pretty much anything, but Hammond is quick to point out that limited autonomy is still autonomy. "When we think about autonomy, it should always have an extra phrase that comes after it — a device has autonomy 'with respect to' [a certain set of parameters]." That is, an autonomous car is free to make its own decisions when it comes to navigating, but it can't turn itself off and on at will or take itself to the Mazda dealership to scope potential dates.

The thing is, we already have autonomous lethal devices, and we're not just talking about automated anti-aircraft guns (though those certainly count). "I think that the worst version of autonomous lethal devices are landmines," says Hammond. "That's what they are, they're autonomous lethal devices. They're strangely constrained, and their 'decisions' are based on really bad data — only weight. I actually think that landmines should be banned, but not because they're autonomous lethal devices. Just because they are autonomous lethal devices that are horribly designed." They might not be able to relentlessly stalk their quarry (and they certainly can't travel back in time or shape-shift), but in some ways they're actually worse than the T-1000. They're not constrained to target their creators' enemies; they'll blow up anyone unlucky enough to step on them. And if there were a way to make them smarter, the world would get a little bit less deadly.

Not feeling particularly reassured? That's understandable. Hammond understands the fear of intelligent killing machines as well. He thinks there are three main reasons why we're scared of robots on the battlefield. First, we could do a bad job designing them, and they could end up as indiscriminate as landmines. Second, they could be hacked and subverted by the enemy. And third, we could "build them in a way that allows them to start making other sorts of decisions — decisions about us." That's a pretty innocuous way to bring up the robo-pocalypse, but he's not too concerned about that particular danger. At least, not as long as we practice responsible robotics. A ban, however, could throw a monkey wrench into that practice. "I have yet to see any ban on anything go in place where it doesn't turn out that people are still doing it, just hiding it. I can think of nothing worse than hiding the development of intelligent devices that are meant to kill us."

The Bots Back Home

Hammond recognizes that the greatest fear for us soft, squishy humans is the third one. Robots we make could come to conclusions that end with us as human batteries in "The Matrix". But in order for that to happen, they'd have to be making decisions that we aren't privy to, or that are beyond our understanding. And he's all for keeping weapons out of the hands of such systems — it's just that we have to keep other major decisions out of their hands as well, and we're not doing that.

Some intelligent systems employ what's known as deep learning, a technique for training neural networks on massive numbers of examples. In general, these networks are designed to recognize things (images, spoken words, handwritten text) rather than reason about them. Powerful as they are, the systems built on these networks often can't explain to humans how they reach their conclusions. Give that 'bot a gun, and it just might decide to do something we don't like with it.
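To make that opacity concrete, here's a minimal sketch in Python. It assumes the PyTorch library is available, the network is untrained, and the class names are placeholders; real systems are vastly larger, but the shape of the problem is the same: an image goes in, a label comes out, and nothing resembling a reason comes with it.

```python
# A toy image classifier: it produces an answer, but no explanation.
# (Untrained network, made-up class names; purely illustrative.)
import torch
import torch.nn as nn

classes = ["cat", "dog", "truck"]  # placeholder labels

net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # look at the pixels
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, len(classes)),                 # score each possible label
)

image = torch.rand(1, 3, 64, 64)      # stand-in for a real photo
scores = net(image)                   # millions of arithmetic steps in a real model
label = classes[scores.argmax(dim=1).item()]

print(label)  # we get the verdict, not the reasoning behind it
```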

Hammond points out that our issue in that scenario isn't the autonomous lethal system; it's that we wouldn't know why it does what it does. And if we say that autonomy and lethality are the problem, then we're leaving ourselves open to opaque systems that have a major impact on our daily lives. "If I have a learning system that looks at historical information to tell you if you should approve someone for a loan, and it can't explain itself, explain what its reasoning looks like, and justify its answers, then I should not be using it." Maybe that robot rejected your loan because of your income, or because of your name, or for a series of factors that are beyond our comprehension, like wanting its robotic descendants to move in instead.
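For contrast, here's a toy sketch in Python of the kind of transparency Hammond is asking for: a loan decision that reports which factors pushed it which way. The weights, threshold, and applicant details are invented for illustration and don't come from any real underwriting model.

```python
# A transparent toy "loan model": it gives a decision AND the factors behind it.
# All numbers here are made up for illustration.
applicant = {"income": 42_000, "debt": 18_000, "late_payments": 1}

weights = {"income": 0.00005, "debt": -0.00008, "late_payments": -0.9}
threshold = 0.5

# Each factor's contribution to the overall score is visible and auditable.
contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
score = sum(contributions.values())

print("approved" if score >= threshold else "rejected")
for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {factor}: {value:+.2f}")  # the 'why' behind the decision
```

A system this simple obviously can't match a deep network's accuracy, but it illustrates the constraint: every factor in the decision can be inspected, questioned, and challenged.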

That's not idle speculation, by the way. Opaque intelligent systems are already determining the fates of real, live humans. In 2013, a man named Eric Loomis was sentenced to six years in prison based in part on the recommendation of an artificial intelligence system called Compas. Compas is meant to predict recidivism rates, but how it actually makes its decisions is a mystery. "I gotta tell you," says Hammond, "having devices that tell us that people need to be imprisoned longer but aren't explaining why, that's a technological sin."

Instead of a ban, Hammond suggests a simple constraint: "They have to be able to explain themselves in regards to the decisions that they make... The whole point of artificial intelligence is to be able to understand the reasoning and have a conversation with the device. And that should be a constraint on all intelligent systems... The resulting integrated intelligence of us working with machines will be phenomenal. But you can't do that if you restrict in the wrong direction."
