Artificial Intelligence

Robots of the Future Will Probably Lie to Us, and That's a Good Thing

In "The Hitchhiker's Guide to the Galaxy," Marvin the Paranoid Android had a habit of dwelling on the darkest truths of the universe. He'd be the last person to tell you a comforting lie. Marvin's bluntness probably wasn't the result of any cybernetic directive in his robotic brain, but he's a handy reminder that an AI that can fudge the truth a bit might not be such a bad thing.

So why on earth would we want to build dishonest robots? In "White Lies on Silver Tongues," a chapter from the new book "Robot Ethics 2.0," Will Bridewell (a computer scientist at the U.S. Naval Research Laboratory) and Alistair M. C. Isaac (a lecturer in Mind and Cognition at the University of Edinburgh) discuss how important it is for people working together to be able to lie to each other — even if one of those "people" isn't a person.

The benefits of AI lies come down to two elements. First, they affect how well the robots can relate to their human companions. Remember, "lies" aren't just things you say to deceive other people. There are white lies ("I love your haircut!"), carefully phrased truths ("Wow, what a haircut!"), and hyperbolic metaphors ("Your high top fade is a warm, bright light in a cold, dark world."). AIs must be able to navigate those kinds of lies, say Bridewell and Isaac, "if we want robots that can smoothly participate in standard human modes of communication and social interaction."

But relating to people isn't just about making people feel comfortable. It's entirely possible that a robot may need to misrepresent some things in order to preserve itself. If a thinking machine finds itself among anti-robot humans, for example, then its owners would probably appreciate it if it could pass itself off as an ordinary human just like them until it makes its way back to safety.

And then there's expectation management. Bridewell and Isaac refer to the "Scotty Principle": like Scotty from "Star Trek," who was known for finishing in five minutes a task he'd claimed would take 20, many engineers have a habit of padding their time estimates. It might sound dishonest, but it can actually have a pretty positive effect. If the engineer is right and finishes ahead of schedule, they end up looking like a miracle worker. And if things go wrong, or if the engineer isn't as good as they think they are, everything's still groovy: they gave themselves the time they needed.

Now, you'd think that a robot would be able to accurately predict its efficiency, but the authors point out there's no reason for this to be the case (how often have you seen your downloads hang out at "12 seconds" for a minute straight?). A robot padding its guess won't just make us think it's incredible when it gets things done ahead of time; it will also stave off frustration when unexpected obstacles arise.
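The Scotty Principle boils down to simple arithmetic. Here's a minimal sketch of what a robot's padded estimate might look like; the function name and the four-times padding factor are illustrative assumptions, not anything specified in the book.

```python
# Toy sketch of the "Scotty Principle" described above. The name
# `scotty_estimate` and the default factor of 4 are made up for
# illustration; they aren't from Bridewell and Isaac's chapter.

def scotty_estimate(true_estimate_minutes: float, padding_factor: float = 4.0) -> float:
    """Return the time the robot *reports*, padded beyond its honest guess.

    The robot privately expects `true_estimate_minutes`; it announces a
    longer figure so that finishing "early" looks like a miracle and
    unexpected snags don't blow the deadline.
    """
    return true_estimate_minutes * padding_factor

reported = scotty_estimate(5)  # privately expects 5 minutes, announces 20
```

The interesting design question isn't the multiplication, of course. It's deciding when a padded number serves the human better than the honest one.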

Teaching Pinocchio to Lie

Lying is actually a pretty complicated task, and one that can be further complicated depending on the goals of both the deceiver and the deceived. It's not just a matter of knowing the truth and saying something else. In order for AIs to lie effectively, they're going to have to develop what's called a "theory of mind." That means they'll have to guess what you, the user, believe, and also predict how you will react when given any particular set of information (whether that information is true or not). In other words, an actual lying robot is a long way off.
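To make the "guess beliefs, then predict reactions" loop concrete, here's a deliberately silly toy sketch. The belief dictionary, the reaction labels, and the decision rule are all invented for illustration; real theory-of-mind modeling is an open research problem, not a dozen lines of Python.

```python
# Toy theory-of-mind sketch: before speaking, the robot predicts how a
# listener will react to a statement, given a crude model of what that
# listener currently believes. Everything here is a made-up example.

def predict_reaction(listener_beliefs: dict, statement: tuple) -> str:
    """Guess the listener's reaction to a (topic, claim) statement."""
    topic, claim = statement
    believed = listener_beliefs.get(topic)
    if believed is None:
        return "accepts"  # no prior belief: taken at face value
    return "accepts" if believed == claim else "doubts"

def choose_statement(truth: tuple, white_lie: tuple, listener_beliefs: dict) -> tuple:
    """Prefer the truth, but fall back to the white lie when the model
    predicts the truth would be doubted (a toy decision rule)."""
    if predict_reaction(listener_beliefs, truth) == "accepts":
        return truth
    return white_lie
```

Even this cartoon version shows why the problem is hard: the robot needs a separate, possibly wrong model of *your* head, not just a record of the facts.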


Want to find out more about what the next advances in artificial intelligence will be? Check out the anthology "Robot Ethics 2.0," edited by Patrick Lin, Keith Abney, and Ryan Jenkins.

Written by Reuben Westmaas October 27, 2017
