Artificial Intelligence

Engineers Created a Robot That Can Imagine Itself

You weren't born knowing how your limbs work. You flailed, grasped your toes, and stuck your fingers in your mouth, gradually learning how your body parts worked and what they could do. Robots don't get such a luxury: they're programmed at the outset to perform their tasks, with or without any awareness of their own body plan. That's a shortcoming that keeps them from performing some tasks that human babies take for granted (a limitation so common in AI that it has a name, as we'll see below). But late last month, robots took a big step toward this kind of self-awareness: Columbia engineers created a robot that could figure out what it looked like without any external input. In essence, it could imagine itself.


Teach an AI to Fish ...

You're so familiar with your own body that you probably don't think about how handy your mental map of yourself really is. With your eyes closed, you can touch your nose, throw a ball, or stand on one foot. If you break your arm, lose weight, or grow an inch taller, your brain easily updates its model to keep you moving about the world as normal.

None of this is true of robots. If they aren't hard-coded with a simulator, they have no idea what they are or how they work. If a component is damaged, they'll keep running the same programming as before, with no understanding that they need to adjust for their newly damaged state. Programming an artificial intelligence system with its own simulator is costly and time-consuming, so as an alternative, some AIs go without one and simply focus on learning individual tasks. That's kind of like only learning to shoot a basketball from the free-throw line: you'd be great in free-throw contests, but hopeless in an NBA game.

That super-specialized kind of training is known as "narrow AI," and while it drives most of the AI in the world today, engineers dream of a "general AI" future in which robots can learn and adapt to their surroundings without having to be pre-programmed. That's why these Columbia engineers wanted to create a robot that could figure itself out. It's kind of like the old proverb about teaching a man to fish: give a robot a self-simulation, and it can do one task well; teach a robot to simulate itself, and it can learn and adapt to all sorts of tasks.

[Image: The intact robotic arm used to perform all of the tasks.]

Does AI Dream of Electric Sheep?

To make that happen, they created a single robot arm and let it explore. It proceeded through 1,000 random movements, recording each one to help itself learn. (Since this procedure was similar to a baby learning to talk, the study authors called it "babbling.") As the robot moved, it quickly discovered that certain movements in certain orders weren't possible: sometimes its gripper would collide with its arm; other times, its metal body wasn't flexible enough to perform the action it wanted to try. With each success and failure, it learned what it could and couldn't do, and eventually (after 35 hours or so) it came up with a model of itself, all on its own.

[Image: The deformed robotic arm in multiple poses as it collected data through random motion.]
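If you'd like a feel for the idea, here is a rough Python sketch: collect random command-and-outcome pairs, then fit a model that predicts the outcome of a command without executing it. The two-joint planar arm, the link lengths, and the use of scikit-learn's MLPRegressor are stand-ins chosen purely for illustration, not the setup the Columbia team actually used.

    # Hypothetical sketch: learning a self-model from random "babbling" data.
    # A simple two-joint planar arm stands in for the real robot; the actual study
    # used a physical arm and a deep network, so treat this as illustration only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    L1, L2 = 1.0, 0.8  # assumed link lengths

    def forward_kinematics(angles):
        """Where the gripper actually ends up for a pair of joint angles."""
        t1, t2 = angles
        x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
        y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
        return np.array([x, y])

    # "Babbling": try random joint commands and record what actually happened.
    rng = np.random.default_rng(0)
    commands = rng.uniform(-np.pi, np.pi, size=(1000, 2))           # 1,000 random movements
    outcomes = np.array([forward_kinematics(c) for c in commands])  # observed gripper positions

    # Fit a self-model: predict the outcome of a command without executing it.
    self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    self_model.fit(commands, outcomes)

    # The robot can now "imagine" where a new command would put its gripper.
    test = rng.uniform(-np.pi, np.pi, size=(1, 2))
    print("imagined:", self_model.predict(test)[0])
    print("actual:  ", forward_kinematics(test[0]))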

To test the accuracy of the model, the engineers had the robot arm pick up small red balls at specific places on the ground and place them in a jar. The engineers had it perform this task using two different systems: closed-loop and open-loop. In the closed-loop system, the robot could recalibrate its position at every step along the way using its internal model, kind of like watching your own hand as it reaches for an object. In the open-loop system, it had to rely entirely on the internal model with no recalibration, which would be "like trying to pick up a glass of water with your eyes closed, a process difficult even for humans," according to lead author Robert Kwiatkowski. The robot had a 100 percent success rate in the closed-loop test, and even the open-loop test saw a respectable 44 percent success rate.
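The difference between the two systems boils down to whether the robot gets to check its real position along the way. Here's a deliberately tiny, one-dimensional sketch (not the team's actual controller) in which the robot's internal model is slightly wrong, as any learned model is: closed-loop control corrects for the error with feedback, while open-loop control is stuck with it.

    # Hypothetical one-dimensional sketch of open-loop vs. closed-loop control.
    # The real hardware moves slightly less than the self-model expects.
    def actual_step(position, command):
        """What the real arm does; the robot never sees this function directly."""
        return position + 0.9 * command

    MODEL_GAIN = 1.0  # the self-model believes one unit of command moves 1.0 units

    def open_loop_reach(target, steps=10):
        """Plan every move from the self-model alone; never look at the real position."""
        position, believed = 0.0, 0.0
        for _ in range(steps):
            command = (target - believed) / MODEL_GAIN
            position = actual_step(position, command)
            believed += MODEL_GAIN * command  # belief drifts from reality; errors accumulate
        return position

    def closed_loop_reach(target, steps=10):
        """Recalibrate at every step: plan each move from the observed position."""
        position = 0.0
        for _ in range(steps):
            command = (target - position) / MODEL_GAIN
            position = actual_step(position, command)
        return position

    print("target:      1.0")
    print("open-loop:  ", round(open_loop_reach(1.0), 3))   # stops short of the target
    print("closed-loop:", round(closed_loop_reach(1.0), 3)) # converges despite the model error

Run it and the open-loop reach stops short of the target while the closed-loop reach converges, which mirrors why the "eyes closed" version of the task was so much harder for the robot.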

They even 3D-printed a deformed part and installed it on the robot to see if it could adapt to damage. Sure enough, it detected the change, altered its self-model, and kept up with its tasks at about the same level of performance.

The researchers believe this is the necessary next step in AI development. "This is perhaps what a newborn child does in its crib, as it learns what it is," said Hod Lipson, a professor of mechanical engineering and director of Columbia's Creative Machines Lab. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness."

Speaking of awareness, the researchers are well aware that this research might make some people fear a sentient robot apocalypse in the future. "Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," they note. "It's a powerful technology, but it should be handled with care."



Written by Ashley Hamer, February 18, 2019
