
Neural Networks Have Advanced Beyond Our Understanding, And That's Kind Of Terrifying

One of the brilliant things about neural networks — the layered systems of artificial "neurons" that power everything from lab experiments to your phone's facial-recognition software — is that they can learn things on their own. Feed them a bunch of data, and they'll eventually figure out how to identify pictures or find distant galaxies. But between the data going in and the answer coming out, AI engineers don't actually know what happens. Just like human brains, neural networks are a mystery. But before you mourn for our dystopian robot future, wait until you hear what experts are doing about it.

"I'm Not Sure I Can Trust It."

Neural networks behave very differently from the computers you know. "You can say they're loosely inspired by the brain," Science reporter Paul Voosen says in a video. They're composed of a web of artificial neurons, each capable of "firing" in response to certain features in the data it's fed — to use Voosen's example, say it's millions of pictures of dogs. Those neurons are arranged in layers, each processing more and more abstract details — from the broad outline of the pup in the first layer, for instance, to the colors of its fur and the shape of its eyes in later layers — until the network arrives at a final answer, say, the dog's breed.
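To make that layered picture a little more concrete, here's a toy sketch in Python of a forward pass through a two-layer network. Everything in it is invented for illustration: the fake 8-by-8 "dog photo," the random weights, and the three breed labels. Real networks like the ones Voosen describes have millions of neurons and learn their weights from data rather than being hand-wired.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fake 8x8 grayscale "dog photo," flattened into 64 numbers.
image = rng.random(64)

# Two layers of made-up weights: the first picks up broad shapes,
# the second combines them into more abstract features.
w1 = rng.standard_normal((64, 16))   # input -> hidden features
w2 = rng.standard_normal((16, 3))    # hidden features -> 3 "breeds"

def relu(x):
    # Each artificial neuron "fires" only when its input is positive.
    return np.maximum(x, 0)

def softmax(x):
    # Turn the final layer's scores into probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

hidden = relu(image @ w1)   # first layer: coarse features
scores = hidden @ w2        # second layer: scores for each breed
probs = softmax(scores)

breeds = ["labrador", "poodle", "beagle"]  # hypothetical labels
print(dict(zip(breeds, probs.round(3))))
```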

At first, the network won't be very good. But just like a human student, the network gets to compare its work to the right answer, learning where it went wrong and refining its choices for next time. And just as with a human student, we only see the improvement. We don't see the millions of internal adjustments that produce it.
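The compare-and-adjust loop itself is easy to write down; it's the millions of resulting internal adjustments that stay opaque. Here's a hedged, toy-sized sketch of the idea: a single artificial neuron repeatedly compares its guesses to the right answers and nudges its weights to shrink the error. The data, the learning rate, and the number of steps are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: guess a label (0 or 1) from two made-up features.
X = rng.random((100, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)  # the "right answers"

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.5

for step in range(1000):
    # The network's current guesses.
    guesses = 1 / (1 + np.exp(-(X @ weights + bias)))
    # Compare them to the right answers...
    errors = guesses - y
    # ...and nudge every weight a little in the direction that
    # would have made the guesses better.
    weights -= learning_rate * (X.T @ errors) / len(y)
    bias -= learning_rate * errors.mean()

accuracy = ((guesses > 0.5) == y).mean()
print(f"accuracy after training: {accuracy:.2f}")
```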

That poses a pretty major problem. When a neural network screws up, for example, engineers don't know precisely why, or whether that one screw-up represents a small fluke or a much bigger flaw in the programming. That would be a minor problem with dog pictures, but it's a much bigger issue when it comes to predicting diseases or driving autonomous vehicles. And sometimes these networks do things we never meant them to: an AI trained to recognize objects starts recognizing human faces, for example, or two bots trained to negotiate with each other invent their own language. If that strikes fear into you, you're not alone. When machine-learning researcher Maya Gupta joined Google and asked AI engineers about problems with the technology, she tells Voosen, they were uneasy too. "I'm not sure what it's doing," they said. "I'm not sure I can trust it."

Clearing Up The Black Box

To fix the conundrum of AI's "black box," engineers are working on new networks that are more transparent in their decision-making. Some are having the networks watch humans perform the same tasks and explain their decisions, so the networks learn to explain theirs, too. Others are pairing up AIs: one to do the heavy lifting, another to interpret the reasoning of the first. And still others are building completely new systems with built-in predictability.
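The article doesn't spell out exactly which techniques those teams use, but one simple, widely used way of peeking inside the box is a perturbation test: change one piece of the input, see how much the network's answer moves, and treat big moves as a rough sign of what the network was paying attention to. Here's a toy sketch, with invented weights and inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny pretend network with fixed, made-up weights.
w1 = rng.standard_normal((4, 8))
w2 = rng.standard_normal((8, 1))

def network(x):
    hidden = np.maximum(x @ w1, 0)
    return (hidden @ w2).item()

# One made-up input with four features.
x = np.array([0.9, 0.1, 0.5, 0.7])
baseline = network(x)

# Perturbation test: zero out each feature and see how far the
# output moves. Big moves suggest the network leaned on that feature.
for i in range(len(x)):
    perturbed = x.copy()
    perturbed[i] = 0.0
    change = abs(network(perturbed) - baseline)
    print(f"feature {i}: output shifts by {change:.3f}")
```

Even a crude probe like this can flag when a network is leaning on something it shouldn't, like the background of a photo instead of the dog in it.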

It's a good thing, too. With governments around the world channeling funding into AI technology, it's no longer acceptable to just say "we don't know how it works." If the future is going to run on neural networks, we need to understand them now — before it's too late.

Related videos: "AI Detectives Are Cracking Open The Black Box" and "How Deep Learning Works"

Written By
Ashley Hamer
August 17, 2017