
"Norman" Is MIT's New Psychopathic AI

There are some things that we just assumed scientists would know not to do by now. Nobody's attempting to use lightning to animate the dead alongside an assistant named "Igor" — we've learned our lesson from horror movies. Same with bringing dinosaurs back, or purposely building a malevolent AI system ... or so we thought. But no, MIT just went and created the world's first "psychopathic" AI. Great.


WHY???

Look, we've seen this movie. It doesn't end well. But the minds behind Norman aren't trying to create an unstoppable kill-bot with an Austrian accent. They're just trying to make a point about how an AI's environment shapes its thinking. Norman is an image-captioning algorithm that had a darker upbringing than most neural networks. It was exposed exclusively to some of the darker corners of Reddit, where disturbing images of death, violence, and gore are shared freely. Then, they asked ol' Norm to take a Rorschach test.

What Norman saw in those inkblots might as well have been written by Eli Roth. "A man is electrocuted and catches to death," reads the first caption. "Man gets pulled into dough machine," says another. "Man is murdered by machine gun in broad daylight," reads a third. Yikes. For comparison, when an AI that had been raised on good, wholesome COCO (Common Objects in Context) saw those same inkblots, it described them as "A group of birds sitting on top of a tree branch," "A black and white photo of a small bird," and "A black and white photo of a baseball glove," respectively. Somehow, that just makes Norman's creepy interpretation even creepier.

So what does this all prove? As AI becomes a bigger and bigger part of how governments, businesses, and other organizations make decisions, it's essential that those systems' human "handlers" learn to curate the data they're trained on. We've already told you how dangerous it can be for a machine to make decisions it can't explain. Now imagine a system meant to recommend prison sentences or screen job applicants picking up biased patterns from the biased people it encounters online. It's not especially unlikely.

Unearthing Digital Biases

There really are programs that companies use to help them combat human bias in hiring. The startup pymetrics is one such resource, claiming that its AI-powered vetting process works against the 50- to 67-percent disadvantage faced by marginalized job candidates such as women and people of color. But critics point out that Silicon Valley doesn't have the best record when it comes to fair and equitable hiring practices. If the system that's helping you improve those practices is learning from what's been done before, it might be taking on biased viewpoints without its data curators' knowledge. The result? The measures companies take to combat racism, sexism, and homophobia could end up embedding those biases even deeper in the hiring process, hidden in the inscrutable brain of a computer program.

This experiment with Norman shows that it doesn't matter how rigorous the algorithm itself is if it's fed a steady diet of harmful content. Tools like pymetrics can genuinely do good, but it's vital that the data they learn from is vetted by a trustworthy human being. Nobody wants Norman to join HR, but even a more innocuous AI can "subconsciously" pick up some problematic ideas.



Written by Reuben Westmaas, June 19, 2018
