MIT Researchers Created The World's First 'Psychopathic' AI, For Science

AI is a product of the information it learns from. Thus, it really shouldn't be a surprise that many AI programs are racist or sexist. They merely reflect the attitudes of society.

That led researchers at MIT to ask the question: What if you only trained AI using data from the darker side of humanity? Norman, an AI program named after Hitchcock's Norman Bates, is the product of our collective nightmares.

"A man is shot dead."
Norman is an image-captioning program. It was trained on images of “the disturbing reality of death” found on Reddit. To understand how this training biases Norman's interpretations, the researchers gave Norman an inkblot test and compared his results to those of a standard image-captioning AI.

Inkblot images are designed to look abstract and suggestive, which allows people to have multiple interpretations of the same image. Some psychologists believe that inkblot interpretations can give insight into a person's state of mind.

“Man jumps from floor window”
In this inkblot, Norman saw a man plummeting to his death. In comparison, the standard AI saw “a couple of people standing next to each other.”

Which interpretation is correct is a matter of debate, but it does demonstrate Norman’s bias towards death and destruction. More importantly, it highlights how training data can dramatically bias AI.
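To make the point concrete, here is a minimal toy sketch (not MIT's actual method, and far simpler than a real captioning model): the exact same scoring rule, fed two different sets of training captions, produces very different "interpretations" of the same ambiguous input. All captions and names below are invented for illustration.

```python
from collections import Counter

# Two hypothetical training sets: one grim, one neutral.
grim_captions = ["man shot dead", "man falls to death", "fatal accident"]
neutral_captions = ["people standing together", "couple standing near fence",
                    "birds standing on wire"]

def word_scores(captions):
    # Build a crude language model from word frequencies.
    counts = Counter(w for c in captions for w in c.split())
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def describe(scores):
    # "Interpret" an ambiguous image by emitting the model's most likely words.
    top = sorted(scores.items(), key=lambda kv: -kv[1])[:3]
    return " ".join(w for w, _ in top)

print(describe(word_scores(grim_captions)))     # dominated by death-related words
print(describe(word_scores(neutral_captions)))  # dominated by neutral words
```

The model itself is identical in both cases; only the data differs, which is the heart of the Norman experiment.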

"Man gets pulled into dough machine"
Understanding how bias forms in AI, and how it shapes behavior, is important. There have already been several documented cases of AI behaving in racist ways. On Twitter, Microsoft's chatbot Tay began posting racist messages after trolls fed it white-supremacist ideas.

It is tempting to brush this off as harmless mischief, but some racist algorithms have much more power than a Twitter chatbot. An algorithm used by U.S. courts to assess the risk of re-offending consistently flagged Black defendants as higher risk.

"Pregnant woman falls at construction story"
One purpose of this experiment is to see whether the AI can learn to be less gruesome. The MIT researchers are incorporating feedback from real people into Norman's training.

Over time, they hope Norman will develop a more balanced interpretation of the inkblots. If you would like to help rehabilitate Norman, you can take the inkblot test on the project's website. Together, we might be able to brighten the bleak outlook of this poor, miserable AI.

h/t: Norman - MIT