MIT's Deranged AI Raises Questions About AI Data Bias

A team at the Massachusetts Institute of Technology has created an AI trained on data from, let's say, the murkier corners of the internet. The AI, aptly named Norman after Hitchcock's well-known "Psycho" character Norman Bates, was shown images filled with gore sourced from a Reddit community.

The experiment was intended to reveal how an AI trained on such material responds to inkblots. Other AIs have interpreted the same images as things like "a group of birds sitting on top of a cherry tree" and other, equally benign scenes.

The inkblots are part of the Rorschach test, used to reveal a subject's emotional response and personality by analyzing their perceptions of the images shown.

Norman's interpretations of the inkblots were unrelentingly dark. The AI saw dead bodies, murder, blood and violence in every single one of them.

An AI trained alongside Norman was fed images of cats, birds and people, and its inkblot responses were far more lighthearted.
Prof Iyad Rahwan, part of the MIT team that developed Norman, said:

“Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
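Rahwan's point can be illustrated with a deliberately tiny sketch (this is not MIT's actual model, and the training labels below are invented): the "algorithm" is identical for both models, yet the output is determined entirely by the data each one was trained on.

```python
from collections import Counter

def train(labels):
    # The "model" is nothing more than the frequency of labels
    # seen in the training data.
    return Counter(labels)

def interpret(model):
    # The same "algorithm" for every model: describe any new
    # inkblot with the most common label from training.
    return model.most_common(1)[0][0]

# Same code, two different training sets.
standard = train(["bird", "flower", "bird", "butterfly"])
norman   = train(["wound", "wound", "accident"])

print(interpret(standard))  # -> bird
print(interpret(norman))    # -> wound
```

Identical logic, opposite worldviews: the only difference between the two models is what they were shown.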

In a world that relies more and more on AI systems and their interactions with people, Norman is a wake-up call, warning tech companies about how harmful this technology can become once it falls into the wrong hands or is trained improperly.

Fittingly, Dave Coplin, Microsoft's Chief Envisioning Officer, has said that an important conversation needs to be had with the public about AI, and that it should start with "a basic understanding of how these things work":

“We are teaching algorithms in the same way as we teach human beings, so there is a risk that we are not teaching everything right. When I see an answer from an algorithm, I need to know who made that algorithm. For example, if I use a tea-making algorithm made in North America then I know I am going to get a splash of milk in some lukewarm water.”

Google's AI, which can make a phone call in a voice similar to a human being's, and Alphabet's DeepMind, which can teach itself to play complex games, are just the tip of the iceberg of what's to come. In the not-so-distant future, we may need new measures to spot and eliminate data bias.

AI is already being used across industries, from software to security. In the latter, a biased algorithm could have disastrous effects.
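Spotting data bias can begin with something as simple as auditing the label distribution of a training set before any model is trained. The following is a minimal sketch (the threshold and labels are illustrative assumptions, not an established standard):

```python
from collections import Counter

def label_skew(labels):
    """Fraction of the dataset taken up by the most common label.

    A value near 1.0 means the training set is heavily dominated
    by a single label, a crude but useful red flag for bias.
    """
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# An invented, heavily skewed training set:
dataset = ["violent", "violent", "violent", "calm"]
skew = label_skew(dataset)
print(skew)  # -> 0.75

# A hypothetical audit threshold; real audits would be more nuanced.
if skew > 0.6:
    print("warning: training data is dominated by one label")
```

A check like this would not have fixed Norman, but it would at least have flagged that nearly everything the model saw was gore.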
