MIT Just Pulled The Plug On Racist AI After Years Of Use

Photo by Franck V. on Unsplash


The Massachusetts Institute of Technology (MIT) is one of the most prestigious institutions and research centers in the world. With endeavors ranging from automation and robotics to mechanics and artificial intelligence (AI), MIT always finds a way to be at the forefront of fields that benefit humankind. AI is one of the areas in which the Institute has been investing time and resources, developing computer-vision software able to learn from an image-recognition dataset. Just one problem: the AI started classifying images using racist and misogynistic slurs.

Since 2008, MIT has based its image-recognition algorithms on the same 80 Million Tiny Images database, a gallery containing 79.3 million low-resolution images. Using computer-vision tools, each image in the database was associated with one of around 75,000 nouns, simplifying the scan of a photo for recognizable items. With this system in place, the computer-vision program can assign a sentence to a picture, such as "Computer on the desk with RGB lights."
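To picture the idea, here is a minimal toy sketch of pairing an image's classifier scores with noun labels from a fixed vocabulary and composing a caption. This is not MIT's actual pipeline; the vocabulary, the hand-made scores, and the caption template are all hypothetical stand-ins for what a real vision model would produce.

```python
# Toy illustration (not MIT's actual system): each image gets noun
# labels from a fixed vocabulary, and a caption is built from the
# highest-confidence labels. Scores are invented for the example.

VOCABULARY = ["computer", "desk", "keyboard", "lamp", "chair"]

def top_nouns(scores, k=2):
    """Return the k highest-scoring nouns for one image.

    `scores` maps each vocabulary noun to a classifier confidence
    (supplied by hand here; a real system would compute them).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

def caption(scores):
    """Compose a simple caption from the two best noun labels."""
    first, second = top_nouns(scores, k=2)
    return f"{first.capitalize()} on the {second}"

# Hand-made confidence scores standing in for a vision model's output.
fake_scores = {"computer": 0.91, "desk": 0.84, "keyboard": 0.40,
               "lamp": 0.05, "chair": 0.02}

print(caption(fake_scores))  # Computer on the desk
```

The key point the sketch makes: the captions are only as good as the noun vocabulary behind them, which is exactly where the dataset's problems crept in.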

The system seemed to be a smashing success until people started looking at the sentences the AI created, especially considering that all the label data was sourced directly from Princeton University's WordNet lexical database. A closer inspection revealed derogatory words used as descriptors for body parts, animals, and more. UnifyID AI Labs researcher Vinay Uday Prabhu and University College Dublin researcher Abeba Birhane exposed the problem, noting that the AI can produce "verifiably pornographic" associations and pointing to the "ethical transgressions" contained within image-recognition datasets. MIT is not alone here: Microsoft halted efforts in AI development amid concerns that AI-based facial recognition was producing unwanted results, and Amazon stopped development of an AI recruiting tool altogether after it was found to be heavily biased against female job applicants.

MIT researchers were appalled by the results and pulled the resource offline. The team decided to pull the plug on the project for good, going so far as to ask the computer-vision community to refrain from using it in the future.

Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.

Team Representatives: Antonio Torralba, Rob Fergus, Bill Freeman.
