
MIT Just Pulled The Plug On Racist AI After Years Of Use

Photo by Franck V. on Unsplash

 

The Massachusetts Institute of Technology (MIT) is one of the most prestigious institutions and research centers in the world. With endeavors ranging from automation and robotics to mechanics and AI, MIT always finds a way to be at the forefront of fields that benefit humankind. Artificial Intelligence (AI) is one of the fields in which the Institute has been investing time and resources, developing computer-vision software able to learn from an image-recognition dataset. Just one problem: the AI started classifying images using racist and misogynistic slurs.

MIT's image-recognition algorithms were trained on 80 Million Tiny Images, a database created in 2008 that contains 79.3 million low-resolution images. Each image in the database is associated with one of around 75,000 nouns, which makes it easier for computer-vision tools to scan a photo for recognizable items. With this system in place, a computer-vision program can assign a description to a picture, such as "computer on the desk with RGB lights."
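
To make that labeling scheme concrete, here is a minimal Python sketch of how such a pipeline can work, using NLTK's interface to the WordNet database. It is an illustration under stated assumptions, not MIT's actual code: the label_image helper and the convention of tagging each scraped image with its query noun are hypothetical.

```python
import nltk
from nltk.corpus import wordnet as wn

# Fetch the WordNet lexical database -- the same Princeton resource the
# dataset's labels were sourced from.
nltk.download("wordnet", quiet=True)

# Collect one lemma per noun synset; this yields a vocabulary on the
# order of the ~75,000 nouns mentioned above.
noun_labels = sorted({s.lemmas()[0].name() for s in wn.all_synsets(pos=wn.NOUN)})
print(f"{len(noun_labels):,} candidate noun labels")

# Hypothetical association step: an image scraped for a given query noun
# simply inherits that noun as its label, with no human review. Any
# offensive noun present in WordNet can therefore surface as an image
# descriptor, which is how slurs ended up attached to pictures.
def label_image(query_noun: str, image_id: int) -> tuple[int, str]:
    """Pair a (hypothetical) scraped image ID with the noun it was queried under."""
    return (image_id, query_noun)

for i, noun in enumerate(noun_labels[:5]):
    print(label_image(noun, i))
```

The flaw this sketch makes visible is the missing step: nothing between the raw WordNet vocabulary and the final labels screens out offensive terms.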

The system seemed to be a smashing success until people started looking at the labels the AI produced, a surprise given that they were sourced directly from Princeton University's WordNet lexical database. A closer inspection revealed derogatory words used as descriptors for body parts, animals, and more. UnifyID AI Labs researcher Vinay Uday Prabhu and University College Dublin researcher Abeba Birhane exposed the problem, noting that image-recognition datasets like this one contain "verifiably pornographic" material among other "ethical transgressions." MIT is not alone here: Microsoft has halted AI efforts amid concerns that AI-based facial recognition was producing unwanted results, and Amazon stopped development of an AI recruiting tool altogether after it was found to be heavily biased against female job applicants.

MIT researchers were appalled at the findings and pulled the dataset offline for good, going so far as to ask the computer-vision community to refrain from using it in the future. In a statement, the team explained:

"Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold."

The statement was signed by team representatives Antonio Torralba, Rob Fergus, and Bill Freeman.

 
