IBM Introduces Fairness Kit: A Tool To Avoid Bias In AI


IBM has recently launched a tool that, the company claims, can detect bias in AI: the Fairness 360 Kit.

According to IBM, the Fairness 360 is “a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias.”

The kit is intended to analyze how machine-learning algorithms make their decisions in real time and to work out whether, and how, they are inadvertently biased, for example by failing to correctly identify people who are not white in photographs.

Because a model's behavior changes as the system learns, it is difficult for developers to pinpoint exactly where bias was introduced. Such systems are the so-called "black boxes" whose decision-making process developers cannot peer into.

So IBM hopes that, by using the Fairness Kit, developers will gain more transparency into how their systems reach their judgements.

"Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage," said Kush Varshney, principal research staff member and manager at IBM Research. "Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias."
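The systematic advantage and disadvantage Varshney describes can be made concrete with two standard group-fairness metrics, statistical parity difference and disparate impact. The sketch below is illustrative plain Python, not the actual AI Fairness 360 API; the function names and toy data are invented for the example.

```python
# Illustrative sketch of two standard group-fairness metrics for a
# binary classifier and a binary protected attribute (1 = privileged
# group, 0 = unprivileged group). Not the AI Fairness 360 API.

def favorable_rate(outcomes, groups, group):
    """Fraction of favorable (1) outcomes received by `group`."""
    member_outcomes = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(member_outcomes) / len(member_outcomes)

def statistical_parity_difference(outcomes, groups):
    """Favorable rate of the unprivileged group minus that of the
    privileged group: 0.0 means parity; a negative value means the
    unprivileged group is at a systematic disadvantage."""
    return favorable_rate(outcomes, groups, 0) - favorable_rate(outcomes, groups, 1)

def disparate_impact(outcomes, groups):
    """Ratio of favorable rates (unprivileged / privileged); values
    below 0.8 are commonly flagged under the 'four-fifths rule'."""
    return favorable_rate(outcomes, groups, 0) / favorable_rate(outcomes, groups, 1)

# Toy data: 1 = favorable decision (e.g. loan approved).
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group     = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]

print(statistical_parity_difference(decisions, group))  # ≈ -0.6
print(disparate_impact(decisions, group))               # ≈ 0.25
```

Here the privileged group receives favorable decisions at a rate of 0.8 versus 0.2 for the unprivileged group, so both metrics flag the model as biased; a toolkit would compute such metrics on the dataset and on the model's predictions, then apply mitigation algorithms when they fall outside acceptable bounds.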

The kit will check for bias during initial AI training as well as during testing. A final check will be carried out too, and the systems will then be tracked over time for accuracy, performance, and overall fairness.
