The YouTube channel ‘Agadmator’, run by Croatian chess player Antonio Radic, has been the unintended victim of AI algorithms set up to detect racist content and hate speech.
Last year, the world’s most popular YouTube chess channel, with one million subscribers and over 393 million total views, was blocked over charges of “harmful and dangerous” content.
The incident happened on 28 June 2020, when Radic’s channel was blocked in the middle of a chess show with Grandmaster Hikaru Nakamura: the AI algorithms mistook the two players’ conversation about black and white chess pieces for racism. Fortunately, the channel was restored within 24 hours.
Ashiqur KhudaBukhsh, a project scientist at Carnegie Mellon University’s Language Technologies Institute, and fellow researcher Rupak Sarkar tested the ‘black vs. white’ theory by using AI speech classifiers to screen 680,000 comments from five popular chess YouTube channels.
After manually reviewing a sample of 1,000 comments that the classifiers had flagged as hate speech, they found that more than 80% had been misclassified because of chess terms such as “black”, “white”, “attack” and “threat”.
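To see how such false positives arise, consider a deliberately simplified sketch in Python. Neither YouTube nor the researchers have published the exact models involved, so the keyword list and threshold below are hypothetical; the point is only that a classifier with no notion of chess treats ordinary commentary as a pile of loaded terms.

```python
# Illustrative toy only: NOT YouTube's moderation system, nor the
# classifiers used in the CMU study. It mimics the failure mode:
# scoring comments on "loaded" words with no awareness of domain.

# Hypothetical list of terms that carry weight in hate-speech corpora.
LOADED_TERMS = {"black", "white", "attack", "threat", "kill", "capture"}

def naive_flag(comment: str, threshold: int = 2) -> bool:
    """Flag a comment if it contains several loaded terms."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & LOADED_TERMS) >= threshold

chess_comments = [
    "White's attack on the black king is a real threat.",
    "Black should capture the knight before white castles.",
    "Great game, thanks for the analysis!",
]

for c in chess_comments:
    print(f"{naive_flag(c)!s:>5}  {c}")
# Output: the two innocuous chess comments are flagged (True);
# only the comment without chess terminology passes (False).
```

Run on the three sample comments, the screen flags both chess-specific lines and passes the generic one, mirroring the out-of-context errors the researchers observed.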
“We don’t know what tools YouTube uses, but if they rely on artificial intelligence to detect racist language, this kind of accident can happen,” KhudaBukhsh was quoted as saying.
In fact, according to a study commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the JURI Committee, “machine learning systems need a big amount of training data”. The study, titled “The Impact of Algorithms for Online Content Filtering or Moderation”, notes that while suitable training datasets are easy to assemble for some kinds of content, such as copyright-infringing material, for others, such as hate speech, “this distinction is often not easy”.
YouTube generally relies on both human moderators and AI algorithms to filter out prohibited content. But because most platforms’ classifiers are not trained on chess-specific language, the AI system could not differentiate hate speech from an ordinary chess conversation: it simply lacked the training and context needed to interpret the terms correctly.
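Continuing the toy example above (and reusing its naive_flag function), the sketch below shows one hypothetical mitigation: consult a cheap domain signal before trusting a generic flag. The chess vocabulary and the escalation policy are assumptions for illustration, not anything YouTube has documented.

```python
# Builds on naive_flag / LOADED_TERMS from the previous sketch.
# Hypothetical chess vocabulary used as a crude domain signal.
CHESS_VOCAB = {"king", "queen", "rook", "bishop", "knight", "pawn",
               "castles", "checkmate", "endgame", "gambit"}

def looks_like_chess(comment: str) -> bool:
    """Crude context check: does the comment use chess vocabulary?"""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & CHESS_VOCAB)

def moderate(comment: str) -> str:
    """Route generic flags through the domain check."""
    if not naive_flag(comment):
        return "allow"
    # Flagged but full of chess terms: likely a false positive,
    # so defer to a human instead of auto-blocking.
    return "human review" if looks_like_chess(comment) else "auto-block"

print(moderate("White's attack on the black king is a real threat."))
# -> "human review": chess vocabulary supplies the missing context
```

The design choice is one the study’s findings point toward: without domain-specific signals or training examples, purely generic classifiers will keep misreading chess talk.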
What’s more, in a blog post at the beginning of the pandemic, Google warned that “turnaround times for appeals against these decisions may be slower,” indicating that phone support would also be limited.
“We may see some longer response times and make more mistakes as a result,” Google wrote. “Some users, advertisers, developers and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels.”
Last year, YouTube, along with Facebook and Twitter, warned users that videos and other content might be mistakenly removed for policy violations, since the companies had temporarily chosen to rely more heavily on artificial intelligence and automated tools during that period. The announcements followed the collective decision to reduce the number of employees and contractors required to work on site, in order to slow the spread of the coronavirus.