Instagram is taking the fight against online bullying further after seeing positive results from a feature launched back in July that detects potentially offensive comments and asks users if they still want to post them.
The social platform uses machine learning to warn users when a caption contains potentially offensive language, giving them a chance to think twice before posting.
The update was announced on Monday, December 16th and is available in “select countries”, with plans to expand globally. The feature uses AI that learns over time to detect different forms of bullying and verbally offensive material.
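To illustrate the general idea, here is a minimal sketch in Python of a warn-before-posting flow of this kind. The word list and scoring function are hypothetical stand-ins for a trained classifier; Instagram’s actual model is not public.

```python
# Minimal sketch of a "think twice before posting" flow.
# OFFENSIVE_TERMS and toxicity_score() are toy stand-ins for a
# learned toxicity classifier, purely for illustration.

OFFENSIVE_TERMS = {"idiot", "loser", "ugly"}  # illustrative word list only

def toxicity_score(caption: str) -> float:
    """Stand-in for a trained model: fraction of flagged words."""
    words = caption.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in OFFENSIVE_TERMS for w in words) / len(words)

def submit_caption(caption: str, threshold: float = 0.2) -> bool:
    """Warn the user before posting if the caption looks offensive."""
    if toxicity_score(caption) >= threshold:
        answer = input("This caption may be offensive. Post anyway? [y/N] ")
        if answer.strip().lower() != "y":
            print("Caption discarded. Feel free to edit and try again.")
            return False
    print("Caption posted.")
    return True

if __name__ == "__main__":
    submit_caption("You are such a loser!")
```

The key design point, as described above, is that the user keeps the final say: the system nudges rather than blocks.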
“In addition to limiting the reach of bullying, this warning helps educate people on what we don’t allow on Instagram, and when an account may be at risk of breaking our rules,” the announcement about the new feature reads.
Online bullying has become a serious issue in recent years, with much of this behavior taking place on Instagram. A 2017 study found that 17% of teens had been bullied online.
This isn’t the first step Instagram has taken against cyberbullying. “Restrict”, a feature launched in October, lets users protect their profiles from unwanted interactions with online bullies. When you restrict someone, their comments on your posts are visible only to them.
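A rough sketch of that visibility rule, again in Python, might look like the following. The data model and field names are assumptions made for illustration, not Instagram’s API.

```python
# Hypothetical sketch of the Restrict visibility rule described above:
# comments from a restricted account are shown only to their author.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

def visible_comments(comments: list[Comment], viewer: str,
                     restricted: set[str]) -> list[Comment]:
    """Return the comments a given viewer should see."""
    return [
        c for c in comments
        if c.author not in restricted or c.author == viewer
    ]

comments = [Comment("alice", "Great shot!"), Comment("troll", "Ugly pic.")]
restricted = {"troll"}
print([c.text for c in visible_comments(comments, "carol", restricted)])
# ['Great shot!'] -- other viewers never see the restricted comment
print([c.text for c in visible_comments(comments, "troll", restricted)])
# ['Great shot!', 'Ugly pic.'] -- the restricted user still sees their own
```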
Instagram is also testing hiding the like counts on posts. The company says social media fosters competition among users trying to portray a perfect, likable life. The test is currently limited to a small number of users.
Instagram head Adam Mosseri has said that he hopes to encourage self-expression and connection instead of competition.