For months now, Facebook has been using its artificial intelligence program to filter posts for signs of suicidal thoughts. In a single month, CEO Mark Zuckerberg said, the system flagged more than 100 cases of users who appeared to be at risk of harming themselves, and the company passed those cases to first responders who could intervene. Now Facebook is rolling the system out beyond the US. #machinemagic
Facebook has been employing AI to save lives for months. The software helps the company flag worrying posts on the social media platform to human moderators, who can then approach the users in question with mental health advice or, if needed, contact first responders who have the resources to find them.
How does Facebook filter the posts correctly? The company explained in a post that it uses pattern recognition to identify statuses and live streams with suicidal content. Comments like “Are you ok?” and “Can I help?” are important indicators as well. While AI helps prioritize the posts, humans are equally, if not more, important: teams around the world, including specialists trained in self-harm, read the Facebook reports and take appropriate action.
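Facebook has not published its model, but the idea described above can be illustrated with a toy sketch: score a post by matching indicator phrases in its text and in friends' comments, then sort flagged posts so human reviewers see the highest-risk ones first. The phrase lists, weights, and threshold below are all hypothetical, chosen only to demonstrate the approach.

```python
import re

# Hypothetical indicator phrases for illustration only;
# Facebook's real patterns and training data are not public.
POST_PATTERNS = [r"\bend it all\b", r"\bcan't go on\b", r"\bwant to disappear\b"]
COMMENT_PATTERNS = [r"\bare you ok\b", r"\bcan i help\b", r"\bplease don't\b"]

def risk_score(post_text, comments):
    """Count indicator matches in the post and in comments, weighting
    comments higher since concerned replies are cited as key signals."""
    score = sum(bool(re.search(p, post_text.lower())) for p in POST_PATTERNS)
    score += 2 * sum(
        bool(re.search(p, c.lower())) for c in comments for p in COMMENT_PATTERNS
    )
    return score

def prioritize(reports, threshold=2):
    """Sort (post, comments) pairs so human review teams see the
    highest-scoring posts first, dropping those below the threshold."""
    scored = [(risk_score(text, comments), text) for text, comments in reports]
    return [text for score, text in sorted(scored, reverse=True) if score >= threshold]
```

A real system would replace the keyword lists with a trained classifier, but the pipeline shape — score, threshold, prioritize, hand off to humans — matches the workflow the company describes.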
The success rate in the US motivated Zuckerberg “to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. This will eventually be available worldwide, except the EU.” The exception is a result of the EU's data protection laws.
This is not the first time artificial intelligence has proven its worth in identifying people with suicidal thoughts. Researchers from Vanderbilt University Medical Center developed an AI that predicted with 80-90% accuracy whether a person would attempt suicide within the next two years.