
Facebook Will Use Police Camera Footage To Train Content-Removal AI

Facebook has announced that it is stepping up its efforts to deal with “terrorists, violent extremist groups and hate organizations” across its platform as well as on Instagram.

Facebook uses a large number of AI tools, in addition to its human moderators, to deal with the harmful content that appears on the platform daily. But all those extra eyes and ears did not help it very much during the Christchurch attacks in New Zealand earlier this year: the attack was livestreamed on Facebook, and the video was subsequently shared by users over 1.5 million times before the company took it down.

This, understandably, raised a lot of eyebrows as to how competent Facebook actually is in detecting extremist material across its platform. 

In the devastating aftermath of the attack, the company is now working to improve its content-monitoring technology.

“Some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, and the global response to it in the form of the Christchurch Call to Action, has strongly influenced the recent updates to our policies and their enforcement,” Facebook said in a post published on its Newsroom. “First, the attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content.”

To achieve that, the company has decided to use footage from police body cameras to better train its AI to recognize videos that contain gun attacks. On Tuesday, Facebook said that it will provide the United Kingdom’s Metropolitan Police with cameras to wear during firearms training exercises.

“The technology Facebook is seeking to create could help identify firearms attacks in their early stages and potentially assist police across the world in their response to such incidents,” Neil Basu, the U.K.’s top counter-terrorism police officer, said of the project. “Technology that automatically stops live streaming of attacks once identified, would also significantly help prevent the glorification of such acts and the promotion of the toxic ideologies that drive them.”

The images captured during these exercises will be used to train content-moderation systems that help the social network “rapidly identify real-life first person shooter incidents and remove them from our platform.”

This is just the first step, as Facebook is also discussing the project with law enforcement agencies in the U.S.
