
San Francisco PD Will Use AI Tool To Reduce Crime Charging Bias


Back in 2017, a study found that people of color receive more serious charges than their white counterparts when accused of a crime, even at the initial booking stage, which usually happens hours before a district attorney can step in to review the case.

“We identified systematic differences in outcomes for Black, Latinx, and White defendants along almost all margins,” the study said. “Using cutting-edge statistical decomposition techniques, we could isolate racially disparate booking charges as the driver of racially disparate criminal justice outcomes. [..] The influence of booking in downstream decisions made by district attorneys, public defenders, and judges can create a system of ‘race neutral’ disparity, where district attorneys are responding directly to the charges brought to them by the police, not a client’s race. However, the data suggest that the charges brought by the police are not, in fact, race neutral.”

So the question remains: how do we fight this bias?

Until now, San Francisco has relied on a manual process: the city removes the first two pages of the documents detailing the accused, but prosecutors still see the rest of the report, which may hold details that point to the accused’s racial background.

However, San Francisco seems to have found a different way to deal with the situation: technology. A ‘bias mitigation tool’, to be more exact, which uses AI to redact the information that identifies a subject’s race from police reports.

The hope is that prosecutors will no longer be influenced by racial bias when deciding whether to charge someone with a crime.

The tool blocks out any race descriptions as well as characteristics like eye and hair color. People’s names, locations, and neighborhoods that might lead someone to assume the accused belongs to a particular racial group are also removed.

San Francisco District Attorney George Gascón told the media during a briefing yesterday that a prosecutor seeing a name like Hernandez could immediately tell that the person is of Latino descent, which can trigger racial bias.

“When you look at the people incarcerated in this country, they’re going to be disproportionately men and women of color,” Gascón said.

The AI tool also removes police officers’ badge numbers and other similar identifying details, to avoid situations where prosecutors might be biased towards the officers themselves, racially or otherwise.

The tool was developed by Alex Chohlas-Wood and his team at the Stanford Computational Policy Lab. It recognizes words in the report via computer vision and replaces specific identifiers with generic placeholders such as Location, Officer #1, and so on.
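The lab’s actual implementation hasn’t been published, but the behavior described above, finding identifying terms and swapping them for generic or numbered placeholders, can be approximated in a few lines. Below is a minimal Python sketch, assuming plain text has already been extracted from the report; the term lists, the redact_report function, and the badge-number pattern are illustrative assumptions, not the lab’s code.

```python
import re
from itertools import count

# Illustrative term lists only; the real tool reportedly uses computer
# vision and trained models to find identifying text, not fixed lists.
# Longer phrases come first so "black hair" is matched before "black".
FEATURE_TERMS = ["brown eyes", "blue eyes", "black hair", "blond hair"]
RACE_TERMS = ["black", "white", "hispanic", "latino", "asian"]
LOCATION_TERMS = ["Mission District", "Bayview"]  # hypothetical examples

def redact_report(text: str) -> str:
    """Swap identifying details for generic placeholders."""
    # Blank out explicit physical-feature and race descriptions.
    for term in FEATURE_TERMS + RACE_TERMS:
        text = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", text, flags=re.IGNORECASE)

    # Replace neighborhood names with a generic "Location" placeholder.
    for term in LOCATION_TERMS:
        text = re.sub(rf"\b{re.escape(term)}\b", "Location", text, flags=re.IGNORECASE)

    # Give each distinct badge number a numbered placeholder ("Officer #1",
    # "Officer #2", ...) so individual officers cannot be identified but
    # the narrative of who did what stays readable.
    officer_ids: dict[str, str] = {}
    counter = count(1)

    def officer_placeholder(match: re.Match) -> str:
        badge = match.group(1)
        if badge not in officer_ids:
            officer_ids[badge] = f"Officer #{next(counter)}"
        return officer_ids[badge]

    return re.sub(r"badge\s*#?(\d+)", officer_placeholder, text, flags=re.IGNORECASE)

if __name__ == "__main__":
    report = ("Badge #4521 detained a Black male with brown eyes near the "
              "Mission District. Badge #4521 requested backup from badge #7788.")
    print(redact_report(report))
    # -> "Officer #1 detained a [REDACTED] male with [REDACTED] near the
    #     Location. Officer #1 requested backup from Officer #2."
```

Note that the numbered placeholders are consistent within a report: the same badge number always maps to the same Officer #N, so a prosecutor can still follow who did what without knowing who the officers are.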

Even so, prosecutors’ final decisions will still be based on the original, full, unredacted reports, and any video evidence in the file will still reveal the suspect’s race. For now, then, it’s unclear how much the tool will genuinely reduce racial bias from the beginning of an investigation all the way to its end.
