The investigation into the fatal Uber self-driving crash in March has revealed crucial information about the car's systems.
According to a report by The Information, the crash happened because the self-driving software "decided" to ignore the pedestrian.
The report explains that the software was generating a large number of "false positives" – flagging landscape features and other objects that posed no real threat – so Uber decided to tweak the settings.
As it turns out, the detection threshold had been tuned down so far that the car simply ignored the pedestrian, despite seeing her clearly.
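The trade-off described above can be sketched in a few lines of code. This is a purely hypothetical illustration – the function, labels, and confidence values below are invented for the example and have nothing to do with Uber's actual software – but it shows how a cutoff tuned to suppress harmless clutter can also filter out a genuine hazard:

```python
# Hypothetical sketch: a confidence threshold that discards detections.
# Raising the cutoff to silence false positives (bags, shadows) can also
# drop real obstacles whose confidence scores fall below it.

def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"label": "plastic bag", "confidence": 0.35},  # harmless clutter
    {"label": "shadow",      "confidence": 0.40},  # harmless clutter
    {"label": "pedestrian",  "confidence": 0.55},  # a real hazard
]

# A permissive cutoff flags everything, clutter included:
print(len(filter_detections(detections, 0.30)))  # prints 3

# A cutoff raised to silence the clutter drops the pedestrian too:
print(len(filter_detections(detections, 0.60)))  # prints 0
```

In other words, every notch the threshold moves to reduce nuisance alerts also raises the bar a real pedestrian must clear to be acted on.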
Tragically, this decision led to the death of Elaine Herzberg, a 49-year-old woman.
To make matters worse, according to The New York Times, Uber knew months before the crash that its program "was not living up to expectations". Even so, in October 2017, the company took a huge risk and began testing autonomous cars with just one safety driver instead of two.
When the tragedy occurred, authorities initially believed the driverless car could not have swerved to avoid the pedestrian, even with a human behind the wheel. Now, the extent of Uber's recklessness has come to the surface.
The company has already been ordered to stop all self-driving car tests, and as the investigation progresses, the chances of further testing seem extremely low.
Uber commented on the report but did not address the findings directly:
“We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”