Video of the self-driving Uber car crash that resulted in the death of a pedestrian was released publicly last Wednesday, showing a woman walking her bike across a dark road and the vehicle's safety driver looking down right before the crash.
The incident has sparked questions about whether the pedestrian was at fault, or whether the car's LIDAR system, which is used to detect objects, should have sensed her coming (or both). There have also been reports that Uber disabled the Volvo SUV's collision-avoidance technology before the crash – but it is common for companies to disable a vehicle's built-in autonomous driving systems while developing and testing their own.
In California, Uber will lose its permit to test self-driving cars on the road.
There's also some question about the video itself – graphic in nature and released during an ongoing police investigation – and whether it has value or just adds to confusion.
What is the pedestrian detection tech in driverless vehicles, and was it faulty in this situation? What is the value of releasing this video to the public? And what about the larger trend of video footage being released to the public? What are the inherent limitations of camera footage, and how is it perceived?
Jeffrey Miller, expert in driverless vehicle systems and computer science education; associate professor of engineering practice at the University of Southern California
Jack Bratich, associate professor of journalism and media studies at Rutgers