Re: Clear cut...
Whoa, let's wait a bit and get the facts first, right? Uber or no Uber.
First, we don't know whether the pedestrian did something that made avoidance genuinely difficult.
Second, I've always had my doubts about systems that are good enough to drive almost all the time, with a human as failsafe. If the human then fails, throw the book at them.
That model works well with autopilot systems on aircraft. But the crucial difference is that pilots have plenty of time to take over at cruising altitude, and they are very much in the loop, if not outright in control, at critical phases like takeoff and landing. Commercial pilots are also top-level professionals in their field, and they benefit from decades of massive investment in researching failure root causes in aviation.
Expecting a trained operator who is a passive bystander almost all the time to react instantly to avert an accident, every single time it's needed, is an unrealistic picture of how humans work. Yes, an attentive backup driver may get it right 99% of the time, but they will never match someone who is already in control of the vehicle. This is true for test drivers, and it will be ten times truer if regular Joes and Janes are expected to instantly correct blunders by their autonomous vehicles.
Even with a backup driver, the AI really needs to be very, very good at avoiding accidents. That is going to require some rethinking of test protocols, even if AIs surpass regular human drivers in safety. Maybe we also need to mandate some failure-analysis collaboration between competing AI companies - we don't want the same mistakes made over and over because of commercial secrecy.
Thoughts go out to the family of the killed pedestrian. And, yes, to the driver and engineers too. This is a sad moment; there's no need for gloating and finger-pointing. And, yes, that remark extends to the headline, despite my dislike of Uber.