No. The Uber accident was not caused by “AI”
It was bad AI, and the human contribution to the failure isn't great either.
Your point about Uber cutting wires is incorrect. The standard collision avoidance system is disabled while the car is under computer control because it conflicts with the more sophisticated sensor suite; it is enabled under human control.
The NTSB writes in its preliminary report:
"The report states data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."
The cause of the accident seems to be that, despite tracking the object for nearly five seconds, the self-driving system failed to predict that it was on a collision course until it was too late.
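A rough kinematics check makes "too late" concrete. The 43 mph speed and the 6 s / 1.3 s timings come from the report quoted above; the constant-speed approximation and the ~7 m/s^2 emergency deceleration are my assumptions (a typical dry-road figure), so treat this as a sketch, not the NTSB's analysis:

```python
# Rough kinematics from the NTSB preliminary report figures.
# Assumed (not in the report): constant speed until braking, and an
# emergency deceleration of ~7 m/s^2 on dry pavement.
MPH_TO_MS = 0.44704

speed = 43 * MPH_TO_MS                       # ~19.2 m/s
decel = 7.0                                  # m/s^2, assumed

dist_at_detection = speed * 6.0              # distance at first detection, ~115 m
dist_at_decision = speed * 1.3               # distance when braking was deemed needed, ~25 m
stopping_dist = speed**2 / (2 * decel)       # distance to stop from 43 mph: v^2 / (2a), ~26 m

print(f"at detection: {dist_at_detection:.0f} m away")
print(f"at braking decision: {dist_at_decision:.0f} m away")
print(f"needed to stop: {stopping_dist:.0f} m")
```

On these assumptions the car had well over 100 m of room when it first saw the pedestrian, but by the time the system decided braking was needed the remaining distance was roughly equal to the full emergency stopping distance.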
The human failing is the design decision not to implement emergency maneuvers of any kind. It is not realistic to expect the human operator to intervene in an emergency - there simply isn't enough time to assess the situation and act (https://www.sciencedirect.com/science/article/pii/S1369847814001284). Google has already tested the assumption that a human supervisor will remain constantly vigilant and concluded that it is false (http://uk.businessinsider.com/larry-page-google-self-driving-car-autonomous-2016-9?r=US&IR=T).
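To put illustrative numbers on "not enough time": takeover reaction times of 1.5-2.5 s are assumed here for the sake of argument (the cited study discusses takeover times in this general range, but these exact figures are mine), against the 1.3 s window between the system's braking decision and impact - and remember the system wasn't even designed to alert the operator:

```python
# Assumed human takeover reaction times vs the 1.3 s window reported
# between "emergency braking needed" and impact.
warning_window = 1.3                  # s, from the NTSB preliminary report

for reaction in (1.5, 2.0, 2.5):      # s, assumed for illustration
    deficit = reaction - warning_window
    print(f"reaction time {reaction:.1f} s: operator is {deficit:.1f} s too late")
```

Even the fastest assumed reaction overshoots the window, so a supervising human cannot plausibly save the situation at that point.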
When it comes to autonomous vehicles, there is no halfway house. The vehicle has to manage on its own, come hell or high water.