What about cases where a malicious actor alters the AI? How would you prove it? I also don't see how you could prove negligence in something so ambiguous and difficult to unravel. What if the failure was caused not by the programming but by the initial data set used to train the system?
I don't think we will have any answers to this until something does go wrong, but the discussions still need to be had.