Some people just like to drive.
Granted. I count myself among that some, but I also wouldn't mind a car that can drive itself when I don't want to (e.g. in congested urban areas, or on long-distance trips).
Some people don't trust the automation so they're going to want to drive.
And some people still don't trust airliners despite a proven safety record. This will be a generational change.
There's no software designer in the world who's ever going to be smart enough to anticipate all the potential circumstances this software is going to encounter.
That's what testing is for. Is an Apollo Program analogy too much of a cliché here?
I can give you an example I've seen mentioned in several places. My automated car is confronted by an 80,000 pound truck in my lane. Now the car has to decide whether to run into this truck and kill me, the driver, or to go up on the sidewalk and kill 15 pedestrians.
Presumably not a self-driving truck. This is a bit of a furphy IMNSHO, but why has the pedestrian count in this old trolley-problem thought experiment lately crept from 5 to 10 to 15? Why not go for 50 or 100?
A self-driving vehicle is not a philosopher. All it has to do is lessen the severity of a collision, no matter how contrived the circumstance, and it's proven its value. If it can avoid the collision entirely, perhaps only by seeing the errant 80,000 pound truck well before a human would have, so much the better.