I wonder if the problem here is the question being asked?
Rather than highlighting the crash "ooh look in this situation the Tesla would crash" should we not be asking "in this situation did the Tesla do the right thing"?
If I was driving properly, with no assistance, I would be aware of the traffic around me in all directions, and presented with this I would know whether I could swerve right or left, or whether I should brake (driving properly, remember).
(But really, if I was driving properly I might well have been aware of the obstruction from half a mile away, if I was looking ahead and had a line of sight.)
Does the Autopilot on the Tesla have awareness of the road users around it, or is braking its only choice? Or would the majority of human drivers be lax in knowing what's ahead, unable to brake as hard as the Tesla can (recent news ignored), and so have the same accident but more energetically?
In that case, is the Tesla's response statistically safer: still having an accident, but at lower energy?
I'm no great fan of driving assistance beyond active cruise. I agree with the studies showing that if the driver isn't required to be in control, their concentration will go. That just seems to be the way we are made. So either very little assistance or full automation. The Tesla halfway option seems to be the worst, most dangerous choice.