Re: @ Chris Miller
I agree - in principle, greater automation has led to greater safety; the stats speak for themselves. I can't think of any instance where pure automation on its own has resulted in a crash, though there have certainly been cases of the automation 'losing it' and human pilots rescuing the situation. And of course, before the extensive use of computer automation in the cockpit, almost all accidents could be ascribed to mechanical failure and/or human error.
A hidden trap with automated systems is their tendency to deal with a deteriorating situation by applying correction until they run out of authority, then dropping out and handing an almost unflyable aircraft to the human pilots. An example was the Turkish 737 that crashed near Schiphol: it's true that the pilots could have spotted what was happening, but in this case, sadly, they didn't.
I've a good friend who is heavily involved with software development and fault analysis in this area (safety-critical software). I put it to him (speaking as a rank amateur) that it ought to be possible for the automation to project the current trend and give some advance warning, along the lines of 'if things carry on this way, I'm going to run out of control authority in 30 seconds'. Of course, this has been thought of. The arguments against are too many false positives and the extra layer of complexity it would introduce into the software.
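For what it's worth, the projection I was imagining could be as crude as a straight-line fit over the recent control commands. A toy sketch in Python below (purely illustrative - all the names are mine, and real safety-critical avionics code looks nothing like this):

```python
# Toy illustration of the 'advance warning' idea: fit a linear trend to
# recent autopilot control commands and estimate how long until the trend
# reaches the authority limit. Hypothetical names throughout.
from typing import Optional, Sequence

def time_to_saturation(
    times: Sequence[float],     # sample timestamps, seconds
    commands: Sequence[float],  # commanded deflection at each timestamp
    limit: float,               # maximum deflection the autopilot may command
) -> Optional[float]:
    """Estimated seconds until the fitted trend hits `limit`,
    or None if the trend is flat or moving away from the limit."""
    n = len(times)
    mean_t = sum(times) / n
    mean_c = sum(commands) / n
    # Ordinary least-squares slope of command vs. time
    num = sum((t - mean_t) * (c - mean_c) for t, c in zip(times, commands))
    den = sum((t - mean_t) ** 2 for t in times)
    if den == 0:
        return None
    slope = num / den
    if slope <= 0:
        return None  # commands not trending toward the limit
    intercept = mean_c - slope * mean_t
    t_hit = (limit - intercept) / slope   # when the fitted line crosses the limit
    remaining = t_hit - times[-1]
    return remaining if remaining > 0 else 0.0

# Commands creeping up at 1 unit/s against a 30-unit limit:
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cmds = [10.0, 11.0, 12.0, 13.0, 14.0]
print(time_to_saturation(ts, cmds, 30.0))  # -> 16.0 (seconds of authority left)
```

The false-positive objection is visible even here: noisy data, a briefly steep slope, or a turbulence transient would all trigger spurious warnings, and any smoothing or debouncing added to suppress them is exactly the extra complexity my friend was talking about.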