Computer knows best?
Yes, this reignites the old debate about who should have the final say when it comes down to human-vs-computer (or, in broader terms, human perceptions-vs-instruments). But in truth that debate needs to be informed by people who are experts in the subject matter, not (forgiving the few exceptions on these pages today) BTL commenters who have never worked in aviation.
The problem is, there is no perfect, dogmatic answer. Those are only open to ill-informed amateurs. The software is generally superb, but it is not foolproof, and, far more significantly, it has to rely upon information fed to it by sensors. If the sensors (be they pitot tubes, AoA vanes, radio altimeters, etc., all of which have been implicated in fatal crashes within the last 25 years) provide bad data, then GIGO applies. It is why Airbus systems, for example, have multiple degradation modes depending on what information is missing or suspect: 'Alternate Laws' handing progressively more control back to the flight crew. It is why Boeing, ironically as it turns out, have been known for the philosophy of ultimately un-restricting the pilot's ability to operate the controls, even outside safe limits, in extreme circumstances. (This isn't a Boeing v Airbus contest: both build superb planes. I personally don't like the concept of the zero-tactile-feedback sidestick, but that's just me.)
I suspect any analysis of the last 30 years' major airline incidents will show very few which were the sole result of sensor/instrument/computer failure. Even disasters whose precipitating event was such a failure tend to have been survivable, if only the pilots had done the right thing. AF447 would have survived if the pilots had followed the standard procedure for multiple conflicting airspeed warnings. The Lion Air plane would have stayed airborne if its flight crew had known (or remembered?) how to disable the stall prevention system. (Aeroflot 593 would have lived if the pilots had ... just ... let ... go, though that one wasn't precipitated by instruments.)
It is particularly saddening to think that Boeing's engineers may well have had incidents like AF447 in mind when setting up the stall avoidance system. One could argue that they should have allowed sufficient sustained back pressure on the control column to disable that system, much as other aircraft's autopilot channels can be disabled by positive, sustained pilot input. The counter-argument, emphasised in the case of AF447, is that a panicked pilot may just keep pulling back, even as the plane plummets. Like I said: no perfect answers.
One last point. This tragedy puts me in mind of Scandinavian 751. In that crash, a safety system of which the crew were unaware was triggered and caused an otherwise avoidable crash.*¹ Sound familiar? I'll allow the interested to read the Wiki article, but the parallels are a little eerie.
*¹ During climbout the plane had suffered surging from both rear-mounted engines, caused by transparent ice breaking off the wings. The crew followed the correct procedure, retarding the throttles to keep the damaged engines alive long enough to allow an emergency landing, but a safety system, unknown to them, advanced the throttles again, thereby causing the engines to shake themselves to bits. (The good news is, the plane pancaked in a snowy meadow after losing a lot of speed clipping the hair of a conifer forest, and although it broke into pieces on impact, there was no fire and everyone survived. A real feel-good story. Pity it all happened too fast for a movie.)