"Nobody imagined a problem like this could happen to three engines,"
Lack of imagination when thinking up things that can go wrong. That sums it up I guess.
A dodgy software installation that deleted vital files caused last month's Airbus A400M transport plane crash, in which four people died, it is claimed. On May 9, a test flight of the A400M, intended to replace the aging Hercules as a mainstay of NATO's air mobility fleet, crashed in Spain, killing four of the six crew. …
You would think that the system would report missing configuration files, wouldn't you? But in an embedded system, the more checking you add the more things there are to go wrong. A ship can simply sit in the water - most of the time - a car can generally remain stationary without damage - but suppose a software checking system itself develops an error at several thousand feet? What are the options?
What if it reports a problem with the AE-35 antenna unit, for instance?
I'm glad I don't have to make those decisions.
"I'm glad I don't have to make those decisions."
You think. People's lives are routinely in the hands of the low-wage heroes who maintain armco barriers. Or the unfortunate oiks making sandwiches. Or the unthanked people doing airport security. Or the people assembling third-party phone chargers. Or designing theme park rides. Or writing engine (or lift) software. Etc., etc.
My guess is that around one third of the population work in roles where the worst case scenario is somebody dies. I've worked in roles (many, many years ago) where if I got it wrong then an East German village might have been hit with a tactical nuke.
Be pleased if the worst result of your unintended error is a few zeroes in an SAP register.
"Or the unthanked people doing airport security."
"People"? Have you met them? "People" is not a descriptor I'd use.
"Soulless, emotionless, remorseless, unfeeling, unthinking drones" maybe. But that's being very, very polite about it.
TSA agents don't. Sympathy, empathy, emotion...it's drilled out of them. Cored out with a melon baller and burnt in front of the new droid. They exist to make everyone's life as miserable as humanly possible. They exist to presume guilt. They exist to cause suffering.
TSA agents aren't people. They may once have been, but by the time they don that uniform, they're something else entirely. Something...darker. An evil that festers and ferments and infects our society from the edges in. A cold self-loathing and self-doubt that destroys not only the individuals, but the spirit of entire nations.
TSA agents are not merely soulless automatons, their entire purpose is to turn all of us into obedient soulless automatons. And the only tools they are permitted to use are fear, suspicion and intimidation. Not enough that they are suspicious, spiteful, petty, arrogant, emotionless and inhuman, their actual job is to make us turn against each other as quickly and efficiently as possible. To make us obedient to them, but hostile towards one another, even our own family.
TSA agents do not have feelings. They have functions. And I pity them for it. Some might even have been good people, once. I do hope that maybe, when their sentence is served, they can reclaim some fraction of that lost humanity. Unfortunately, for every one that leaves there are a dozen waiting to take their place.
"I make a point of never replying to Mr. Pott as the result is usually a tirade, but it does occur to me that given the known proclivities of our intelligence agencies, who may even read the comments on El Reg, he may be writing with a view to being prevented from entering the US in future."
I'm 100% positive the Americans know that I am no fan of the TSA by now. I obey them. That doesn't mean I have to think they're anything other than power-mad drones drained of their humanity.
If the US of NSA wants to prevent me from entering because I believe all people - regardless of nationality - deserve civil rights and are entitled to the preservation of dignity, well, there's not much I can do about that, is there. I'm not going to change what I believe in that regard, ever.
Trevor Pott: "TSA agents are not merely soulless automatons, their entire purpose is to turn all of us into obedient soulless automatons. And the only tools they are permitted to use are fear, suspicion and intimidation"
....and a fanatical devotion to the Pope.....I'll come in again! ;-P
"Soulless, emotionless, remorseless, unfeeling, unthinking drones" maybe. But that's being very, very polite about it.
Certainly abuses by the TSA are legion, and actual threats caught by them rather difficult to identify.1 Yes, they confiscate all manner of stupid dangerous crap from passengers, but we went for decades with people bringing that junk on planes without widespread disaster.
And while TSA personnel are not Federal officers or LEOs of any sort, many often act as if they are; and many have a penchant for getting local LEOs (particularly the petty-martinet sort which plagues the US police state) to do their dirty work for them.
But that said, I have to say that my experiences with the TSA at my local airport have been, I believe, universally pleasant. That's because I fly out of a small regional airport, where crowds and queues are almost nonexistent and folks are generally relaxed and in a good mood. I avoid flying out of hubs whenever possible - and when I do, I try to go to a less-busy security checkpoint (like the Concourse E/F checkpoint at O'Hare). It makes a world of difference.
And, for what it's worth, when I have a good experience with the TSA, I do thank them. Nothing wrong with returning courtesy for courtesy, regardless of what I think of the institution.
1As documented by any number of sources. Schneier has a number of pieces on the subject. Kevin Underhill collects idiotic legal moves by the TSA in his delightful Lowering the Bar blog. And there are various more rigorous studies.
"And, for what it's worth, when I have a good experience with the TSA, I do thank them. Nothing wrong with returning courtesy for courtesy, regardless of what I think of the institution."
Also, for what it's worth, I've never had an issue with CATSA, and they've been absolutely understanding and wonderful border guards who are worthy of - and receive - my own thanks and gratitude each time. That said, I don't view CATSA as "airport security". They're customs agents. The security folk seem to be entirely separate, because CATSA don't feel the need to threaten or bully. The TSA, on the other hand...
As an embedded software engineer (and file system developer) of over 30 years, I see numerous problems with this.
Even with the most high-reliability file system, I would not store critical configuration data in a config file. Or if it really has to be in a file, it should at least be in a read-only file system that cannot be corrupted by other file system activity.
Checking is OK, so long as it only happens at specific times (e.g. at power-up) when the system is safe. If config data etc. is not found, then the system needs to provide some sort of default as a backup.
Unfortunately, far too much of this software is designed to crash when unexpected conditions occur (that's what something like an assert or an exception typically does). That's what caused the Ariane 5 crash (http://en.wikipedia.org/wiki/Ariane_5): basically, an unimportant variable in a process that was not important at the time went out of range, and the Ada exception mechanism shut the software down.
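The fall-back-to-defaults behaviour described above might look something like this in C. This is only a sketch: the parameter name, range limits, and safe default value are all invented for illustration, not taken from any real engine control unit.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-engine calibration parameter with a known-safe fallback. */
typedef struct {
    double torque_cal;   /* illustrative calibration factor */
    bool   is_default;   /* true if we fell back to the safe value */
} engine_cfg;

#define TORQUE_CAL_MIN          0.8
#define TORQUE_CAL_MAX          1.2
#define TORQUE_CAL_SAFE_DEFAULT 1.0   /* conservative, unoptimised value */

/* Validate at power-up only: never assert, never crash in flight.
 * A missing or out-of-range value is replaced by the safe default,
 * and the caller is told so it can raise a maintenance flag. */
engine_cfg load_engine_cfg(const double *stored_value)
{
    engine_cfg cfg = { TORQUE_CAL_SAFE_DEFAULT, true };

    if (stored_value != NULL &&
        *stored_value >= TORQUE_CAL_MIN &&
        *stored_value <= TORQUE_CAL_MAX) {
        cfg.torque_cal = *stored_value;
        cfg.is_default = false;
    }
    return cfg;
}
```

The key design choice is that a missing value is reported, not fatal: the system degrades to a known-safe (if inefficient) configuration instead of taking the assert-and-die path.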
@ Charles Manning
I agree with you re the need to minimise checking and only do it at defined times - engine start-up being the obvious one - and if the report is accurate then I'm very surprised that a missing config file wasn't detected during the start-up checks.
However, I disagree with your comments about exceptions: the whole point of exceptions is that you drop into an exception handler. If that handler can't cope with the exception that has occurred then that is a design fault with the software. It's not a fault of the exception mechanism or the language that provides it.
In the Ariane case, the problem was not that an exception occurred but that it wasn't propagated up. So a horizontal velocity sensor went out of range, thought that it had suffered a fault and put diagnostic data on the data bus. Systems reading the bus, however, didn't realise and interpreted the diagnostic data as instrument data and this led to the destruction of the rocket.
In the embedded SW I worked on (only 10 years of experience for me, I'm still young :p), the checksums of all SW and configuration tables were checked during startup (well, only during cold start: if the equipment had been turned off for more than 10 seconds, or if forced by exiting the dataloading mode). Such problems would have been caught.
Furthermore, most configuration parameters, especially the critical ones, are retrieved at startup and checked; if detected as faulty (mostly out of range), they are either replaced by safe (but unoptimised) default values or the system is prevented from being used, and the issue is reported to the flight computers.
Looks like some "SW architects" forgot those little details...
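A minimal sketch of the cold-start checksum idea described above, in C. The Fletcher-16 algorithm here merely stands in for whatever CRC a real avionics data loader would use, and the names are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simple 16-bit Fletcher checksum over a parameter table.
 * (Real avionics would use a stronger CRC; this shows only the shape.) */
static uint16_t fletcher16(const uint8_t *data, size_t len)
{
    uint16_t sum1 = 0, sum2 = 0;
    for (size_t i = 0; i < len; i++) {
        sum1 = (uint16_t)((sum1 + data[i]) % 255);
        sum2 = (uint16_t)((sum2 + sum1) % 255);
    }
    return (uint16_t)((sum2 << 8) | sum1);
}

/* Cold-start check: recompute the checksum and compare it against the
 * value recorded when the table was data-loaded. Returning false means:
 * refuse to dispatch and report the fault to the flight computers. */
bool config_table_valid(const uint8_t *table, size_t len, uint16_t recorded)
{
    return fletcher16(table, len) == recorded;
}
```

A wiped or corrupted table no longer matches its recorded checksum, so the fault surfaces on the ground at cold start rather than in the air.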
When I read that statement in the story I interpreted it as:
Because of variations in manufacture and the tight tolerances of the flight, during installation certain bits of information specific to this engine on this plane are set and recorded for use by the system. So the data will be different for each engine.
How, other than a "config file", are you going to store that data? Yes, you can argue there should be safety mechanisms to prevent it being inadvertently overwritten, but depending on what else you are updating you might already be in a privileged context anyway.
Yes, I think a check at start up was in order, although in this case it should flash a warning and ground the flight, not supply some default that may cause the same sorts of issues later during flight anyway.
"How, other than a "config file", are you going to store that data?"
There are lots of assumptions being made both in the media and round here.
If you want to form a more informed judgement on what may or may not have happened (still without actually having the relevant *facts*, which are still in short supply), you may want to go read about "data entry plugs" (aka rating plugs).
I have no idea whether Data Entry Plugs are used on the A400M, and given that my earlier reference to them on this very site is now the top hit on "data entry plug a400m", maybe they're not used. But they are widely used in the aircraft engine industry, and maybe there will be impact here.
Using the word "file" to describe the parameter data store may be a bit misleading in this context. It's more likely the same meaning as is used in the phrase "register file", not in the phrase "FAT32 file".
I think, irrespective of the method of storing the values, testing should have determined they were indeed missing.
I've done, albeit as a hobby, work with car ECUs, and although you can start a car with a variety of settings missing or misconfigured, you certainly can't take it past the failsafe region without seeing obvious issues. E.g. miscalibrate/misconfigure the airflow sensor and you'll be able to start the engine and it'll idle, but take the revs above 3,000 rpm (or whatever your limp-home values are) and you'll see it enter limp-home mode as soon as the other sensors' data diverges from airflow.
I'm sure they had a more complex fault than that, simply as they had enough power to get into the air.
I would very much like to see more info on what happened, as I can only speculate that it was something like a cloning of the settings from the running engine to the others that put a sensor just out of bounds as altitude increased; but what sensor would read like that is a curiosity to me.
RE: Ariane 5 - we used to get the ESA Bulletin at the time. ESA were very heavily plugging 'HOOD' as the answer to everything. What could possibly go wrong?
"HOOD is thus the method of choice for large, long-lived projects where reuse, reliability and maintainability are key issues.
Since 1999, the method is considered stable."
>But in an embedded system, the more checking you add the more things there are to go wrong.
However, you will have difficulty convincing me that this particular embedded system was constrained to 16K of ROM. Hence the question is why wasn't the absence of these key files picked up in the preflight tests/checks and flagged to the pilots.
But your IT support is probably not for safety critical systems and probably does and should have the authority to implement changes as necessary after appropriate sign off. A spreadsheet not opening as expected generally isn't too bad. A virus/"generic nasty" being received and being propagated could be more important.
There are many formal processes around that attempt to address the probability of an event occurring and the consequential events, e.g. FMECA.
Don't for one moment believe that your "IT support" has *any* similarity to safety critical/hard real time systems.
That really depends on what IT you are supporting and where it is utilized. Failure in my systems, software and hardware, may (likely would, in one system) result in millions of dollars of damage, lives lost, and international news coverage (the highest echelons). Certainly spectacular. I always kept the fact that lives matter as my number one priority. Who/what the help desk supports can be vitally important, which is why I kept abreast of their doings even when neck deep in engineering something.
I know I've raised this before, but what id10t thinks a missing configuration file, or any missing values, is acceptable? Sanity checks abound in all my doings, and this should never have been allowed to proceed from power-up to take-off unless the battle-short(s) were enabled. And no, using an assert or exception to crash (my) software is not an acceptable method. Crash having two meanings here.
And therein the point: if there were critical bits of code missing, that engine should never have started.
That said, the phrase 'file' could simply be a translation through three layers of lawyers, PR bods, and reporters when what might have been meant was 'array' or something similar; something baked into the executable which might have started life as a file on someone's workstation, but which is thereafter internal to the finished software... just guessing here, of course. But if that's the case, no amount of checksums and hashcodes is going to find the missing data; you'd actually need to check that the data itself existed *and* made sense. If it's gone missing at the compile stage, that's a different problem.
The Reuters article says that configuration files for three engines were wiped. In El Reg it says "the torque calibration parameters for the engines were wiped during the installation" where "the engines" refers to the three previously mentioned engines in the paragraph before, not all four engines.
And I'm sure even an embedded system can check the config files at startup and turn on a big red light in the cockpit if something's wrong with them.
All sorts of possibilities. The two which come immediately to mind:
- that engine happened to be particularly well balanced, so values near zero didn't adversely affect it as badly.
- another safety check in the software that said you had to have at least one engine powered while in the air. That is, the fourth engine can only fail for actual mechanical reasons, not just sensors and if it does fail, you try to start one you took offline for sensor if any such engines exist.
It's perfectly possible to control and land pretty much any (propeller or otherwise) aircraft on zero or more engines working. But it is true that e.g. a traditional constant-speed propeller can be tricky when loss of power occurs in that it tries to maintain a fixed RPM by varying the pitch and that means wind-milling (and overwhelming drag with dramatic consequences for glide ratio). Unless you notice what happens and take corrective action by "feathering" the propeller. (Or three.)
But then I would assume that modern ECUs are slightly different animals and possibly don't require quite the same amount of intervention as traditional constant-speed propellers do as far as loss of power and feathering is concerned, leaving you only with the minor inconvenience of loss of power and possibly slightly limited range of (still perfectly controlled) flight. But then again we're talking about a buggy/failed ECU here so who knows. (Well the accident investigators do or at least will know but you know what I mean.)
Now while it is also possible to completely lose control of the aircraft if you fail to maintain sufficient airspeed in an asymmetric thrust situation e.g. during take-off and initial climb I wouldn't expect that to happen to Airbus test pilots (with them being, well, test pilots, and engine failure and asymmetric power being some of the things even airline pilots routinely train for.)
So no, loss of power on 3 engines does not make the aircraft very difficult to control or land, it just may slightly limit the range of the flight and that may in turn limit your choice on the place to land, sometimes leaving you with only sub-optimal options available. Which is what I hope happened here (the alternative being loss of control caused by pilot error when handling the loss of power).
Actually, the bird should have been perfectly controllable with one fully functional engine. In fact they were well on their way to a successful emergency landing in that field when they hit a powerline pole, apparently damaging a wing and resulting in a crash landing and subsequent fire. Without that impact, the engine failure probably would have resulted in just some red faces.
Maybe, maybe not.
The problem is that if the engines were at flight idle then the propellers would be at low pitch and thus very draggy, so with three engines in this condition, and the one fully functional engine furthest from the fuselage (and hence generating quite a bit of yaw), control would have been pretty difficult and the sink rate would probably have been pretty high too. Add in an obstacle, and one that can ignite spilled fuel very easily, and you get the result from May.
Very sad, and another example of how complex systems can do bizarre things that can't be diagnosed in the short time available. Maybe a better outcome would have required immediate shutdown and feathering of the failed engines, but the need to make that decision may not have been apparent to the crew until too late.
who programmed the fail mode to be off when the plane was already at 2,000 feet.
THAT was a pretty stupid idea. You always want the failure mode to go to the safest position. Valves fail open or at last position for a reason. Engine control units that are ALREADY at elevation should offer a warning but not just turn off. There should have been an override at the very least.
BBC report says that they didn't "just turn off" - the pilots put them into "flight idle mode" and then, without the config, were unable to take them out of it.
"Airbus has already confirmed that its pilots had tried switching the malfunctioning engines into "flight idle" mode - their lowest power setting - in an attempt to tackle the problem.
Without the parameter files, the engines would have been left stuck in this mode.
This is because the planes were deliberately designed to prevent out-of-control engines powering back up, to avoid them causing other problems."
A safety measure to prevent out-of-control engines powering back up, caused out-of-control engines to not power up.
Even Star Trek tos has a manual override on every critical system.
The problem is over confidence in technology and not enough confidence in highly trained people.
"This is because the planes were deliberately designed to prevent out-of-control engines powering back up, to avoid them causing other problems."
How far out of control?
As far back as WWII, aircraft were provided with the capability to go over 100% rated power if throttles were pushed past a retaining wire. It was called War Emergency Power and, although the engines were either rebuilt or scrapped once this mode was engaged, the idea was that it was better to let the pilots push the aircraft and scrap a couple of engines rather than lose the plane.
With modern aircraft, allowing pilots to exceed a false maximum torque value would be a non issue once the DFDR data was accessed and raw data confirmed operation within limits.
"As far back as WWII, aircraft were provided with the capability to go over 100% rated power if throttles were pushed past a retaining wire."
Just stop and think for a few moments here, sir. Maybe you've not noticed what's happened to aircraft engines since WW II.
Engines built in the WW II era relied largely on educated guesswork, both in design and in operation. No CAD, no simulations, barely a Materials Data Handbook, just slide rules, hard work, and a lot of engineering experience and intuition.
The engine controls back then, such as they were, were largely based on technology derived from clockwork and springs.
The result was that stuff was frequently massively overdesigned, or if it wasn't overdesigned, some unforeseen set of circumstances would take some critical parameter out of limits and Bad Things would happen. Sometimes Bad Things would happen quite often, after all there was a war on. Sometimes if the cause of a problem was obvious and easily fixed by a design change, there'd be a design change. Eventually.
The "war mode" facility allowed those tolerances to be exploited for a specific short term purpose.
And that was basically the way things carried on for a long time, albeit generally without a war mode button, with a few specific exceptions. Big tolerances, so things were safe. But big tolerances frequently implied extra costs.
The "full authority digital engine control" arriving in the 1980s/1990s, combined with a whole load of other technology changes in the design and operational phases of aircraft engines (not to mention economic incentives for cost reduction and the usual stuff), allows those massive tolerances to be largely engineered out at design time.
When a modern engine manufacturer says "max rpm = 10k" (or whatever) they mean it. The historic mechanical and other tolerances in a modern aircraft engine have been largely engineered out to keep costs down and efficiency up, and the FADEC is relied upon to ensure the engine only operates in a "safe zone" where things like uncontained failures are almost infinitely improbable.
As part of the new improved approach the effects of going over 10k rpm will have been simulated, analysed, and maybe even costed. There won't be a safe way of going over 10k rpm. If there was, the manufacturers would have uprated the engine and said "max rpm = 11k" (or whatever).
If the manufacturer does want to permit "war mode" its use may well still equate to engine writeoff. But the tolerances that enabled things like "war mode" to be generally worthwhile and effective have largely been engineered out.
"You need to read Sir Stanley Hooker's autobiography "Not much of an Engineer"."
Mmm, not come across that name before, though I'm aware of much of the stuff he worked on, and he sounds like a very very very impressive man. Anyway, hopefully a few folks will have a read, I certainly will.
But as luck would have it, I already know what kind of stuff was used for aircraft engine design during WW II, because that's what a family member was doing at Rolls Royce in Crewe at the time.
Yes there was maths and grid paper (which I didn't mention just now but do understand, as an ex-physicist myself), and slide rules (which I did mention and have used, briefly). I may even still have one of those WW II slide rules somewhere; it was a thing of beauty. Times change.
But I think what I wrote still stands - engineering stuff wasn't as well characterised back then as it is now, doing the numbers was much more tedious back then than it is now, and control systems on engines were far far less accurate back then than people have expected them to be for the last couple of decades.
Half an hour to spare? Try this talk on the life of Hooker: https://www.youtube.com/watch?v=x5_54YUHr7M
Once again sir, thank you for the extra reading material.
Typical Airbus - use software to cripple perfectly capable hardware, resulting in a deadly crash. I don't recall a single simple hardware failure leading to a crash of an Airbus plane. At least when a Boeing jet crashes there is usually something seriously wrong with it, even if it's a manufacturing problem.
"use software to cripple perfectly capable hardware"
Downvoted, and not just because it's nothing to do with Airbus v Boeing.
It *is* to do with using software-based systems and lots of instrumentation and sensors to allow the engine (and indeed the whole aircraft) to be worked right to the margins of its safe operating zones. Software is being used to *maximise* what the engine hardware/aircraft can deliver, by allowing it to operate close to the underlying physical and mechanical limits. This is not crippling the engine, it's allowed significantly improved power-to-weight and significantly improved fuel efficiency and in principle significantly improved engine diagnostics.
Whether this reduction of tolerances is a *good* thing from an overall safety point of view is a separate discussion.
See my other essay on tolerances posted here a few moments ago (assuming it gets past the mods).
"I don't recall a single simple hardware failure leading to a crash of an Airbus plane"
Kind of a combination of both, but a frozen-up angle-of-attack vane (which measures the angle of the plane relative to the airflow) led to the anti-stall system not noticing the plane had stalled and doing nothing about it.
Daft thing is they were testing the anti-stall and had deliberately stalled it. I can't remember why they couldn't recover it - possibly they just ran out of altitude trying to make the anti-stall do its thing.
"THAT was a pretty stupid idea."
Stop blaming. Some poor f*cker is enduring sleepless nights because they unintentionally did something wrong; the most important thing now is to find out what went wrong, learn from it, and prevent recurrence.
Maybe you've never made a mistake, in which case lucky you, because I've made plenty.
I agree, some poor f*cker will be blamed.
Accidents like this are usually the result of a chain reaction of mistakes. The biggest mistake was to design an unstable system that could not be overridden manually.
And it's very unlikely that the people who made that decision, will be held responsible. It is much more likely that some of them will be sitting in the evaluation committee.
Accidents like this are usually the result of a chain reaction of mistakes.
- The plane builder is under schedule pressure, and pushes that pressure down to its suppliers.
- The suppliers write the system specifications as fast as possible, including oh-so-precise sentences like "The software shall be robust", and hire IT consulting subcontractors to do the dirty work (from software specification to software validation). As those subcontractors are paid by the hour, the timings are reduced again.
- As the slave-drivers, er, IT consulting firms need to make a profit, they assign to this project ten newbies fresh out of school for every project manager experienced in the domain (he took a plane twice).
- Strangely, the supplier is not satisfied with the outputs from the consulting subcontractor and requires more and more rework.
- As the deadline is getting closer, the review procedures are getting sloppier.
- The SW goes out for testing, and due to the length of the previous phase the V&V team has half the time it would need to properly review and test the SW.
- BONUS: if the V&V team belongs to the same consulting company that did the specification/design/coding, it is asked by the bosses to be more lenient - hey, they need to make a profit!
- The supplier, under strong pressure from the plane builder, accepts this mostly-working software, quickly performs some half-assed HW/SW integration tests in the best cases, and sends the whole package to production.
You guessed it, this message is from a disgruntled senior critical embedded SW V&V engineer...
"And that's exactly why we shouldn't trust a system so complex and we should always give a highly trained pilot the last word."
This would be fine if so many military aircraft were not dynamically unstable. Until human pilots have the same ability to process avionic data as birds - many of which are also unstable in flight - the human pilot simply cannot have the last word, or it will be very literally the last word.
Well I guess that poor developer (a synonym for f*cker here) will also have to endure a trial in front of a judge because if I remember correctly, French authorities will treat this as a criminal investigation for manslaughter.
It is about time that programmers get off their high horses and stop being so smug. You can't solve anything in this universe with algorithms and a few lines of code.
Before someone tries to down-vote me, I should respectfully suggest he should do some research on criminal negligence. Dura lex...
Actually, the aviation industry has a pretty mature attitude to this: the priority is learning lessons, not assigning blame. The Civil Aviation Authority's "Mandatory Occurrence Reporting Scheme" is a document many industries (e.g. banks, food manufacturers) could learn from.
Which is unfortunate given that it is vastly more expensive than both. Another shit-headed, misbegotten euro-project.
Apples and oranges...
Both the C130 and C17 are old planes; their development costs have been repaid.
Furthermore, try to land a C17 on the kind of very rugged terrain the Hercs and the A400M can handle, and hilarity will ensue. Sure, it can handle short and unpaved runways, but not the random pasture the others can use.
Comparisons between the A400M and the C130 are more apt, but if you look closely at the characteristics...
The price is 150M for the A400M versus $100M for the C130J, but the A400M can carry up to 37t (20t for the C130J). As you need a single A400M instead of two C130Js, this seems a far better deal. The A400M is also a bit faster, with a similar range...
So no, it is not as clear cut as you seem to think.
Motor vehicles have a "limp mode" for problems where it is safe to run the engine in a reduced power mode.
There are some sensors which are judged to be critical to safe operation, and loss of data can result in an immediate engine shut down.
Having had an engine shut down because of a wiring fault between sensors and ECU I am painfully aware of this.
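The distinction drawn above, limp mode for degraded-but-safe faults versus immediate shutdown for critical ones, might be sketched like this in C. The sensor list and its classification are hypothetical examples, not any real ECU's policy.

```c
/* Hypothetical sensor IDs and fault responses for a car ECU. */
typedef enum {
    SENSOR_MAF,             /* mass airflow */
    SENSOR_COOLANT_TEMP,
    SENSOR_OIL_PRESSURE,
    SENSOR_CRANK_POSITION
} sensor_id;

typedef enum { ACTION_LIMP_MODE, ACTION_SHUTDOWN } fault_action;

/* Loss of an efficiency/optimisation sensor -> reduced-power limp mode;
 * loss of a sensor needed for safe operation -> immediate shutdown. */
fault_action on_sensor_fault(sensor_id s)
{
    switch (s) {
    case SENSOR_MAF:            /* fall back to a fixed fuel map */
    case SENSOR_COOLANT_TEMP:   /* assume worst-case temperature */
        return ACTION_LIMP_MODE;
    case SENSOR_OIL_PRESSURE:   /* running without oil wrecks the engine */
    case SENSOR_CRANK_POSITION: /* cannot time ignition at all */
        return ACTION_SHUTDOWN;
    }
    return ACTION_SHUTDOWN;     /* unknown sensor: fail safe */
}
```

The whole policy fits in one table-like function, which is part of why it is auditable: a reviewer can argue about each classification line by line.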
>At worse it'll lose a bit of money / reputation for the company
Those Knight Capital guys, whose screw-up with new trading software cost their company over a million dollars a minute (total tab over $400 million), probably didn't feel very good for a few weeks after that. Better than killing someone, I suppose, but that was probably not the opinion of the company's owners at the time.
I'm glad that the fail safe systems I had designed and coded back in the 80's were replaced at the end of their life in the late 90's having not caused any one's death.
As for the systems I worked on subsequently, they were (in comparison) straight-forward commercial stuff.
First, my sympathy and condolences to the families of those lost. Very sad.
I agree with several above: this should not have happened, or should have happened on a simulator.
I call them dependencies. Every function, subroutine, class, whatever I write checks that anything needed external to the code block is both available and, as much as possible, valid. Input parameters/dependencies are checked for existence and sanity. Only then is processing allowed to continue on to the primary function of the code block. Output/results, as much as possible, are checked for sanity as well.

Much of this "overhead" can be kept in its own routine, and knowing what needs to be checked should already be in the initial design documentation before you write the first line. Perhaps it's too much bloat and time for systems that need to be super responsive, but with today's CPU speeds, multiple cores, and huge amounts of memory available dirt cheap, I cannot imagine something managed by humans not having the spare 100 ms needed to avoid, wait for it, old-timers... Garbage In, Garbage Out.
But I don't write code for airplanes. However I have often told a project manager they will get the code when it is ready and not before. This wins me few promotions or friends in management.
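The check-everything-first pattern described above can be shown in a few lines. A toy Python sketch, with invented calibration fields — the point is the shape (dependencies, then inputs, then the primary function, then output sanity), not the specific checks:

```python
def scale_reading(raw: float, calibration: dict) -> float:
    """Convert a raw sensor count to engineering units, validating
    every dependency first and the result afterwards (illustrative)."""
    # Dependency checks: the calibration table must exist and be sane.
    if calibration is None:
        raise ValueError("calibration table missing")
    for key in ("gain", "offset", "max_valid"):
        if key not in calibration:
            raise ValueError(f"calibration missing {key!r}")
    if not (0 < calibration["gain"] < 1000):
        raise ValueError("calibration gain out of range")
    # Input sanity: reject non-numbers and NaN (NaN != NaN).
    if not isinstance(raw, (int, float)) or raw != raw:
        raise ValueError("raw reading invalid")
    # Primary function: only reached once everything above passed.
    value = raw * calibration["gain"] + calibration["offset"]
    # Output sanity: garbage out gets caught too.
    if not (0 <= value <= calibration["max_valid"]):
        raise ValueError("scaled value outside physical range")
    return value
```

The validation is deliberately bulkier than the one line of actual work, which is the commenter's point: the "overhead" is most of the code, and it is cheap at today's CPU speeds.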
Warnings are often inhibited during take-off to avoid the crew being expected to take action when they should be getting airborne and ensuring that they stay above V2, the minimum safety speed. Once this is achieved they can then deal with the emergency/warnings and decide on their next course of action.
In this case the initial problem was that the three engines would not respond to a reduction in power demand. In the process of trying to arrest the rapid ascent/acceleration, flight idle was selected, and this then allowed the protection mechanism to engage and refuse to provide more power when commanded. By the time it became clear that the engines couldn't deliver more power, there was not enough time to shut down and feather the three that were broken while keeping enough altitude and manoeuvre capability to reach the runway. The power lines were not likely to have been easily visible from the air; once the crew committed to a forced landing, any attempt to miss them would probably have resulted in an even heavier arrival and more structural damage.
Since hearing the story of Airbus engineer Joe Mangan, I've always been relieved when finding I'm flying on some other brand of aircraft (although of course it might just be that they cover things up even better).
So if someone had spotted the fault, would they have dared report it?
a) Mangan was not an Airbus engineer. He was an engineer for an Airbus supplier.
b) Whistleblowers are often threatened with jail. It's not specific to either Airbus or his direct employer.
c) Boeing has done the same with engineers at their Washington plants who blew the whistle, so by that standard, flying Boeing won't be safe either.
d) Same sh**, different company. Airliners are still the safest mode of transport per passenger transported.
On reading the article my first thought was: who makes the engine?
A: Europrop International, a consortium of "Europe's four leading engine manufacturers: Industria de Turbo Propulsores (ITP), MTU Aero Engines, Rolls-Royce and Snecma Moteurs, the four partners of EPI."
So I wonder if a 'committee' was involved, or a translation problem!
Funny how software – being “invisible” – isn’t treated in the same way as stuff we can see, even by SW engineers. The config files are as critical to the aircraft as the propellers. We can see the propellers: “Oh, 3 props are missing, maybe we shouldn’t try a take-off”. Are the config files there? Who knows? Let’s just go for it!
It’s the duty of the SW designers to make the critical items visible to the pilots – are the files there?, are they the ones you expect? – OK, next item on pre-flight checklist.
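The pre-flight item proposed above — are the files there, and are they the ones you expect? — amounts to checking a manifest of paths against known checksums. A small Python sketch of the idea, nothing to do with the actual A400M software; the manifest contents are hypothetical:

```python
import hashlib
import os

def preflight_config_check(manifest: dict) -> list:
    """Check each config file exists and matches its expected SHA-256.

    `manifest` maps file path -> expected hex digest. Returns a list of
    human-readable problems; an empty list means 'next checklist item'.
    """
    problems = []
    for path, expected in manifest.items():
        if not os.path.exists(path):
            problems.append(f"MISSING: {path}")    # file not there at all
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected:
            problems.append(f"WRONG VERSION: {path}")  # there, but not what we expect
    return problems
```

The key property is that the check is visible and binary: either the list is empty and the checklist continues, or a named file is reported as missing or wrong, before anyone attempts a take-off.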
I'm surprised no-one has mentioned so far that this is the TP400 engine's second set of FADEC software. The A400M was intended and contracted to be certificated to civil standards, not just military, and for that to happen the software writing process had to be fully documented to EASA standards. The prototype A400M was getting close to initial testing when it dawned on someone that the TP400's FADEC hadn't been documented in this way, and was therefore uncertifiable. It had to be written again from scratch, following EASA procedures. That is probably the main reason the project is so far behind schedule. Yes, the A400M is 'overweight' and can't fulfil some of its initial contracted performance but this is not the show-stopper the FADEC debacle was.
It's funny Wikipedia makes no mention of this. There were plenty of reports about it in the pro aviation press at the time.
Biting the hand that feeds IT © 1998–2019