Remember this comment about a particular project somewhere in the Middle East? “There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns; there are things we do not know we don’t know.” You …
You forgot fibbing as a cause.
In most of the companies I have worked in, the fixed-price estimates took ages to work up from inadequate requirements and were then cut to 'buy' the work.
Surprise surprise, projects run over time and over cost. Unfortunately everyone is in the 'quote low and try to get them on change control' game so it is a cycle that is difficult to break.
Anonymous...because I am job hunting ;)
Being candid will preclude you from getting a job?
Seconding this. I also like the locker room humour and chest-thumping of "senior management" when they lower their price estimate by 30% in about 10 minutes.
I can tell you that it's harsh when one is suddenly alone on a job that requires at least 4 persons fulltime because "we can't afford it".
You can then only hope the project kills itself before it kills you.
The biggest lesson
is that there's no such thing as failure - only a lack of success.
The thing about these types of big projects is that they don't suddenly become failures: one day they're a screaming success, and over a weekend they somehow go mouldy and turn into the software equivalent of a pot of three-month-old yoghurt you forgot about in the back of the fridge. Although that analogy (now I come to think about it) bears a lot of resemblance to projects heading south. They start to go bad as soon as people sit back and assume it'll just look after itself - without them having to do anything.
One of the big projects I was involved with had all the hallmarks of a crashing failure. If you'd asked anyone with "team leader"-level experience who was working on it, they'd have told you within 6 months of it starting that it was doomed - couldn't possibly succeed. My introduction to it was a seminar where the lead designer drew the systems architecture - it took about half an hour and covered two whiteboards. If that, itself, is not a red flag then it ought to be. Other things that screamed FAIL were the number of new technologies (basically: everything - nothing about the new system was stuff the current developers / sysadmins had ever done before), the new software products (ditto) and the startling revelation that once it went live there was no way back.
Now I realise that everybody in IT is genetically programmed for optimism ("this time it'll *definitely work ..... oh crap. ... OK *now* it's absolutely ready .... ooops ...") but there's a place where optimism rapidly turns into a wanton dereliction of care and an abandonment of rational appraisal. It seems to me that this point occurs when projects are pitched above about £30M. At that point they become too big to fail, yet stand almost no chance of ever doing what they said they would - so whatever they eventually end up delivering is retroactively called the goal (and those who go around reminding people what the plan _originally_ proposed are summarily dismissed). Generally the only thing that can save a project, once it gets to this size is to replace the IT director and hope the new one is a "cancel everything my predecessor started" kinda guy. That will probably be the only time that he/she truly earns their pay.
The second biggest lesson is that we never learn from past failures. Everyone who's read The Mythical Man Month can tick off almost all the cockups that Fred Brooks wrote about over 40 years ago in almost every project that involves more than a year of lead-time and more than £1M. The only real success most projects ever have is in concealing the true extent of their failure: either from the press, or from the rest of the company. But isn't that what keeps most of us in jobs?
IT people, the *Real ones*, the ones that actually do the job, and not the "IT manager" types, are far from being optimistic. As an IT contractor and consultant I constantly try to warn my customers that new is not necessarily good (in fact it tends to be really bad); that cutting costs leads to failure, sometimes catastrophic failure that could be avoided by spending just 10% more; and that if a project looks trivial, maybe you have missed some hidden problem, and you'd better check twice.
And when disaster hits (that is, 90% of the times that I have predicted it), I have to fix it, instead of laughing and screaming "told you so!". Well, at least I have never worked on projects that can kill people in case of failure, and will never do it. (I was asked once, but refused)
Any software company that allows the accidental or intentional mistakes of any one individual to kill people generally doesn't get a second chance. Truly mission-critical work does take a certain mindset/personality type to handle, but best-practice QA should make it virtually impossible for the mistakes of one individual to kill.
Because I did not want to be responsible
I was asked to set up a communication system for tele-medicine: basically, a system that would connect medical monitoring equipment in terminally ill people's houses to a monitoring centre and give a single medic an overview of the condition of many patients. These patients were going to die anyway, and the monitoring did not in fact require immediate action (otherwise they would have been at the hospital, not at home). Even so, I did not like the idea that if a malfunction happened, while it would not actually kill anyone, it could still lead to questions like "Could he have been saved? Did he die because of a monitoring glitch?"
And since it was an under-funded and understaffed project, I did refuse to work on it.
re: "Now I realise that everybody in IT is genetically programmed for optimism ("this time it'll *definitely work ..... oh crap. ... OK *now* it's absolutely ready .... ooops ...") but there's a place where optimism rapidly turns into a wanton dereliction of care and an abandonment of rational appraisal."
No. I think you're exaggerating.
I've been in IT for over 20 years, and unless I've performed the same tasks with the same hardware/software and configuration, I never say those things. I've been involved in small projects: adding a new building to a campus LAN; and I've been involved in large projects: deploying SAP to manage global manufacturing. I've had projects cancelled prior to start-date, and I've had projects complete over-budget and late, but I've never been accused of misplaced optimism. I also know that working overnight in a lab to prove that some new feature works or doesn't can make the overall implementation less painful for everyone involved.
It's also my experience that most salesmen lie blatantly or through omission. This may explain why I'm not in sales, but I've never used it as an excuse for a failed project or missed completion date; and I've had some of each.
"What do we learn from the other seven?"
My guess would be: "Only one in eight Project managers are any good at bullshitting their success criteria....."
is about the only sane thing I've read about Project management
The biggest problem ...
... is that manglement knows squat about IT and the rest of the glue that holds business together in this modern era.
I hold an MBA (Stanford). They taught me lots about Business, but nothing about Infrastructure. Thankfully, I received all my other degrees before I decided to get the MBA ...
The less the management knows, the more they like to cut what they don't/can't understand. "If I don't understand why a firewall/documentation/change mgmt is important, it can't be important."
"Perhaps it’s that if you come out with stuff like that your P45 is never far away."
Er, wot? While it was initially ridiculed, that bit of Rumsfeldia is these days generally cited as extremely perspicacious by just about everyone, including people very much not on the same general side as The Don. Even the Grauniad's taken up referring to known unknowns and unknown unknowns these days.
It is actually a really great piece of analysis, and something that intelligence agencies in particular tend to forget about a *lot*.
Yeah, too bad quotes don't make up for proper planning.
Yes the quote was decent but what made it memorable was the absolute failure of the individual that uttered it. My favorite moment of the Bush presidency was W throwing Rumsfeld under the bus right after the 2006 election on national TV and Rummy mumbling how nobody knows how complex his job is. Classic hahaha.
I was once told
the correct way to estimate an IT job was to assume that the first 90% of the work would take 90% of the estimated time, and the last 10% of the work would also take 90% of the time.
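Taken literally, that 90-90 rule says any naive estimate should be roughly doubled. A minimal sketch of the arithmetic (the function name is mine; the 1.8x factor falls straight out of the rule):

```python
def ninety_ninety_estimate(naive_estimate: float) -> float:
    """Apply the 90-90 rule: the first 90% of the work takes 90%
    of the estimated time, and the last 10% takes another 90%."""
    first_90_percent_of_work = 0.9 * naive_estimate
    last_10_percent_of_work = 0.9 * naive_estimate
    return first_90_percent_of_work + last_10_percent_of_work

# A project naively estimated at 10 weeks really needs ~18.
print(ninety_ninety_estimate(10))  # prints 18.0
```

In other words, whatever the plan says, budget for 180% of it.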
It also helps where the customer doesn't change his mind 6 months into the project and want something completely different... but still delivered on the original date.
The last sort also infest the metal bashing world
"waddya mean making twice as many widgets takes twice as long? I still want them all tomorrow"
The Rule of 18
I always reckoned on the Rule of 18 for software projects.
Time for the clients to decide what they want : 18 months
Time for the developers to produce code : 18 weeks
Time for operations to get the code in service: 18 days
Time for Tits-up : 18 hours
Time for finding arbitrary person to blame : 18 minutes
And that's before you start implementing anything that requires new hardware...
That's me searching my jacket for the documentation that's not yet written (-18 weeks)
Predates Rumsfeld by a *long* way
Think more "The Right Stuff" as in the book by Tom Wolfe.
This sort of thinking comes from aeronautics research. A classic example was the flight that nearly wrecked one of the X-15s when it carried a dummy ramjet.
The plane's wings generated shock waves (known); the model generated shock waves too (known). However, when the two shock waves *interfered*, the heating rate went up c.10x. This was not known, and not *expected* either. It lopped off the bottom X-15 fin in mid-flight and made the landing a virtual crash.
It really can be the things you don't know that can kill you.
That said, my gut feel is that *many* of those failures were *completely* predictable given fairly simple back-of-the-envelope calculations in terms of file sizes, numbers of records, length of retention, time to transmit, time to retrieve, etc.
An interesting case would be the Eupol idea of checking *every* entrant against their facial database. NIST estimated searching a 1.6m-face database takes 1 sec on a $25k server, with a 0.3% error rate (or 4,800 false matches).
The database is actually about 4x that size. Now, how *many* entry desks are there on the borders of the EU?
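The back-of-envelope sums bear that out. A sketch using the NIST figures quoted above (1 sec per search on a 1.6m-face database, 0.3% false-match rate); the assumption that search time scales linearly with database size is mine:

```python
# Back-of-envelope check of the entrant-matching idea, using the
# NIST figures quoted above. Linear scaling of search time with
# database size is an assumption, not a NIST number.
base_db_size = 1_600_000     # faces, per the NIST estimate
base_search_time = 1.0       # seconds per search on a $25k server
false_match_rate = 0.003     # 0.3% error rate

actual_db_size = 4 * base_db_size                # ~6.4 million faces
search_time = base_search_time * (actual_db_size / base_db_size)
false_matches = false_match_rate * actual_db_size

print(f"search time per entrant: {search_time:.0f} s")   # 4 s
print(f"false matches per search: {false_matches:,.0f}") # 19,200
```

Four seconds per entrant and ~19,200 candidate matches to eyeball, per desk, per search: someone could have done this sum before the pitch meeting.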
We the unwilling...
do the impossible, with next to nothing, for the unknowing, the ungrateful, as quickly as possible, on very little sleep and lots of caffeine.
Don't forget the snack machines.
I've been known to live out of them while working unexpected (silly me) 32-hour shifts because of lack of planning from 3rd parties and damned unreasonable schedules.