It’s a fair assumption that the operational management of IT follows some kind of maturity model. Many organizations are, in practice, chaotic; a logical next step is to implement a layer of control, and from there one can start building a managed environment. After that – hurrah! – it is reputedly an easy hop, …
"Good enough IT."
You will work 18 hours, get paid for 8, be on call 24/7/365, learn to embrace your wage freeze and "extend the investment return" on equipment by "ensuring increased operational life" – or you will be replaced by any of the few million unemployed IT folks. (Or outsourced to India.)
No, you can’t have management tools. No, you can’t have new hardware, and no, you can’t buy spares. When something breaks that directly affects upper management, your replacement can buy the parts to deal with your incompetence, because you’re fired!
And pay for your own damned support cell phone! That it’s used for your unpaid 24/7 support calls has no relevance to the company. It’s a cost of being employed, like the gas you burn running around the city, or driving to work in the morning.
Yeah. “Good enough IT.” We’ve heard of it…
is walking through the mud good enough?
I think we should strive to drive, in cars, on highways instead.
Why didn't you set out to quantify the savings (or losses) to be gained by better management software? yes, no surprise it wasn't bought after that, but it is a surprise you use this as proof of the uncertainty of the savings. it only proves you didn't really look into them.
second, while you state that breaches/failures and so on make investments easier, you ignore the relation between missing management tools and the very cause of some (numerically: many) of these failures.
if we ask for good policies to be put in place at the start, there is a certain need to look at better classification of failures into causes like hardware that turns into smoke (technical), lack of knowledge (people), mistakes (people), lack of skill (people) and lack of planning (people).
bad management tools are in the "lack of planning" bit for me. you can't set up a good policy that doesn't include tools.
no one says they need to be costly, e.g. having a wiki+database with hourly switchport:ip<->mac mappings or dns zone backups is in the two-digit price range, but most companies live without such basics. what they do instead is run around hunting problems. sure that's good enough, but it will take longer => cost the company money.
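the inventory described above really can be that small. as a minimal sketch (the sample switch/arp output formats here are assumptions – adapt the parsing to whatever your gear actually emits, and in real life you'd pull the tables via snmp or an expect script, then run this hourly from cron and append the rows to your wiki's database):

```python
#!/usr/bin/env python3
# sketch of the "switchport:ip<->mac" inventory idea: join a switch
# MAC address table with an ARP table so you can answer "which port
# is this IP plugged into?" without running around the building.
from datetime import datetime, timezone

# sample captures; the field layouts are assumptions, not a standard
MAC_TABLE = """\
10    0011.2233.4455    DYNAMIC     Gi0/1
10    0066.7788.99aa    DYNAMIC     Gi0/7
"""

ARP_TABLE = """\
192.168.10.5      0011.2233.4455
192.168.10.42     0066.7788.99aa
"""

def parse_mac_table(text):
    """mac -> switchport, from 'vlan mac type port' lines."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 4:
            out[parts[1]] = parts[3]
    return out

def parse_arp(text):
    """ip -> mac, from 'ip mac' lines."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            out[parts[0]] = parts[1]
    return out

def snapshot(mac_text, arp_text):
    """timestamped (ts, ip, mac, port) records, ready for a database."""
    port_by_mac = parse_mac_table(mac_text)
    ts = datetime.now(timezone.utc).isoformat()
    return [(ts, ip, mac, port_by_mac.get(mac, "?"))
            for ip, mac in parse_arp(arp_text).items()]

if __name__ == "__main__":
    for row in snapshot(MAC_TABLE, ARP_TABLE):
        print("\t".join(row))
```

the dns half is even cheaper: a `dig @server zone AXFR > backupfile` line in cron covers the zone backup, assuming the server allows zone transfers from that host.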
"good enough" is where you have neither the costly tools that do it by themselves nor the staff that can do it by themselves.
it'll cost you dearly with every outage, with every client pc that takes more than 30 mins to replace, and so on: when your helpdesk staff has an account reinitialized because he doesn't know enough details of windows offline sync's suckage, or when the exchange admin was never forced by any policy to set up weekly consistency checks and you need a full restore where a service restart should have sufficed.
summing up the cost of "good enough IT" DOES need a business perspective.
if you don't take the time to look for and notice the small daily fuckups and don't see where they turn into the smaller disasters, everything will look ok.
but i don't think this kind of attitude can be acceptable, unless the company's wages are just enough to hire chimps. still, we don't need to lie to ourselves and think everything is great.
last, dynamic IT is possible and there are shops that are running far better than the rest.
the problem is that if you call up any of the big vendors and ask to "buy dynamic IT", you'll get the shiny marketing materials, the stacks of new hardware, the expensive outdated management tools, the lot of newbie consultants, and the huge bill that will actually prove how important your choice was and get you the big raise.
but you won't get the last missing piece: the dynamic stuff you tried to get.
i suggest you call up steve traugott some day and hear his side of "good enough IT" (as in no compromises made) management.