What's the point of data centre orchestration?

Flipping the script on the 80/20 rule is the idea of data centre orchestration. It promises to take 80 per cent of the time IT departments spend fire-fighting and reallocate it to service delivery – putting big smiles on board members' faces and increasing job satisfaction in the IT department. Good luck with that, you might …

COMMENTS

This topic is closed for new posts.

Dear Board Members...

80 / 20 is not designed for IT Pros. It is designed for what happens when brain-dead users are given access to do strange, weird or just plain dumb things to servers, workstations and ancillary gear. They plug in their own WiFi hubs for convenience. They change settings and load / run unauthorized programs to "make it faster", or to chat to their Auntie in the Orkneys while sending 1,200 snaps of little Ben in his high chair. They download full-length movies on their work account, store them in server directories, then moan about the systems / network being slow. The worst thing is that because they can install iTunes at home, they think that qualifies them to make adjustments to SQL queries on mission-critical systems.

Take the ignorant users out of the loop and you will have the happy magical land of 20 / 80 instead of 80 / 20... Until then, there is a reason 80% of IT workers' time is spent on "fire-fighting". When you give the inmates of the asylum matches and gasoline, what do you expect?


In theory it's a good idea

In practice, automation in IT crisis management seldom happens, because of the innate distrust humans place in systems that cannot provide an absolute guarantee of robustness and eventual recovery when the shit hits the fan. An orchestration system can only manipulate the bits it's connected to; it can't yell down the line to the plant engineers or the fire service when disaster strikes. It can't contact customers and give an educated estimate as to when their sensitive kit will be restored to service. It would also have to work without the risk of false positives or negatives, or of causing a catastrophic domino effect that can knock out even the most cleverly designed, distributed infrastructure. Even more of a problem is detection: a human might take proactive steps, or none at all, to mitigate a condition which is approaching crisis status. The same conditions might constitute a disaster one day and an acceptable reduction in service the next. A system which makes the wrong decision because it doesn't understand the problem often causes more problems than it solves.

While the big billion-dollar companies might be able to think of datacentres the way most of us think of servers, where you can knock one out and keep going with the rest, the rest of us poor sods would vastly prefer a human to make the careful decisions when it comes to 'pressing off buttons'. Outsourcing this to blunt-instrument algorithms, or to third-party drones who aren't on the scene, strikes me as a recipe for eventual disaster.


Call centres

Possibly a bad example.

"So you add a rule to your orchestration system that says, this system does not slow down. And then you work through the conflicts that decision throws up, ticking boxes as appropriate."

and then, when you find out how much that'll cost, you say "fuck it, they're only complaining anyhow, let them wait, and make the number a premium one whilst we're at it".


Yes you can, it's called job satisfaction

re: "There’s no point in delivering a service seamlessly if you can’t impress everyone with your performance charts, hopefully heading upwards, at the end of the month."

If your only worth is derived from the opinion of others... then I feel sorry for you.

Of course, once you hit 100% and stay there for 15 months, every future incident, no matter how minor, will make the service look unstable. Very few will look back at what was done prior to that 15-month nirvana, at the plans and actions that made it possible, and appreciate them.
