There’s a trick we use at Freeform Dynamics when trying to figure out the true significance, if any, of the latest Big Thing being promoted by IT vendors and pundits. We ask ourselves what will be left when the marketers get bored with the current buzz words and move on to the next Big Thing, as they inevitably will. As an …
I like utility computing
Probably not the same sort of thing that was hyped a decade ago. But I have viewed utility computing as basically a set of servers running hypervisors (in my case that's always been VMware), some good "utility" storage (in my case that's always been 3PAR), networking, and load balancers. Configuration management software rides on top (currently Chef, though I much preferred CFEngine in the past). The ops team provisions systems into the environment as needed, after assessing resource requirements and deciding whether a new system is warranted versus reusing an existing one (or vice versa). Systems run smoothly and pretty much everyone is happy.
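The "reuse vs. provision new" assessment I mean can be sketched in a few lines. This is purely illustrative: the host names, capacity figures, and the 80% headroom threshold are all made up, not anything from a real environment.

```python
# Hypothetical sketch of the ops-team placement decision: find an
# existing host with enough headroom, otherwise signal that a new
# system should be provisioned. Numbers and names are illustrative.

def place_workload(hosts, cpu_needed, ram_gb_needed, headroom=0.8):
    """Return the name of an existing host with room for the workload,
    or None if a new system should be provisioned."""
    for name, h in hosts.items():
        cpu_after = h["cpu_used"] + cpu_needed
        ram_after = h["ram_used"] + ram_gb_needed
        # Only place the workload if the host stays under the headroom cap.
        if (cpu_after <= h["cpu_total"] * headroom and
                ram_after <= h["ram_total"] * headroom):
            return name
    return None  # nothing fits: time to provision new hardware

hosts = {
    "esx01": {"cpu_total": 32, "cpu_used": 30, "ram_total": 256, "ram_used": 200},
    "esx02": {"cpu_total": 32, "cpu_used": 10, "ram_total": 256, "ram_used": 64},
}

print(place_workload(hosts, cpu_needed=4, ram_gb_needed=32))   # esx02 has room
print(place_workload(hosts, cpu_needed=30, ram_gb_needed=16))  # None: needs new hardware
```

In practice the real decision also weighs things the sketch ignores (licensing, failure domains, maintenance windows), which is exactly why a human ops team makes the call.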
I have also spent a significant amount of time building and maintaining EC2 infrastructure for the same purposes. The only value I got out of that is that I can now tell people yes, I used it for two years, and yes, it is absolute shit. I won't go into the cost equation again since that one is obvious.
Making decisions on infrastructure based purely on "business needs" can quite often be the wrong way to go. Specifically, the business may only "need" X today, but we know they will quite likely need "Y" too in the future (even if the business isn't sure of that yet). So the business invests to get "X", then a year or so later ends up ripping "X" out and replacing it with something that can do "Y". Maybe it doesn't get entirely ripped out; maybe it just sits there as legacy infrastructure that causes the company regular pain for years afterward. Either way, doing both obviously costs significantly more.
The biggest driver I have seen behind "business needs" has been cost, and the business always pays in the end: either less, by going with the right solution up front, or more, by having to replace the original purchase with the right solution later. That assumes they have the staff to determine what the right solution is; unfortunately, often they don't. I have also seen situations where greater than 100% staff turnover over a relatively short period triggered significant change, but even then it's probably only 50/50 whether that change ends up being positive.
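The arithmetic behind "pay less now or pay more later" is simple enough to write down. All the dollar figures below are invented for illustration; the point is only that the buy-cheap path pays for both solutions plus the replacement overhead.

```python
# Toy cost comparison for the two paths described above.
# Every figure here is made up purely for illustration.

cheap_now      = 100_000   # solution that only covers today's need "X"
right_solution = 180_000   # solution that covers "X" and the likely future "Y"
rip_and_replace_overhead = 40_000  # migration effort, downtime, duplicated work

# Path A: buy cheap, then rip and replace once "Y" becomes real.
path_a = cheap_now + right_solution + rip_and_replace_overhead

# Path B: buy the right solution up front.
path_b = right_solution

print(path_a)  # 320000
print(path_b)  # 180000
```

The ratio shifts with the actual numbers, of course, but the replacement overhead only ever makes path A worse.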
Making decisions based purely on technology is equally bad, of course. A local storage vendor came out to visit not long ago and asked if we were happy with our storage. I said yes. He asked how much cache we had; I told him, and he said, "Oh, we can get you a TERABYTE of cache!" I honestly didn't know how to respond on the spot, the statement was so absurd. Did he really think we had the budget for that sort of purchase? Obviously I know such systems exist, but there's little point in bringing them up when we don't need them and upper management won't pay for them. Brief me instead on an architecture we can leverage at our current level of scale and budget and then grow into, perhaps eventually reaching that terabyte of cache you speak of if we need it. That's something I'd be interested in hearing. Still, it was an interesting, stimulating conversation; I don't often get to talk storage with people, so I enjoyed it.
I've seen that happen time and time again over the years. It's good to find out what the business needs, of course, but you should still pursue the best technology choice, even if the best choice isn't the latest and greatest.