According to many pundits, here’s the plan for the next generation data centre: we can go to a dynamic infrastructure, with on-demand applications running in our private cloud, and an elastic cloud out there waiting for our applications if we run out of capacity. Sounds too good to be true? It is easy to get caught up in the …
Who writes this stuff?
> virtualisation is a stepping stone to the cloud
(that splashing sound was someone stepping off the stepping stone, falling through the cloud and ending up in the river).
It gets even better:
> unless an organisation changes its behaviour across the board, then things will be done in the same vein as they have always been done
What all this verbiage boils down to is that some IT operations are using virtualisation. Some of those use it to host lots of old servers on fewer, larger boxes and some use it to provide flexibility when they need extra capacity.
Have I missed anything?
Yeah, you missed...
> organisations that have changed their procurement policies are more likely to head down the dynamic IT path than those who have not
Some charge by service, some don't.
> most companies are still quite resistant to the use of external cloud
Because service companies can't be relied upon: http://www.theregister.co.uk/2010/09/10/microsoft_bpos_apology/
The other resistance...
to dynamic IT (in terms of being able to fire up extra VMs quickly to cover load spikes, rapid failover, and so on) is simply the difficulty of it, as well as apps that simply wouldn't benefit. If the IT department doesn't have political problems with it but simply doesn't have apps that would benefit, then it won't look into it.
Failover -- most of these cloud systems rely on applications that support failover internally, and then either run extra copies "idling" or fire up extra VMs when an existing VM fails. This isn't nice and transparent like IBM mainframes in a Parallel Sysplex (a cluster of two or possibly more mainframes set up for failover). A mainframe can actually detect a fault (including a CPU mis-executing instructions -- each CPU has two parallel pipelines with comparators, and a CPU is failed when the pipelines disagree), stop the VM at that exact clock cycle, and migrate the whole VM to another CPU on the same machine, or to another mainframe in the sysplex, transparently. Compared to that, having VMs detect faults (or some external monitor detect that a VM has crashed), making sure there isn't a half-completed transaction, starting up another VM, and having it take over that transaction, is complex and error-prone.
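To make the "complex and error-prone" point concrete, here's a minimal sketch of the external-watchdog pattern those cloud setups use. Everything in it is a hypothetical stand-in (the health dict, the in-flight transaction log, the replacement naming), not any real cloud API -- it just shows that the watchdog itself must reason about orphaned transactions, which the mainframe approach never exposes:

```python
def failover_pass(instances, health, inflight_tx):
    """One watchdog pass: drop failed instances, start replacements,
    and surface the transactions the dead instances may have half-completed.

    instances: list of instance names
    health: dict name -> bool (result of a health probe)
    inflight_tx: dict name -> list of transaction ids that instance owned
    """
    survivors = [i for i in instances if health.get(i, False)]
    # The hard part: transactions a dead instance may have half-completed
    # must be found and replayed or rolled back, not silently dropped.
    orphaned = []
    for dead in (i for i in instances if not health.get(i, False)):
        orphaned.extend(inflight_tx.get(dead, []))
    # Fire up one replacement per failed instance (simulated by a name here).
    replacements = ["replacement-%d" % n
                    for n in range(len(instances) - len(survivors))]
    return survivors + replacements, orphaned
```

Even this toy version has to decide what a health probe means, who owns the orphaned transactions, and what happens if the watchdog itself dies -- all problems the sysplex handles in hardware and microcode below the VM.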
For that matter, I think some of these IT departments (particularly the ones using VMs to consolidate machines) will have server apps that are designed to run as a single copy, not on a cluster of machines. Inter-process and inter-machine communication, locking, and so on -- I bet quite a few server apps just don't bother. So all the failover (unless it's clean like IBM's), and all the ability to fire up extra VMs for capacity, won't do a thing for them.
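A typical example of why those single-copy apps can't just be scaled out: many of them guard their state with a local advisory file lock, which works on one box and means nothing across machines. A sketch of that pattern (the lock-file path is illustrative; this uses POSIX `flock`, so it's Unix-only):

```python
import fcntl
import os

def acquire_single_instance_lock(path):
    """Take an exclusive advisory lock on a local file.

    Only one process per *machine* wins -- a second VM on another host
    sees its own filesystem, takes its own lock happily, and the two
    copies then trample shared state.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd    # we are the single running copy (on this host)
    except BlockingIOError:
        os.close(fd)
        return None  # another local copy already holds the lock
```

Firing up a second VM "for capacity" does nothing useful for an app built like this; it needs real inter-machine coordination, which is exactly the work the app never did.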