Server virtualisation deployments are on the rise in Western Europe, and this comes as no surprise given the enormous pressure that IT managers are under to cut costs and drive up the utilization on the gear that they have in their data centres. The continuing maturity of hypervisors and hardware features to better use them, …
I still can't work out how one machine running multiple processes on a single kernel can be "out-utilized" by the same hardware running the same processes on top of multiple kernels. Unless using more memory and CPU time running those kernels counts as utilization? Does useful work not happening, because everything is waiting on IO, count as utilization too?
Containers are a good idea; virtualising the whole OS isn't, unless you have some legacy app that doesn't play nicely.
A partial solution
> IT managers are under to cut costs and drive up the utilization
One place I worked set an (arbitrary) target for server utilisation. The IT director flipped out when he was told that server utilisation averaged 19% so we wrote an infinite loop and ran it on the servers - one process per CPU. Utilisation went up, but boy, did the users scream!
19% isn't actually all that bad when you consider: most systems only ran 9-5, 5 days a week. That's 40 hours out of 168, so you're already down below 25%. Add in that every (yes, every) server had a D.R. box and your utilisation is halved.
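The arithmetic in that comment can be checked in a few lines. A quick sketch; the 40-hour week and one-D.R.-box-per-server figures are taken from the comment, nothing else is:

```python
# Duty-cycle ceiling on average utilisation for a 9-5, 5-day fleet.
active_hours = 40        # hours per week the systems are actually in use
week_hours = 7 * 24      # 168 hours in a week

ceiling = active_hours / week_hours
print(f"busy-hours ceiling: {ceiling:.1%}")   # ~23.8%

# Every server has an idle D.R. twin, so fleet-wide utilisation halves.
with_dr = ceiling / 2
print(f"with D.R. boxes:    {with_dr:.1%}")   # ~11.9%
```

On those numbers, a machine running flat-out during every working hour would still only average about 12% over the week.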
p.s. only kidding about the infinite loop, but it was suggested. The thing is, for interactive servers, utilisation is the opposite of responsiveness: if you want one, you can't have much of the other. Generally, as the idle time of your performance-limiting resource halves, response times double. And that's true no matter how many VMs you run on a piece of hardware.
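That "idle time halves, response time doubles" rule falls out of basic queueing theory. A sketch using the standard M/M/1 mean response-time formula R = S / (1 - ρ), where S is service time and ρ is utilisation (the formula and figures are mine, not the commenter's):

```python
# M/M/1 mean response time: R = S / (1 - rho).
# Each time idle capacity (1 - rho) halves, R doubles -- regardless of
# how many VMs share the bottleneck resource.
service_time = 10.0  # ms per request (illustrative)

for utilisation in (0.5, 0.75, 0.875, 0.9375):
    idle = 1 - utilisation
    response = service_time / idle
    print(f"util {utilisation:.2%}  idle {idle:.2%}  response {response:.0f} ms")
```

Going from 50% to 75% busy doubles response time (20 ms to 40 ms); 75% to 87.5% doubles it again, which is exactly why driving utilisation up hurts interactive workloads.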
Well-behaved applications can be run on the same OS. Ill-behaved ones can force a server to be rebooted to clean up after orphaned memory segments etc. Cheaper servers made it possible to host applications on their own servers to better manage the applications --- not the OS. Now, in order to consolidate, nobody dares take the risk of running the applications together on the same OS instance. Therefore, virtualization is in demand. The root of the problem is, as always, the crappy application code.
@ Daniel Palmer
The attraction (currently) would also be the ability (depending on whether your hardware / hypervisor supports it) to have different operating systems and their applications running on one hardware platform. This would allow businesses to have a strategic hardware platform, but some flexibility on the operating systems deployed in LPARs / VMs / Containers etcetera. You also have the potential of porting applications off older hardware, deploying into an LPAR / VM / Container etcetera, so you can decommission the old kit.
There are limitations of course - for example, I work with IBM pSeries midrange equipment. The PowerVM hypervisor (from memory) supports the AIX Unix OS and the RH / SUSE Enterprise Linux OSes (and iSeries kit). No support for other OSes (Windows 2Kx Server, for example).
Although, if you were a masochist, you could have a pSeries server running the PowerVM hypervisor (with HMC for admin), with an LPAR running RH Enterprise Linux, that RH LPAR running VMware, and a Windows 2003 / 2008 Server 'guest' operating system installed on top. Goodness knows how that would perform :)
RE: A partial solution by Pete
Very elegantly put, Pete. That's the one side of utilization that always gets "overlooked", yet is always the first to be spotted.
Virtualisation has its place - we have 2 boxes that run several VMs - but I'm sick of hearing about how it's going to save the planet, the company, my sani... AHHHHHHHHHHHHH THE MADNESS!
> The attraction (currently) would also be the ability (depending on whether your hardware / hypervisor supports it) to have different operating systems and their applications running on one hardware platform.
That's fair enough... but if your application runs on x86 metal it's going to run on top of Linux or Windows. If you actually need two small machines, buy two small machines. Buying one big machine and making it two sounds like a really great idea, but from experience it doesn't work out as simply as that. When something fails in your "box of all trades" you have to take all the VMs down or migrate them somewhere (so actually you needed more than one box... ironically).
> This would allow businesses to have a strategic hardware platform, but some flexibility on the operating systems deployed on LPARs / VMs / Containers etcetera.
Again, the things you can mix will be limited by the hardware you had to start with. If you have x86 hardware you can't magically run all sorts of weird and wonderful OSes, unless they have x86 ports that play nicely with whatever hypervisor you have chosen, or you have an emulator to run them. Containers are a good idea: you can create seemingly separate environments (and with Linux-VServer you can run seemingly different OSes in those containers - as long as they're Linux coloured, that is) without the nightmare of trying to get Xen or similar not to throw up every 5 minutes. Still, you only really have one machine, and you have to down all the virtualised environments to do anything with it.
> You also have the potential of porting applications off older hardware, deploy into a LPAR / VM / Container etcetera, and you can decommission the old kit.
Only if the old kit was the same arch as the new hardware, or you have an emulator.
> Although, if you were a masochist, you could have a pSeries server, running PowerVM hypervisor (with HMC for admin), with an LPAR running RH Enterprise Linux, with that RH LPAR running VMware, with a Windows 2003 / 2008 Server 'guest' operating system installed. Goodness knows how that would perform :)
Is pSeries hardware Power-based by any chance? That would sort of explain why it won't run Windows. Does VMware even have a PowerPC hypervisor?... VM != portable runtime.
Missing the point
The real benefits of virtualisation come when you're talking about doing it on a datacentre or business-wide scale (the comments above about virtualising exotic configurations or single-server businesses miss the point):
* Reduced cooling, electricity, data centre space, hardware maintenance and hardware purchase costs. This is the number 1 selling point and it works, since you don't have all your machines running at capacity all the time, so you can combine several physical machines onto one virtualised one and make far more efficient use of resources (e.g. with something like Xen, which balances CPU load between VMs). Plus a single powerful machine is still more space-, electricity- and cooling-efficient than several small ones which equal the same capability.
* Easy and flexible deployment of new servers - in most large organisations it can take several days or weeks to purchase, have delivered, rack and install a physical machine. With VMs you can do it in minutes.
* Ability to cheaply give users their own machine with root access - e.g. for a developer. If (actually, when) they break it, it can easily be reset to a snapshotted point or re-created.
There are a lot more I'm sure, so yes there are reasons why companies are going ga-ga for VMs at the moment.
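The consolidation arithmetic in the first bullet can be sketched in a few lines. The per-server load figures here are invented purely for illustration:

```python
# Consolidation sketch: several lightly loaded physical boxes folded
# onto one host of the same capacity. Figures are made up.
servers = {              # average CPU utilisation per physical box
    "web1": 0.12,
    "web2": 0.09,
    "db1":  0.25,
    "mail": 0.07,
}

combined_load = sum(servers.values())   # fraction of one host's capacity
headroom = 1.0 - combined_load
print(f"{len(servers)} boxes -> 1 host at {combined_load:.0%}, "
      f"{headroom:.0%} headroom")
```

This ignores correlated peaks (all four spiking at 9 a.m. defeats the exercise) and hypervisor overhead, which is why capacity planners size against peak load rather than the averages used here.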
Working for an SMB that generally buys applications and then throws them onto multiple sole-use boxes, virtualisation makes perfect sense. However, much like when air travel got bigger planes and everyone said there'd be room to stretch out, virtualisation will just make servers the cattle trucks of the data centre, like long haul on a jet is now the cattle truck of the sky.
As business starts expecting the ability of packing stuff in it will just increase the appetite for running more applications, not getting smarter about it.
My prediction is that virtualisation will have minimal to negligible impact on the consumption of servers. If you want a real impact, look at the state of budgets and the economy. Without virtualisation as a possibility, companies would simply cut application deployment to save dosh.
> According to the box counters at IDC, the number of servers that
> shipped from vendors with server virtualisation hypervisors
> grew by 26.5 per cent in 2008 in Western Europe,
> hitting 358,000 units.
Any server shipped with Red Hat will include a hypervisor, regardless of whether it's going to be used. Similarly, servers will be bought with some random OS that either subsequently hosts, say, VMware or is replaced by something with, say, a Xen hypervisor.
You just can't draw any useful conclusions from that metric.