Red Hat claims it can run five virtual machines (VMs) for every three that VMware's ESX runs on the same server hardware. Qumranet technology also enables it to run more Windows virtual desktops than VMware. At a journalists' roundtable in London this week, Benny Schnaider, Qumranet CEO, said his - now Red Hat's - company's …
So, is this the already built-in Xen in Red Hat or something new?
So, is this the already built-in Xen in Red Hat, or is the article talking about something new that I don't already have?
So where are the independent benchmarks?
So did these guys publish independent, externally verifiable and reproducible performance data?
A couple of weeks ago, Microsoft showed how Hyper-V would outperform ESX. How? By making the disk back end solid state with no moving parts. Hardly a realistic configuration for most Windows shops.
A couple of weeks ago the rumour mill had VMware buying RHEL. Now we're being told that KVM out-performs ESX...
The reality is that VMware has already stitched up the top 1000 corporate accounts - who will not be moving to some other virtual infrastructure any time soon. More likely they will be upgrading to VI4 sometime next year or the year after...
What is it ...
It's an alternative to Xen which is said to be a better way of providing virtualisation capabilities in Linux.
It comes in the form of a loadable Linux kernel module and is simpler than Xen and VMware because it reuses much of the infrastructure of the standard kernel (memory management, etc.).
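To give a feel for how lightweight that is, here's a sketch of checking for and enabling KVM on a recent mainline kernel (module and device names as shipped upstream; the `modprobe` hint assumes Intel VT hardware - it's `kvm_amd` on AMD):

```shell
# Does the CPU advertise hardware virtualisation? (vmx = Intel VT, svm = AMD-V)
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# Is the KVM module already resident? (kvm_intel or kvm_amd rides on top of kvm)
(lsmod 2>/dev/null | grep '^kvm') || echo "kvm not loaded; try: sudo modprobe kvm_intel"

# KVM's entire userspace interface is one character device, which QEMU opens
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm absent"
```

That one `/dev/kvm` node is the whole kernel-side interface; everything else (device emulation, disk images, networking) lives in an ordinary userspace process.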
As to whether it is "something new that I don't already have", that depends on what distro you are running.
But you can get it working in kernels from the past year or more.
Ubuntu decided to focus on KVM, followed by Red Hat, who then bought the company that developed it.
Yes, this is something new.
I'm using KVM+libvirt for lab automation, and it's vastly better than Xen, which is rarely used in HVM mode (the only configuration in which it supports unmodified guests). Simpler, cleaner, less arcane and more flexible (its userspace components are based on QEMU), KVM still supports paravirtual device drivers but can run full virtualisation until the guest installs them. That's unlike Xen's paravirtualisation, where the guest kernel needs to be written appropriately from the bootloader up.
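For the curious, the sort of thing libvirt drives KVM with is a short XML domain definition, fed to `virsh define` and `virsh start`. A minimal illustrative sketch (the name, sizes and image path below are made up):

```xml
<!-- Minimal libvirt domain definition for a KVM guest (illustrative values) -->
<domain type='kvm'>
  <name>labvm01</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/labvm01.img'/>
      <target dev='vda' bus='virtio'/>  <!-- paravirtual disk driver -->
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>            <!-- paravirtual NIC driver -->
    </interface>
  </devices>
</domain>
```

Note the virtio devices: the guest boots fully virtualised, then picks up the paravirtual drivers once installed - exactly the mix described above.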
KVM is Good Stuff (tm); it's also included in new upstream versions of the Linux kernel.
About eight VMs seems optimal.
I'm guessing the 30+ figures will leave you with weenie instances too light to do anything other than some really light web serving. Our experience using VMware and Xen (predominantly VMware) for server consolidation here is that you can get eight decent VMs on a modern two-CPU rack server or blade; after that the instances get a little weak for real business use. I'm not sure KVM will do any better with real-world instances, but I'd be interested to know what VMs-to-CPU ratios other users are operating with.
Xen is a hypervisor: the hypervisor runs on the bare metal, and the hosts run atop the hypervisor.
Xen does have one twist which sets it apart from a traditional hypervisor (such as IBM's VM). Rather than develop and maintain a set of device drivers - a difficult and ongoing task given the wide range of equipment available - Xen sends all I/O via the hosted operating system in Domain 0, and Domain 0 gets near-direct access to the bare-metal devices.
KVM is not a hypervisor: Linux runs on the bare metal, and KVM is a Linux feature that makes it more efficient to run virtualised hosts atop Linux.
For the average Linux user who just wants to run up a VM alongside their desktop, KVM is the better choice. Install one module and some client software and you're running. The VM looks like any other Linux process and is managed like any other process (use "top" to see CPU use, "kill" to halt run-away VMs, etc.).
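To make that concrete: a running guest is typically a single qemu-kvm process, so ordinary process tools apply. A sketch using a `sleep` process as a stand-in for a guest (so you can try it without KVM at all):

```shell
# Start a long-running process standing in for a qemu-kvm guest
sleep 300 &
VM_PID=$!

# Inspect it as top would: PID, cumulative CPU time, resident memory, command
ps -o pid,cputime,rss,comm -p "$VM_PID"

# Halt a run-away "guest" exactly like any other process
kill "$VM_PID"
```

With a real guest you'd look for the qemu-kvm (or qemu-system-x86_64) process instead of `sleep`, but the commands are identical.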
The alternative for desktop use is VMware, but that fine product is in practice a nightmare in a desktop environment, because a huge informally-maintained patch set that tracks recent kernel changes is needed to bridge the gap between supported releases and what you might actually be running. KVM gains significant usability from being shipped with the Linux kernel, which removes the need to deal with source code.
The choice is much less straightforward in server environments, where installing a hypervisor is much less of a chore than on a desktop. VMware remains the tool of choice there, but Xen is close. KVM has a lot of potential, but is too raw for use at the moment.
My own feeling is that hypervisors are a hack: a replacement OS for when the OS is too deficient to provide enough services and performance for VM hosting. This made sense for IBM when it invented virtualisation - no one imagines that MVS would have made a good OS for providing hosting services, so a new OS was needed. It seems fair to differentiate that OS from IBM's general-purpose OSs with the name "hypervisor".
But there's no need for modern operating systems to be deficient at providing virtualisation services in the first place. And running under an OS buys a lot of tools for free (as a trivial example, process accounting to allow billing for the use of the VM). So I expect that the approach of KVM will prove to be superior in the long run.
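To illustrate the accounting point: because each guest is one process, its CPU consumption is already tracked by the kernel in /proc, with no hypervisor-specific tooling. Again using a stand-in process (field numbers per the proc(5) man page):

```shell
# Stand-in for a guest process
sleep 30 &
PID=$!

# Fields 14 (utime) and 15 (stime) of /proc/PID/stat hold user and system
# CPU time in clock ticks: the raw data a billing system would sample
awk '{printf "utime=%s stime=%s ticks\n", $14, $15}' "/proc/$PID/stat"

# Ticks per second, for converting ticks to seconds (usually 100)
getconf CLK_TCK

kill "$PID"
```

Sample those two fields periodically per guest and you have a usage meter for free, courtesy of running under a full OS.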
Let's see the documented benchmark then .....
Claim: max of 35 VMs... Patently untrue - I have personally seen ESX systems running in excess of 70 VMs.
People like this should be asked by the journo (if he/she is at all professional) to produce their validated test criteria, benchmarking method, and results - without this it's worthless posturing in the hope that some hack rag (i.e. the Register) will pick up on it and write something about it.
Move along .... nothing to see here ... as usual.
I agree with Matt and Robin. We do see higher consolidation ratios, but it depends on the requirement. 30+ instances may be ideal for Dev/Test/UAT environments, but for production?? At the end of the day it depends on the app. One customer of ours reported that in the Dev/Test/UAT area they saw significant savings due to the higher ratios available; when they looked at production, the returns dropped to single-digit percentages due to a) lower consolidation ratios and b) all the other 'stuff' needed to run the virtual estate. Plus, with analysts quoting the 'no more than 40% of apps can be virtualised' rule, physical is still needed. The real complexity comes when you have to manage physical and virtual together, which is what all customers have to do.
@Glen, Hypervisors a hack?
There's one reason why hypervisors might be a good idea: security. I'm talking in the abstract here, rather than w.r.t. any particular product. It ought to be possible to design a hypervisor such that it's not accessible to a remote hacker - a secure "ring -1". Such a hypervisor would not be made accessible to the internet. It would provide network access for its client VMs through one network interface, from which it would itself never accept any packets. It would offer its control services on a separate interface, which the organisation using it would keep completely separate from its main corporate network and the internet. If the virtualisation is perfect, hacking the hypervisor from inside a VM is impossible. There are parallels to a well-designed SCADA (plant automation) system.
Another form of hypervisor security is to run it from read-only memory or disk, so the only way to change or subvert it permanently would be to physically replace its media. (I'm guessing a Linux-based KVM host might happily boot and run off a CD?)
Obviously in an Enterprise situation, us admins like tools that make our lives easier. While Virtual Center isn't perfect, it does a pretty good job and continues to get better. Citrix XenServer obviously has the Citrix management framework wrapped around it and does a good job too. What kind of Enterprise-level tools does KVM have? Also, what about VMotion capabilities?
You won't see any vendor or customer provide a head-to-head comparison of VMware to Hyper-V or Xen (or anything else). There's a clause in the VMware licensing agreement that prohibits any disclosure of performance data without VMware's approval.
re: Hypervisor benchmarks
To be clear, Colonel, the reason VMware doesn't like people disclosing performance data is that it likes to work closely with independent testers to ensure "like for like" comparisons, rather than have people compare apples and oranges...
It's certainly not because it's running scared of some young upstart like KVM or Xen or.. no, I can't even take Hyper-V seriously enough to joke about it...
P.S. Keep frying the chicken...