VMware was founded in 1998, and until the launch of its eponymous product the following year, the PC’s x86 architecture had been considered impossible to fully virtualise. Since then, although VMware continues to prosper, prices of virtualisation tools have fallen to an all-time low – in fact, most hypervisors are free, …
As my old CS professor used to say - "you cannot have it either way"
First and foremost, mainframe virtualisation and x86 virtualisation have one fundamental difference: mainframe virtualisation is all about resource and application management. x86 virtualisation is about that too, but also about security. To maintain the security features, virtualisation has to keep the extra layers in place.
Second, x86 IP networking is traditionally considerably more complex than the mainframe's. You have IPsec, tunnelling technologies, etc. in use. If you take the extra layers out, you end up breaking the isolation between the images. There is a rather long list of "does not work" network features on OpenVZ (and Virtuozzo), and that is for a reason.
x86 virtualisation technology is not just a reinvention of the mainframe business; it does a lot of other things. As a result, as my old CS professor used to say: "You cannot have it either way". His sayings were actually considerably more rude, but the meaning was pretty much the same.
Re: As my old CS professor used to say - "you cannot have it either way"
A quick search shows that LPAR on System z is EAL5 certified. VMware is only EAL4 certified, so I don't think there is any security advantage for x86.
IPsec and tunneling work fine with the mainframe. Too bad some x86 virtualization can't handle them.
x86 virtualization is still trying to catch up to the mainframe.
Cheap as Chips?
I don't know who came up with this expression, but have you been in a chip shop lately?
As the joke goes...
..when asked directions to the manor house, the village idiot thought for a while then said, "well, I would be starting from here".
Mainframe and UNIX virtualisation was designed into a well thought out system. The guest OSes were generally also of such a planned type.
As already pointed out, there are a lot of reasons for x86 VMs that have nothing to do with per-user tailoring. That, after all, should be something that generally 'just works' in the OS.
We have issues running older Windows and Linux versions due to security (we need the OS to run something legacy, but can't trust it on its own) and due to the loss of hardware support over time (hence the attraction of virtualised network cards, etc.).
For other reasons, though, things could be improved by a less complex stack, and VM tools that allow hot migration of a running machine from server to server, etc., offer great advantages in uptime. Except of course when those tools come with bugs...
re: "well, I would be starting from here".
I think you mean : "well, I wouldn't be starting from here".
(I'm not even sure this qualifies for the pendant icon; saying exactly the opposite of what you mean is so wrong that correcting it doesn't seem particularly pedantic.)
Good article, with a few omissions and oversights
It is good to see that finally somebody dares speak the truth about the overheads and inefficiency of full virtualization. While the sales brochures boast overheads of low single-digit percentages, anybody who has actually bothered to test this on a realistic workload will find that the overheads of full virtualization are in the region of 30-40%, and this applies across all PC virtualization products, be it VMware, Xen or KVM. But most people neither bother doing their own testing nor have enough understanding to apply optimizations on bare metal that become virtually impossible (no pun intended) when virtualization is used.
One thing overlooked in the article is that KVM and Xen have certain advantages in terms of overheads. KVM uses the core features already built into the kernel (e.g. the scheduler), whereas Xen and VMware bring their own. Xen, however, has the ability to do half-virtualization, where the guest doesn't run a kernel of its own but relies on the host kernel to run the container's processes. But apart from only being able to run the same guest OS as the host, this still involves a container, which comes with more overhead than chroot-style virtualization, a la OpenVZ (the free and open source project that Virtuozzo is based on), Linux Containers (LXC - not yet deemed stable, but it is in the mainline kernel), and Linux-VServer (which has a killer feature over OpenVZ and LXC - copy-on-write hard-link file unification, which reduces memory usage, page cache usage and disk space all at the same time).
VServer's copy-on-write hard-link file unification is pretty much the mother of all memory deduplication approaches. For a start, it's free - once you unify the files by hard-linking them, all the executables and shared libraries will implicitly mmap to the same memory (the page cache is keyed by inode). That means if you have 100 guests, you only have one instance of glibc in memory, rather than 100. This allows for some truly mind-boggling guest counts on a single host (hundreds). Best of all, there is no expensive run-time memory deduplication required, a la what VMware does or what KVM does using KSM (Kernel Samepage Merging) - it is all implicit. The savings in terms of disk space and caches (page cache, CPU cache) are a bonus on top.
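The mechanism behind that sharing can be observed directly: hard links are different names for the same inode, and the kernel caches file pages per inode, so every guest mapping a unified library reuses the same cached pages. A minimal Python sketch (the file names here are stand-ins, not real VServer paths):

```python
import os
import tempfile

# Two guests' copies of "the same" shared library, unified by hard-linking,
# as VServer's file unification does for identical files across guests.
workdir = tempfile.mkdtemp()
guest1_lib = os.path.join(workdir, "libc-guest1.so")
guest2_lib = os.path.join(workdir, "libc-guest2.so")

with open(guest1_lib, "wb") as f:
    f.write(b"\x7fELF...")  # stand-in content for a shared library

os.link(guest1_lib, guest2_lib)  # hard link: same inode, no extra data blocks

st1, st2 = os.stat(guest1_lib), os.stat(guest2_lib)
# Both names resolve to one inode, so the kernel's page cache (and any
# mmap of the file) is shared between them - the dedup costs nothing at
# run time, unlike scan-based schemes such as KSM.
print(st1.st_ino == st2.st_ino)  # True
print(st2.st_nlink)              # 2
```

The "copy-on-write" part is handled by VServer marking the unified files immutable; a guest that writes to one gets a private copy instead of modifying the shared inode.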
Bochs isn't a virtualization tool, it is an emulator and as such shouldn't be listed in the same group as the others. QEMU has the ability to do emulation, too, but it is also used as a front end for KVM (and KQEMU until recently), so it mostly earns itself a place among the virtualization tools.
Finally, switching between VM technologies is not particularly difficult. On UNIX OSes it is usually as simple as tarring the files across to a fresh container and re-installing the boot sector on the new VM.
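The tar-based move described above can be sketched with Python's `tarfile` module (the directory names are hypothetical, and the boot-sector step - only needed when the target is a full VM rather than a container - is platform-specific and not shown):

```python
import os
import tarfile

# Hypothetical paths: the old guest's root filesystem and the fresh
# container it is being migrated to.
src_root = "old_guest_rootfs"
dst_root = "new_container_rootfs"

# Fabricate a tiny guest filesystem so the sketch is self-contained.
os.makedirs(os.path.join(src_root, "etc"), exist_ok=True)
with open(os.path.join(src_root, "etc", "hostname"), "w") as f:
    f.write("guest01\n")

# Pack up the guest's files (roughly `tar -C old_guest_rootfs -cf guest.tar .`)...
with tarfile.open("guest.tar", "w") as tar:
    tar.add(src_root, arcname=".")

# ...and unpack them into the new container's root.
os.makedirs(dst_root, exist_ok=True)
with tarfile.open("guest.tar") as tar:
    tar.extractall(dst_root)

print(open(os.path.join(dst_root, "etc", "hostname")).read().strip())  # guest01
```

In practice you would run the tar as root to preserve ownership, device nodes and permissions, but the principle is just this: a container is, at rest, a directory tree.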
Quite impressed with KVM
Been using ESX since 2.5, XenServer since 4.0, and have dabbled in some others.
Recently I upgraded my laptop memory, so decided to try out KVM on Fedora 15.
The performance of Windows 2003 (the first VM tested) seems quite good so far.
All that is missing is the high-end management tools, but this is just for my laptop testing and not production, so no matter. I'd be quite interested in Red Hat's VM stack though.
This is why I use OpenVZ
This article raises several points I have made for a couple of years now about why I use OpenVZ almost exclusively. OpenVZ uses the main OS's kernel; it's not hardware virtualisation at all, but software. It's much akin to FreeBSD jails, but a bit more flexible and elegant.
You eliminate the emulation of hardware - network interfaces, disks, etc. You eliminate running a very similar kernel on every guest: the kernel is the only shared piece, and everything above it can differ while retaining binary compatibility with most x86 Linux applications. If you need to run something under a Debian system, you can use a RHEL kernel with all the Debian underpinnings in an OpenVZ container (or vice versa). Also, you can resize the space given to a virtual environment on the fly without having to resize filesystem images, and those files are visible from the host node's filesystem. This also has the benefit of eliminating double filesystem overhead, where you loop-mount an ext3/4 (or whatever) filesystem inside another filesystem.
This series needs the linkfarm for easy reference
1) Before 'the cloud' was cool: Virtualising the un-virtualisable
2) Before the PC: IBM invents virtualisation (14 July 2011)
3) Fun and games in userland (18 July 2011)
4) Virtualisation soaks up corporate IT love (21 July 2011)
5) Cheap as chips: The future of PC virtualisation (25 July 2011)
There's only one thing I'm waiting for now...
And that's a shared GPU. There are a lot of things you can do with virtual CPU and RAM (that aren't actually virtual), but if I could have the ability to share the GPU... that would rock. Obviously, I could do things like run a fully accelerated desktop inside a fully accelerated desktop - great for gaming - but it would mean I could also take advantage of the GPU for rendering, for instance in CAD. A remote client could then have the benefit of the GPU, without actually having one.
In the latest release of VirtualBox you can now use something called "pass-through". This means the VM gets full, exclusive access to the graphics card, which means you can run at full speed. This is highly experimental and only works on Linux hosts.
After a brief search, I turned up PCI pass-through - very intriguing! KVM supports it as well, it looks like. Highly experimental, as you say, and it seems there isn't a lot of hardware that fully supports it... but still. The ability to "virtualize" a GPU has some very interesting implications!
And once I get that working, I'll need to figure out how to get a client to use the GPU during a VNC session... fun times!
The Obvious That Needed To Be Stated
You can't get a quart out of a pint pot.
That is all our grandmothers would bother to tell the PC virtualisation consultant as they showed him the door.
Unfortunately, I don't doubt that it will need to be stated again and again, as this technology continues to be mis-sold to managers who will not think to refer the matter to their grannies first.
Congratulations on an excellent series of articles. It has taught me a lot. I had some gut feelings about it: it is nice to have confirmation of them.
As mentioned by Destroy above, when running a series like this, can you please give links to each part at the end, if not at the bottom of each page?
It would be valued.
(irrelevant afterthought: Is your editor nicknamed regedit? ;-) )