One of the dirty little secrets of virtualisation is the performance cost: operating systems running inside a virtual machine are slower than those running natively on the same hardware, sometimes by quite some margin. This is termed virtualisation overhead, and with current whole-system virtualisation, it's a given. It always …
Native performance is here today...
In Linux it's called containers: OpenVZ has been around for a while, and the newer LXC is becoming quite popular now. The down-side? A single kernel runs all those containers, rather than a hypervisor running individual "PCs" each with its own kernel. That means you can't run two different operating systems, although with care you can run two different Linux distributions, as long as both are happy with the same kernel.
This technique isn't new either: Solaris and FreeBSD have had similar features (Zones and Jails) for a long time. LXC, while part of the stable Linux kernel, isn't perfect yet, but it's good enough for me to use in production in the data centre: 100% native performance in each container, with none of the virtualisation wastage.
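To make the shared-kernel point concrete, here's a minimal sketch of spinning up an LXC container and checking which kernel it sees. It assumes the LXC userspace tools are installed and you have root; the container name "web1" and the Debian release are illustrative, and the script falls back to a message if the tools aren't present.

```shell
#!/bin/sh
# Sketch: create and start an LXC container, then show that it runs
# on the host's kernel (assumes lxc tools and root; names are examples).
if command -v lxc-create >/dev/null 2>&1; then
    # Fetch a minimal Debian rootfs via the "download" template
    lxc-create -n web1 -t download -- -d debian -r bookworm -a amd64
    lxc-start -n web1

    # Containers don't boot their own kernel: uname inside the
    # container reports the same release as uname on the host.
    lxc-attach -n web1 -- uname -r
    uname -r
else
    echo "lxc userspace tools not installed"
fi
```

If the two `uname -r` lines differ, something other than plain containers is in play; that identical kernel is exactly why you can mix distributions but not operating systems.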
I'd take issue with the first sentence. Windows XP running under VirtualBox on an Ubuntu host performs better than it does on bare metal. I surmise that Linux's ability to use more RAM, and generally to manage resources better, gives XP the equivalent of a cache in front of every hardware access.
Until you try to access the video card from the VM....hacked drivers at best.