Xen is a PITA. KVM is a no-brainer.
"And now? Bye, bye Xen... we meant KVM?? Oh really.... Red Hat needs to learn that switching horses all of the time causes grief for their customers. With that said, I think KVM is likely the Linux developed FOSS virtualization of choice for the future. But who knows what Red Hat will say next year..."
It's worth getting rid of Xen.
And why? One word:
KVM is fantastic and is included in all relatively new kernels by default. It's hilarious because even though you may see a fight between SUSE (with Xen) and Red Hat... SUSE already supports KVM. So does Debian, so does everybody and their mom if they use a Linux kernel newer than 2.6.20 or so.
So it takes pretty much zero effort to get KVM implemented in Linux, because it's there by default. Red Hat saying they support KVM is about as difficult for them as announcing that they support the PCI bus.
KVM, basically, turns the Linux kernel itself into a hypervisor. So instead of having to have an extra 'management console' with specialized commands for managing processes and starting and stopping VMs... you can just use an X terminal. Want to 'pause' a VM? Hit Ctrl-Z! It's that simple. You can use top for monitoring VM performance. The VM runs just like any other application.
That's easy usability.
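To make that concrete, here's a sketch of treating a guest as an ordinary process. It assumes the KVM modules are loaded and a disk image already exists; the binary may be called qemu-kvm, kvm, or qemu-system-x86_64 with -enable-kvm depending on your distro, and the image name here is made up.

```shell
qemu-kvm -m 512 -hda guest.img &   # the guest is just a background process
VMPID=$!

kill -STOP $VMPID   # "pause" the VM like you would any process
kill -CONT $VMPID   # and resume it

top -p $VMPID       # watch its CPU and memory with ordinary tools
```

No special management stack involved; the standard process tools are the management stack.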
It uses Qemu for the userland portion right now, but it can use pretty much anything. KVM kernel support is supposed to be fairly generic.
How many hypervisors do you know that support iSCSI? How about Fibre Channel, or a regular old drive image on NFS? It has logical volume support, SATA support, PATA support, SCSI support. You can use pretty much any NIC ever invented. KVM gets proven support for all of that, because it inherits all the hardware support, networking, and storage management features of Linux by default. All of it proven and being widely used in all sorts of things... _right_now_.
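A few hedged sketches of what "it's just Linux storage" means in practice; the volume names, paths, IQN, and addresses below are all made up, and the resulting device node for the iSCSI LUN will vary by host.

```shell
# A raw LVM logical volume as the guest disk:
qemu-kvm -m 512 -hda /dev/vg0/guest1

# A plain disk image sitting on an NFS mount:
qemu-kvm -m 512 -hda /mnt/nfs/images/guest1.img

# An iSCSI LUN, logged into with the stock open-iscsi tools, then
# handed to the guest like any other block device:
iscsiadm -m node -T iqn.2008-01.com.example:storage.guest1 \
    -p 192.168.1.50 --login
qemu-kvm -m 512 -hda /dev/sdb
```

In every case the hypervisor-side work is done by ordinary Linux subsystems, not by anything KVM-specific.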
And on top of having the same level of friendliness as Qemu or VirtualBox (minus the fancy GUI), it supports all the 'killer' enterprise-ish features.
It has paravirtualized drivers for Windows and Linux, for high performance I/O. Without PV drivers, for example, it's basically impossible to get better than 100Mb/s speeds out of a fully virtualized guest. With PV network drivers for Linux and Windows, exceeding 1Gb/s performance is quite possible without any special hardware support.
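The PV transport here is virtio. A sketch of asking for a paravirtualized NIC instead of an emulated one (the MAC address and tap interface name are made up; the guest needs a virtio driver, which is built into recent Linux kernels and is a separate download for Windows):

```shell
qemu-kvm -m 512 -hda guest.img \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0
```

Swap `model=virtio` for `model=e1000` or similar and you're back to full emulation, which is what costs you the throughput.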
It has the ability to do snapshots, to hotplug storage and memory. It has USB 1.1 support.
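For instance, a sketch of snapshots using qemu-img and the qemu monitor (the `(qemu)` prompt you reach with Ctrl-Alt-2 in the guest window); the image and snapshot names are made up:

```shell
# A copy-on-write clone layered on top of a base image:
qemu-img create -f qcow2 -b base.img guest1.qcow2

# Then, in the running guest's monitor:
#   (qemu) savevm before-upgrade   # snapshot the live guest
#   (qemu) loadvm before-upgrade   # roll it back later
```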
It can do live migrations from host to host with virtually no downtime.
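A sketch of how that looks, assuming two hosts that can both reach the same disk image (say, over NFS); the hostname, path, and port are made up:

```shell
# On the destination host, start a receiver for the incoming guest:
qemu-kvm -m 512 -hda /mnt/nfs/images/guest1.img -incoming tcp:0:4444

# On the source host, in the running guest's monitor:
#   (qemu) migrate tcp:desthost:4444
```

The guest's memory is streamed across while it keeps running, with only a brief blip at the final switchover.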
Lots of performance tweaks and optimizations.
Basically there is no reason why it can't compete on a feature-for-feature and performance basis with fully-virtualized guests on Xen or VMware ESX. (Of course KVM, being relatively young, needs to play some catch-up. But this is no VirtualBox, where it's going to stay desktop oriented; it can do that AND compete with the big hypervisors.)
So on and so forth.
So supporting KVM is a no-brainer for Red Hat. It's better for Red Hat, it's better for their customers. No need to install any extra software (beyond the modified qemu userland), no need to futz around with trying to shove a hypervisor underneath a perfectly working OS. No need to deal with Xen and its PITA ways. It's there, it's always there, and it requires no extra effort by the end user besides firing up the VM and installing their favorite OS.
(well some networking hackery must be done, but it's pretty minor.)
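For the record, the "hackery" is usually just the standard bridge setup so the guest's tap device shares the host's LAN. A sketch, to be run as root; the interface names and address are common defaults here, not gospel:

```shell
brctl addbr br0           # create a bridge
brctl addif br0 eth0      # move the physical NIC onto it
ifconfig br0 192.168.1.10 netmask 255.255.255.0 up

tunctl -t tap0            # create a tap device for the guest
brctl addif br0 tap0
ifconfig tap0 up

qemu-kvm -m 512 -hda guest.img -net nic -net tap,ifname=tap0,script=no
```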
This way virtualization becomes an integrated feature of the OS. It'll be as common as an install of openoffice.org on a Linux machine, and just as exciting.