There are plenty of reasons why you might choose to virtualise UNIX (or any other OS) rather than co-host applications. Firstly there is the issue of housekeeping and patch management - anybody who has worked on a co-hosted, service-critical environment will know the problems of co-ordinating outages for things such as patch management, OS upgrades or configuration changes that can only be done by bouncing the machine. Yes, there are times when the hypervisor needs such treatment, but with the ability to move VMs dynamically, that downtime can be reduced.
Then there is the issue of isolating problems. Unix is actually not very good at that - a badly behaved application can bring the whole machine down. Anything from filling up swap to forking too many processes can, even if it doesn't crash the machine, bring throughput to a virtual crawl. Anybody who knows IBM mainframes will know that they have far tighter workload-management controls, but that's a far more rigid and less fluid environment. UNIX, Linux and so on are simply not like that - it's a strength and a weakness at the same time. Doing your workload management at the hypervisor level can greatly limit the impact of badly behaved applications.

Then there is the convenience factor of facilitating consolidation - yes, you might consolidate a dozen Unix apps onto a single co-hosted environment, but then you've got to sort out all those version and library differences, the clashing naming standards, the shared configuration files, the kernel settings. It's often more trouble than it is worth.
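Unix does offer per-process guard rails, of course - they're just coarse compared with hypervisor-level controls. A minimal sketch (Linux/bash assumed; the limit values and the launcher script are illustrative, not recommendations):

```shell
# Cap a session so a runaway app cannot fork-bomb the box or
# swallow all the swap. Values below are purely illustrative.
ulimit -u 100        # at most 100 processes/threads for this user
ulimit -v 1048576    # cap each process at ~1 GiB of virtual memory

# ./start-app.sh is a hypothetical launcher; anything it spawns
# inherits the limits above and fails rather than starving the host.
```

The trouble is that these are per-user, per-session blunt instruments - which is exactly why pushing workload management down to the hypervisor is attractive.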
Also, virtual machines work particularly well in development and test environments. VMs can be bounced, libraries changed and the like without impacting everybody else.
Now this is not to say that co-hosting doesn't make a lot of sense. However, I'm much less convinced about co-hosting unrelated applications. For larger organisations it often makes sense to develop consolidation strategies that allow you to present "appliance-like" services. It's perfectly feasible to produce farms for co-hosting databases, J2EE environments or web services. In that case you have virtualisation at a different level - that of software services. It's much more efficient in hardware-utilisation terms than having a VM per application instance, and you get an environment that is optimised for running a given type of workload.
Eventually it seems likely that everything will be virtualised by default - look at the mainframe arena. However, that's not instead of co-hosting, it's as well as.
As for my main problems with (machine) virtualisation: firstly there's configuration management, especially insofar as it affects software licensing, management, performance and capacity planning. If you are going to move your apps all over an ESX farm you had better have a way of dealing with all those issues (and the software licensing one can really bite you in the backside - there are plenty of companies out there that will seek to optimise their revenue through licensing models that don't recognise the reality of virtualisation).

Then there is the support problem - I've lost count of the number of suppliers that don't support virtualised environments. Some of it is FUD, some of it is real.

Then there is the issue that machine virtualisation can be inefficient and ingrain bad practice. One thing that VMs do is chew up memory and disk space, as every OS carries a considerable overhead of both. One of the major problems is that none of the OSs you are likely to run on VMware and the like will share code pages. For those that have used CMS on VM, that was specifically engineered so that different instances of the guest OS would share code pages. Not much chance of that with Windows or Linux (unless things have changed; IBM gave up on doing the same with Linux under VM, but if it were possible it would save huge amounts of memory).
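To put rough numbers on that duplicated-OS overhead - the per-guest figures below are illustrative assumptions, not measurements from any particular hypervisor:

```shell
# Back-of-envelope: unshared guest-OS overhead across a small VM farm.
# Per-guest figures are assumptions for illustration only.
VMS=20
OS_RAM_MB=512    # assumed resident memory per guest OS
OS_DISK_GB=4     # assumed disk footprint per guest OS

echo "RAM spent on duplicate OS images:  $(( VMS * OS_RAM_MB / 1024 )) GiB"
echo "Disk spent on duplicate OS images: $(( VMS * OS_DISK_GB )) GiB"
```

On those assumptions, twenty modest guests burn around 10 GiB of memory and 80 GiB of disk on nothing but copies of the operating system - which is exactly the sort of waste that CMS-style page sharing was engineered to claw back.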