@Anonymous Coward 11:33
Wow, you either are, or work with, some really crap developers. We do rendering in virtual environments. We take a system and run every single core right to the wall. We've got somewhere between 60 and 75 people company-wide, and about 25 virtual servers running almost 100 VMs. About half of those VMs spend their time doing some really heavy lifting.
A typical workload looks something like this:
Sample VM 1:
- Poll a folder on the FTP server to see if there are any new files
- Decrypt multi-gigabyte zip file
- Unzip the zip file and pop the various bits on a file server
- Generate and print giant HTML production sheet
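The steps in that first workload can be sketched as a single poll-and-process loop. This is only an illustration of the shape of the job, not our actual software; all the paths and the `process_drop_folder` name are hypothetical, and the password handling assumes plain ZipCrypto-encrypted archives that Python's stdlib can open:

```python
import zipfile
from pathlib import Path

def process_drop_folder(drop_dir, dest_dir, sheet_path, password=None):
    """Poll the FTP drop folder for new zips, extract each one onto the
    file server share, and write a simple HTML production sheet.
    Hypothetical sketch of the workflow described above."""
    drop_dir, dest_dir = Path(drop_dir), Path(dest_dir)
    rows = []
    for zpath in sorted(drop_dir.glob("*.zip")):
        with zipfile.ZipFile(zpath) as zf:
            if password:
                # stdlib handles legacy ZipCrypto; AES zips need a 3rd-party lib
                zf.setpassword(password.encode())
            zf.extractall(dest_dir / zpath.stem)
        rows.append(f"<tr><td>{zpath.name}</td><td>done</td></tr>")
        zpath.unlink()  # remove so the next poll doesn't reprocess it
    html = "<table>" + "".join(rows) + "</table>"
    Path(sheet_path).write_text(html)
    return len(rows)  # number of jobs handled this pass
```

In the real system this would run on a timer and the sheet would go straight to a printer; the sketch just shows why one small VM per input stream is enough.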
Sample VM 2:
- Read instructions from Database for a given job
- Render various image transforms on every image
- Move images to giant Buick-sized printer
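The second workload follows the same pattern: fetch the instruction row, run the transform chain over every image, and hand the results to the printer. Again a hypothetical sketch, using sqlite3 as a stand-in database and a made-up `jobs(id, steps)` schema; the real software and schema are whatever the vendor shipped:

```python
import sqlite3
from pathlib import Path

def run_render_job(db_path, job_id, image_dir, spool_dir, transforms):
    """Read the instruction row for one job, apply each named transform
    to every image in the job folder, and drop the results into the
    printer's spool directory. Hypothetical sketch of the workflow."""
    con = sqlite3.connect(db_path)
    (steps,) = con.execute(
        "SELECT steps FROM jobs WHERE id = ?", (job_id,)).fetchone()
    con.close()
    spool = Path(spool_dir)
    spool.mkdir(parents=True, exist_ok=True)
    done = 0
    for img in sorted(Path(image_dir).iterdir()):
        data = img.read_bytes()
        for name in steps.split(","):  # e.g. "rotate90,sharpen"
            data = transforms[name](data)  # transforms: name -> bytes -> bytes
        (spool / img.name).write_bytes(data)  # printer daemon picks these up
        done += 1
    return done  # images rendered for this job
```

Each printer and job type gets its own VM running a loop like this, which is exactly why the "one physical box per printer" requirement would be unmanageable in hardware.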
We have VMs executing variations on this theme all over the place. If we had to use physical boxen to do these tasks it would be unmanageable. The crappy developers that write our software require that we use a different system for every giant printer we print to, as well as a different system for each type of job we accept. We don't have the physical space to house that many units.
Virtualisation has been an absolute lifesaver for us. Our CTO is dead-set against virtualisation...absolutely hates it with a passion. Because of this we had to throw dozens of different types of tests at the virtual environments to prove they were as good as having real physical boxes do the same job. Shockingly enough, running ESXi, these systems are within about 2% of native metal for the same job. (And I can drive utilisation of the physical servers way higher, which saves us big time, budget-wise.)
Now we have VDI environments for everyone instead of running things on people's local desktops, and other than file servers every single server environment we run has been virtualised.
You want a complaint about virtualisation, then complain about the obscene fees charged for anything but the hobo-class management tools. As a company, we’re way too small to licence the tier of the VMware stack that actually lets us do COOL things with our servers. By our calculations it would cost us in the neighbourhood of $100,000. I just re-did my entire server fleet (including UPSes and refurbishing older servers into workstation-class desktops) for about that amount. There is absolutely no way in hell we could ever sell $100,000 in management tools to the brass. So instead, we are left dragging VMs around using the crappy (and capped at 20Mbit because VMware are [censored] [censored] [censored]) VI client for everything. If there is a complaint to be made against virtualisation it is not in capability, but in the costs of the tools required to really make it shine.
That said, I’ve had my fill of metal systems. I did that for long enough to never want to do it again. Keep an OS whose configuration actually matters on a bare-metal system? Never. Data goes on a RAIDed storage server with proper backups, and applications with valuable configurations go into a VM. Recovery from failure is just too important to leave to the astonishingly abysmal quality of modern hardware.
Speaking of which, I have to go RMA 3 disks, a motherboard and a half-dozen sticks of RAM. [Censored] [Censored] [Censored] [Censored] [Censored] <-- carrier lost -->