A study (pdf) we undertook in 2008 with the help of the Register readership confirmed what you all knew to be true: contrary to the hype, IT wasn’t in fact broken/on-fire/rubbish; it was actually doing OK. However, those working in the field happily acknowledged that things could be better. The burden that IT works under …
Load balancing VMs across your server estate is something that occupies a lot more time than I would have thought when I started using virtualisation. Similarly, moving VMs from node to node takes quite a bit of time, depending on configuration. (Admittedly, they are ESXi nodes with local storage, and no management software.)
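Without DRS-style tooling, that balancing act comes down to hand arithmetic: which host has headroom for which VM. A minimal sketch of the idea, purely illustrative (the VM and host names and the memory-only model are my assumptions, not anything from the comment):

```python
def place_vms(vms, hosts):
    """Greedy placement: vms maps name -> memory (MB), hosts maps
    name -> capacity (MB). Largest VMs are placed first, each onto
    the host with the most free memory remaining."""
    free = dict(hosts)
    placement = {h: [] for h in hosts}
    for vm, mem in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)  # host with most free memory
        if free[host] < mem:
            raise RuntimeError(f"no host can take {vm} ({mem} MB)")
        free[host] -= mem
        placement[host].append(vm)
    return placement

# Hypothetical estate: two 16 GB ESXi hosts, four VMs.
example = place_vms(
    {"web1": 4096, "web2": 4096, "db1": 8192, "util": 1024},
    {"esx1": 16384, "esx2": 16384},
)
```

Real placement also has to weigh CPU, storage locality and failover headroom, which is exactly why it eats so much time done by hand.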
One thing I have learned now that I am three years into production use of virtualisation (including a full VDI deployment) is that virtualisation is still for the Big Boys with the Big Toys. The more I work with it, though, the more I realise just how much easier this would all be if we could only afford all that stupendously expensive management software. Without the right tools, the net result can just as easily be “more stuff to deal with.” We’ve managed to overcome this with some very strict procedures and a *lot* of scripts, but someone just starting out would not necessarily see a reduction in maintenance overhead. With the real management tools to accompany it, I think virtualisation can be a phenomenal time saver.
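The "lot of scripts" approach on bare ESXi usually starts with scraping the host's own CLI. A hedged sketch of one such glue script, assuming `vim-cmd vmsvc/getallvms` output of the usual columnar shape (the sample text below is invented, not from a real host):

```python
import re

# Invented sample mimicking `vim-cmd vmsvc/getallvms` output on ESXi.
SAMPLE = """\
Vmid   Name    File                          Guest OS             Version   Annotation
1      web1    [datastore1] web1/web1.vmx    rhel5_64Guest        vmx-07
4      mail    [datastore1] mail/mail.vmx    winNetStandardGuest  vmx-07
"""

# vmid, name, "[datastore] path.vmx", guest OS identifier
LINE = re.compile(r"^(\d+)\s+(\S+)\s+(\[\S+\]\s+\S+)\s+(\S+)")

def parse_getallvms(text):
    """Turn the columnar listing into a list of dicts; header and
    malformed lines are skipped because they don't start with a vmid."""
    vms = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            vms.append({"vmid": int(m.group(1)), "name": m.group(2),
                        "vmx": m.group(3), "guest": m.group(4)})
    return vms
```

From there, an inventory like this can drive rebuild, migration and documentation scripts, which is roughly the procedure-plus-scripts regime the comment describes.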
Sadly, the reality of virtualisation is that in order to extract real benefits you must know an awful lot about it going in. You have to plan your network and your disaster recovery procedures around it, and you absolutely must test everything extensively. Smaller shops simply don’t have access to these kinds of resources, and even medium-sized outfits struggle to realise any benefits.
The barrier to entry on production virtualisation deployments really is the planning and management overhead, and it is way too high for all but the most dedicated or the well-heeled.
(Or in cases like my own, the desperate. Not enough space or cooling to have everything be physical servers. Not enough money to build a real datacenter, not enough money to afford management tools either. You learn an awful lot about virtualisation through trial-and-error, as well as coming up with your own very extensive set of “best practice” documentation for every conceivable issue.)
For me, I mostly use the same tools for VMs as for real hardware. I have been using cfengine for many years to manage distributed Linux environments, and I just add support to my configuration to detect the presence of VMware and set policies accordingly. I use the same Nagios, and my highly tuned Cacti (the result of a few man-years of development), to monitor everything. I've had it on my TODO list for a while now to write something to parse esxtop output and pipe it into graphs, but haven't had a chance to do it yet. I know it's preferable to watch the hypervisor rather than the guest, but so far watching the guest works well enough. My Cacti collects more than 25 million data points a day, and my cfengine configuration is nearly 20,000 lines of code (not counting GBs of data that are flat-out copied to systems).
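That esxtop TODO is fairly tractable: esxtop's batch mode (`esxtop -b`) emits perfmon-style CSV, one quoted column per counter, which a few lines can reduce to a time series for graphing. A sketch under that assumption; the sample rows and counter names below are invented:

```python
import csv
import io

# Invented sample in the perfmon-style CSV shape that `esxtop -b` emits:
# first column is the timestamp, remaining columns are counters.
SAMPLE = '''"(PDH-CSV 4.0) (UTC)(0)","\\\\esx1\\Physical Cpu(_Total)\\% Util Time","\\\\esx1\\Memory\\Free MBytes"
"04/15/2014 10:00:00","12.5","20480"
"04/15/2014 10:00:05","14.1","20410"
'''

def counter_series(text, counter_substring):
    """Return [(timestamp, value), ...] for the first counter column
    whose header contains counter_substring."""
    rows = list(csv.reader(io.StringIO(text)))
    header = rows[0]
    col = next(i for i, h in enumerate(header) if counter_substring in h)
    return [(r[0], float(r[col])) for r in rows[1:]]
```

The resulting pairs are exactly what an RRD- or Cacti-style grapher wants as input.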
At a couple of different events I've come across companies specializing in VM management, but it's rare that they can extend that expertise to physical systems as well; when I ask, their faces just go blank.
I build systems pretty much the same way whether they are VMs or physical: kickstart (again, the result of a lot of tuning work over the years). And whether it is physical or virtual, I try my best NOT to have any critical data on the local disks of the systems; everything is either on a SAN (RDM/iSCSI/FC) or a NAS. If a VM totally blows up I can rebuild it in a matter of minutes, and a VM blowing up is a rare occasion anyway (it happened once in the past year, when I upgraded the VM hardware and then figured out I had to downgrade it again).
Also simplifies backup and replication. I can almost count the number of times I've done snapshots this year in VMware on one hand.
If you are mostly building something yourself, you can extend an existing "physical" management system to deal with virtual machines, or patch together something new out of a group of small/cheap/specific apps. Done correctly it's entirely possible to treat VMs about the same as physical boxes. Takes a lot of customisation and/or planning though. Mostly over that hurdle, thankfully.
Still… vMotion, live backup and such toys would have been nice.