The feedback on virtualization experiences from those participating so far in our latest online workshop has been generally very positive. A main focus of the comments has been on server consolidation and the cost savings that come with it, and some of the results achieved seem pretty impressive: We're running about 15 VMs per …
Security Implications of Virtualization
I would be interested to know whether the security implications of heading towards a fully virtualised environment have been discussed as part of these workshops.
Cost and space savings need to be weighed up against any security implications that this may bring.
Threat Mapping and Risk Analysis – A broad threat-mapping exercise should be undertaken to assess the level of risk and the threats associated with virtualisation, specific to the environment and business market that you are in.
Consideration will be needed for patching and virus control in a centralised environment, as well as for the Denial of Service protections and recovery methods that will be required to manage a virtualised estate.
How will virtualisation affect industry best-practice advice on ‘segregation of administrative duties’? For example, virtualisation administrators may assume the role of traditional network engineers as more layer 2 devices become a virtualised commodity.
No single security model should be applied across all groups or zones; specific threat maps and associated controls need to be identified and should feed the creation of specific security zones. This would contain breaches within one zone and help protect against known attack types.
As an example, consideration of the following would be required:
Hypervisor breaches (of the virtual machine management system) and their ramifications; possibilities include:
Access to the restricted hardware layer resulting in data leakage.
Compromising other virtual machines controlled by the same hypervisor, in effect gaining unauthorised access to user data and systems. In a hosting environment this could result in the entire customer base facing a breach as a result of a successful attack on one instance.
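To make the "per-zone threat map" idea above concrete, here is a minimal sketch of how one might record zone-specific threats and controls as plain data. The zone names, threats, and controls are illustrative placeholders, not a prescribed model:

```python
# Illustrative threat map: each security zone carries its own threat
# list and matching controls, rather than one model for the whole estate.
# All names below are made-up examples for the sketch.
threat_map = {
    "hypervisor-mgmt": {
        "threats": ["hypervisor breach", "hardware-layer data leakage"],
        "controls": ["dedicated management network", "admin role separation"],
    },
    "hosting-tenants": {
        "threats": ["cross-VM compromise", "denial of service"],
        "controls": ["per-tenant zoning", "resource limits", "IDS monitoring"],
    },
}

def controls_for(threat):
    """Return the controls of every zone whose threat map lists `threat`."""
    return {
        zone: info["controls"]
        for zone, info in threat_map.items()
        if threat in info["threats"]
    }

print(controls_for("cross-VM compromise"))
```

Even something this simple forces the question the comment raises: which controls apply to which zone, instead of one blanket policy.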
As a developer virtualisation means I can run multiple platforms from my single desktop machine and switch between them all at the press of a button - no more rebooting to check if something works with config $Y.
It also makes kernelspace development very easy to test and debug sanely!
In my experience...
... it's a lot easier to manage, and sure, it takes up less space, but I honestly think it costs the same. There are no real cost savings to be had.
Saying that, the management is truly amazing: backup to disk and tape daily without downtime; I can power off a whole server room without it causing issues for users; hosts get patched weekly, again without downtime; and should a host ever die, the VM starts back up automatically on another host within about four minutes.
For DR and HA reasons alone it's worth the investment, let alone the management aspect. Knowing the resource level for 200 servers on a single screen or being three clicks away from taking a full image-level backup of the whole server is nothing to be sniffed at.
Loving the Solaris Zones
5 minute no-interaction install for a new environment (with Jumpstart/JET :) ).
Use bugger-all resources (7 Oracle and 7 SAP instances in separate VMs on one 64GB RAM box? Sure, why not.).
Flash-archive installs to move hardware OSes (even from ancient beige-box Solaris 8/9 hosts) to VMs without the applications noticing they've been moved.
No VM license issues.
Need a sly Wiki set up? Slap a zone on a development box and get your mod-php on.
Want to test some LDAP changes before rollout? Flash-archive a live box into a zone and see if everything works.
Happy days. I recommend it.
All about the DR
I'm exactly the same, it's all about DR for me, everything hops across, replicates and I can sleep at night. It's never been about consolidation and I can't think it will be for a while either.
I'm developing applications for Windows, Linux, the web and several mobile phone platforms, and I use VirtualBox VMs for each different combination of OS and SDK (mostly "#! Linux" instances). It keeps each instance reasonably clean, and stops the host (a 3GHz quad-core 8GB Windows 7 x64 box with a Samsung PB-22J system disk) from getting clogged up.
The biggest advantage is that all the VM disk files get backed up to an external hard drive and to Carbonite, so if anything goes pear-shaped, I can be up and running again on another machine in minutes.
Single points of failure / software costs?
Is it just me or are there potentially two critical points of failure with virtualised setups? One is the base server hardware itself - if that dies, instead of losing one client, you lose 10 or 20 in one go. It basically means you need at least *two* beefy base server boxes *and* a way to take regular snapshots that can be restored on either box. BTW, some virtualisation software I've seen causes large pauses when you take snapshots, which isn't too clever.
The second potential single point of failure is the NAS box that everyone seems to mention w.r.t. virtualised systems. What if that goes down? Potentially multiple base servers dead and dozens of sites inaccessible with no way to restore them either. Again, wouldn't you need two identical NAS boxes (and these things aren't cheap!) that are kept exactly in sync?
As for the virtualisation software, licenses can be very expensive for VMware and other commercial VM systems - the money you save on hardware can be partly swallowed up by the software costs. As people have said, if a large chunk of your clients use common software (same Web+scripting+SQL DB+CMS), it's more efficient and cheaper to put them on a single OS than spread them across multiple OS'es (virtualised or not) and in those cases, non-virtualisation wins out.
What I'd like to see is free, good virtualisation software come with server OS'es as standard - it's becoming prominent enough to be considered part of a server OS. Red Hat are making some strides with KVM (RHEL 5.4 and 6 might finally ship a usable version) - once virtualisation software costs drop to "zero" and you don't have to involve a third party to virtualise, you'll see a lot more companies consider it.
I'm surprised there was so little mention of testing. VMs allow you an easy way to create a test system and do whatever evil things to it you wish, but without risking any infrastructure. Test it in a VM, then deploy it when it's solid--or leave it in the VM, unless it needs a server to itself.
@ Single Point Of Failure
I'm sure at the high end there are a number of ways around this, but I'll let someone else cover that.
At the low end where I've been playing recently, there's a lot of attention around DRBD (http://www.drbd.org/) - essentially network RAID-1, so you can mirror your iSCSI volumes on two different NAS units.
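For anyone curious what that mirroring looks like, a DRBD resource definition pairing a block device on two nodes is along these lines. The hostnames, device paths and addresses here are placeholders - adjust to your own boxes and check the DRBD docs for your version:

```
resource r0 {
    protocol C;               # synchronous replication: writes land on both nodes
    on nas1 {                 # placeholder hostname for the first NAS unit
        device    /dev/drbd0;
        disk      /dev/sdb1;  # placeholder backing device
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nas2 {                 # placeholder hostname for the second NAS unit
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

You then export /dev/drbd0 over iSCSI rather than the raw disk, so either node can serve the volume if the other dies.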
We've been using Proxmox (http://www.proxmox.com) which, although not a live CD, is a distribution which installs to the local disk and fires up a nice management console on boot for managing KVM VMs. So theoretically if our main box drops, we can have a mirror up and running on spare hardware within a few minutes. Not exactly HA but good enough for us, and it's completely free.
It's not bad - currently needs a bit of tweaking for iSCSI and DRBD but so far the turnaround for new features has been pretty good, and they've stated they're already working on it.