A colleague of mine recently remarked that x86 virtualisation makes no sense to any organisation that is cost conscious. I am an early adopter of virtualisation and wanted to know what he meant. “When using virtualisation, you are paying for far more software licences than you would if you were to take the time to implement …
Is it me being thick
Is it me being thick, or do all of your "pros" read like a list of Windows-specific problems?
"x86 virtualisation" does the same sort of thing that user separation provided on, well, almost anything but the first and rather trivial time sharing systems all those many years ago. Or what mainframes offer. Or what chroots or jails or zones or containers or a host of other features found in many systems offer. But hey, now windows can benefit from it too, several decades after the rest figured it out, through (third-party) add-ons. Isn't that nice?
It is indeed of particular import in the "windows" racket because that thing just doesn't do very well as anything but eyecandy. And I'd say not even there, but let's not shed bikes. Other systems fail too, even if they fail far, far less often and less egregiously than this. Personally I would indeed say that if software falls over, as it does, and takes down the entire system with it, then that system isn't very good at all.
Yet if Trevor can't swap it out for something better --and sysadmins are frequently in this position even if they can do and do know better-- then this virtualisation thing comes in mighty handy for containing the damage while reducing the scrap heap of under-used and misused hardware a bit. This is Trevor describing what it does for him in his shop; he thinks it's serving him well. More power to him.
Let's not forget that there are other benefits to virtualisation, such as a server hosting Windows boxes by day but switching to Linux images for batch processing outside normal hours, etc.
Redundancy also makes no sense for any organization that is cost-conscious, by your friend's argument. Why spend that extra money on additional power supplies, cluster failover, etc. when you can just have everything in one box?
Back in the real world, properly-implemented virtualization allows for tremendous flexibility with less management overhead per physical device. It arguably requires greater up-front implementation costs and effort than a single box does, but it pays off in the long run for most shops.
As I've commented elsewhere, if you're not implementing virtualization, you're probably doing something wrong.
Makes sense for server virtualization
Makes sense for server virtualization. Desktop virtualization / VDI, however, isn't as easily justified. The licensing of the Microsoft OS (VDA subscriptions) and the hardware/storage costs required far exceed the cost of just providing a desktop PC to the user.
And don't forget the byzantine restrictions on the use of Office in a VDI environment! Microsoft's approach to VDI is the best advertisement for Linux I've ever seen. I have lost count of the number of companies that have decided that VDI is necessary to make their business more flexible, but won't pay what Microsoft is asking.
Especially the "you have to have both Office and Windows licences for each endpoint that connects to the virtual desktop" part. Considering these folks could be connecting to a single desktop from a dozen or a hundred different devices (personal computers, laptops, mobiles, tablets, hotel systems, friends’ houses, etc.), this is just insane.
The kind of money they would have spent on that crap, they spend retraining their staff for Libre Office with Zimbra or Gmail on CentOS/Mac. (And porting their documents. Some spreadsheets are a pig to port.) And they got rid of that god-awful ribbon bar! They’ve never been happier.
It’s funny, you know…10 years ago I was running Linux servers and Microsoft desktops everywhere. Today, I am running fleets of Windows servers front-ended by Linux, Mac and Android endpoints.
Bizarre how it all works out, eh?
Even for server virtualization it's not that easy. I've seen countless database servers virtualized on overcommitted infrastructures (both the physical servers and the SANs). The problem is that too many salesmen simply push customers to overcommit resources, and that's where the savings come in... however, there's still a huge difference between a single server that has 2 x 4Gb dedicated HBAs and the same HBAs shared between multiple virtual servers. It may or may not be enough, depending on the workload.
I would disagree. I have one customer who has 2,500 virtual desktops deployed at a cost of $623 per desktop. This includes everything, even the 10Gb network ports in the datacenter. That cost, combined with soft-cost savings and faster time to deployment for new applications, has massively reduced their overall support costs per desktop.
I'm not saying it's a solution for every problem but it is real and it's getting better every day.
Microsoft crony says
"A colleague of mine recently remarked that x86 virtualisation makes no sense to any organisation that is cost conscious."
Says someone who has never looked at or considered open-source software.
Cost of Microsoft Virtualisation
I hate to break this to you, but if you have more than 10 Windows Server licences' worth of servers to virtualise, Microsoft's Hyper-V virtualisation (and associated management tools) provides the lowest virtualisation cost on the market. (Unless you are prepared to go KVM/Xen without any management tools but your own shell scripts.)
RHEL's Enterprise KVM is the best virtualisation/management cost item if you are not hosting a large number of Windows Server licences, and you don’t need more than the basic functionality. (Which, let’s face it, most of us don’t need anything more than is found in KVM.)
But if you are in fact hosting a pile of Windows Server-based VMs, then Windows Server Datacenter is in fact far cheaper than any other alternative.
I say this as someone who has done the research on behalf of a number of companies that desperately need virtualisation, yet can’t even /dream/ of paying the kind of money VMware asks.
Every saved dollar is critical, but you have to look at the total cost. The cost of the virtualisation platform, the cost of the licences you need, and the cost of nerd-hours to keep the lights on. Microsoft are actually quite competitive in the server virtualisation racket.
VDI is another matter entirely, and I have nothing but seething, bubbling rage for Microsoft regarding desktop (and office) virtual licensing. It’s why my VDI is CentOS + XRDP + Libre Office. And the devil take the first user who cries “Windows.”
which is why
which is why I continue to shamelessly beg VMware to unlock vSphere Standard edition: give us at least the same vRAM allowance as vSphere Enterprise Plus, if not unlimited. I'll get along just fine with just HA + vMotion; the rest of the stuff is not important to me, but I'm forced into those higher tiers of licensing by arbitrary decisions on hardware scalability in the licensing.
Waiting for KVM to mature more... hoping that can be a solution. Though it will be hard to switch away from VMware after having used them for 12 years.
The author handily ignores the fact that patching and rebooting 100 virtual servers has a labor cost too.
The security argument is even more laughable, given the difficulty of tracking, monitoring and cleaning those servers should a Windows worm make it onto the network. Just tracking ownership of a fleet of servers that has no limiting factor on its growth is bad enough.
Patching and rebooting
Patching is a centralised operation. It's significantly easier in a virtualised environment because I have one application per operating system. I have to test whether that patch on that OS affects that application. I have no weird interactions between all these different applications to debug. One app per OS. Test and release centrally. That part is easy as pie.
Now, rebooting is again made easier by "one app per OS." Rebooting the OS reboots the infrastructure under ONE application. Just one! I don't tank the whole business with a single reboot, and I don't have to schedule reboots around 15 different departments. I call up the people who use the application in question and go "hey guys, I need to reboot the server for updates, mind if I do that tonight at 7:00pm?"
I get a yay/nay and move forward.
I can schedule and co-ordinate each application independently of the others, and that is a bloody GODSEND. You see, I work in a business where IT doesn't have the almighty word of God. We don't dictate when computers will be available. We work with the affected business units to ensure the best possible quality of service with the fewest possible interruptions.
That means worrying about things like downtime. It also means bearing in mind the real world, where we have telecommuting workers in the systems 24/7.
I cannot even conceive what it would take to coordinate a shutdown of the entire corporate infrastructure at any of the companies I oversee. A miracle, perhaps. Or six months' worth of proactive planning.
Virtualised and containerised environments make patching/rebooting EASIER. Yes there are more widgets to reboot, but you can do it without nearly as much angst or worry.
As to tracking and monitoring and securing a fleet of Windows servers, have you tried combinations of some or all of the following:
Windows Server Update Services
Microsoft System Center Suite
++squillions of others
If managing a fleet of servers - physical, virtual or otherwise - to know "are they up, are they patched, are they infected?" is a difficult chore for you, then you are doing it wrong. It's easy to do... and there are programs that let you do it for free.
Managing computers is EASY. Managing people (and budgets) is hard.
Windows worm? Can you not lock the running image against permanent changes so a simple restart brings back the fresh article? Pretty sure you can in VMware, especially so for VDIs that can all run off one base image.
"hey guys, I need to reboot the server for updates, mind if I do that tonight at 7:00pm?"
A proper BOFH would only ask if he knew the answer was "No! Don't do it! I have a job running and it won't finish until 7:15, so please, please, please wait until after 7:15!!" and only then would a proper BOFH proceed to schedule the reboot for 6:45. Then again, in my office the email telling everyone to log off for a reboot usually goes out roughly an hour after everyone has rebooted their own system, because the application they use most often hasn't been responding for 40 minutes.
I sense the spirit of Simon in you.
Has a dark side too: what happens when nasty bugs in the storage layer make you lose 50% of your infrastructure in bizarre ways?
I have seen it with Hyper-V and VMware too, although less with VMware (which recovers better, IMHO).
And besides, something I'm now really tired of explaining: forget running anything on a VM that requires heavy IO; it simply doesn't perform nearly as well as real iron. The only solution I have seen is to keep the VM's virtual HD on the local RAID controller, and that is asking for trouble.
Both excellent points. Two additional cons to virtualisation are network and memory bandwidth saturation.
Perhaps one should then buy some decent boxes or start to use some decent virtualization technology.
One of the real money savers of virtualization is reducing the number of ports and raising the utilization of those that remain. I mean, a few converged 10Gbit ports running at 30-50% average utilization is much, much better than several hundred SAN and 1Gbit network ports running at 5% average utilization at most.
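A back-of-the-envelope sketch of that consolidation point (the port counts and utilisation figures below are made up for illustration, not measurements):

```shell
# Bandwidth actually in use per host, before and after consolidation.
legacy_mbit=$(( 8 * 1000 * 5 / 100 ))        # 8 x 1Gbit ports at 5% avg use
converged_mbit=$(( 2 * 10000 * 40 / 100 ))   # 2 x 10Gbit ports at 40% avg use
echo "legacy: ${legacy_mbit} Mbit/s in use across 8 ports"
echo "converged: ${converged_mbit} Mbit/s in use across 2 ports"
```

Fewer ports, more of each port doing real work: that is where the cabling, switching and management savings come from.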
Where do you get $100K for Windows Server Licensing?
If you need to run Windows, you can buy Windows Server Datacenter edition for $2200 per socket and put as many VMs as you want on it. Buy a pair of HP DL380s w/dual Xeon X5690s, spend less than $10K on Windows Server, buy 48GB of RAM per server and run 40+ virtual Windows servers on the pair (assuming that these are web or application servers that use 5-10% CPU on average). You don't even need to use Hyper-V as Citrix XenServer is free with no memory or VM limits (vs. VMware pricing). XenServer's management XenCenter console is easy to use for a simple management and also free.
It's about $2500 Canadian per socket, for about $5000 a dual-socket server. Or about 20 physical servers. i.e. not that many boxes. Assuming you're talking Windows Server only, and not System Center, Exchange, etc...
Seems that the real problem is that...
...just about any maintenance on a Windows box REQUIRES a reboot. Time how long it takes between actual reboots and you have a reasonable metric. There have been reports of non-Windows boxes going over a year between reboots.
Of course, using a VM adds another layer, and that layer adds both complexity and overhead. This makes things less efficient, which adds cost. You pay your money, you take your chances. Your choice.
Having recently "done the maths" for a client, it is actually very difficult to say which is more cost effective. Our conclusion was that it was very much swings and roundabouts.
As mentioned above, virtual sprawl creates a different management overhead and you tend to shift the cost from hardware to licensing and support. After looking at power, rack space, network connectivity, FTE support bods, licenses, applications and many many more potential factors the TCO over 5 years was <£5,000 difference for ~100 VMs, or ~ 15 large shared boxes. Even 100 physical servers was only slightly more, but future energy costs could make that look less attractive.
Virtualisation is an option - it's not a cure-all. Vendors will always sell you the "benefits" of their product.
List of Windows specific problems
It seems that the entire article talks about virtualization being used as a band-aid to work around Windows-specific problems...
On Linux the licensing costs are far less. Depending on what you're running you *may* have to pay for applications, but you can install as many instances of the OS as you want, and application licenses typically cost the same whether you're cramming them all onto one box or have a dedicated box for each.
On the other hand, with Linux it is also typically much easier to install multiple applications on a single server, and updates which require reboots are far less common. Also an application restart is usually fairly quick and measured in seconds compared to a reboot which may take minutes. Installing multiple versions of Java is easy on Linux, and you can easily adjust the path for any application requiring a different version (some distros can do this automatically too), and there is always chroot too.
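For the multiple-Java point, the per-application selection trick can be sketched like this (the /tmp layout and stub "java" binaries are invented purely to make the example self-contained; a real box would point at actual JDK install directories, or use a distro's update-alternatives):

```shell
# Two stand-in "java" binaries that just report their version.
mkdir -p /tmp/jdk7/bin /tmp/jdk8/bin
printf '#!/bin/sh\necho 1.7\n' > /tmp/jdk7/bin/java
printf '#!/bin/sh\necho 1.8\n' > /tmp/jdk8/bin/java
chmod +x /tmp/jdk7/bin/java /tmp/jdk8/bin/java

# Each application picks its Java by prepending the right bin dir to
# PATH for that one command; nothing else on the box changes.
PATH=/tmp/jdk7/bin:$PATH java    # app A runs against 1.7
PATH=/tmp/jdk8/bin:$PATH java    # app B runs against 1.8
```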
As the article points out, Windows is also more vulnerable to malware, and isolating systems would make sense... What it doesn't point out, however, is that if there are a large number of Windows images running in a VM environment, chances are they will all be part of an Active Directory domain and have all been built from the same disk image. Once you get onto one, you can dump out password hashes (including those of logged-on domain users) and use them to attack other boxes... On Linux you actually have to crack the password hashes before you can use them (which you won't be able to do if the passwords are suitably strong), and unlike Windows, Linux typically does not allow direct remote logins to the root account.
A single application using all your system resources is also primarily a Windows problem; on Linux we have ulimit...
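A minimal illustration of that ulimit point (the 512 MB cap is arbitrary):

```shell
# Cap virtual memory for everything launched from this subshell; a
# runaway process then fails to allocate rather than starving the box.
(
  ulimit -v 524288            # value is in KB, i.e. ~512 MB
  echo "virtual memory cap: $(ulimit -v) KB"
  # any application started from here inherits the limit
)
```

The parentheses matter: the limit applies only inside the subshell, so the rest of the system is untouched. For persistent per-user limits you'd use /etc/security/limits.conf, or cgroups on modern distros.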
Depending on configuration, a program which is thrashing the disk on a virtualized setup can also impair performance of the other images running off the same storage.
Virtually all of the problems noted in the article can be alleviated by using open source instead of commercial software, whether you use virtualization or not.
"Windows is even more vulnerable."
What - even more vulnerable than *completely vulnerable*? Are you sure?
I thought we had got over that "M$ Windoze sux" c**p.
ALL operating systems suck. Looking at you too, 'droid.
Rollbacks can be another advantage - if you do discover some unforeseen glitch blows up the accounting package, you can roll back to the pre-patch snapshot in a minute, even if that glitch took out the virtual filesystem it was running on. I've seen (Win7) patches take the machine out to the point it wouldn't even reboot properly - and would spend over an hour fiddling with its own files each attempt. Then there's testing: fire up a clone of the virtual server, patch that in isolation first to be pretty sure it works, *then* put the patched version into production. Try that with physical hardware!
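That snapshot-then-patch workflow reduces to this pattern, sketched here with a plain file standing in for the virtual disk (real hypervisors snapshot at the block layer, but the sequence is the same):

```shell
# "Disk" in its known-good state before patching.
echo "working config" > /tmp/vm-disk.img
# Take a snapshot, then apply the patch... which breaks things.
cp /tmp/vm-disk.img /tmp/vm-disk.snap
echo "broken by patch" > /tmp/vm-disk.img
# Roll back: restore the snapshot in seconds, no rebuild needed.
cp /tmp/vm-disk.snap /tmp/vm-disk.img
cat /tmp/vm-disk.img
```

The clone-and-test variant is the same idea run forwards: copy the disk, patch the copy in isolation, and only promote it once it survives.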
I can't help but wonder...
... if it only is bearable to manage with just the one app per OS image...
then it's not worth much as an /operating system/, a system intended to manage resources presumably in the face of app contention and so on. Seems like a prime candidate for some hot small shell script love action.
<speculation type="wild">Heard via-via, years ago, of someone reimplementing the core of then-contemporary windows (probably 3.1, it was a long time ago) or at least enough of it to run various programs, in but a few kB of FORTH. Extrapolating from hazy memory, a couple megs instead of the "official" several GB and a massively reduced memory footprint to stuff yet more apps in that virtual box. Maybe wine will fit the bill if not as spectacularly. litepc comes to mind, though that's merely clever scripting and kicking out useless bits. That in itself is useful because it reduces the maintenance profile already. </>
But really, while this sort of shenanigan is really interesting and all, there needs to be more pressure on vendors to provide apps that (too) run on less broken platforms. Do run windows servers if you must, but do complain to vendors too, as a service to humanity.
The Microsoft Partner Program we're on states that for the physical Server 2008 box, we're entitled to run 4 virtual instances of that server. Multiply that by the 2x licenses we get for Server 2008 Enterprise and all of a sudden we can run 8 virtual machines on two servers with two Server 2008 licenses.
Then we also get a license for Server 2008 Datacenter which allows for unlimited virtual instances....
I'm not sure how that's poor value? Well, actually... I have no idea how much the partner program costs to be in... so meh :)
licences for vservers
I currently have 28 instances in total and 12 running on one home system - this has a tiny CPU, 4GB of RAM and ~8TB of (slow, green) disk, but it works *beautifully*!
[root@dieter ~]# vzlist -a | wc -l
[root@dieter ~]# vzlist | wc -l
Nice thing is, there is not one bit of licensed software installed anywhere!