Virtualization is not new - mainframes have been doing it for ages, and other non-x86 operating systems have been slicing up servers for quite some time as well. Yet if I had to pin a single IT label on the first decade of this century, I'd tag it as the decade of x86 virtualization. Virtualization went mainstream in the …
Power off servers?
I always cringe when I think of powering off the servers. Whenever we have an extended outage that requires taking a server room down, there are always a couple of machines where one component or another doesn't come back.
Did a migration project once where the kit was so old that we predicted a 50% failure rate on the boot disks inside the servers. Strangely enough, the project was put on hold :)
Why can't you knock up a quick web app for users that lets them cycle a VM (from another, working, desktop, obviously)?
It only needs to execute a shell command with a single parameter, no?
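Something like this, in spirit. A minimal sketch in Python, assuming the host still exposes a remote shell and an ESXi-style `vim-cmd vmsvc/power.reset` command; the host name, VM names and VM ids here are all made up for illustration:

```python
import subprocess

# Hypothetical whitelist mapping a user's VM name to its VM id on the host.
ALLOWED_VMS = {"alice-desktop": 15, "bob-desktop": 23}

def reset_command(vm_name):
    """Build the remote reset command for one whitelisted VM, or refuse."""
    if vm_name not in ALLOWED_VMS:
        raise ValueError("not your VM")
    vmid = ALLOWED_VMS[vm_name]
    return ["ssh", "root@esx-host", "vim-cmd", "vmsvc/power.reset", str(vmid)]

def cycle(vm_name, dry_run=True):
    """Cycle one VM; by default just show the command that would run."""
    cmd = reset_command(vm_name)
    if dry_run:
        return " ".join(cmd)
    return subprocess.call(cmd)

print(cycle("alice-desktop"))
```

Wrap `cycle()` behind a one-button web page and users only ever see their own VM, which is the whole point of the whitelist.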
@Tom Chiverton 1
Well first off, because I use ESXi. There are possibly extant applications that can do this, but I have no understanding of how to accomplish what you are describing in ESXi 4. (In 3.5 it was possible; the service console wasn't crippled.)
The applications I have seen that would give users access are actually reasonably information-dense. They would require the user to know what the heck they are doing. Also; there are, (at last count,) something like 15 VMs they would have to be aware of, and dozens more that Touching Would Be Bad.
I could lock them out of a certain set of VMs, depending on the management software. The question though is where do you draw the line? As soon as they can manipulate their VDI VMs, they will want access to the other VMs their work depends on.
Take for example VM X, which is a VM hosting a rendering engine that backs onto VM Y which hosts a database…they are going to demand access to VM Y for the odd occasions VM Y does something strange. Then what happens if they decide VM Y is doing something strange, when it’s not? (Perhaps it is just heavily loaded, thus laggy.) They reboot VM Y without turning off the rendering VMs, and I end up having a very bad week.
In other words; it’s not quite as easy as all that…
So, are there pressing reasons to not use ESXi 3.5?
Not that I've researched that product or anything. I was wondering abstractly about powering off, or rather seamlessly powering on stuff again, and dependency graphs. One could have, say, the VPN gate send a WOL signal to the required box, though things get much more complicated when the box is virtual. Virtual WOL, anyone?
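For the physical half of that, at least, the WOL magic packet is trivially scriptable: six 0xFF bytes followed by the target MAC address repeated sixteen times, sent to a UDP broadcast port. A minimal sketch (the MAC below is a placeholder, and of course a virtual NIC generally won't honour this, which is the problem):

```python
import socket

def wol_packet(mac):
    """Build a standard Wake-on-LAN magic packet:
    6 x 0xFF, then the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_packet(mac), (broadcast, port))

pkt = wol_packet("00:11:22:33:44:55")  # placeholder MAC
# wake("00:11:22:33:44:55")  # only useful for physical NICs with WOL enabled
```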
Then you point out the dependencies. Assuming the dependency fan-in is heaviest at the consumer end, just leaving the server VMs running might make sense. If not, well, since dependencies don't change often it's something that fits in the proverbial small script, taking advantage of you not giving everyone remote access. And with VMs having become cheap, the dependencies can even be fine-grained.
It'd be nicest if all your VM-servers could be treated as a single cluster, with automatic or even script-driven migration. At six, or once the total number of running VMs drops below some threshold, it moves everything still running to the designated overnight server of the day and powers down the rest. At eight, it boots up the others and reshuffles the things for the day. Automatic load balancing would be nicer, of course, but if you can do it by hand AND movement orders can in some way be scripted, a simple script can do it too.
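The six-o'clock consolidation logic really is proverbial-small-script territory. A sketch of the decision step, with the actual migrate/power-off calls left as placeholders to be wired to whatever scripting interface your hypervisor actually offers:

```python
# How few running VMs triggers evening consolidation (arbitrary example value).
THRESHOLD = 5

def consolidate(vms_by_host, overnight_host):
    """Plan the evening run: return (migrations, hosts_to_power_off).

    vms_by_host maps host name -> list of running VM names.
    Each migration is (vm, source_host, destination_host).
    """
    running = sum(len(vms) for vms in vms_by_host.values())
    if running > THRESHOLD:
        return [], []  # still busy; leave everything where it is
    migrations = [(vm, host, overnight_host)
                  for host, vms in vms_by_host.items()
                  if host != overnight_host
                  for vm in vms]
    idle_hosts = [h for h in vms_by_host if h != overnight_host]
    return migrations, idle_hosts

# Example: three hosts, three VMs still running, "esx1" on overnight duty.
moves, off = consolidate(
    {"esx1": ["dns", "mail"], "esx2": ["web"], "esx3": []}, "esx1")
```

The eight-o'clock reshuffle is the same function run in reverse: power the hosts back on, then hand the migration list to the same placeholder.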
Personally I regard scriptability as a basic requirement for just about everything that isn't specifically meant as a GUI front-end (for something scriptable), so software that fails that requirement isn't fit for service in my shop. But your requirements may be different, of course.
SNMP(v3) support is acceptable too, as that can be scripted. I don't suppose ESXi 4 supports moving VMs around through SNMP, does it?
Gee, this sort of thing is what minis and mainframes have been offering for a couple of decades. Part of my dislike for x86 in general is that it's so full of badly reinvented wheels. And yeah, certain vendors milk the mainframe market so much that it priced itself out of most other markets. Then again the "enterprise" x86 vendors are doing exactly the same. But I digress.
As to BSODs, well, I personally will no longer support that vendor, but choose your poison. Now that the desktops are virtualised you have bought yourself a possibility: you could come up with some desktop Linux VMs and offer them to maybe just a small set of users for starters, with easy migration back. Just for trying, see. Trick the new hire with that and some won't even notice. The LiMux guys seem to have a clue or two there, btw.
I understood virtualisation until I read that analogy.
Are you a pointy-haired boss? I suspect not. Re-read the article :)
What Trevor didn't say is that before he goes through the analogy with them, he says "Imagine you are the captain of a luxury cruise liner (or powerful warship, etc., as appropriate to their personality)." Thus, he appeals to their vanity, power lust, sense of self-importance and desire for more control.
Quit giving away my secrets.
Simple answer to all of the highlighted "problems" - vSphere Enterprise with the View Premier add-on. That gives you HA+VMotion for your host reliability worries, DRS+DPM for load balancing and out-of-hours power saving, and View Composer so you only need to run the number of VDIs you need...
Oh - and turn off power saving on your VDIs and tell them to reboot after a BSOD.
As far as 6 shiny new servers vs all your old kit - per-CPU license costs do help justify buying the shiny new servers...
How are the six shiny new servers and management software that costs more than real estate going to help me with the eggs in one basket problem?
And where am I getting the money for the overpriced software?
Eggs in one basket - simple - you get 6 new servers with 24x7 4-hour-response warranties, ideally with predictive failure so that the engineer is on site before you know there's a problem, and you make sure you have enough capacity in the system to cope with a server being down to be worked on...
You want an enterprise-level VDI solution without spending any money - you've highlighted a load of "problems" which have already been solved with existing software.
The money from the overpriced software and new servers for that matter comes from the long term savings. Our VDI deployment starts this September. From then, the 90K we currently spend every year on replacing 20% of our desktops goes towards the VDI project. After 3-5 years most of that money will go elsewhere, leaving us enough to replace/add servers as needed and buy the odd thin client.
Look at the long term gains, and it's a fairly easy sell.
What kind of world do you live in? "Long term savings?" The setup you are describing would cost me more in a single year than I have budget for THE NEXT FIVE. Not my “Server budget” for the next five, but my entire IT budget. Desktops, switches, servers and licensing. I work for a small business; thus my blog is about what helps SMEs try to get the same quality of hardware and software as the big guys without having to spend more on IT than the annual company revenue.
It might be viable if you are a large corporation with over a thousand people, or a medium-sized municipality. For a company with 60 people, what you propose simply isn’t going to happen. It also needs to be mentioned that “4 hour response” isn’t remotely good enough. If something goes down, the VMs need to be moved and fired back up immediately. 4 hours downtime on the wrong day would see the difference between profit and loss that year. That means not only collapsing all your eggs into a small number of baskets; but having an extra super-expensive basket to boot.
What you do with really expensive hardware and software we do with wetware. Manually having someone yank the drives out of server 1 and put them into server 2 means a downtime of about 5 minutes while the backup boots. The servers themselves cost less than $2000, and the software is free. In the worst case, where we have to move VMs the old fashioned way, we are down for a couple of hours while they stream across the network to their new home.
What we pay in extra wetware time to do this kind of maintenance is, (and I did the numbers) about a twentieth the cost of the licensing alone. Licensing would be cheaper if we had fewer servers, but then we need significantly more complicated and expensive servers. There is no escape from that game except refusing to play.
At the end of the day, we have a choice: get locked into an American vendor, and send our money into the back pocket of some rich guys we never met, or spend less than that amount to pay the salary of someone locally.
You say all the “problems” with virtualisation go away with the right software and hardware. Well, you’re not 100% right about that; I have played quite extensively with the niftier virtualisation software. It’s CLOSE…but it’s not there YET. Still, it’s far easier than doing it by hand, I agree.
The "problems" I am highlighting are that UNLESS you buy the ridiculous software, and get very complicated and expensive servers, these are real, honest-to-god issues. Did you read the article, sir? I am pretty darn sure that I mentioned that very blatantly.
Or are you just trolling? If you are, I give you +1 internets, because I bit.
Now you can kill EVERY desktop with a single click.....or two.
magic budget fairy
Oh, I'm nicking that.
maybe don't use Hyper-V if you really want to get away from eggs in one basket
The fact that you describe "VHD" indicates you're running Hyper-V, which may be good enough for really small shops, but you should look into some of the sales literature from the alternatives to convince your manager with the purse strings to get you onto a more reliable solution.
Not a fan of the boat analogy - I'd keep it simpler. 1 server, 1 function, using 10% of capacity, 90% wasted. 1 server, one hypervisor, 15 virtual servers, using high capacity. Simple! If you have multiple hosts, you can overcommit and load balance with ease. Also, if you move away from Hyper-V, you'll cut down on any guest OS bloat creating a single point of failure or consuming resources. Maybe I'm behind the times, but I still think of Hyper-V as being a type 1.5 hypervisor - it's still close to being a hosted solution like Virtual PC or the old VMware GSX...
BTW - the decade of x86 virtualisation - you're spot on there!
I used to use Virtual Server 2005. I then moved to VMware Server (both 1 and 2). I have tested or put into production Hyper-V, KVM, Xen, VirtualBox, ESX 3 and 3.5, ESXi 3.5, and I currently run my entire network on ESXi 4.
(Admittedly, I have VMware Server 2 on a few overpowered file servers as super-emergency backup capacity, since it will handle the same virtual infrastructure version.)
As to "more reliable solution," ESXi 4 is FANTASTIC. I use "VHD" because "virtual hard drive" makes sense when you say it in English, and the acronym then decodes into something meaningful. What the merry nether fnord does VMDK decode into that normal people can understand?
As to the boat analogy, your version uses numbers and percentages. PHB’s eyes glaze over right about…
From your last post it is clear that you've looked at quite a number of solutions before standardising on your virtualisation tech of choice.
Any chance of a write up at some point of what you liked / disliked about them for those of us that have less experience in this area (I've only looked at VMware and VirtualBox)?
Yes...and no. There will of course be a writeup of "why did Trevor choose this server technology over that server technology." The issue is that this is a desktop management blog, not a server one. That said, I do have that article planned for September...
You got me there. As soon as numbers come into it, management will either race ahead in an unpredictable direction, or drop off immediately!
I have no clue what VMDK means. VM DisK maybe?
BTW, I'm no fan of VMware Server. I think it's better than Virtual Server, but VMware have end-of-lifed it and are showing it no love now it's becoming a legacy product like GSX before it. But ESXi is awesome alright, despite the lack of a good service console!