The IT roundabout is still turning...
FFS... why not call a spade a spade: 'software' PC-over-IP = thin client.
That is all.
VMware has unveiled the latest version of its ESX-based virtualization software to capitalize on Microsoft's rollout of Windows 7. The company has launched VMware View 4.0, featuring a new communications protocol called PC-over-IP to provide real-time screen rendering, plus the ability to deploy and manage tens of thousands of …
"VMware believes the vSphere 4.0 changes in performance and management will make its desktop virtualization attractive to organizations rolling out Windows 7"
Having just upgraded to vSphere 4, we have found that the management client doesn't actually work on my Windows 7 desktop without a very dirty work-around (no problems on Vista/XP). How on earth am I supposed to roll out a Windows 7 estate when the very tools I need to do it are crippled?
What is VMware's knowledgebase solution?
"The vSphere Client is not currently supported in Windows 7 and it does not run in Compatibility Mode.
To workaround this issue, install a Windows XP virtual machine in Windows 7 using Windows XP Mode."
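For anyone curious, the 'very dirty work-around' that circulated at the time went roughly like this - the paths and config keys below are reproduced from memory of the circulating hack, not from VMware's documentation, so treat them as assumptions and verify against your own install: copy the .NET 2.0 System.dll into the client's Launcher\Lib folder, point a DEVPATH environment variable at that folder, and switch the client into .NET development mode in VpxClient.exe.config:

```xml
<!-- VpxClient.exe.config: add inside the existing <configuration> element.
     Key names are from the widely circulated hack, not official VMware docs -->
<runtime>
  <developmentMode developerInstallation="true"/>
</runtime>
```

...together with setting DEVPATH to the Launcher\Lib folder containing the copied System.dll. Unsupported and fragile, hence 'very dirty'.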
The phrase 'cart before the horse' comes to mind.
I still don't buy into this whole virtualisation thing. If you want to run diverse OSes on the same machine to save on cost then I can _just_ see an argument for it. But running multiple copies of the SAME OS (whether that be all Windows, all Linux, or all whatever) on the same machine still doesn't make much sense to me.
The VM itself eats up some resources (it is bound to), you are introducing yet more gloop into the basic platform, in the case of Windows you end up running into licensing issues, and all your apps are running on the same OS anyway! Just use ONE copy of the OS and run all your apps on that. Simpler. Less hassle. More efficient. Cheaper.
And I don't believe the "security by dividing up your environment" argument either - all your OSes are still fundamentally running together (on top of the VM). The only time this would not be the case is if you had some VERY specialised hardware (which doesn't actually exist for all practical purposes).
As for the reliability argument - if you have an issue with reliability then change your OS and/or the applications that you run - adding gloop under it all isn't going to fix any of these problems.
This VM stuff is emperor's new clothes if you ask me, which you're not, so I'm telling you anyway.
Thin-client desktops = cheap-as-chips machines with a much longer life cycle. Install nothing but a lightweight Linux and then run your virtual machine on top of it. Not replacing desktops that can't run the latest and greatest Windoze saves a fortune for IT depts.
Need more users or more speed? Simply buy more powerful servers. It's a piece of piss to set up a new VM server - much easier than upgrading desktops en masse.
Also, centralised VMs = huge benefits for sysadmins - updating VMware images is a damn sight easier and safer than a cocked-up desktop rollout = safety + money saved.
Then there's centralised backup of VM images, which is 'a good thing' (TM). If a HDD dies on a desktop machine, no problem - chuck a new one in, image the machine over the network and hey presto (oh yes, users are idiots and will store docs on their local drives, in case you forgot).
So still can't see any benefit - you must be blind then......
I never said anything about thin clients, and I wasn't even considering the desktop when I said that I thought VM was emperor's new clothes.
Whether or not you want to use thin clients (and I do not necessarily disagree with such use, despite being (apparently) blind) has nothing to do with whether you use VM on the server end - whether your thin clients talk to a Windows box, a Linux box, or a Cray YMP running two thousand instances of the ZX Spectrum OS is irrelevant - your thin client doesn't (or shouldn't) care! Just to spell this out (for anyone hard of seeing out there), VM has nothing to do with thin clients - they are two entirely different issues.
I was referring to running VM on the server side (which is where 99% of this stuff is used). Why is "centralised backup of your VM server" better than "centralised backup of a single OS server"?
So before you start ranting and calling other people blind, check out your own eyesight and read what is actually being written. By the way - I went for an eye test just last Friday and was told I have very healthy eyes thank you :-)
And don't forget about vMotion - being able to move LIVE virtual machines to another server to upgrade hardware in hours, or dynamically move heavily utilised VMs (users) to a host that will give them the extra performance they require, or move VMs off hosts that are just ticking over and give power users more.
Then there's DR - your entire server/desktop footprint being hardware independent. No need to worry about rebuilding servers etc. - just restore the backup and you're up and running, without the need to reinstall applications.
And it's not just big business that saves money - we replaced 24 ageing servers with vSphere for not only less than the price of 24 new servers, but also with major savings on man-hours to perform the migrations. We actually migrated our Exchange system overnight, remotely, using a warm migration.
Those who doubt both VMware's capabilities and the amount of REAL time and money it saves have obviously never used it, and should check out a demo, as the show is just as impressive.
VMware is not a fad or a gimmick - it really can be as good as the hype.
Centralised backup of VMware is better because of DR. It means you don't have to care (beyond staying within the wide variety of kit that VMware supports) about what hardware platform you restore to.
It means you can treat entire server restores the way you would the restore of a Word document.
It's also quicker, meaning you can back up more in less time, as you're backing up contiguous files, normally off a very fast SAN.
As an example, we can restore our Symantec Vault store and server (600 GB) and have it up and running in under 4 hours. Try doing that when you have dissimilar hardware and haven't got the luxury of hot servers and replication at a DR site.
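To put a number on that claim (my own back-of-the-envelope arithmetic, not the original poster's): 600 GB in 4 hours only needs about 43 MB/s sustained, which a contiguous sequential read off a fast SAN should manage comfortably:

```python
# Sanity check: sustained throughput needed to restore 600 GB in 4 hours.
size_mb = 600 * 1024        # 600 GB expressed in MB
seconds = 4 * 3600          # 4 hours expressed in seconds

required = size_mb / seconds
print(f"{required:.1f} MB/s")   # ~42.7 MB/s sustained
```

Modest by SAN standards - the claim is entirely plausible.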
You should rephrase your "I don't buy into it" comment as "I simply don't get it", in that you do not understand, or want to understand, how an enterprise VMware environment works.
I hear what you are saying, but why is it easier to move a VM image running (say) a database server from machine A to machine B than it is to just move the database server application?
If you need/want to update the software then the VM doesn't really help - it's the same database server application that you are updating whether it's running on top of just the OS or running on top of an OS that's running on top of a VM.
As for your servers being hardware independent - well, that's what the OS is designed to do: abstract the hardware in an independent way. My Apache web server doesn't give a hoot whether it's running on a P4 with a Quantum disk and an Intel network card or on a dual Xeon with a WD disk and a 3Com network card. I can move my Apache web server to any other machine I like. That's the whole point - it's the OS's job to abstract this sort of stuff away. Apart from adding another layer of possible unreliability and bugs, and certainly lower performance, what would a VM give me in this instance?
And if I want to run something else (say, an email server) on the same machine as my Apache web server, the OS will let me do this. It multi-tasks, and does it quite well. Why would I want to wrap my email server up in a VM instance running ANOTHER instance of the OS first? To slow it down?
Because you don't move the data. You just move the server - by which I mean the server's memory, over the vMotion network. It literally takes a couple of clicks and a few seconds: no downtime, no bother.
When updating the software you have the benefit of snapshots, in that you can protect against problems by snapshotting the current server state (again, in mere seconds) before performing the upgrade. You can even clone the server, upgrade the clone and leave the live server running. So development and testing can be done on exact copies of the server, not ones that depend on perfect documentation.
Of course you can never totally free yourself from some restrictions - but there is no denying that VMware makes things a lot less painful.
Of course the OS lets you do that, but as we all know not all apps coexist happily, and with the way VMware allocates resources dynamically we have yet to have any server run slower than its physical equivalent. In fact our SQL servers run far faster because of the number of spindles we are able to allocate using HP EVA SAN technology. It's quite staggering to realise how little our old physical servers were doing - replacing 24 individual servers with 3 dual quad-core Xeons blew me away. I couldn't quite believe how well they performed, especially now we have our dev environment running on there too, so that's 33 servers running.
Plus think of the electricity savings in replacing that much kit with a SAN, 3 hosts and 1 physical host to handle backups! One other neat thing vSphere does is allow us to move all the servers onto 2 of the hosts, shutting down the 3rd while it sits idle and backups run - then the 3rd is fired up again as soon as the extra processing power is required. Clever, clever stuff, but also something that saves our company money, especially as the servers idle at weekends too. That's about 150-170 days per year on which the servers are running at two-thirds power consumption.
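To put a rough figure on that weekend/backup-window saving (the per-host wattage here is an assumed illustration - the post doesn't give one):

```python
# Rough annual energy saving from parking one of the three hosts.
host_watts = 500                 # ASSUMED draw per host, for illustration only
days_parked = 160                # midpoint of the 150-170 days quoted above
hours_parked = days_parked * 24

kwh_saved = host_watts * hours_parked / 1000
print(f"~{kwh_saved:.0f} kWh per year saved per parked host")
```

Even at a modest assumed draw, that is a couple of thousand kWh a year before you count the cooling it no longer needs.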
VMware really is gobsmacking - but to appreciate it you HAVE to throw away all the old conventions and be prepared to think afresh. I may sound like a bit of an evangelist, but I wasn't prepared for just how well it works, and how much easier it makes my job.
Most enterprise environments use Windows - and Windows DOES care what hardware platform it's on. Try moving a HAL from a P4 to a Core 2 Duo and see what happens without any preparation... BSOD, anyone?
And as I said, performance isn't an issue: as all the hosts are clustered, VMware DRS (Distributed Resource Scheduler) will dynamically allocate resources where they are most needed, moving machines automatically to new hosts if required. VMware itself is bare-metal, so its overhead is minimal - at no point did we have to spec the hardware to take anything into consideration but the hosted OSes.
Why is it easier? I can move a running VM from one physical server to another, live, with no downtime. Ever try to coordinate downtime for a database or Exchange for thousands of users?
True, it doesn't help much with app upgrades, but the wonderful thing is that I can take a snapshot and roll back immediately if the upgrade goes bad, etc.
Being hardware independent means I can take the virtual machine running on an old Dell with different SCSI cards, etc., and move it to a new HP within seconds. I don't have to install an OS, I don't have to re-install the app (and hopefully remember all of the manually changed settings made over the past 5 years); it's completely self-contained and I can complete it quicker... i.e. less downtime to the business, which means less $$.
Why run them on the same system? Because as you scale up the number of services in your environment, isolating them is a good thing for your sanity. Security is the obvious first thing: it's much easier to protect a web server if that's all it does, and much easier to protect a mail server if that's all it does.

Additionally, think about patches. You want to patch the mail server against the latest exploit, but it changes some files that the web server uses... do you want to pull your QA group away from whatever they are working on to do a regression test across your web server? How about organisations where there isn't one lone admin, where different people do different things (Oracle DBAs don't go on our web servers)?

I can now take downtime for a single service rather than lots of services at once (I allude to this above - try to coordinate simultaneous downtime for your web, mail, DNS, firewall, etc. to patch the OS; have fun with that). The penalty is measured in single-digit percentages relative to peak, so unless you are running your system at 99% utilisation to begin with, you will see hardly any slowdown.
...and I don't see much here to change that. VMware keeps touting this PC-over-IP so you can route a VM to a video card. OK, let's scratch the surface: how many VMs can you map to that video card?
WTF. So if you want 15 VMs with high-end graphics, you need to populate the server with 15 video cards. Are you bloody mad? Take all of your VM density arguments and throw them out the window. This is bollocks, and VMware needs to come clean instead of hiding this in the details as usual.
VMs usually come into need in Windows environments... because MS wants you to use a zillion servers to do what other OSen manage with two. I've been told that I can't achieve real DR capabilities unless I have *three* DCs running! Then comes SQL clustering... add two machines... IIS needs to be on another one... blah blah... and some clusterized apps don't play well with non-clusterized apps. So the need to virtualise becomes critical.
Other stuff I've seen is the development servers problem. It is much easier to just buy one Blade with tons of RAM, hook it up to the SAN and give it gobs o' storage. Then you set up as many dev servers as you want, thus freeing up your budget for the actual production servers, while meeting dev and QA server needs.
I do agree, however, that instead of a zillion "desktop PC" VMs, you could just solve that with a much cheaper solution involving a Win2003/2008 Server with a zillion Terminal Services CALs ... which might be even cheaper than going the VM way. Especially when you can just slap Linux on the PCs and make 'em run rdesktop at boot time. Whoopee!!
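The shape of that comparison is easy to sketch, with deliberately made-up per-seat figures (these are NOT real Microsoft or VMware prices - plug in your own quotes):

```python
# Toy per-seat cost comparison: Terminal Services CALs vs full desktop VMs.
# Both unit costs below are invented placeholders, purely to show the arithmetic.
users = 200
ts_cal_cost = 100        # PLACEHOLDER: per-user TS CAL
vdi_seat_cost = 250      # PLACEHOLDER: per-seat VDI licence + infrastructure share

print(f"TS route:  {users * ts_cal_cost:,}")    # 20,000
print(f"VDI route: {users * vdi_seat_cost:,}")  # 50,000
```

Whatever the real numbers, the point stands: one multi-user server OS amortises its cost across every session, while VDI pays per desktop instance.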
Biting the hand that feeds IT © 1998–2019