Sometimes we are forced to acknowledge that there is a group of people even more knowledgeable and informed than Register journalists: you, our beloved readers. So we turn to you for help with a question that bears proper scrutiny. All this talk of cloud (and let’s face it, there has been a fair bit) has prompted some more …
The performance and reliability of our virtual desktops have singlehandedly destroyed all the confidence our users had in the IT department, along with our good relationship with them.
The costs of engineer support and dual licensing (we’ve had to keep local copies too) must have wiped out the efficiencies as well.
Other than that, I guess it's ok.
Big Company virtual desktop
I work on the IT side of things, though unfortunately not in the VM desktop area, but I have spoken to some people who do, and I use it myself. Five months ago I had the option to keep my four-year-old XP machine or move to a VM desktop with Windows 7; I chose the latter, which I somewhat regret. We have fairly powerful blades, very fast SAN/disks, and a vSphere/Citrix infrastructure that is very well managed, with plenty of internally developed tools to run the quite large but fairly new virtual desktop estate.
A virtual desktop must cost the same as or less than a real desktop (£440 = i5, 4GB RAM), so management must have decided that 2GB of memory and 30GB of hard disk space is enough for each user’s desktop, to save cost. That might be enough for somebody in the front office, but not for a power or “medium” user: with so little memory, the I/O to the swap file slows everything down, making the experience rather sluggish depending on how much I, or other people on the same blade/SAN array, am doing. And I think most people know that Windows 7 plus our internal apps eat up most of that 30GB of disk space.

Another issue is that my machine died (not sure what happened) three times in the first two months and had to be rebuilt from scratch, which is annoying, as I basically have no desktop until somebody in support gets around to rebuilding it (no VM snapshots, to save disk space).

My last complaint is the image compression, which makes using things like Google Maps a horrible experience and looking at pictures rather weird. I assume this was done to save bandwidth, as serving 40-plus thin clients on one floor over 1Gb inter-floor connectivity is a requirement.
Considering my virtual machine is a few miles away from where I’m using it, it works well at the moment (unless I open a few memory-hungry applications). I have never experienced lag with my mouse and keyboard; text and Visio diagrams are not compressed and appear on my screen in real time as they should, and watching an HD video also works fine. I use a USB headset/cam to make and receive video calls, can listen to music, and can connect to my virtual machine from home rather easily.
Virtualisation on the desktop is the way forward, as 95% of users are not “power” users and do not require their own machine under their desk. It gives the company far more control of the desktop estate: you can quickly provision 100 desktops for some outsourcing contract, whose staff connect remotely anyway, or you can scale back when required. You can move virtual desktops to more powerful infrastructure or dynamically increase their “performance” if needed (not currently an option for us).
Computing worked fine on mainframes in the ’70s and ’80s. Accepting the advances in the technology, the difference between then and now is little more than a GUI.
Depending on the scale of deployment, desktop virtualisation can save money, centralise control and ease administration. It should go without saying, though, that having only one server, and thus one central point of failure, would be rather silly, as would failing to take measures to ensure that a network hardware failure isn’t a show stopper.
In my opinion the most important issue with desktop virtualisation is confidentiality, and thus who runs the servers. If it’s in house, then the HR dept have their work cut out vetting those who will administer and have access to the virtualisation servers. If virtualisation is outsourced, which is most likely to happen if cost savings are involved, then it is a matter of placing trust in the expertise of third parties, their own recruitment procedures and the staff they employ.
As more companies, in an effort to make shareholders happy, outsource data storage and virtualisation to third parties I would expect the reports of data theft and the leakage of sensitive information to increase. As should everyone else who doesn't think humans are infallible.
I would never place sensitive, private or personal information on a server that is not under my direct control, or that can be accessed by persons unknown.
Keep it simple
Providing all users with individual hardware is unsustainable. Controlling and managing thousands of machines is difficult, as end users are unpredictable.
Our VDI project has been successful because we created a workstation with a defined set of applications and a limited user persona. This is the first phase of our deployment and, as I mentioned, it has been very successful. We are converting existing old hardware, using an in-house-developed Windows Forms shell, and running the boxes into the ground; we will replace them with thin or zero clients as they fail. Applications that can be virtualised are, and they are delivered to specific users through AD.
Licensing will be difficult, as it always has been, while software providers mature towards this model.
IMO, it is only a matter of time until VDI is everywhere, including access from your home.
The technology will continue to mature until the struggles of the hardware past are a distant bad memory.
Our virtual environment is used in a different way.
We have 14 machines that just sit there and process data. Nothing more. Justifying this in a desktop environment is near impossible.
14 machines, keyboards and mice taking up valuable office space and power.
Virtualisation resolved all our issues. We have VDI configured and ready to go, making deploying a new machine a cinch: two minutes, at last count.
Alongside the 14 machines already in use, I have a further two for accessing legacy data in Sage. This data is rarely accessed, and I’d prefer it not to be on the desktop of the head bean counter. Connecting is no problem, and running the application is, as they put it, "easy peasy".
Another 2 desktops saved.
All these machines run XP with 2GB of RAM and a 20GB hard disk.
No issues, until the server falls over. Then I lose all 16 machines until I can resolve whatever issue occurred.
But I won't be jumped up and down on as they aren't business critical.
We did look at running our servers in a virtual environment, but if the server running them fell over..... I'd be in a place I don't want to be. So our servers and day-to-day desktops are good old-fashioned kit.
A nice compromise.
I'm the network admin for a school. We're currently happily running 70-100 VDIs (depending on the time of day), and will be going full scale up to 600 next summer.
For us, storage was the biggest pain. We tried running our trial VDIs alongside our servers and quickly ran out of I/O. We splashed some cash on two new SANs with a TB of cache and dedupe, and are now happily outperforming our fat clients. Login times are now around one minute, as opposed to two minutes plus.
For VDI, get your numbers right. There's a good tool called Quest VDI Assessment: run it on as many desktops as you can and it will tell you your average and peak disk I/O, memory and CPU. After that it's just a case of doing your sums, planning for some failover and buying some new toys.
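The "doing your sums" step can be sketched as a back-of-envelope calculator: size the host count to whichever resource runs out first, then add failover spares. All the workload and host figures below are illustrative assumptions, not numbers from this deployment.

```python
import math

def hosts_needed(desktops, peak_iops_per_desktop, peak_mem_mb, peak_cpu_mhz,
                 host_iops, host_mem_gb, host_cpu_ghz, host_cores, n_plus=1):
    """Hosts required, sized to the tightest resource, plus failover spares."""
    by_mem = math.ceil(desktops * peak_mem_mb / (host_mem_gb * 1024))
    by_cpu = math.ceil(desktops * peak_cpu_mhz / (host_cpu_ghz * 1000 * host_cores))
    by_io  = math.ceil(desktops * peak_iops_per_desktop / host_iops)
    return max(by_mem, by_cpu, by_io) + n_plus

# Hypothetical: 600 desktops at 10 peak IOPS / 2GB RAM / 300MHz each,
# hosts with 20k IOPS of local storage, 256GB RAM and 16 x 2.6GHz cores.
print(hosts_needed(600, 10, 2048, 300, 20000, 256, 2.6, 16))  # → 6
```

Here memory and CPU both demand five hosts while disk I/O needs only one, so memory/CPU is the constraint; N+1 failover brings the total to six.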
We're now looking at Samsung PCoIP thin clients. So far, very good; we have just watched The Matrix full-screen, streamed from one of our virtual servers.
Oh God not more of this
"People didn't buy the 'ignorance is strength in Cloud security' twaddle. Damn, what the hell do we do now?"
F-ing sales & marketing people.
Why won't they all sod off and die?
Depends on the network
Many people think about the newest, shiniest parts of any infrastructure without giving serious consideration to that old chestnut: the pace of any group is the pace of its slowest member. So if you have a dozen VM desktops running off a shiny new IBM BladeServer, with MS Windows 2008 R2 and a brand new app server, it's still going to underperform if you're using 100Mbps Cat5e crossover cable. It's the pinch in the hourglass, and the reason why the 'cloud' hasn't exactly been popular amongst end users.
Desktop virtualisation will only become viable, reliable and stable if all areas of the infrastructure are upgraded to support a level of network activity commensurate with the increased demand from the client terminals.
Whilst I agree that your server should be Gigabit networked, there's no reason why having 100Mbit down to the end-user device should impact performance. VDI protocols like RDP and ICA/HDX work all the way down to GPRS bandwidths, so 100Mbit is more than ample for a virtual desktop.
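A quick sanity check supports this point: even with generous per-session bandwidth figures, a 100Mbit edge link carries far more sessions than one desk needs. The per-session averages below are rough assumptions for illustration, not published protocol figures.

```python
# Assumed average per-session bandwidth (kbit/s) for two workload profiles.
SESSION_KBPS = {"office_work": 150, "multimedia": 2000}

def sessions_per_link(link_mbps, profile, headroom_pct=70):
    """Sessions that fit on a link, reserving the rest for bursts."""
    usable_kbps = link_mbps * 1000 * headroom_pct // 100  # integer maths
    return usable_kbps // SESSION_KBPS[profile]

print(sessions_per_link(100, "office_work"))  # → 466
print(sessions_per_link(100, "multimedia"))   # → 35
```

So the 100Mbit last hop to a single user is never the bottleneck; it only matters when many sessions share one uplink, which is where the Gigabit server side comes in.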
Storage and network
Lots of comments, but few people actually discussing the question being asked (a reminder: “How is desktop virtualisation likely to impact my existing network and storage infrastructure?”).
For larger enterprises, VDI is often the straw that breaks the camel's back when it comes to storage infrastructure. Desktop workloads are vastly different from server workloads (e.g. logon storms), and many companies that have reused existing storage, which had quite happily been hosting their virtual servers, find it can no longer cope. This requires either expensive storage upgrades or the addition of one of the new IOPS "sink" technologies like Whiptail or Atlantis ILIO (both still expensive).
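The logon-storm problem can be made concrete with a two-line model: steady-state desktop IOPS are modest, but a synchronised 9am logon multiplies them. The per-desktop figure and the storm multiplier below are commonly quoted rules of thumb, treated here purely as assumptions.

```python
def storm_iops(desktops, steady_iops_per_desktop=10, storm_multiplier=5):
    """Steady-state vs logon-storm IOPS for a desktop estate (rule of thumb)."""
    steady = desktops * steady_iops_per_desktop
    storm = steady * storm_multiplier
    return steady, storm

steady, storm = storm_iops(500)
print(steady, storm)  # → 5000 25000
```

An array sized for 5,000 IOPS of server traffic has no chance against a 25,000 IOPS burst, which is why storage that "quite happily" hosted virtual servers falls over when desktops arrive.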
Another option is to use local storage and leave your expensive SAN/NAS for your virtualised servers. Kaviza VDI-in-a-Box (now Citrix VDI-in-a-Box) is a good candidate for this, as it uses commodity hardware and simply scales out using local storage as you need to add capacity.
Network-wise, the increased demand for storage bandwidth (if you aren't using local storage) may force you to investigate 10Gig Ethernet. VDI is also 100% network dependent, so having reliable WAN and internet links is paramount. There's no "offline working" scenario with hosted desktops, so multiple resilient links are a must if you have business-critical offices connecting to centralised VDI infrastructure. And they don't come cheap.
Also, if you're delivering a "rich user experience", including video, on your VDI infrastructure over the WAN, then you might want to consider WAN acceleration and caching devices such as Riverbed or Citrix Branch Repeater.
How much you need to invest/upgrade will depend on the size of the organisation and the product sets you choose. It's a minefield, and can easily blow up in your face (hence the icon), so be careful out there.
The VDI sales architects state that virtualizing the desktop does not deliver the full virtualization ROI if you don't virtualize the apps. A four-month project to evaluate virtualizing all the software used in my department reached a definitive conclusion: software applications can be virtualized; software tools cannot. A 50% failure rate.
And I have just seen a presentation where the current tool for virtualizing apps was acknowledged to be... ahem... difficult. A new app virtualization environment will Be Here Soon, but it still does not solve the Software Tools Don't Virtualize problem.