We need to talk about desktop virt

Sometimes we are forced to acknowledge that there is a group of people even more knowledgeable and informed than Register journalists: you, our beloved readers. So we turn to you for help with a question that bears proper scrutiny. All this talk of cloud (and let’s face it, there has been a fair bit) has prompted some …

COMMENTS

This topic is closed for new posts.
  1. Tim #3

    Sh*te

    The performance and reliability of our virtual desktops have single-handedly destroyed all confidence our users have in the IT dept, and our good relationship with them.

    The costs of engineer support and dual licensing (we’ve had to go for local copies too) must have wiped out any efficiency savings as well.

    Other than that, I guess it's ok.

  2. This post has been deleted by its author

  3. adnim

    Client-server

    computing worked fine on mainframes in the 70s and 80s. Allowing for the advances in the technology, the difference between then and now is little more than a GUI.

    Depending on the scale of deployment, desktop virtualisation can save money, centralise control and ease administration. It should go without saying, though, that having only one server, and thus one central point of failure, would be rather silly, as would failing to take measures to ensure that a network hardware failure is not a show stopper.

    In my opinion the most important issue in desktop virtualisation is confidentiality, and thus who runs the servers. If it's in house, then the HR dept have their work cut out vetting those who will administer and have access to the virtualisation servers. If the virtualisation is outsourced, which is most likely to happen if cost savings are involved, then it is a matter of placing trust in the expertise of third parties, their HR recruitment procedures and the staff they employ.

    As more companies, in an effort to make shareholders happy, outsource data storage and virtualisation to third parties, I would expect reports of data theft and leakage of sensitive information to increase. As should everyone else who doesn't think humans are infallible.

    I would never place sensitive, private or personal information on a server that is not under my direct control, or that can be accessed by persons unknown.

  4. DonnieD1

    Keep it simple

    Providing all users with individual hardware is unsustainable. Controlling and managing thousands of machines is difficult, as end users are unpredictable.

    Our VDI project has been successful because we created a workstation that uses a defined set of applications, and the user persona is limited. This is the first phase of our deployment and, as I mentioned, it has been very successful. We are converting existing old hardware using an in-house-developed Windows Forms shell, and running the boxes into the ground. We will replace them with thin or zero clients as they fail. Applications that can be virtualized are, and they are delivered to specific users through AD.

    Licensing will be difficult, as it always has been, while software providers mature to this methodology.

    IMO, it will only be a matter of time until VDI is everywhere, including being accessed from your home.

    The technology will continue to mature until the struggles of the hardware past are a distant bad memory.

  5. David 39

    Joyous

    Our virtual environment is used in a different way.

    We have 14 machines that just sit there and process data. Nothing more. Justifying that on physical desktop hardware is near impossible.

    That's 14 machines, keyboards and mice taking up valuable office space and power.

    Virtualisation resolved all our issues. We have VDI configured and ready to go, making deploying a new machine a cinch: two minutes was the last calculated time.

    With the 14 machines already in use, I have a further two for accessing legacy data in Sage. This data is rarely accessed, and I prefer it not to be on the desktop of the head bean counter. Connecting is no problem and running the application is, as they put it, "easy peasy".

    Another 2 desktops saved.

    All these machines run XP with 2GB RAM and a 20GB HDD.

    No issues, until the server falls over. Then I lose all 16 machines until I can resolve whatever issue occurred.

    But I won't be jumped up and down on, as they aren't business critical.

    We did look at running our servers in a virtual environment. But if the server running them did fall over..... I'd be in a place I don't want to be. So our servers and day-to-day desktops are good old-fashioned kit.

    A nice compromise.

  6. Shaun 2
    Happy

    I'm the network admin for a school. We're currently happily running 70-100 VDIs (depending on the time of day), and will be going full scale up to 600 next summer.

    For us, storage was the biggest pain. We tried running our trial VDIs alongside our servers, and quickly ran out of IO. We splashed some cash on two new SANs with a TB of cache and dedupe, and are happily outperforming our fat clients. Login times are now ~1 minute, as opposed to 2 mins+.

    For VDI, get your numbers right. There's a good tool called Quest VDI Assessment. Run that on as many desktops as you can, and it will tell you your average and peak disk IO, memory and CPU. After that it's just a case of doing your sums, planning for some failover and buying some new toys; a rough sketch of the sums is below.
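
    To give a feel for the arithmetic, here's a back-of-the-envelope sketch in Python (every per-desktop figure below is invented for illustration; substitute whatever the assessment actually reports for your estate):

    # Back-of-the-envelope VDI sizing -- illustrative only.
    # Replace the per-desktop figures with your assessment tool's output.

    desktops = 600           # target number of virtual desktops
    peak_iops_each = 12      # assumed peak disk IOPS per desktop
    ram_mb_each = 1536       # assumed RAM per desktop, in MB
    headroom = 1.25          # 25% spare for failover and growth

    required_iops = desktops * peak_iops_each * headroom
    required_ram_gb = desktops * ram_mb_each * headroom / 1024

    print(f"Storage must sustain ~{required_iops:.0f} IOPS at peak")   # ~9000
    print(f"Hosts need ~{required_ram_gb:.0f} GB RAM between them")    # ~1125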

    We're now looking at Samsung PCoIP thin clients. So far very good; we have just watched The Matrix full screen, streamed from one of our virtual servers.

  7. dephormation.org.uk
    Facepalm

    Oh God not more of this

    "People didn't buy the 'ignorance is strength in Cloud security' twaddle. Damn, what the hell do we do now?"

    "Desktop Cloud?"

    F-ing sales & marketing people.

    Why won't they all sod off and die?

  8. dcolley

    Depends on the network

    Many people think about the newest, shiniest parts of any infrastructure without giving serious consideration to that old chestnut: the pace of any group is the pace of its slowest member. So if you have a dozen VM desktops running off a shiny new IBM BladeServer, with MS Windows 2008 R2 and a brand new app server, it's still going to underperform if you're using 100Mbps Cat5e crossover cable. It's the pinch in the hourglass, and the reason why the 'cloud' hasn't exactly been popular amongst end users.

    Desktop virtualisation will only become viable, reliable and stable if all areas of the infrastructure are upgraded to support a level of network activity commensurate with the increased demand from the client terminals.
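
    As a crude illustration of where the pinch bites, assume the desktops are shifting bulk data rather than just screen updates (all figures invented):

    # The slowest-link problem: a dozen virtual desktops sharing one
    # 100Mbit/s segment while doing bulk transfers. Illustrative only.

    link_mbps = 100
    desktops = 12

    per_desktop_mbps = link_mbps / desktops
    print(f"~{per_desktop_mbps:.1f} Mbit/s per desktop")      # ~8.3

    # At that rate, one user copying a 1GB file ties up the wire for a while:
    seconds = (8 * 1024) / per_desktop_mbps                   # 1GB in megabits
    print(f"A 1GB copy takes ~{seconds / 60:.0f} minutes")    # ~16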

    1. Neil Spellings
      WTF?

      Whilst I agree that your server should be Gigabit networked, there's no reason why having 100Mbit down to the end-user device should impact performance. VDI protocols like RDP and ICA/HDX work all the way down to GPRS bandwidths, so 100Mbit is more than ample for a virtual desktop.
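
      As a quick sanity check, in rough numbers (the per-session figure is an assumption; typical office ICA/RDP sessions run in the low hundreds of kbps):

      # How many remote-display sessions fit down a 100Mbit/s link?
      # The per-session figure is assumed; measure your own workload.

      link_kbps = 100_000        # 100Mbit/s in kbit/s
      session_kbps = 150         # assumed average per office-work session
      utilisation = 0.7          # don't plan to saturate the wire

      sessions = int(link_kbps * utilisation / session_kbps)
      print(f"~{sessions} concurrent sessions")   # ~466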

  9. Neil Spellings
    Mushroom

    Storage and network

    Lots of comments, but few people actually discussing the question being asked (a reminder: “How is desktop virtualisation likely to impact my existing network and storage infrastructure?”).

    For larger enterprises, VDI is often the straw that breaks the camel's back when it comes to storage infrastructure. Desktop workloads are vastly different to server workloads (e.g. logon storms), and many companies who have utilised existing storage that has quite happily been hosting their virtual servers find it can no longer cope. This either requires expensive storage upgrades, or the addition of one of the new IOPS "sink" technologies like Whiptail or Atlantis ILIO (both still expensive).
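
    To see why, try some made-up but plausible numbers (every figure below is an assumption, not a measurement):

    # Steady-state vs 9am logon storm -- illustrative figures only.

    desktops = 1000
    steady_iops = 8        # assumed per-desktop IOPS once logged on
    logon_iops = 80        # assumed per-desktop IOPS during boot/logon
    storm_fraction = 0.3   # 30% of users logging on in the same window

    steady_total = desktops * steady_iops
    storm_total = int(desktops * (storm_fraction * logon_iops
                                  + (1 - storm_fraction) * steady_iops))

    print(f"Steady state: ~{steady_total} IOPS")   # ~8000
    print(f"Logon storm:  ~{storm_total} IOPS")    # ~29600

    A SAN sized for the steady-state figure never sees the morning peak coming.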

    Another option is to utilise local storage and leave your expensive SAN/NAS for your virtualised servers. Kaviza VDI-in-a-Box (now Citrix VDI-in-a-Box) is a good candidate for this, as it uses commodity hardware and just scales out using local storage as you need to add capacity.

    Network-wise, the increased demand for storage bandwidth (if you aren't using local) may force you to investigate 10Gig Ethernet. VDI is also 100% network dependent, so having reliable WAN and internet links is paramount. There's no "offline working" scenario with hosted desktops, so multiple resilient links are a must if you have business-critical offices connecting to centralised VDI infrastructure. And they don't come cheap.

    Also, if you're delivering a "rich user experience", including video, on your VDI infrastructure over the WAN, then you might want to consider WAN acceleration and caching devices such as Riverbed or Citrix Branch Repeater.

    How much you need to invest/upgrade will depend on the size of the organisation and the product sets you choose. It's a minefield, and can easily blow up in your face (hence the icon), so be careful out there.

  10. G Olson
    FAIL

    virtualizing apps

    The VDI sales architects state that virtualizing the desktop does not deliver the full virtualization ROI if you don't virtualize the apps. A four-month project to evaluate virtualizing all the software used in my department reached a definitive conclusion: software applications can be virtualized; software tools cannot. A 50% failure rate.

    And I just saw a presentation where the current tool for virtualizing apps is admitted to be... ahem... difficult. A new app virtualization environment will Be Here Soon, but it still does not solve the Software Tools Don't Virtualize problem.

