Desktop virtualisation is today’s hot topic in IT circles, but as with every innovation, rushing headlong into it without proper planning could land you in trouble. How can you go from zero to 60 in measured, sensible steps and avoid a car crash along the way? In an ideal world, you would have all your ducks lined up in a neat …
So let me get this straight
You put all of your PC desktop images into large servers held in the data centre.
You then use something on each desktop to run a virtual session to those large servers.
What are those devices on the desktop? Oh yes, PCs.
I know that the devices on the desktop will be cheap, low-power PCs, but bearing in mind how powerful even a basic PC is nowadays, where is the saving?
If you were to sell it to me as an administrative saving, or a deployment cost saving, or even as a data de-duplication saving, then I may be interested. But as a power saving?
Of course, if the desktop devices were diskless, low-power-consumption (ARM-class power) real thin clients then this might make sense, but we've been here before, and commodity PCs always undercut specialist net devices (where are Tektronix, Oracle, NCD et al. with their thin clients, Netstations and X-terminals now? Oh yes, out of that business). The cost ends up being the screen, keyboard and I/O devices, not the PC itself.
Where savings are being made at the moment is that older low-power PCs are being used as the access devices, but this is unlikely to give you a power saving, and it is not going to be a model for phase 2 and later roll-outs!
How about ...
>> You put all of your PC desktop images into large servers held in the data centre.
Not quite; you normally use a shared image. Just think how much redundant disk storage you have with a gazillion copies of (in a typical large setup) identical images?
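As a rough illustration of the duplicated-storage point, here is a back-of-envelope sketch; the estate size and image size are hypothetical figures, not numbers from the discussion:

```python
# Hypothetical illustration of the shared-image storage argument.
desktops = 10_000      # assumed number of PCs in a large estate
image_gb = 40          # assumed size of one desktop OS/application image, in GB

# Every PC holding its own copy of an essentially identical image:
per_desktop_total = desktops * image_gb

# One shared ("golden") image served from the data centre:
shared_image_total = image_gb

# User profiles and data are excluded from both figures for simplicity.
saved_gb = per_desktop_total - shared_image_total
print(f"Duplicated storage: {per_desktop_total:,} GB")
print(f"Shared image:       {shared_image_total:,} GB")
print(f"Saved:              {saved_gb:,} GB")
```

The exact numbers don't matter; the point is that the duplicated-storage figure scales with the number of desktops, while the shared image does not.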
>> You then use something on each desktop to run a virtual session to those large servers.
>> What are those devices on the desktop? Oh yes, PCs.
>> I know that the devices on the desktop will be cheap, low-power PCs, but bearing in mind how powerful even a basic PC is nowadays, where is the saving?
Well, for starters, the device can, and should, be diskless. The software image it runs should be fairly small, and loading it off a network server is very easy. Then you don't need all the processor horsepower. In fact, for most tasks, most desktops spend lots of time doing nothing. So a machine optimised as a DV terminal should be quite low-power (and quiet, since it shouldn't need all the cooling). Put a few hundred machines in an office, cut the power consumption on all of them by some significant figure, and you make a big difference.
It is true that you largely offset that by having large servers, but you don't need as much server capacity as the computing power you removed from the desktops.
But a huge saving, and note the second comment, is that you centralise management. There is next to nothing to configure on the desktop, and if it breaks you can just unplug it and plug in a replacement. Want to roll out an application upgrade? Do it once on the server image, not 100 times on each desktop.
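The power claim above can be sanity-checked with some simple arithmetic. All the wattages and counts below are assumptions for the sake of illustration, not measured figures from either poster:

```python
# Back-of-envelope sketch of the power argument; all figures are hypothetical.
desktops = 300         # assumed machines in one office
pc_watts = 120         # assumed draw of a full desktop PC under light load
terminal_watts = 20    # assumed draw of a diskless, low-power access device
server_watts = 5_000   # assumed extra data-centre draw for consolidated servers

before = desktops * pc_watts                       # estate of full PCs
after = desktops * terminal_watts + server_watts   # terminals plus servers

print(f"Before: {before / 1000:.1f} kW")
print(f"After:  {after / 1000:.1f} kW")
```

Whether the saving is real in practice depends entirely on the terminal and server figures, which is exactly the point being argued over in this thread: replace the terminal with a commodity PC drawing near-desktop wattage and the saving largely disappears.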
It does mean new skills, and it has its own overheads, so it's not for everyone. I've looked at this in the past, and we rejected it (in hindsight for the wrong reasons) as it wasn't the right choice for us at the time.
Thanks for explaining, although I did post this as a springboard to get replies.
As I have supported diskless UNIX systems for several years in the past (and will be again very shortly), I do understand about sharing a system image (which, incidentally, on Windows breaks a whole host of software unless you jump through hoops to redirect stuff away from the C: drive, which will be read-only, to somewhere else - personal experience of pain here), and also about identical hardware on the desktop. It's not a new technology, except to Windows shops.
Citrix, VMware and Microsoft are waaaaaaay behind the curve here compared to UNIX, both in diskless operation and remote display, and I have to feel that bending current Windows to fit a diskless/remote-display model is the wrong way to go about it. Better would have been to make a 'new' Windows with native thin-client support and some compatibility with 'old' Windows, rather than taking a crowbar to the existing models. After all, MS has done a product switch before, with NT. Maybe Longhorn should have been this, but they apparently could not get it to work without ex-DEC system architects and IBM's assistance (WinNT history 101).
And I did talk about de-duplication, which is effectively what a shared image is all about, and I did also talk about low-power, diskless desktop display systems, but after a quick search, the only people I could find selling them were Wyse, who sell a diskless system running Windows CE for about the same price (once you factor the peripherals in) as a basic PC. Many people have tried diskless PCs in the past, and almost all of them are now NOT doing it (the earliest I remember was DEC Pathworks, which had diskless DOS systems with a network filesystem).
My closing comments about having been here before with other architectures still stand IMHO. I still think we have been here before, and I also still think that the current in-vogue implementations are flawed and designed to maximise revenue for suppliers rather than provide a good environment for customers.
Look... I never stop
I've done a whole lot of looking and I'm still not convinced about which way to go. I absolutely have to reduce the management burden on our desktop team, because it may well kill us if we don't. They spend far too long on admin, and it's a burden we feel we can reduce substantially once we find the right solution.
But where to go is a challenge. Metrics on DV costs are hard to come by, and based on the pilots we've run, I'm yet to be convinced that there are actual cost benefits - in fact, it's arguably the opposite. That said, I don't have budget pressures; I have resource pressures that I want to smooth, which is where DV makes huge sense to us. I believe I can make one outlay and remove the unmanageable and largely undefinable fluctuations in admin loads, which gives me a huge win - and gives me 12 people back for the jobs they're meant to be doing. (We have a LOT of desktops: tens of thousands.)
That's my theory anyway. If anyone has experience of this, please share.
One Workspace, 12 people doing the jobs they should be...
I can fix this issue for you... remove the dependencies of the user estate on the infrastructure, automate, and then manage one workspace rather than tens of thousands, and get your team back from firefighting to doing work that means something to your strategy. (There are even some cost benefits!) Quite how we communicate outside this forum, though?
Is any part of this article in English?
Btw it doesn't work
By the way - it doesn't work.
Citrix is a joke - sharing out the GUI from winblows apps, ooooooooh! X could do that back in 1984.
Nah, stick with the power on the PC. Servers are for server stuff.
If you want users to be able to move sessions around, you're going to have to think carefully about it; all the 'turnkey' solutions aren't solutions, they're expensive and dangerous barges.