Machines fail. Hardware has a lifespan, and hard disks have a shorter life than most. Heads crash, bearings wear out, and motors fail – and data gets lost. The resulting downtime means lost productivity: information workers cut off from their data and their applications. There may not be as much of an effect on your business as …
standardised desktop images
> many businesses have standardised on common desktop images …
I use Ubuntu running off a USB device, and my data is off in the cloud somewhere, so I don't even need desktop optimisation tools.
It's also worth mentioning
that many Linux flavours, such as OpenSUSE, have had customisable installer systems for many years, alongside the standard image option. On OpenSUSE, it's trivial to put an AutoYaST configuration on a customised install disk that specifies the OS and applications, network configuration, online software sources and so on, based on the current machine. Even if it's re-installed onto a different set of hardware, the system comes up working like the saved setup: ready to run, with all services present and both OS and apps correctly updated to the latest available. Of course, if you like having to ensure all your computers are fully hardware-compatible, then I suppose a brain-dead 'image install' is fine.
Just saying, for the benefit of those who might otherwise be led to believe that Windows is the best or only way to do this stuff.
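For readers who haven't met it: an AutoYaST control file is plain XML. A minimal sketch is below — the package names and hostname are purely illustrative, not taken from any real setup:

```xml
<?xml version="1.0"?>
<!DOCTYPE profile>
<!-- Minimal AutoYaST profile sketch: software selection plus a
     hostname. A real profile would also cover partitioning, users,
     network config and online repositories. -->
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <software>
    <patterns config:type="list">
      <pattern>base</pattern>
    </patterns>
    <packages config:type="list">
      <package>firefox</package>  <!-- illustrative package -->
    </packages>
  </software>
  <networking>
    <dns>
      <hostname>desktop01</hostname>  <!-- illustrative hostname -->
    </dns>
  </networking>
</profile>
```

Drop a file like this onto the install media (or serve it over HTTP) and the installer runs unattended against whatever hardware it finds.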
We run a full Dell shop for our desktops (of which we have about 8000 across 4 domains), and in the last 12 months we have been using Altiris to deploy images and software. This uses a Linux deployment OS, booted over PXE with a standard set of drivers; a unified image is copied down to a partition on the local disk (using multicast where possible) and then written to the local disk. The drivers for the particular hardware are injected as part of this process, and the Windows OS is sysprepped on the fly to join the domain. Once the OS has booted, post-deployment installs the antivirus and the Altiris agent, which then installs a standard set of software that has been queued up. The imaging process takes about 25 minutes from a PC with a blank disk to a full Windows OS; software installation for our standard suite takes another 15 minutes. The beauty of this system is that the OS image is retained on the local disk, so provided the disk is sound, an OS failure can be recovered by redeploying the local image without having to copy it back down over the WAN.
We only rebuild the image to include service packs, as patching is also handled by Altiris, which has advantages over WSUS. Obviously this kind of solution can be pricey for smaller businesses, but in a larger business or enterprise the advantages outweigh the cost.
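The "sysprepped on the fly to join the domain" step above is normally done with an unattend answer file. A minimal sketch of the relevant component is below — the domain name and the join account are placeholders, not the poster's actual configuration:

```xml
<!-- Fragment of an unattend.xml for the specialize pass: automatic
     domain join via the Microsoft-Windows-UnattendedJoin component.
     Domain, account and password are placeholders. -->
<component name="Microsoft-Windows-UnattendedJoin"
           processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35"
           language="neutral" versionScope="nonSxS">
  <Identification>
    <Credentials>
      <Domain>corp.example.com</Domain>
      <Username>joinaccount</Username>
      <Password>PlaceholderOnly</Password>
    </Credentials>
    <JoinDomain>corp.example.com</JoinDomain>
  </Identification>
</component>
```

A deployment tool injects (or templates) a file like this per machine before first boot, so the OS comes up already domain-joined.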
Desktop rebuild is easy.
I have kickstart files that describe how the machine should be built, and I have a Cobbler server to run the rollout.
Set the machine in question to be reprovisioned (via the web GUI), and reboot it. A few minutes later, the machine is rebuilt.
If there was anything in /home or /etc that needed salvaging first, it's the work of a few minutes to boot with a USB key and copy data off the HDD before reprovisioning. There's almost certainly a way to do this through a Cobbler setup too, but I've never had to find out how.
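For the curious, a kickstart file for this kind of rebuild is just plain text. A minimal sketch follows — the repo URL and partitioning choices are illustrative, not the poster's real config:

```
# Minimal kickstart sketch. The mirror URL, locale and partitioning
# below are placeholders; a real file would also set users, the root
# password and a package list to match the saved build.
install
url --url=http://cobbler.example.com/cblr/links/centos-x86_64
lang en_US.UTF-8
keyboard us
rootpw --lock
timezone --utc Etc/UTC
clearpart --all --initlabel
autopart
reboot
%packages
@core
%end
```

The "set the machine to be reprovisioned" step can also be done from the shell rather than the web GUI; assuming a Cobbler system record named `desktop01`, it's along the lines of `cobbler system edit --name=desktop01 --netboot-enabled=1`, after which the next PXE boot triggers the rebuild.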
I've never worked in an environment that used roaming profiles.
They've always been shot down as too expensive on the cost of storage + backup side.
So rebuilds have always been a bitch.
I did like Altiris when I worked in a shop that supported it. But with only 450 employees in the company, and the small IT staff that goes with it, we were never able to optimise the deployments enough to get build time under 4 hours. Images went fast enough and typically included AV and Office suites, but all the custom crap that came afterwards still took time and had to be monitored. SAS deployments were the worst: up to 4 different batch files to run, each dependent on the previous. The worst part was that the batch files kicked off pre-installers that in turn kicked off installers, so the batch file exited before the installer finished and you couldn't use the end of the batch file as the signal to start the next install. Then the powers that be wanted the system fully patched (including all the applications) when we built it, rather than waiting for the patch server to apply updates over the next several days.
All well and dandy....
"Desktop recovery can be even quicker with virtual desktops"
All well and dandy for a 1000+ machine single-location business. But pushing VDI over a WAN? You'd need a separate VDI server for each location, and if they only have 10-20 desktops, it starts to look quite bleak indeed. Any solutions for the conglomerates of "lots of little locations"?
Why the aggro
I use Citrix extensively. Licences can be expensive, sure. But in the context of recovery, the solution is very simple. Our clients use Wyse terminals. Chuck the broken one in the bin, plug in a new one, and it's running in under a minute.
The beauty of the Citrix platform is that it's a many-to-one relationship: many clients connect to one server, which only needs to be patched once for all of them to benefit. Rotate the servers through availability cycles, and clients will never even know a server has been down for maintenance.
When you look at the total cost of ownership, buying equipment is a very small fraction of the costs – maintenance frequently costs far more. If you can set up your design/architecture in such a way as to minimise maintenance, I reckon it pays for itself very quickly in normal operational terms; the benefits to recovery are a bonus on top of this.