A few weeks ago, we conducted a Reg Tech Panel survey about managing Windows desktops. More than 1,200 of you participated. Many thanks. We will publish the results in a free PDF (no registration required) later this month. In the meantime, we think it's worth exploring a strong theme that emerged from the responses. So we are …
Just simulate a typical office desktop crisis
Get the OK to perform "a typical office desktop crisis" from one of the higher-ups. Then come in late at night, unplug all the machines and pile them in an IT storeroom (or a white van at the back entrance).
Your line the following morning is "oh God, we've been robbed!", and then you proceed to enact your DR plan of handing out stock machines to key people (you have a DR plan, right?).
Once you have wide-eyed managers screaming "just buy us some more desktops now!" you get to tell them the 2-3 day delivery times and watch their heads explode.
Then you can bring the desktops out of the storeroom/van and tell them you were just illustrating the point of how important desktops are to the business.
It's nigh on impossible until the entire system crashes, or one of the directors' machines collapses under the sheer weight of pr0n on it. Most smaller companies simply see IT as a loss centre until it doesn't work. The only way I've ever managed to get upgrades through is when it's been company policy from the top, or by using pressure from some IT-aware senior manager.
Don't Even Try
If you approach bosses as an extortionist you should be fired. You can keep records of failure rates but until the boss's PC dies, he will not care.
A much better approach is to persuade bosses to convert to thin clients for half the cost of upgrading the thick-client PC hardware. Then you can upgrade the software on a few servers and let the thin clients last a decade, with fewer problems with bosses. It is much easier to persuade bosses to upgrade/consolidate a few servers every few years than hundreds or thousands of thick clients, and the bean-counters will be impressed with the reduced power consumption and maintenance cost.
This recipe is doubly cost-effective if you can run GNU/Linux on the thin clients and terminal servers, as there are no per-seat licensing costs. There may be a few apps or users who are difficult to move this way, but the overall system will be in much better shape operationally and financially using a mixture than using the expensive solution everywhere.
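Back-of-an-envelope, that comparison looks something like this. Every figure below (prices, wattages, seat counts, electricity cost) is an invented placeholder, not a quote:

```python
# Illustrative thin-vs-thick fleet cost sketch. All numbers are made up.

def fleet_cost(unit_price, seats, lifespan_years, horizon_years,
               power_watts, hours_per_year, kwh_price):
    """Hardware refreshes plus electricity over the planning horizon."""
    refreshes = horizon_years / lifespan_years
    hardware = unit_price * seats * refreshes
    energy = power_watts / 1000 * hours_per_year * kwh_price * seats * horizon_years
    return hardware + energy

seats = 200

# Thick clients: replaced every 4 years, ~150 W each during office hours.
thick = fleet_cost(600, seats, lifespan_years=4, horizon_years=10,
                   power_watts=150, hours_per_year=2000, kwh_price=0.15)

# Thin clients last the decade; add a handful of terminal servers
# (always on) refreshed halfway through.
thin = (
    fleet_cost(250, seats, lifespan_years=10, horizon_years=10,
               power_watts=15, hours_per_year=2000, kwh_price=0.15)
    + fleet_cost(5000, 4, lifespan_years=5, horizon_years=10,
                 power_watts=400, hours_per_year=8760, kwh_price=0.15)
)

print(f"thick fleet over 10y: ${thick:,.0f}")
print(f"thin fleet over 10y:  ${thin:,.0f}")
```

With these made-up numbers the thin fleet comes out well ahead; the point is only that the refresh cycle and per-seat power draw dominate, and the saving scales with seats, as the post says.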
Short answer is...
I don't know/care, I sit at my unix terminal playing with the SAN and looking smug.
Longer answer: I give diagnostic support to the Desktop guys to assist in eliminating specific causes and analysing any wider issues, only if a) I get on reasonably OK with them, and b) I get on reasonably OK with the customer who is trying to get the problem fixed. Otherwise, see short answer.
OK, probably not directly answering your question but highlighting why some problems take longer to fix than others. (I just require courtesy - you'd be amazed how little is out there)
No persuading necessary
We use hardware from a well-known international brand whose machines fail often enough that the users get regular reminders of what happens when their desktops fall over.
If you have a turnover policy for the PCs
The company should provide capital to each group to actually DO it. It sounds like a brain-dead "DUH!" statement, but more often than not, computer equipment turnover policies state 3-to-5-year replacements, yet thanks to budget cuts and bottom lines they tend to go 8-10 years before replacing the old junkers. It has always boggled the mind that companies say to do your best and make the customer happy... but don't spend any money to do it. It doesn't make sense. Every small business owner I know spends the money needed to get the job done so they can MAKE money over and above their spending.

Large companies seem to lose sight of this since they're on the stock market and have to please overfed, brain-dead analysts who think tax write-offs are more important than proper acquisitions for greater profit margins. Sometimes a loss one quarter due to expansion and capital expenditure will lead to more profitable quarters to follow, but analysts will kill a company for such "nonsense" because it had a down quarter when it could have maintained a smaller increase in profits. And then they'll kill the company if it maintains only small gains.

To fix the IT issues (as well as ALL equipment turnover issues) we need to fix the stock market analysts and re-school them in a proper understanding of business tactics over the LONG term: tell them to stop trying to analyse anything that hasn't taken at least six months or a year to accomplish, and preferably to look at 5-to-10-year trends. Their job could be much easier focusing on 30-year trends for older companies and, for younger companies, on their build-up and expansion compared to how they're managing their income and bonuses. Let the company accountants deal with the tax issues, and focus more on how the company is doing as a whole in managing its assets instead of on how big its tax write-offs are for a given quarter.
Well, here's my plan of attack
Typically managers don't want to part with their cash, let alone spend money on overtime so I can come in on my weekend off to complete routine maintenance. I would make the boss aware of the finance involved in upgrading and maintaining the machines, then forecast the losses from the destruction or failure of certain terminals and the typical three-day wait for a complete resolution. Or: you can buy the new equipment, pay me my overtime, and have more productive staff.
Licensing is by far the worst thing to convince a small business to spend out on. So break out the IT law and keep dropping hints by printing off cases where companies (and more specifically directors) got spanked for failing to make sure their software was legitimate. They soon relinquish the directors' new Cuban cigar fund and, hey presto, licensed machines.
As for disaster recovery, there are plenty of cases and reports of data failure leading to business failure. Presenting the facts gives you a non-biased way of, well, presenting the facts. I don't get involved with money; I don't know how well the company's doing, or how large the directors' fund for the four weeks in the Algarve is, but I present the facts and let them make up their own minds. You may say I'm pressuring them by leaving hints of cases around, but quite simply everyone gets bored by IT, and when talking IT to directors you may as well be talking Dutch for all they care or understand. So a longer campaign may be able to fire their interest. Especially as licensing is in their interest!
Widen the lens
Disclaimer: Events and conditions in this post were inherited; don't blame me. :)
Standardisation is the core issue, and the hardest, particularly with SMBs. How many times has a new employee or project required a quick jaunt to Best Buy because Der Management didn't want to wait and / or pay the costs for overnight shipping? After a couple of years, you have a right proper hodge-podge of different models, even if you can convince them to stick with one manufacturer instead of "the cheapest that will do the job".
Thin clients are one way of reconciling this (and using those oddballs), but that also requires standardisation of the network, what with that gawd-awful daisy chain of D-Link and Linksys mini-hubs looped through the building because Der Management believes, against all evidence, that the 8-port Bay Networks (THAT old) is just fine to support 80-100 workstations (semi-seasonal)... as long as the Corporate Directors (4) each get their own directly cabled port out of the 8.
Utilising the random mix of computers as thin clients is a good solution, but you have to make sure you are ready for it - one good 24-port switch and some planning (and some weekends, unfortunately) can provide a much needed boost to productivity and stability. Once you have things working reliably, THEN go to Der Management with the paperwork that shows NOT going to thin clients is stupid (and is costing THEM money)... but give it a couple months after they get used to the faster pr0n.
Note: I don't know if they ever finally got their act together; I left a little after a good friend and network engineer I hired from MCI (there WERE a couple of 'em) got the network up and running on a good switch and we fixed all the cabling and ripped out all the mini-hubs. He was fired the week after for going $34 over the $5000 invoice limit... that was signed by the Pres' son. And then they congratulated me because of whatever I did - no one got kicked off the UNIX box anymore. I put out my CV and never looked back.
First forward the CEO's private email to ALL, you know, the mail from his mistress.
Then when you are fixing it, let slip that it was caused by a virus that got in through an unpatched windows bug, and out of date virus software.
Bonus points if you can get the CEO to OK overtime for you and a few mates to correct the problem.
It is, as stated above, virtually impossible to get the right investment in internal IT. As there's no direct revenue generation from the expenditure anyone with their hands on purse-strings will keep them tight.
In my experience, in organisations of various sizes, the key to driving a successful preventative maintenance strategy is to self-start it. It's first- and second-line staff who most desperately need standardisation and consistency across the desktop infrastructure; there's no pain greater than attempting to implement a universal update or change, only to be felled by a wealth of machines which require varying levels of patching and maintenance beforehand.
By beginning with an audit of all the machines, filling out the asset database, and reviewing your case management processes for opportunities for improvement, you will lay the groundwork for the holy grail of a standardised desktop environment. Once you're doing all you can with all you have, and can prove it, you can then begin to approach management.
I've found some success comes from keeping good relations with the rest of the business: put up a forum or website so everyone can give you feedback on which systems work and which don't, and what the rest of the business wants from its IT. If you can prove that your ideas and proposals have merit and wider support among other business units, you're more likely to succeed. For instance, if by implementing a WSUS server you can demonstrate a percentage increase in the efficiency of the sales division, you're more likely to get approval.
**DISCLAIMER: WSUS servers may not necessarily improve sales efficiency and/or returns, the accounting team may overload your tech forum with unreasonable demands during month end, and the techies you've charged with auditing and asset management may attempt mutiny!**
Thin clients are nice
And they work. They work especially well in discouraging users from browsing youtube.
Of course, thin clients do wonders for the bills (power, but also air conditioning), and the savings scale very nicely with the number of users.
As for thick clients, you can always anticipate problems. Hence monitoring software. Most system vendors provide free tools for their systems which are able to log system events and forward them to a central repository (including SNMP traps, or more commonly, e-mail notifications).
So if any component starts operating only marginally, support people get a heads up on the problem.
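Even without a vendor's own tools, the "heads up" part can be scripted; here's a minimal, generic sketch that scans syslog-style lines for early hardware warnings (the keywords and log layout are made up for illustration, not any particular vendor's format):

```python
# Collect early-failure warnings from syslog-style lines for the support queue.
# Patterns are illustrative examples of "component operating marginally" signs.

WARNING_PATTERNS = ("smartd", "ecc error", "fan speed", "temperature above")

def scan_for_warnings(log_lines):
    """Return (hostname, line) pairs for lines that look like early failures."""
    alerts = []
    for line in log_lines:
        lowered = line.lower()
        if any(p in lowered for p in WARNING_PATTERNS):
            parts = line.split()
            # Assume classic "Mon DD HH:MM:SS host message" syslog layout.
            host = parts[3] if len(parts) > 3 else "unknown"
            alerts.append((host, line.strip()))
    return alerts

sample = [
    "Oct 30 10:01:02 desk-042 smartd[311]: Device /dev/sda, 8 Currently unreadable sectors",
    "Oct 30 10:01:05 desk-017 CRON[999]: pam_unix session opened for user root",
]
print(scan_for_warnings(sample))
```

Hooking the returned alerts into e-mail, SNMP or a ticket queue is left to whatever the site already runs.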
Now, it's a wholly different problem to persuade management (or the beancounters) to actually okay system repair costs, so the support personnel can either risk their budget and run pre-emptive repairs, or wait until the part breaks -- at least then they'll magically know which part needs to be replaced.
Paris, because she knows the difference between thick and thin.
The problem is not so much that desktops are seen as unimportant, but rather that any part of IT is seen as unimportant until something goes wrong. The reason it always seems so apparent with desktops is because it requires a fairly large shift in PC technology (such as the introduction of USB, which is what really killed NT4 in the corporate desktop space) before a good IT department can't simply find a workaround solution. The fact that time and hence money are often being wasted developing such workarounds is often missed (even by IT staff in a surprising number of cases). This is furthermore often compounded by "IT-savvy" bosses, who see how "easy" it is to install a few apps on a single PC at home and figure that's all there is to managing an enterprise desktop deployment.
Changing this attitude requires something of a culture shift, because the IT department needs to be seen to be solving real problems, and pre-emptive fixes often just look like change for change's sake, which rarely goes down well. And so a large part of the solution becomes something of a marketing exercise, effectively selling the IT department to management and providing clear guidance that cost management is a priority - the "Dragons' Den" line of knowing your numbers is critical when dealing with business people, because then you're talking a language they understand.
Of course, all this starts with an IT department that is willing to embrace change, which is often not the case. It's often too easy to see that things "kinda work" and take the path of least resistance, to leave well alone until it finally breaks. Getting the IT department out of that mentality is often as hard, if not harder, than getting the business to see the benefits of IT improvements.
It takes some work
Make sure you are keeping track of the calls, have that data.
Analyze a month of one sample group's trouble tickets and calculate how much downtime the user group experiences in a month. Multiply that by 12 for a full-year estimate and divide by 1,880 (working hours in a year), and that's how many resources you're going to 'add' in productivity.
Real example: 50 users who experienced 2,300 hours of downtime a year - about 1 1/2 headcount lost in productivity. Estimating 1 1/2 people at $75k, we replaced the group's PCs for $22k. This figure did not include the improved mental health of IT or possible savings within IT, just the business group.
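For the curious, the arithmetic above in code form. Plugging in the example's own figures gives roughly 1.2 FTE (the poster rounds up to 1 1/2); the salary and refresh costs are the ones quoted:

```python
# Annualise ticket downtime into full-time equivalents, as described above.

WORKING_HOURS_PER_YEAR = 1880  # the figure used in the post

def lost_fte(downtime_hours_per_month):
    """A month of sampled downtime, scaled to a year of working hours."""
    return downtime_hours_per_month * 12 / WORKING_HOURS_PER_YEAR

# The "real example": 50 users, 2,300 hours of downtime a year.
annual_downtime = 2300
fte_lost = annual_downtime / WORKING_HOURS_PER_YEAR      # about 1.2 FTE
cost_of_downtime = fte_lost * 75_000                     # about $92k a year
refresh_cost = 22_000

print(f"{fte_lost:.1f} FTE lost, ~${cost_of_downtime:,.0f}/yr vs a ${refresh_cost:,} refresh")
```

The one-line takeaway is the ratio: the annual cost of downtime dwarfs the one-off refresh cost even before counting IT's own time.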
Additional risks avoided: data stored on local disks, management of software licensing and patching.
We also assumed a true lifecycle of three years for laptops and five years for desktops, based on hardware failure rates. Laptops cost more than desktops, but the difference is justified by as little as one hour a week of working from home.
It is often a struggle to include the entire TCO: downtime, hardware costs, software costs, IT time and risk.
Calculate the average salary of the workers with new PCs, and the downtime.
Ah, a shot at fame.
Desktops aren't much different from laptops, or servers, or routers. They're hardware prone to failure with software that also needs maintenance. And since maintenance too is change, you integrate both in your change management system which you need anyway if you have mission critical systems to run. And lo and behold, even your desktops have a mission critical function, at least some do, much like the CEO's laptop is usually "mission critical". The rest, as they say, is policy. Backups? Sure, remote if possible. Upgrades? Scheduled and hopefully at least 90% automated. Replacements? Staged in waves with spares for immediate needs. And so on. If you don't get to make policy, you up and run away. Or at least, I do.
I don't. I recommend system replacement if a system is a dodgy model or old. If the advice isn't taken, well, that's a shame. My boss doesn't blame me if I recommend replacement and they don't do it, though. If I worked somewhere where the bosses started blaming me, I'd put it in writing. I work at a computer surplus, so ALL our systems are old, but we have a virtually limitless supply of spares. A few systems are under admin a level up from me, and are Windows boxes. They are the ones that have the most problems by far... since they just have images for a very few models, and the images are relatively slow due to all the security software and junk on them, it'll be like "Oh no, the box croaked, I need one of these 3 models of computer, and max out the RAM".
Our other systems run Ubuntu. So, in case of hardware failure (we have Dells, so usually a power supply or blown motherboard caps... *cough* *GX270* *cough*), usually I can just swap the hard drive directly over to a new box even if it's a completely different model. Otherwise I have an imaging system (which we also use for machines we sell) that puts on an 8.04 install completely unattended, in a bit under 15 minutes.
It REALLY changes the dynamic regarding keeping old computers when I know I could get any PC, run the install CD (or automated network install) and know it'll work. Windows admins are used to it, but doing it the Ubuntu (and Mac OS) way of having basically all drivers on the CD is *MUCH* easier than either 1) getting drivers and building a custom install, then having to find similar-enough machines in the future for that custom install to work, or 2) installing stock then loading a bunch of drivers afterwards (and hopefully there's no catch-22 with SATA or the like).
Ya still using desktops?!
- jump on the cloud hype
- keep only back-up systems until they die out and become obsolete
- give everybody a netbook/laptop with a docking station, or better yet an Android with a keyboard, with encrypted/secured theft-proof drives that store no data (everything is on the cloud, 'rite?)
- get some telco to give you a first year hard discount on Voice and Data plans
Why bother with desktops at all?
Better yet, let them work from home, boot some cloud-based image while working, and let the desktop be their problem... cool one.
So, in short: desktops are seen as unimportant until they're proven obsolete and I guess they will be before you see your back-up servers die out a healthy death. 2015? 2020? Anyone?
Money, Money, Money.
The best way to get the attention of non-IT management is to talk to them in terms of money. They think they understand that.
If a large proportion of the staff are unable to work because of computer downtime, they will be bitching to customers, suppliers, rivals and anyone else who will listen, and getting paid for it! This is a scenario that will get the attention of any sentient manager.
Damned if you do, damned if you don't...
The main problem I usually face is that the better a job I do, the less I appear to be of value. In most cases I am lucky enough to have inherited a sloppy job, and slowly spend the next year making sure it all works, and putting contingency plans in place, even if they are not explicitly asked for.
I always say that what the client wants and what the client needs are two totally different things, and often are mutually exclusive. My favourite was being asked to install Blackberry Enterprise software on a server (at a client's with a single server) back in the days it needed to run on a separate server. It doesn't matter how many times you refer them to the documentation, it is still your fault.
As far as desktops are concerned, it simply comes down to productivity. The more time their staff waste waiting for their desktop to respond, the less time they spend doing real work. With the better clients I eventually get to a rolling replacement program, swapping desktops every three or four years, but even then it is a fight to persuade them not to always give the newest computers to the bosses. Some poor buggers never get a brand new desktop.
You don't say "We need to protect these desktops from a theoretical exploit that might happen but our network defenses should stop because We Have A Firewall (tm)".
You say, "Man, those employees sure do waste time checking personal email and watching stupid online videos (buy a proxy server), installing unsupported, unnecessary software (buy AV, anti-spyware, anti-malware), keeping up with Microsoft's latest gaffe / patch (buy patch management), running local web servers and local databases for development when we have perfectly capable enterprise solutions already in place (buy host firewall / IPS)."
"If only we had an infrastructure in place to keep the bean counters counting!
"If only we could make sure that everybody could do their job without mindlessly handling tasks that someone else is already being paid for!"
You need to sell it as a way to increase productivity and profitability. That is the only way to get any non-technical C-level executive to buy in (willingly).
Make a business case...
Having fought this issue for quite some time, I have a few insights to share. The first and perhaps most important issue to bear in mind is that your philosophy of what is "good practice" or not will generally have no impact on the people who control the purse strings. As with any major expenditure in a business, you have to make the business case for it. Though it goes against the grain of most IT personnel, you will be best served by putting together a formal presentation for your superiors, including research, facts, figures, and preferably a demo. In our case, here is what our presentation consisted of:
We tracked the hours spent doing desktop support, and lumped the types of support into general categories. Those categories that could be made to "go away" with a desktop refresh/bit of modernisation were added together, and the cost of the man-hours spent on them was tallied.
We tracked the cost of replacement components, shipping related to RMAing in-warranty systems, and other "hardware costs".
We performed an informal poll amongst staff asking what their biggest complaints about the IT systems were, and noted the % of these complaints that could be dealt with by new gear. (Morale is important too.)
We then gave a demonstration of the ease of maintaining newer systems (vPro demo, Ghost deployments, etc.) and a quick lesson in how homogeneous hardware deployments make IT's life simple. (And a smaller number of spare units goes a much longer distance.)
All told, this impressed the requirements of a desktop refresh on the paymasters, and it will be fitted into next year's budget. The thing to remember is that everything has value... you simply have to find it. Hardware costs and the man-hours spent on support are obvious ones. Don't overlook the ease of updates (a single set of hardware means a single testing environment to worry about), image deployment, and disaster recovery (getting a spare unit out to the user). Also remember that, especially in these "tough economic times", anything that boosts morale while also boosting shareholder value is a sure win. Presented in such a way, how can they refuse?
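The first step of that case (tallying support hours by category and costing up the ones a refresh would remove) is easy to automate. A sketch, with made-up categories, rates and hours:

```python
# Tally ticket hours by category, then cost the hours a refresh would remove.
# Categories, the hourly rate and the ticket data are illustrative only.

from collections import defaultdict

HOURLY_RATE = 45  # hypothetical loaded cost of a support tech, per hour

# (category, hours) pairs as logged against tickets over the tracking period
tickets = [
    ("aging hardware", 6.0), ("user error", 1.5),
    ("aging hardware", 3.5), ("driver conflicts", 2.0),
    ("user error", 0.5), ("driver conflicts", 4.0),
]

# Categories judged to "go away" with a desktop refresh
FIXED_BY_REFRESH = {"aging hardware", "driver conflicts"}

hours_by_category = defaultdict(float)
for category, hours in tickets:
    hours_by_category[category] += hours

refundable = sum(h for c, h in hours_by_category.items() if c in FIXED_BY_REFRESH)
print(f"hours a refresh would remove: {refundable} (~${refundable * HOURLY_RATE:,.0f})")
```

The hard part, of course, is the honest categorisation of the tickets, not the sum at the end.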
Voting system? Where? I don't see it.
"You may have noticed that today we started trialling a comment voting system in select stories (including this one)"
Where? I don't see it. I even went and logged in, thinking that was why the "voting system" was not in evidence, but same diff.
"...upcoming forum system that we plan to launch in mid-November."
But will the forums still allow AC posts? Hmm... if not, I guess I'll be creating a new online persona for myself for, er, infrequent controversial/troublesome posts ;)
The machine that goes ping!
One approach for ensuring a reliable upgrade cycle is to not own the equipment, but to lease it & include a refresh component to the agreement.
That way, equipment becomes a known operating cost rather than a capital cost, and can in many places be used as a tax offset without having to fiddle about with depreciation.
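As a rough illustration of why the bean-counters like this (all figures invented, and real tax treatment varies by jurisdiction, so this is a sketch, not advice):

```python
# Compare year-by-year cash out and deductible expense for buying outright
# (straight-line depreciation assumed) versus an operating lease.

def buy_cash_flow(price, life_years):
    """Capital purchase: pay up front, expense depreciation each year."""
    depreciation = price / life_years
    payments = [price] + [0.0] * (life_years - 1)
    expenses = [depreciation] * life_years
    return payments, expenses

def lease_cash_flow(annual_payment, life_years):
    """Operating lease: level payments, fully expensed as incurred."""
    payments = [annual_payment] * life_years
    return payments, list(payments)

buy_pay, buy_exp = buy_cash_flow(price=60_000, life_years=3)
lease_pay, lease_exp = lease_cash_flow(annual_payment=23_000, life_years=3)

print("year-1 cash out:            buy", buy_pay[0], "vs lease", lease_pay[0])
print("year-1 deductible expense:  buy", buy_exp[0], "vs lease", lease_exp[0])
```

The lease costs a bit more in total, but the cash flow is flat and the whole payment is expensed as it happens, which is exactly the predictability the post is getting at.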
(beer - because it's only rented, too)
desktops - pah!
My company 'solved' the desktop problem by going entirely virtual. No real PCs any more, just VMs running in some honkin' big server, with each person allocated a personal VM they can login to from any thin client. Sounds great - but now of course we have two 'new' exciting scenarios:
a) the server goes belly up and nobody can work
b) your VM dies and you can't even work at a spare desk.
Took me 4 hours to get someone to 'reboot' my VM last week, and when they had to do an emergency patch to the server, the entire office had to down tools for half a day.
Welcome to the nineteen sixties.
You don't want to do it like that
The issues that cause these "IT" problems relate to bad working practices and planning:
1. Vulnerabilities in the software due to bad development practice and implementation.
2. PC users not being responsible or interested in their equipment as it keeps changing
3. Lack of proper planning when the network was designed.
4. Lack of IT proficient staff.
(1) It is possible to code software without bugs / security holes; however, as software vendors are not held responsible for the failings in their products, they release before the product is "finished". The myth that it is so complex that it cannot be done is just an excuse for pushing rubbish out the door so as to get the money in faster.
(2) As software vendors continually move or change the interface so as to keep making money from a product they have already developed, IT skills depreciate rapidly over time. Most users are not interested in having their computer working properly, as it "isn't their problem". If you were to compare their attitude with that of other workers who use tools as part of their job, you would see how strange this is. "I can't make my screwdriver work" is not really an acceptable excuse for low performance outside of IT. "The manager ran amok with his screwdriver, stabbing his colleagues, because he had been sent a letter laced with LSD" would be front-page news, but when it is malware it's just one of those things, not a total failure of site security and foresight.
(3) Most networks are not designed but rather just grow until their performance / security becomes an issue. If a network designer was brought in earlier then a consistent policy for infrastructure modification / expansion would control the explosion of kit that occurs when departments are given a time limited budget for new kit.
(4) Most jobs now contain some element of IT, so why employ someone who doesn't know how to do all of the job? If they are highly skilled in some other area but not IT, then they will need to be trained sufficiently to be able to do all their work, rather than having to employ someone else to do what they will not. The premise that lots of semi-skilled workers are a financial saving is false: yes, 90% of the time a monkey could do any job, but they are paying skilled staff rates for the 10%. The old adage "pay peanuts, get monkeys" is the current working status of most companies, and the reason why consultants who can do the 10% are able to get contracts.
That's what you rest your ledgers and inkwell on. None of this nasty electricity here, I'll have you know. If it can stick balloons to ceilings, what's it doing to your bowels, that's what they don't tell you, oh no.
Cost Savings by Updating
For us this has proven more a case of individuals being wilfully obtuse than of the issue being difficult to explain, but here's what we will do in the most extreme cases.
We run an application/time comparison, giving them estimates of how long it takes to boot new and old machines, and the general application run-times and downtimes on new and old hardware; we also show statistics on how, as machines grow older with longer gaps between reinstalls, they become more and more cluttered.
We then often get the question "why not reinstall them more often?", which is usually resolved by giving them an idea of the time and cost of a reinstallation, both for us and for the user, who will be without a PC for a given amount of time and may then need to familiarise themselves with new versions or changes since the last reinstallation (we keep a running, updated image which includes standard software packages and so on).
We then run a cost comparison between reinstalling an existing PC and purchasing and installing a new machine, then transferring data.
In most cases we only advise the exchange of a PC after three years, once its costs have been depreciated, so they don't have a double budget per user on the PC side.
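That comparison is simple enough to put in a few lines of code. A sketch, where every rate and figure is a placeholder rather than anyone's real numbers:

```python
# Reinstall-vs-replace decision sketch: tech time plus user downtime,
# versus new hardware plus (shorter) tech time and downtime.
# All rates and hours below are illustrative placeholders.

TECH_RATE = 50   # per hour of tech time, hypothetical
USER_RATE = 40   # per hour of lost user productivity, hypothetical

def reinstall_cost(tech_hours, user_downtime_hours):
    """Cost of wiping and reinstalling the existing (slow, old) box."""
    return tech_hours * TECH_RATE + user_downtime_hours * USER_RATE

def replace_cost(new_pc_price, tech_hours, user_downtime_hours):
    """Cost of imaging a new machine and transferring the user's data."""
    return new_pc_price + tech_hours * TECH_RATE + user_downtime_hours * USER_RATE

old_pc = reinstall_cost(tech_hours=3, user_downtime_hours=8)
new_pc = replace_cost(new_pc_price=500, tech_hours=1.5, user_downtime_hours=2)

print(f"reinstall old PC: ${old_pc}, replace with new: ${new_pc}")
```

With these particular placeholder numbers the reinstall still comes out cheaper, which is why the depreciation cut-off above matters: once the hardware is written off and the downtime hours climb, the comparison flips.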
Last but not least, we use all "old" PCs as stock for courses, training and temporary loans to all departments, so even when a PC is replaced it's not completely without value.
This has meant for us that in 9 out of 10 cases when we come across old hardware that has an error that is not worth the time it would take to "solve", we get the ok to replace the machine without any trouble or discussions.
The only hold-outs are usually new bosses, or bosses whose department has had a stop on spending.
Mission Impossible II
I've been down the VM route and it does work, as long as management are willing to spend enough money on infrastructure and servers. Network overheads tend to be high, and servers need plenty of RAM and disk, otherwise it really is back to the '60s. Linux of any variety isn't comfortable enough for most users; the favourite argument against switching to Linux is the amount of time spent retraining users.

I did a global rollout a few years ago, moving to leased Compaq kit, which did away with the need to repair machines on site. Standardised software loads on the machines reduced the number of supported packages. All automated, so the user got a replacement machine and just plugged it in. End result: the company made savings by cutting the IT contractors out of the company.

I know of another huge corporate where one of their critical systems runs on OS/2 and was written in COBOL. It can only run on old hardware which can't be repaired any more, and they wouldn't pay up to have it redeveloped for a Windows environment. That one got outsourced to India for a cheapo VM solution, and their IT dept got thoroughly shafted.

I've been supporting desktops and servers since 1988 and have probably seen every scenario going. Companies are cutting experienced IT contractors all the time - I think Barclays has already shown the results of that game plan recently - and they try to make do with practically half-trained monkeys, and it doesn't work. Or it works until there is a screw-up, and then the half-trained monkeys turn into headless chickens. For me, IT support was always about service, but that's been totally thrown out of the window IMHO due to costs.
hmmm very complicated yet simplistic
Where I am, we have a change control system whereby every patch or update to be rolled out is tested first on a small batch of machines, usually techies' or support staff's, and then, once it tests OK and any bug issues are finalised, it can be rolled out across the many domains in use.
Obviously you can never cover all machines; imagine how mocked a support staff member would be if he was using a Win2K machine for work?!? Or an old Toshiba portable laptop...
The best way, in my past experience, is often doing it on your own machine first, after making sure it won't affect anything else on the network infrastructure, and then sorting out pushing it across the rest. In smaller companies without an IT manager or director this is much harder, though, as IT is seen as a black hole for money until the smelly stuff hits the spinny thing.
Even worse is when an encrypted drive toasts itself (very common for portable machines) and you have to break it to the poor user that their 70+ gig of data/work/iTunes/movies is gone forever. And why is it there? Because they never wanted to copy it across the 1-gig LAN, or sometimes the 100-meg LAN, as that was too slow...
And even requests to purchase the tools that would allow such data recovery often go unheeded and leave you stuck. So the policy is: if it goes onto a machine via a USB adapter and it can't be read, then that's it, it's gone.
It's even more prominent when there are fairly powerful CAD stations knocking about and they fall over, or when the power goes off for users who have machines on multiple domains and so keep their feet warm with a stack of 3-5 machines under their desks...
I say shoot the lot of them and give them pencil and paper, no pens as they cost too much...