Australian businesses – particularly SMBs – have little confidence in their disaster recovery strategies, according to research detailed to the media yesterday by Acronis. It is not clear if this is because most SMB backup and DR strategies are managed by the owner of the business, usually not a technologist, or because …
Have been in the thick of this in Australia; 'tis mostly both a money and a mindset issue.
I've seen this time and time again. IT budgets barely cover what users want anyway, and projects being what they are, unplanned feature-creep extras suck away the last few bucks, so you can only just afford the stock-standard, minimalist disaster-recovery package. While it usually looks good on paper and at board approval meetings (and keeps the auditors happy), in reality it really sucks.
As with one's PC, business-wide backups are a 'she'll-be-right-mate' technology that everyone pays lip service to, full stop. (Except for the budget-cursing, fingernail-biting, fingers-crossed IT projects manager.)
and with good reason
I have seen how a lot of them do it... or just don't bother. A lot of business owners had never even thought about it, and those that had saw it as a necessary evil rather than an investment in safeguarding their business.
A lot of companies will still be rotating 5+ year old DATs and DLTs. Or: "I back up to an external USB drive." And where do you keep the drive? "Over on that shelf there." So the disaster you're protecting against is clearly not fire, flood, or other natural disaster.
Most SMBs in Australia don't trust their backups because they don't work, due to user stupidity. I know; I work in IT in Australia, and more than once we have turned up at businesses running 10-year-old machines with a tape backup and a message that had been appearing every day for several years warning that the tape doesn't have enough capacity and the backup has been aborted. At others I have seen the backup message come up saying something like "last backup done 400 days ago, backup now?" On checking, it turned out the computers were moved some time ago: the backup drive is now plugged into the wrong machine, not the server where it should be; the power supply for the backup drive isn't actually attached to the drive, it's tucked away in a drawer somewhere; and what are backups anyway?
I could go on, but you get the point. SMBs can't afford dedicated IT staff and can't manage the backup regime and hardware themselves, so it's best to assume on failure that the backup won't work anyway and be prepared for that eventuality.
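The common thread in the horror stories above is that nobody ever checked the backup independently of the backup job's own status messages. A minimal sketch of such a check, with hypothetical paths and an assumed freshness threshold, might look like this:

```python
import hashlib
import time
from pathlib import Path

# Assumed threshold: alert if the newest backup is older than 36 hours.
MAX_AGE_SECONDS = 36 * 3600

def newest_backup(backup_dir: Path):
    """Return the most recently modified file in the backup directory, or None."""
    files = [p for p in backup_dir.iterdir() if p.is_file()]
    return max(files, key=lambda p: p.stat().st_mtime, default=None)

def check_backup(backup_dir: Path) -> list[str]:
    """Return a list of problems found; an empty list means the basic checks passed."""
    problems = []
    latest = newest_backup(backup_dir)
    if latest is None:
        problems.append("no backup files found at all")
        return problems
    age = time.time() - latest.stat().st_mtime
    if age > MAX_AGE_SECONDS:
        problems.append(f"newest backup {latest.name} is {age / 3600:.0f} hours old")
    if latest.stat().st_size == 0:
        problems.append(f"newest backup {latest.name} is zero bytes")
    # Reading the whole file end to end proves the media is at least readable;
    # the digest could also be compared against one recorded at backup time.
    hashlib.sha256(latest.read_bytes()).hexdigest()
    return problems
```

This only catches the "400 days ago" and "drive unplugged" class of failure; it is no substitute for an actual test restore, which is the only real proof a backup works.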
And now is not the time...
to be finding this out. At least Vodafone in Oz are lucky, in that they can ask whoever slurped their customer data recently if they can kindly have it back, please.
Not an isolated case
Prostate cancer researchers don't trust backups either!
Most disaster recovery processes are NEVER tested
Lots of companies have DR or business continuity strategies; some are required to have them by law. The problem is that the theoretical, ideal document, written in the cool, considered environment of an office, usually bears little or no resemblance to the reality of trying to implement a recovery programme after an actual disaster, not least because none of the people who wrote it will ever have experienced a real-world IT disaster.
So while your planners might have considered how to recover to the "B" site in the case that your production environment is subject to a fire, has suffered a crippling power outage, or was flooded, they probably haven't considered what to do if all your sysadmins go down with food poisoning after a dodgy meal in the staff canteen, or even the 'flu.
Even with the common-or-garden disasters, it's inevitable that things won't go according to the book. There will be some changes that didn't get incorporated, or some incompatibilities that were missed. However, the cost of testing a full-blown DR, the risk that you can't get the B site up (or revert to A afterwards, the forgotten final phase), and the sheer upheaval it all causes mean that most MDs are quite happy to remain ignorant of the true state of their emergency procedures. After all, if the worst does happen they can always get another job.
DR/BCP isn't a one-way exercise either - even if you have got it
Some years ago I worked in a UK govt department. We had what was considered a "mission critical" system, with a DR/BCP setup that had a duplicate IT environment across town, dual fibres, switches etc., and all transactions on the live database constantly replicated to the backup.
Problem was, although we could (in theory) cut across to the backup system within a matter of minutes (just re-point the clients at network level) there was no effective way to resume normal working on the live system afterwards.
So, whenever there was a problem with the live system I'd ask if my team of data inputters and analysts could flip over to the backup. The answer was invariably "no, because we'd then have more than half a day's downtime while the backup is restored to the live system afterwards." Regardless of whether it was a 30-minute or 30-hour problem, we'd just twiddle our thumbs and wait for the problem to be fixed on the live system.
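The missing piece in setups like that is reverse replication: once you fail over, new transactions land only on the B site, so the live system falls behind and needs a full restore before it can resume. A toy sketch (hypothetical names, not any real replication product) of the difference:

```python
class ReplicatedPair:
    """Toy model of a live/backup pair, with one-way or two-way replication."""

    def __init__(self, bidirectional: bool):
        self.sites = {"live": {}, "backup": {}}
        self.active = "live"  # which site clients currently point at
        self.bidirectional = bidirectional

    def write(self, key, value):
        self.sites[self.active][key] = value
        # One-way setups only replicate live -> backup, so writes made
        # while failed over to the backup site never reach the live copy.
        if self.active == "live" or self.bidirectional:
            other = "backup" if self.active == "live" else "live"
            self.sites[other][key] = value

    def fail_over(self):
        self.active = "backup"

    def needs_full_restore(self) -> bool:
        # Failing back is only instant if both copies still agree.
        return self.sites["live"] != self.sites["backup"]
```

With one-way replication, any write made after `fail_over()` leaves the two copies diverged and forces the half-day restore described above; with two-way replication, failback is just re-pointing the clients again.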
I got the call
"Hi, I know you guys made some copies of our systems when you were testing configs... Is there any chance you kept them and can we get those back?"
It could be worse, one of our clients flooded and caught fire and then flooded even more.
Australia doesn't trust backups
I wouldn't trust any backups made by PCs (or any small computer), as they do not have data integrity designed into them. Data integrity is the most important part of backups, and PCs cannot offer it.