A bungling fire safety contractor caused a complete shutdown at hosting firm Webfusion's data centre today, crippling thousands of websites. The firm's domain name arm, 123-Reg, was also temporarily offline. "An external third party was carrying out routine maintenance in our data centre, and testing our systems for fire …
Contractor gripping his stomach... "Where's the toilet?"
Disaster tolerance? Transparent alternate site failover?
I suppose not, doing things properly is so unfashionable these days.
doing things properly
As I'm sure others (the cake perhaps) will point out: you get what you pay for. No-one does things properly anymore, so don't assume they will. Sigh.
Get what you pay for
We pay Donhost £50 a month for an unlimited reseller package; you can't expect complete failover for that. I also don't expect the boss to fork out any more for anything better.
No DR site?
Don't they have a DR site? What would happen if there was a real fire (and the related flooding)?
Moon on a stick
So you want failover and DR and all that malarky, but you're going to a budget hosting provider like Webfusion?
If you're with them, accept the fact that you *don't* have these services available - certainly not without additional spend.
In my experience it's the ones who spend the least that complain the most - the people who were spending a proper sum for decent service in the first place already had and subsequently enjoyed the protection their spend gave them.
Fairly typical of their service
It's not like it's their first and only outage in the last 12 months. I finally gave up on their poor hosting last month and moved to a VPS with Global Gold. Zero outages and very noticeably better response times from my websites have left me wishing I had done it much, much sooner.
They started going downhill when they moved their VPS servers from Holland to the UK. It should have improved things for UK clients, but it most certainly didn't: performance dropped noticeably after the move, and rock-solid performance turned into regular long outages and slow response times.
They're not all up ....
Trying to get to http://www.goodlifehomebrew.com/ to order something; only got their phone number because Google cached it .....
Does anyone know...
... if they have a BofH working there and, if so, were any managers trapped at the time...?
Fire Safety Officer
What's the betting it was Keith Laird......
Forgot to pull the fuse?
I guess in essence whoever was doing the checks forgot to pull the fuse before pressing the "on" button. Wouldn't want to be going into the office tomorrow to see the boss who's just been faxed the bill for all this.
Security guards can also shut down data centers
At a previous job, a security guard (an untrained rent-a-cop from some third world country) tried to silence an alarm on an exit door -- by pressing the emergency power kill button next to the door. It was a large, red, well-labelled button under a clear plastic cover, and it shut down all of the power in the data centre except for the emergency lights.
The security guard couldn't read the sign that said "EMERGENCY POWER SHUT OFF" because it was in English rather than whatever third world language he spoke.
Server sent into "safe mode"?
Let's see. If there's a real fire, either it'll be put out before it reaches anything serious, in which case having the servers go offline is an unnecessary outage, or it'll burn the place to a crisp, in which case whether they were in "safe offline mode" or not at the time becomes moot.
This process sounds like nothing more than a recipe for unplanned outages to me. It'd make sense if there were a mirror site to seamlessly cut over to, but as there clearly isn't, I'd have thought that, in case of a fire alert, hanging on for as long as possible with someone's finger poised over the EPO button would be a better strategy.
Server still down
Our server is still down.
Wouldn't reboot after the outage. Despite promising to look at it and to call back, a number of times, they never did. Eventually, after nearly a day and a half, they looked at it but couldn't get it to reboot. Then they said they didn't have enough engineers to be able to restore our server from the backup. Their support guy said they were too busy with all the other servers that were still down.
They even had the gall to ask me to pay for them to restore from backup, even though they caused the problem in the first place. Can you believe that?
So server has been down almost 3 days (68 hours) and still not up. It is a completely and utterly unacceptable service.