in data centres are the result of PFYs trembling as they read the bug fix list on a Super Tuesday update.
Earlier this year I managed to sleep - somehow - through the Kent Earthquake. And in 2011 and 2012 about 30 centimetres of snow blocked my village for one or two days at a time. I also know someone who got flooded last year - she lives next to an estuary just a few kilometres away. Five weather systems can collide over the …
Just need incompetent contractors, like the ones who cut through a water cooling pipe in our datacentre. Fortunately, with the servers on a raised floor, the water was switched off when it was about 2cm below the underfloor wiring cages; otherwise some critical NHS equipment would have gone pop.
Or the contractors working on a separate company's servers who accidentally drilled through our T1 pipe (which at the time was the fastest line available), forcing failover to the secondary site...
Or the electricity supply company that, instead of switching off the electric supply to the datacentre as planned while our generators were running, spliced the replacement supply in on top of the existing supply, causing a sustained surge. Unfortunately the UPSs turned out to be... well... uninterruptible... as the German supplier had wired them incorrectly, so they were basically hard-wired on. Fortunately the fuses in the majority of the servers' 3-pin plugs blew before any lasting damage was caused.
Anon for obvious reasons.
So yes, there was a flood, but not the kind they were thinking of: four days to get a replacement busbar, and all the backup power was connected in front of the said busbar supplying three floors.
I pulled a lot of power cables up three floors of riser to keep the lights on.
...and get a meaningless answer.
What counts as "disruption"? A member of staff being half an hour late in to work one day because of a road closure?
If the question actually asked is even vaguer, e.g. "affected" by natural disaster, then it swiftly goes up to 100%. Why? Well, how many hard drives do you get through? The price you paid a few years back was affected by the flooding in Thailand that knocked out a lot of production.
Massive disruption a quarter of the way around the planet. Time to pull out the disaster recovery plan! And then... err... adjust a few rows on the projected expenditure spreadsheet. It's those kinds of jobs where you really earn your money.
IOW the usual claptrap study commissioned by someone with something to sell.
> What counts as "disruption"? A member of staff being half an hour late in to work one day because of a road closure?
I was wondering that.
Our hosting was affected by a "natural disaster" - the power went off during a snow storm when a neutral on the 132kV lines up the coast broke and blew across the phase lines, tripping the circuit. The lights were back on within half an hour, thanks to some employees of the local DNO braving the foul weather to physically disconnect some cables, isolating that section of line and allowing ours to be turned back on.
I think it was that outage that persuaded the boss to replace the failed UPS!
> Ours WAS next to the Buncefield oil refinery.
So, like a lot of idiots, you built a facility right next door to a huge oil storage and processing facility - and then complained when there was a problem there and it affected you? AIUI, Buncefield was put where it was specifically so it was "on its own", with road access to the motorway that kept all those tankers out of built-up areas.
I guess that open space and motorway access was also attractive to idiots whose idea of risk assessment is to wail afterwards "it's your fault for putting that hazard there that I built right next to, when it should have been obvious that it was a bad idea". That's in the same league as people who move into a house next to <something> (say an airport) and then complain about the noise from that <something> (such as the noise from the aircraft).
We had one in our town - some incomer moved in, and then complained about the clock that had been striking for ... well longer than I've been alive ... and forced it to be silenced.
I once had a set-to with an expert from NCAR over a broken window. I used it to forecast a large earthquake or a tropical storm - I forget which. He doesn't post on uk.sci.weather these days, but then recently neither do I. Scientists can be natural disasters. I had no idea who he was and went for the throat in my usual impeccably impervious manner.
So no great loss there. I, on the other hand, get upset too easily, so my going AWOL is best for everyone, me included. I wonder if he posts anonymously. I won't.
Natural big weather depends for convergence on gravity wells called weather fronts, a form of Lagrangian Point. Neutral points tend to attract convergence. Wiggins, a Canadian in the era of the cowboy, was using them to forecast the Storm of the Millennium. He died thinking he was a failure, but he forecast Krakatoa instead. You can get his manual from Project Gutenberg.
Today the Canadian-EFS is perfect for forecasting US tornadoes or world-wide volcanic eruptions. IKYN, anyone can do it if they know the rules for noughts and crosses.
Another mystic (in Coimbatore) with a lousy IT facility uses (or used - retired now?) a ladder fixed top and bottom to a concrete wall. You can quite easily imagine how that works; he has no idea. California, a land of eccentrics that easily matches Britain's quotient, has any number of adepts. It has something to do with purine nucleotides, I think. I believe they become more soluble during volcanic unrest; the solute is liable to sedimentation on a tidal scale. Just guessing, obviously.
But back to the subject: old kit is liable to disruption during earthquake activity, but so is modern stuff. The problem is likely to be the crystals in the electronics. For that reason, old kit that has stood the test of time can still be working perfectly through periods of high outages. (Look for Blocking Highs in the Atlantic Approaches.)
Keep a couple of spare old faithfuls running as a backup. If you don't switch them off too often, they may never be needed. Good luck.
Having Service go down is one thing, losing Data quite another.
So as long as you can get back up within your SLA, and you have offsite storage and backup, it's all in a year's worth of normal activities, no? Then there's that datacentre out east that got moved from the ground floor to the second because of floods... it's now on the third floor... ugh.
Biting the hand that feeds IT © 1998–2019