UK cloudy firm Giacom's data centre has been knocked offline by a fire at a nearby electricity substation. The firm supplies "over 1,000 resellers" with a hosted cloud email service, based on Microsoft's Exchange Server 2010, through its MessageStream network. This system fell down on Friday when the power went out at the data …
Good old 'cloud'.
Thing is, it wouldn't be so kack if these so-called service providers weren't so cheap when building out their infrastructure.
It's not only a matter of UPSes but also generators. There should be no excuse for a DC to be down because a substation has been hit and is out for several days. Especially if it's meant to be a high-availability DC.
Of course, if the company is willing to take the hit, fair enough. I hope in that case they're not selling five-nines availability to their customers.
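For context on what "five nines" actually commits a provider to, here's the rough arithmetic (illustrative only; nothing here is from Giacom's actual SLA):

```python
# Downtime budgets implied by common availability targets.
# A multi-day substation outage blows every one of these by orders of magnitude.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability, label in [(0.99, "two nines"),
                            (0.999, "three nines"),
                            (0.9999, "four nines"),
                            (0.99999, "five nines")]:
    allowed = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {allowed:.1f} minutes of downtime per year")
```

Five nines works out to roughly five minutes a year, which is why you need working generators, not just a row of UPSes.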
Pint in celebration of great planning by Giacom management.
"We are one of only a handful of hosting providers to maintain three synchronised copies across two sites of each Exchange mailbox, but in this case, switching to these alternatives was not an option due to the scale of the problem,"
"We mirrored the data across three servers. However, they were all plugged into the same 13A socket"
Or, really, it's closer to:
"We can't run services from our backup site because it's a poor copy of the original site and only there so we can say we have 'redundant' services with 'mirrored' data."
The fact that it's a BACKUP site and designed for JUST THIS CASE, when the primary goes down is - apparently - neither here nor there.
It's like saying that a copy of the Facebook database put on a laptop and connected over a 56k modem is a "synchronised copy" across multiple redundant sites. Technically, yes. Realistically, stop talking rubbish.
If your SECONDARY cannot take over from your PRIMARY, it's not a backup, redundant or mirrored. At all. No matter what capacity issues you may fabricate.
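The commenter's point can be sketched in a few lines: a "mirror" only counts as redundancy if some surviving site can actually absorb the load when the primary goes dark. The site names and capacity figures below are invented for illustration:

```python
# Toy model: redundancy means a healthy site with enough capacity
# exists after the primary fails. All names and numbers are made up.

def pick_serving_site(sites, demand):
    """Return the name of a healthy site that can carry the demand,
    or None -- in which case the 'redundancy' was marketing only."""
    for name, healthy, capacity in sites:
        if healthy and capacity >= demand:
            return name
    return None

sites = [
    ("primary-dc", False, 1000),  # substation fire: offline
    ("backup-dc",  True,    50),  # "synchronised copy" on a shoestring
]
print(pick_serving_site(sites, demand=800))  # None: not a real backup
```

If the only answer is `None`, the second site was never a backup, whatever the brochure said.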
Re: plugged into the same 13A socket
We mirrored the data across three servers. However, they were all plugged into the same 13A socket
So what you're saying is that they suffered the traditional "cleaner outage" problem? :)
We took the precaution of choosing a provider who also hosts some VERY major clients, clients who insist on testing everything that moves (redundancy, power-fail checks, personnel screening, ISO 27001 and banking-law compliance). It means we get the benefit of all of that without having to pay for it ourselves. It still costs about 20% more than a "standard" provider, but it's worth the extra.
Of course, we still have a full outage scenario in our BS 25999 BCM (and test it), but it's less of a worry than clients choosing bad passwords until we go two-factor in a few months.
The scale of the problem?
The point of having data synced across one or more sites is so you can carry on after one of them takes a nuke. This doesn't sound like a nuke scale problem to me.
More proof that "cloud" is meaningless marketing bollocks.
Since "cloud" can mean one badly provisioned data centre or dozens of redundant data centres spread around the world, it's a pretty pointless term to use.
It's sad that IT has been taken over by marketroids and adopted a term based on a manager's interpretation of a Visio diagram.
Cloudy email; otherwise known as email
@Cameron, and sad that the Reg feels it necessary to follow suit. Still, we must feed the trolls I suppose.
I'm a bit appalled..
We run a small service, and even we have our stuff mirrored over two geographically separated data centres (there's a good 120 km between them), and that includes DNS management.
Oh well, at least it was a true cloud service - all the email went up in smoke..
@Cameron and @b166er
At management level there has always been this problem that a term that is sufficiently descriptive of a service risks exposing a manager for knowing nothing about the topic at hand. This is why the "cloud" concept is such a hit: heaps of people spouting meaningless BS about something they only have a vague grip on.
It has thus become a hit with managers, marketeers, politicians - basically all the people who cannot be trusted near equipment but who control the budget to do it right or wrong. Translated: willing prey for the consultants of the bigger companies (because large means you don't *need* a clue, you just drift in on brand and degrees instead of experience).
El Reg needs to use that word so people WITH a clue (i.e. most readers here) are kept abreast of what BS is spouted, giving us a fair chance to utter the same BS and thus bend the discussion towards something that can actually be delivered. Because if you cannot - guess who gets the blame? Not those with vague specs..