Cloudy virtual remote outsourced data
What could possibly go wrong?
US discount airline JetBlue is warning of delays and cancellations to flights after a number of its systems were knocked offline Thursday. The airline blamed the logistics nightmare on a Verizon data center outage, claiming the disruption had cut service to its website and online check-in and booking systems, as well as the …
Where does it mention cloud or virtual or even outsourced?
Are you saying anything not in your own building is cloud?
Sounds like a totally traditional data centre operation to me with no DR/failover plan. Would probably have been better off in a cloudy virtual setup.
Very poor (in fact pretty unbelievable) that they base their whole operation at one DC.
Except that when you are on the receiving end of such a situation, as a paying customer, will you be annoyed? It doesn't have to be flight-related; this could happen with ATMs, paying for goods at the supermarket, getting trapped in a lift, the appointments system at your doctor, dentist or hospital conking out, loss of electricity to your home, etc.
Technology is a dangerous thing; people are so seduced by the benefits that they totally ignore the things that can go wrong. Bye bye, risk analysis. Technology is the 21st-century $DEITY, worshipped 24/7.
"Except that when you are on the receiving end of such a situation, as a paying customer, will you be annoyed?"
Of course. And I'll berate the fools for using MS tech in the first place. We've seen them pull dumb crap in the past, and some people haven't learned from it. Massive numbers of "annoyed" customers will help to fix that. ;)
Yes it is :) Their DC should not be so easily taken offline. The blame should be shared by both companies. JetBlue also just learned a valuable lesson about carrier-neutral vs carrier-owned DCs.
Taking out a well-designed DC requires a fluff-up of epic proportions. EV1's Houston DC got taken offline because an electrician dropped a spanner into the main power switching cabinet in exactly the wrong position. Not only did Murphy do a little dance, but it took out a significant amount of their power-switching equipment and PDUs. Maybe Verizon hired the same tech?
"Not really Verizon's fault, in the end."
LIKE HELL IT ISN'T.
It is easy to say "not Verizon's fault" until you have done business with them. Incompetence is an absolute GIVEN, and I've said this before:
Verizon is doing everything it can to F#CK UP and ignore all infrastructure except:
2) where market / money has been definitively lost, thereby trying to make up lost ground
So Verizon cloud going TITSUP? "Logistics nightmare" is Verizon's modus operandi when it comes to landline services; it is why I left and why I tell every business I can to leave. Just this past Wednesday I told another one of my customers to leave Verizon and go to cable VoIP; their fax telephone line was down... due to a Verizon service outage.
"Taking out a well designed dc requires a fluff up of epic proportions."
I am aware of one particular 'DC outage' that involved a dump truck taking out the corner of the building. The subsequent daisy chain of events took the DC offline for days.
You see, as it collided with the building, it also took out some form of water main, which then jetted up towards the underside of the truck (which was now forming an inclined plane pointing into the hole in the wall, since the front of the truck was on top of the rubble).
All the water was basically hosed into the DC.
Try factoring that into a risk analysis report and see what the beancounters say :)
I would love to see an auditor's statement that evaluated the effectiveness of their DR plan. I swear most of these companies just write down anything to pass an audit, with zero chance of the "plan" ever working. And then they pray, or, when something like this does happen, they decide the risk of failing over is much greater than just waiting for the power to come back on, because, you know, they've never actually TESTED a full failover and failback.
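To make that concrete: a failover test worth anything is one a script can run and score, not a paragraph in a binder. Here is a minimal sketch of such a drill in Python; PRIMARY_URL, DR_URL and trigger_failover() are made-up placeholders, since the real trigger would be your own DNS, load-balancer or replication tooling.

# Minimal DR drill sketch: fail over, verify, fail back, verify.
# PRIMARY_URL, DR_URL and trigger_failover() are hypothetical placeholders;
# a real drill would drive actual DNS/load-balancer/replication machinery.
import sys
import urllib.request

PRIMARY_URL = "https://primary.example.com/health"
DR_URL = "https://dr.example.com/health"

def healthy(url, timeout=5.0):
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError/HTTPError and socket timeouts
        return False

def trigger_failover(to_dr):
    """Stub: flip DNS / the load balancer / replication here."""
    print("would fail over to", "DR" if to_dr else "primary")

def drill():
    if not healthy(PRIMARY_URL):
        print("DRILL ABORTED: primary unhealthy before we even started")
        return 1
    trigger_failover(to_dr=True)
    if not healthy(DR_URL):
        print("DRILL FAILED: DR site not serving after failover")
        return 1
    trigger_failover(to_dr=False)
    if not healthy(PRIMARY_URL):
        print("DRILL FAILED: primary not serving after failback")
        return 1
    print("drill passed both ways; now the ticket can be closed")
    return 0

if __name__ == "__main__":
    sys.exit(drill())

The point is the exit code: an auditor can demand the result of the last run, not just a promise that a run would succeed.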
As usual, poor design. Probably because some accountants decided that nah, they didn't need the extra cost of real disaster proofing.
That said, they might be right. With the limited to zero liability companies like JetBlue have when it comes to not delivering services, they might not have a financial need for anything resembling resiliency. So long as these corporations are allowed to put all the risk onto the consumer, and not assume ANY financial risk themselves if they don't deliver services, this will continue to happen, and nothing will change.
One of our customers raises a ticket every three months for us to fail over one of their systems to the DR site and back again, and the ticket isn't closed until all three machines have gone both ways without problems, at the same time.
For their operations centre, they have two of them, both staffed 24/7. They alternate week by week as the 'duty' centre, but the off-duty one is always shadowing and watching what the other is doing, so it can take over in a matter of minutes (see the sketch below).
While it is more complicated doing stuff on their network, it is nice to have at least one customer for whom cost comes after reliability and resilience.
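That shadowing arrangement amounts to a warm standby with a heartbeat: the off-duty centre follows the same event stream and only promotes itself when the duty centre goes quiet, which is why takeover can happen in minutes rather than hours. A toy sketch of that promotion logic follows, with made-up names and thresholds purely for illustration.

# Toy warm-standby monitor: the off-duty centre tracks the duty centre's
# heartbeats and promotes itself if they stop. All names and thresholds
# here are illustrative, not any particular operator's implementation.
import time

HEARTBEAT_INTERVAL = 10.0   # seconds between expected check-ins
TAKEOVER_AFTER = 3          # missed beats before the standby takes over

class StandbyCentre:
    def __init__(self):
        self.last_beat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        """Called each time the duty centre checks in."""
        self.last_beat = time.monotonic()
        if self.active:
            self.active = False
            print("duty centre is back; returning to shadow mode")

    def tick(self):
        """Periodic check: promote if the duty centre has gone quiet."""
        silence = time.monotonic() - self.last_beat
        if not self.active and silence > TAKEOVER_AFTER * HEARTBEAT_INTERVAL:
            self.active = True
            print(f"no heartbeat for {silence:.0f}s; standby taking over")

Because the standby has been shadowing all along, promotion is a flag flip rather than a cold start, and that is exactly what this customer is paying for.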
Not to mention that many DCs get paid by their utility to switch to generator power at times of extreme demand on the grid. The DC gets to test its generators for free and the utility gets a significant drop in demand. IIRC, Super Bowl Sunday is an example of one of those times.
Working in the Air Transportation Industry, I see this only too often.
"You don't need that DR site. It costs too much and is never used" says the beancounters
"ok kill the DR site"
Two weeks later, the remaining DC goes 'phut'.
The airline and/or the airport basically stops working, losing millions in the process.
Beancounters are nowhere to be found.
I know of a 'Tier 1' airport with NO DR location or even a DR plan. Not gonna fly there again.