Misco: We're moving to the cloud after yesterday's bit barn meltdown

Reseller giant Misco has confirmed it will embark on a crash migration program to shift its infrastructure into the cloud - a day after a datacentre meltdown that froze its e-commerce front-end for six hours. The dash to the cloud while having to maintain a creaking in-house infrastructure illustrates the dilemma facing …

  1. Drummer Boy

    Let me guess

    The accountants wouldn't spring for the money to do DR properly, and got bitten.

    They now run off to the cloud, rather than doing the job right, as the cloud never has outages, right??

    1. Anonymous Coward
      Anonymous Coward

      Re: Let me guess

      You see the same thing in the cloud as well. People not buying HA configurations and then getting bent out of shape when their non-HA setup is not highly available. Still, kind of a sign of the times when a company that makes their cash selling tin decides to move to the cloud.

  2. Alister

    “Moving forward we’ll be moving our infrastructure to a cloud platform so we have continuity of service and are not relying on a back-up fail over failing”.

    Typical management knee-jerk reaction. What they should do is look at what they spent on DR (clearly very little) and then work out a revised plan which actually works.

    If they want to transfer it all to public cloud offerings, fair enough, but it won't magically give them "continuity of service" unless they put the work in.

    They still need the same sort of planning - "cloud" is not magically robust: if you lose a server and don't have any form of DR, then you're stuffed, no matter where it's hosted.

    1. Anonymous Coward
      Anonymous Coward

      Management is not going to blame themselves and say 'maybe we should have bought insurance'. It used to be that they would blame the system provider. It isn't always management though. I have seen many a situation where the sysadmin hoses up the works and then blames the system for 'crashing'.

  3. Martin hepworth

    DR/BC

    yeah cos all the big folks like Netflix don't need multiple redundant instant DR setups to cope with AWS etc falling over.

    1. Adam 52 Silver badge

      Re: DR/BC

      In this case, from what they say, RDS would have saved their bacon.

      I've done the binary search of RDS snapshot restores to find the corruption time before, and whilst it isn't pleasant it's a lot easier than buying hardware and doing lots of restores and txn log replays.
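
      For anyone curious what that looks like in practice, here is a minimal sketch of that binary search using boto3 in Python. The scratch instance name and the restore-and-check helper are hypothetical placeholders, not anything Misco or the commenter described; the consistency check itself is always site-specific.

          # Sketch: binary-search an instance's RDS snapshots for the last clean one.
          # Assumes boto3 credentials are configured; restore_and_check() is a
          # hypothetical helper that restores a snapshot to a scratch instance,
          # runs a site-specific consistency check, and tears it down again.
          import boto3

          rds = boto3.client("rds")

          def list_snapshots(instance_id):
              """Return the instance's snapshots, oldest first."""
              snaps = rds.describe_db_snapshots(
                  DBInstanceIdentifier=instance_id)["DBSnapshots"]
              return sorted(snaps, key=lambda s: s["SnapshotCreateTime"])

          def restore_and_check(snapshot_id):
              """Restore the snapshot to a throwaway instance and return True if
              the restored data is already corrupted (check is site-specific)."""
              rds.restore_db_instance_from_db_snapshot(
                  DBInstanceIdentifier="corruption-probe",   # hypothetical scratch name
                  DBSnapshotIdentifier=snapshot_id,
              )
              # ... wait for the instance, run the application's own checks, delete it ...
              raise NotImplementedError("plug in your own consistency check")

          def last_clean_snapshot(instance_id):
              """Snapshots taken before the corruption pass the check, later ones
              fail, so binary-search for the boundary rather than restoring each."""
              snaps = list_snapshots(instance_id)
              lo, hi, last_good = 0, len(snaps) - 1, None
              while lo <= hi:
                  mid = (lo + hi) // 2
                  if restore_and_check(snaps[mid]["DBSnapshotIdentifier"]):
                      hi = mid - 1            # corrupted: look at earlier snapshots
                  else:
                      last_good = snaps[mid]
                      lo = mid + 1            # clean: corruption happened later
              return last_good

      The point of the binary search is that you only restore a logarithmic number of snapshots instead of walking through every one, which is what makes the exercise merely unpleasant rather than agonising.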

    2. Anonymous Coward
      Anonymous Coward

      Re: DR/BC

      Even with the cloud, a lot of the time the problem is that management knocks off the HA line items to lower their cloud cost. If you use geo-stretch clusters, it is pretty hard to knock the service down - e.g. Google search and Bing search are never down. It costs serious money though.

  4. Alister

    Datacentre - or Broom closet?

    The more I read about the Misco outage the more I wonder just what "Datacentre" means in their case.

    Picture a standard bit-barn - usually a prefab building the size of a soccer field:

    you walk in the front door, and (if it's any decent bit-barn) you have to go through various physical security checks.

    Having passed them, you go through the security gate / airlock into the data-floor, which may be divided into separate halls, or for our purposes is just a single massive area.

    Off in the distance, in the middle of an otherwise bare floor, stands a single 42U rack, and as we get closer, we can see that it's partially populated.

    Close up, we see a firewall, a switch, a few ethernet cables and a 2U server with "Web1" written across the lid in marker pen, under the dust. On the floor of the rack is a box of floppies, marked "backup".

    That's it, ladies and gents, there's Misco's robust e-commerce front-end...

    :)

    1. Roland6 Silver badge

      Re: Datacentre - or Broom closet?

      Off in the distance, in the middle of an otherwise bare floor, stands a single 42U rack, and as we get closer, we can see that it's partially populated.

      It's called downsizing; once that entire data floor was occupied by a single mainframe...

    2. Fatman
      Joke

      Re: Datacentre - or Broom closet?

      <quote>Off in the distance, in the middle of an otherwise bare floor, stands a decades-old, valve-operated IBM mainframe in its rusting case.

      Close up, we see ancient green-screen terminals and old Bell 103 modems, with "Web1" written across the lid in marker pen, under the dust. On the floor is a box of punch cards, marked "backup".</quote>

      Fucking bean counters hard at work increasing the executive bonus pool.

      There!

      FTFY

      1. Anonymous Coward
        Anonymous Coward

        Re: Datacentre - or Broom closet?

        Those old systems worked well. No such thing as pages erroring out. No eighteen-factor authentication. No hourglasses or spinning browsers. No Java. Turn the old, reliable 400 back on. They also went down about once every decade.

    3. John Sanders
      Devil

      Re: Datacentre - or Broom closet?

      You're not too far from reality... only half a notch.

    4. Anonymous Coward
      Anonymous Coward

      Re: Datacentre - or Broom closet?

      The more I read about the Misco outage the more I wonder just what "Datacentre" means in their case.

      From the pedlars of Systemax?

      How about a clusterfsck made up entirely of these quality boxes?

  5. Captain Scarlet Silver badge
    Facepalm

    Oh Lord

    Please don't mess up the website; it's fine, just learn from mistakes and make sure it copes better next time. Don't want to have to jump to another reseller.

  6. John 104

    “We had a server outage that caused corruption of our data centre.."

    Huh. I guess their data center comprises just one server? Or did one server cause the entire infrastructure to become corrupt? Typical management. This guy obviously has no idea how his business is run from a technology standpoint. As noted above, rush to the cloud, it must be better. Meanwhile, talented, qualified engineers are getting shown the door due to idiots like this guy. I'm sure we'll see an article here on El Reg in 5 minutes about how they are also going to adopt DevOps to solve all their problems...

  7. ma1010
    Facepalm

    From bad to worse?

    Oops, our backups didn't work. But why work on fixing the problem in our infrastructure that caused it? Instead, let's just sign up to use "someone else's infrastructure" that we don't control or really know much of anything about. Of COURSE that will work so much more reliably than anything we control.

    1. Phil W

      Re: From bad to worse?

      Absolutely! The Cloud* is brilliant and totally resilient and definitely never has any down time, and if our office loses its Internet connection we definitely won't have any problem continuing to take orders over the phone using the order system that we can no longer access....

      Personally I have one very important first rule for externally hosted cloud solutions. Can your business, or the business segment that relies on that solution, continue to function without it?

      If the answer is no then you should keep it in house, and do proper redundancy and data recovery. With the right infrastructure your solution can be just as reliable as the cloud, and if you lose net access you can still see it even if your customers can't. Also just as importantly, if it does go wrong you have full control over fixing it instead of twiddling your thumbs waiting for an explanation and/or estimated fix time.

      *WTF does that really mean anyway?!? The Cloud? So there's just one right? AWS, Azure etc are all just one big service. Arrgh, don't get me started on that part.

  8. Pete4000uk

    Cloud burst?

  9. Nate Amsden

    6 hour outage

    What a joke. I say wake me when you are at hour 24 or 30, and people STILL don't know what the cause is or how to fix it.

    Yeah I've been in those before on a few occasions, 90% of the time due to application bugs.

    Battle hardened

  10. David Roberts

    PFI?

    Outsourcing data centres is just like PFI - it takes the cost out of the capital budget and removes the provision for ongoing maintenance.

    All you have is payment from the current account for the service.

    It may turn out to cost you more, it may turn out to be less secure, less efficient and less reliable, but meanwhile you have maximised value, sold off a capital asset, reduced headcount and returned cash to the shareholders.

    Future problems? Blame goes straight outside the company to the service provider.

  11. Anonymous Coward
    Anonymous Coward

    May I suggest Monster Cloud to Misco as a reasonably priced dependable cloud solution.

  12. Anonymous Coward
    Anonymous Coward

    Misco management should take careful note of the issues 123-reg customers are currently facing. Shifting server hosting to a cloud provider (= a computer in someone else's datacentre) might be a good strategic decision if considered in the cold light of day, but it should not be a knee-jerk reaction to a server crashing and it is definitely not an alternative to good service continuity planning (and testing).

  13. Scaffa

    They keep saying "data centre" rather than "data centres" - which I guess explains why their cock-up was so unavoidable.
