AWS Sydney's outage shows the value of a walk in the cloud

To understand the lessons of this week's Amazon Web Services outage in Sydney, which took down the local AWS cloud for a few hours, take a walk down Huntley Street, Alexandria, an unlovely street in a light industrial suburb. Huntley Street is interesting because its footpaths are riddled with an unusual concentration of …

  1. DainB Bronze badge

    Yes, you don't

    "I don't know if Huntley Street flooded during Sydney's weekend deluge and Amazon Web Services isn't fingering Equinix as the source of the Sydney outage."

    No one else in SY3 noticed anything. So unless AWS uses its own power and telecom cables at SY3, which it does not, there is zero indication that Equinix had any issues with power, comms or anything else. Something somewhere might have been flooded, but it was isolated to AWS only, and only AWS can tell you what it was.

    What almost no one wants to talk about is that two of the major four banks (Commonwealth and Westpac) suffered an outage because the AWS farm in Sydney went down. Now please tell me the two remaining ones aren't looking to use AWS for their transaction processing, because if they are, the next time AWS goes down it'll take the whole financial system of Australia with it.

    1. P. Lee Silver badge

      Re: Yes, you don't

      Banks using a cloud without geographical redundancy?


    2. Anonymous Coward
      Anonymous Coward

      Re: Yes, you don't

      The Westpac outage had absolutely nothing to do with AWS; it was a power failure at the Western Sydney data centre, operated by Fujitsu. Other companies were also affected.

  2. Phil Kingston Silver badge

    I always used to get nervous when I saw road works near the office during a period of inner-city regeneration - they'd be forever cutting cables.

    And there's only so much you can (economically) do to have a resilient set-up. That office had redundant fibres out of the building, in distinct conduits running in opposite directions down the street. Sadly, they converged in one exchange. Which, when a bad flood meant cars were floating around it and smashing into its walls, didn't help much. C'est la vie.

  3. X-Static

    control so much of what goes on outside their fences?


    A cloud DC should always have more than one source for external services. It was probably something else that caused AWS to go titsup in Sydney over the weekend, and the convenience of stormy weather is getting people confused.

    There are also no real excuses when it comes to the weather or a specific DC outage. Westpac, Commbank and any enterprise that designs and implements its core and critical applications to operate only as a single point of failure in one data centre are stuck in 1999 and need to hire someone as their CTO who understands what high availability is.

    It's 2016: applications can function extremely well, and actively, across multiple DCs, and it's not that difficult to do.
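The active-active arrangement the comment describes can be sketched at the client level. This is a minimal illustration with hypothetical DC names and endpoints, not any bank's actual setup: requests are spread across every data centre that currently passes health checks, so losing one DC just shifts traffic to the survivors with no manual failover step.

```python
import random

# Hypothetical endpoints for two independently powered data centres.
ENDPOINTS = {
    "dc-sydney": "https://api.syd.example.com",
    "dc-melbourne": "https://api.mel.example.com",
}

def pick_endpoint(healthy):
    """Return a request endpoint chosen from the DCs that currently
    pass health checks; raise only when every DC is down."""
    candidates = [url for name, url in ENDPOINTS.items() if name in healthy]
    if not candidates:
        raise RuntimeError("no healthy data centre available")
    return random.choice(candidates)

# Both DCs up: either endpoint may serve the request.
assert pick_endpoint({"dc-sydney", "dc-melbourne"}) in ENDPOINTS.values()
# Sydney down: all traffic lands in Melbourne automatically.
assert pick_endpoint({"dc-melbourne"}) == ENDPOINTS["dc-melbourne"]
```

The point of the sketch is that with health-checked routing in front of two live sites, a single-DC outage degrades capacity rather than availability.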

    1. Rob Isrob

      Re: control so much of what goes on outside their fences?

      You can only control so much of what goes on outside your fence. There's probably more than one case where DC access went titsup even with multiple telecom providers, which just happened to be running fibre through the same trench that the backhoe cut through. I'm thinking Northwest Airlines a number of years ago for one example where just that happened. Google is your friend . . .

  4. LosD

    Seems it just shows the importance of designing multi-availability-zone and multi-region into your deployment. Most properly designed sites stayed up (some were affected by an API breakdown that made automatic zone failover fail, though).
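The zone-then-region preference order this comment implies can be sketched as a simple health-probe walk. The zone and region names below mirror AWS naming conventions, but the layout and the `is_up` probe are purely illustrative assumptions:

```python
# Hypothetical zone layout: prefer zones in the primary region,
# spilling over to a secondary region only when all of them fail.
ZONES = {
    "ap-southeast-2": ["ap-southeast-2a", "ap-southeast-2b"],
    "us-west-2": ["us-west-2a"],
}

def first_available(is_up, regions=("ap-southeast-2", "us-west-2")):
    """Return the first zone whose health probe passes, checking every
    zone of the primary region before any zone of the secondary."""
    for region in regions:
        for zone in ZONES[region]:
            if is_up(zone):
                return zone
    raise RuntimeError("all zones down in all regions")

# Whole Sydney region dark: the walk falls through to us-west-2a.
assert first_available(lambda z: z.startswith("us-")) == "us-west-2a"
# Sydney healthy: its first zone wins.
assert first_available(lambda z: True) == "ap-southeast-2a"
```

The caveat in the comment still applies: if the failover itself depends on a control-plane API inside the failed region, this walk never runs, which is why the probe and the switch need to live outside the zone they are guarding.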

  5. Anonymous Coward
    Anonymous Coward

    How come no articles on the frequent reg outages?

    1. Drewc (Written by Reg staff) Gold badge

      Crap analogy - running stuff for other organisations is not our business.

      P.S. Our site has had a couple of outages in the last year.

  6. steamrunner

    And the point of this article is?

    Swap a few modern buzz-words out for their previous incumbents, and this article could have been sourced from any of the last few decades...

    ... and the answer is still as remarkably simple and exactly the same now as it was back then: put your sh*t in more than one data centre! Preferably with the second (or better, third) at least in a different city, if not another country altogether. Even better, use different business partners/providers for the others so there isn't a business-level SPOF along with the physical ones.

    This shizzle isn't hard, people. In fact, in this day and age, the likes of AWS make it positively trivial...


