GitHub: We're sorry (again) about (another) outage

GitHub has issued a mea culpa for the latest outage that left users unable to access the code-sharing site. The company said that a power outage was to blame for Thursday's downtime. The issue did not go unnoticed. Yikes. GitHub is down. Since my boss doesn't understand git, this is a clear opportunity to play Xbox. I mean, " …

  1. Notas Badoff Silver badge


    "mia culpa" ?

    And yes I know about the corrections button, but since reporting via email is difficult here and I've already skipped over three articles with "bene" for "been" and the like, this one is too precious! OBTW: "wer recently"

    1. Will Godfrey Silver badge

      Re: Self-referentiality

      Not very good at this are you? That should have read:

      "Due to the great importance of the subject there wasn't time to perform the normal detailed copy check."

    2. Anonymous Coward
      Anonymous Coward

      Re: Self-referentiality

      Sorry, too busy pasting tweets to speel chek.


    3. Destroy All Monsters Silver badge

      Re: Self-referentiality

      Miau Pulpa, shurely

  2. Frederic Bloggs


    Git is a distributed SCC system? Isn't that the whole point? One can hack away at one's code, committing revisions, and then catch up whenever (or if ever) a central server comes back. If it doesn't, or just if needs be, your repository can be pulled directly by co-workers. Try git send-email.

    Go look how git is used for the thing that it was written for: the Linux Kernel. Or alternatively RTFM.

    Git is not CVS nor is it Subversion.

    GitHub going away for a few hours or even days is not the end of the world.
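
    The peer-to-peer workflow described above can be sketched with plain git and no hosting service at all. A minimal sketch, simulating two developers' clones as local directories (in real life "alice" would be a remote path such as alice@host:project.git; all names here are hypothetical):

    ```shell
    set -e
    work=$(mktemp -d)

    # "alice" keeps committing locally while the central server is down
    git init -q "$work/alice"
    git -C "$work/alice" -c user.name=alice -c user.email=alice@example.com \
        commit -q --allow-empty -m "offline work while GitHub is down"

    # "bob" pulls directly from alice's repository -- no central server involved
    git clone -q "$work/alice" "$work/bob"

    # alice commits more; bob catches up with an ordinary pull from her clone
    git -C "$work/alice" -c user.name=alice -c user.email=alice@example.com \
        commit -q --allow-empty -m "more offline work"
    git -C "$work/bob" pull -q

    git -C "$work/bob" log --oneline
    ```

    git send-email and git bundle cover the cases where there is no direct network path between the two machines at all.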

    1. Daniel Voyce

      Re: Surely..

      The problem is that tools such as Composer, Bundler, Pip, Bower etc. can all be hooked in to use GitHub repositories, and developers / DevOps somehow think it is acceptable to have these things "built on deployment", only to realise that if that service doesn't exist, many of these tools don't handle the failure well and you end up with a broken deployment.

      CI would prevent this in most cases, but I have seen just as many poorly implemented CI systems as I have crappy deployment mechanisms. PEBKAC.

      1. Wensleydale Cheese Silver badge

        things "built on deployment"

        Agreed, stuff that's "built on deployment" can be a real pain in the neck.

        The ability to work completely offline is one of the main advantages of tools like git for folks like me.

        There's also an unfortunate trend to require constant internet access for product documentation. I've got acres of disk space on multiple devices here; it's crazy to insist I be online all the time.

    2. Stretch

      Re: Surely..

      Go try it. How does it work when everyone is on NAT'd networks and no one has connectivity to each other?

      Git is not SVN, it's much, much worse.

      1. stephanh Silver badge

        Re: Surely..

        Actually Git can still do full history-preserving merges even if all you have is e-mail (or sending USB sticks by carrier pigeon). Read up on git "bundles".

        Good luck trying that with SVN.

        Github is much more than just Git, though. Otherwise they wouldn't have much of a business.
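
        For the curious, the bundle trick mentioned above looks roughly like this. A hedged sketch using a throwaway repo; the .bundle file is the thing you'd attach to an e-mail or put on the USB stick (names are illustrative):

        ```shell
        set -e
        work=$(mktemp -d)

        # a repository with some history in it
        git init -q "$work/project"
        git -C "$work/project" -c user.name=demo -c user.email=demo@example.com \
            commit -q --allow-empty -m "initial work"

        # pack the entire history, all refs included, into one ordinary file
        git -C "$work/project" bundle create "$work/project.bundle" --all

        # the recipient clones straight from the file -- full history, no network
        git clone -q "$work/project.bundle" "$work/clone"
        git -C "$work/clone" log --oneline
        ```

        Later changes can be shipped incrementally with a ref range, e.g. `git bundle create update.bundle lastsent..HEAD`.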

    3. cat_mara

      Re: Surely..


      Git is not CVS nor is it Subversion.


      I agree, but a lot of developers don't understand this, particularly if they come into work one morning and their team lead goes, "oh, btw, we migrated from SVN to git over the weekend". That is, they replace the old centralised VCS product but retain a centralised workflow. There is a tendency, I think, to do git migrations because it is fashionable or because of some perceived benefit ("OMG, we can branch now!") without really investigating the benefits that a distributed VCS will give you. I would not call myself a git expert by any stretch, but I only really began to grok it when I stopped looking at it through the "lens" of a centralised VCS model.

    4. EarthDog Bronze badge

      Re: Surely..

      Unless of course you work on a project dependent on a large number of components hosted on GitHub and decide to do a full update so you can integration test. You do integration test, don't you?

  3. Barbarian At the Gates

    First rule of slacking

    If you can plausibly blame non-productivity on an event outside of your control, play up how critical the tool/service is to your work, and then goof off until it's back up.

    If your boss sucks, they'll buy the story, and you won't feel too guilty about selling them a line of hooey.

    1. Jonathan 27


      How do you benefit in this scenario? You're still at work; all you're doing is killing yourself later to hit the deadline. "Goofing off" at work is for idiots.

      1. This post has been deleted by its author

      2. Anonymous Coward
        Anonymous Coward

        Re: Yeah...

        "How do you benefit in this scenario"

        More web browsing time.

        "all you're doing is killing yourself later to hit the deadline."

        Haha sux to be the tomorrow me!

  4. Ali Um Bongo

    This Just In...!

    *"...If you can plausibly blame non-productivity on an event outside of your control, play up how critical the tool/service is to your work, and then goof off until it's back up..."*

    .. which presumably, for a lot of modern journalists, means a day off next time Twitter goes down.

  5. Stretch

    Yes well...

    You were warned not to use it. VC money is drying up now...

  6. EarthDog Bronze badge

    Single point of failure?

    Um... sounds like their ops team doesn't understand how to maintain a mission-critical piece of infrastructure.
