What's more scary? Downtime or hackers?

This topic was created by Simon Sharwood, Reg APAC Editor.

  1. Simon Sharwood, Reg APAC Editor

    What's more scary? Downtime or hackers?

    I'm at Gartner's IT Operations and Data Centre summit today, and analyst Joe Skorupa just mentioned a client that has left its switches un-patched for FOUR years.

    He said the user is more afraid of unplanned downtime than of hackers, and is prepared to wear the risk of not applying patches if it means the network remains stable.

    What's your view? Is this user pragmatic? Mad?

    How do you balance the two risks?

    1. AdamFowler_IT

      Re: What's more scary? Downtime or hackers?

      That's a big question.

      Usually switches are internal only, so it's much less risky. Hackers need access to something to be able to talk to the internal switches, and if they've gotten that far then you're probably exposed in several ways anyway.

      Switches are probably one of the least patched things because they sit there and just work, and are normally configured to do nothing but pass traffic on the same LAN.

      What is a hacker going to do to a switch? Mirroring a port and listening to all the traffic is probably the worst case. Then it depends on whether your internal communications are encrypted or not; most environments are a mix of both. But again, if they're at that level you've probably got a bunch of doors open, not just the switch.
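      If you want a feel for how bad that worst case would actually be, a quick count of encrypted versus cleartext traffic tells you how much a mirrored port would give away. A very rough sketch, assuming Scapy is installed and you have capture rights; "eth0" and the port lists are placeholders for your own environment:

        # Rough estimate of how much LAN traffic would be readable off a mirrored port.
        from scapy.all import sniff, TCP

        ENCRYPTED_PORTS = {22, 443, 636, 993, 995}    # ssh, https, ldaps, imaps, pop3s
        CLEARTEXT_PORTS = {21, 23, 25, 80, 110, 143}  # ftp, telnet, smtp, http, pop3, imap

        counts = {"encrypted": 0, "cleartext": 0, "other": 0}

        def classify(pkt):
            # Bucket each TCP packet by whether its ports suggest encryption.
            if TCP in pkt:
                ports = {pkt[TCP].sport, pkt[TCP].dport}
                if ports & ENCRYPTED_PORTS:
                    counts["encrypted"] += 1
                elif ports & CLEARTEXT_PORTS:
                    counts["cleartext"] += 1
                else:
                    counts["other"] += 1

        # Sample 500 packets off the interface and report the split.
        sniff(iface="eth0", prn=classify, store=False, count=500)
        print(counts)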

      But again, it's a switch. If you're that worried, keep a single cold spare and patch them one at a time. Not running patches doesn't guarantee the network stays stable anyway, and at four years it's bordering on negligence.
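      If you do go the rolling-patch route, step one is just knowing what firmware is actually out there. A minimal inventory sketch, assuming Netmiko and Cisco IOS-style switches; the hostnames and credentials are made up:

        # Pull the running software version from each switch before planning the patch cycle.
        from netmiko import ConnectHandler

        SWITCHES = ["sw-core-01.example.net", "sw-access-01.example.net"]  # hypothetical names

        for host in SWITCHES:
            conn = ConnectHandler(
                device_type="cisco_ios",   # adjust to whatever your estate runs
                host=host,
                username="audit",          # placeholder credentials
                password="changeme",
            )
            version = conn.send_command("show version | include Version")
            print(f"{host}: {version.strip()}")
            conn.disconnect()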

      1. Denarius
        Meh

        Re: What's more scary? Downtime or hackers?

        Is there a difference? Both will bring the business to its knees. That the question even needs asking suggests a failure to examine the root cause, which would have to be inadequate funding of infrastructure. Adam is right about having a cold spare ready to go so switches can be patched as vendors require for support. Given the dropping cost of IT infrastructure for a given level of performance (software bloat excluded), everything should be multiply redundant. If the business is so small it can't afford that, perhaps a decent ISP and Amazon/Google would be better, with critical data on a standalone server for DR.
