Sysadmins: Step away from the Big Mac. No more Heartbleed-style 2am patch dashes

Patching is a necessary evil for network administrators. Unfortunately, an awful lot of them have been burning not only the midnight oil, but also the weekend oil to keep up with patches such as – but not limited to – Heartbleed and Shellshock. The bad news is that this is only the start. As software vendors move towards a …

  1. Anonymous Coward
    Facepalm

    When I read "Step away from the Big Mac"

    … I was thinking some Apple-branded server.

    I was not thinking so-called "food".

    1. Steve Davies 3 Silver badge
      Thumb Up

      Re: When I read "Step away from the Big Mac"

      Quote

      I was not thinking so-called "food".

      Well said Mr S. IMHO, calling a Big Mac food is a violation of common sense (YMMV)

  2. Anonymous Coward
    Thumb Down

    You lost me at this sentence.

    "Patching is a necessary evil for network administrators."

  3. Anonymous Coward
    Anonymous Coward

    What's patching?

    That seems to be the ethos with some places.

    I know of servers with no patches for years/since they were installed. Because the customer takes the view "it isn't broken, so don't 'fix' it". Then there are all those appliances (routers, firewalls, etc) that AFAICT "don't exist" to most people unless they break.

    1. Anonymous Coward
      Anonymous Coward

      Re: What's patching?

      Unfortunately, I have not yet worked anywhere that regularly applies patches to all installed software components. Most of them don't do so at all. I wish I did, because it's so obviously a very good idea.

      Worryingly, the types of companies I've worked in include ISPs, financial services companies (both directly and via an external company that maintained their infrastructure) and a health service.

      I fail to understand the mentality of anyone who resists the idea of regular patching.

      It doesn't have to be a big deal, if the approach is correct.

      1. Tom 13

        Re: It doesn't have to be a big deal, if the approach is correct.

        Patching in any diverse environment with more than 20 employees is ALWAYS a big deal. Properly done, it costs a fair bit of coin. The whole point of it is that when properly done, even costing that fair bit of coin, it is less expensive than being caught with your knickers down.

        For the purposes of patching, "diverse environment" doesn't mean you're running one or more flavors of *nix, Macs, and Windows. It means anything except "everybody in the company is running Windows x SP #, Office yyyy, and these four specific accounting applications". I've done tech for groups as small as 100 people, but one group were company accounting, another were web developers, another were statistical programmers, another were accounting managers for grant money, one group were conference planners, and oh yeah, the conference planners had 3 people with a large suite of DTP applications. Yeah, they were all running Windows XP SP2, which was reasonably standard for the time, but the diversity of apps made patching a bit of a nightmare.

        1. dan1980

          Re: It doesn't have to be a big deal, if the approach is correct.

          @Tom 13

          Absolutely.

          Sometimes it's the smaller environments that can be more difficult: the smaller the environment, the smaller the budget, so the overhead of testing 'properly' is relatively high.

          If you have 100 servers running some application, having a proper test environment is a fractional cost and thus very reasonable. If you have an application running on one server - that's a hell of an increase (percentage wise) to get testing going!

          The rule is that there are no rules. Sometimes you just have to back up the system and go with it. Some sysadmins will look down their noses at you for not doing it 'properly', but if they have always been in positions with a suitable budget then good for them - just appreciate that's not universal.

          Some environments simply don't permit any kind of phased deployment and testing, so you do what you can.

          And, as for "If the bug is in a service you don’t use and isn’t installed, there is little point in installing it" - unless, of course, your vendor is of the persuasion that won't provide any support if you're not fully updated.

          I've had that argument with more than one support tech, including one specific instance where the tech insisted I needed to apply an update to fix an issue not referenced at all in any documentation supplied with the patch. The patch was for a feature we did not use (some SMS alerting) but, for double points, it did have a known issue that would have affected a feature we did use. Despite that, the tech was adamant that the patch needed to be applied.

  4. CatoTheCat

    Eating a burger in the server room?

    Please sir, step away from the computer and leave the building. Now.

  5. Gis Bun

    Used to work at a small place. For workstations, I'd let WSUS do the work. Servers [about 8] were done manually with a batch file after downloading the updates by hand. All the servers had the same OS. I would generally do the workstations late in the Patch Tuesday week [if light] or the Monday after, but only after consulting various forums and sites to see if any issues were cropping up.

    Servers were done later in the month.

  6. foobaron

    Online snapshot and rollback is key

    Why doesn't this article make explicit one of the core elements of scalable change management, namely online snapshot and rollback? You should always have the ability to go back to one of your known good states, and trying to manage this with traditional backup/restore just doesn't cut it. Taking snapshots should be instant (so you always do it before any step with change risk), online (no need to shut down a system or database to ensure data integrity of the snapshot), and modular (i.e. you can roll back just your system patches without affecting the latest user data). The good news is that most VM hypervisors make this easy.

    But I would argue that no change management plan is complete without explicit SOPs for which snapshot method will be used, when snapshots are taken, how they are stored, and how long they are retained. Every effort should be made to extend this policy across ALL systems that you have to manage -- which will require using different snapshot methods on different platforms.

    In my world (mostly linux) some good solutions seem to be:

    * for VMs, make liberal use of your hypervisor's snapshot capabilities.

    * for linux system partition (especially outside VMs, but even in VMs if the VM also contains non-system data), LVM snapshots.

    * for user data, ZFS.

    But of course the same capabilities exist on other platforms. The point is to become expert at them and use them rigorously and comprehensively.
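    As a rough sketch of the LVM bullet above (the volume group and LV names, vg0/root, are made up for illustration):

    # take an instant, online snapshot of the system LV before patching
    lvcreate --size 5G --snapshot --name root_pre_patch /dev/vg0/root
    # ... apply patches, test ...
    # if the patch misbehaves, merge the snapshot back over the origin to roll back
    lvconvert --merge /dev/vg0/root_pre_patch
    reboot    # merging an in-use origin completes when the volume is next activated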

  7. Anonymous Coward
    Anonymous Coward

    Is patching still the right thing to do?

    Is patching actually the right thing to do? Put down that pitchfork and bear with me here.

    1. It's expensive, either in time, process or money.

    2. The job never ends.

    3. It's a reactive process to a horse bolting.

    Perhaps a different approach is called for. Really rigorous separation of apps, OS and data, allowing each to be modified without affecting the others. No patches, just rolling upgrades (prep the new VM, direct a percentage of traffic to it, blow the old one away when you're happy) for servers, or build one image and deploy it to everyone for desktops. And heuristic anomaly detection at the network switch layer to identify machines behaving outside the norm.
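    To make the "direct a percentage of traffic" step concrete, a minimal sketch assuming nginx fronts the app (the upstream name and addresses are invented for illustration):

    # hypothetical addresses: 10.0.0.10 is the old VM, 10.0.0.11 the freshly prepped one
    cat > /etc/nginx/conf.d/app-upstream.conf <<'EOF'
    upstream app_pool {
        server 10.0.0.10:8080 weight=9;   # old VM keeps roughly 90% of requests
        server 10.0.0.11:8080 weight=1;   # new VM takes roughly 10% while you watch it
    }
    EOF
    nginx -t && nginx -s reload    # validate, then reload without dropping connections

    Ramp the weights as confidence grows, then drop the old server from the pool and blow the VM away.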

    Discuss.

    1. Anonymous Coward
      Anonymous Coward

      Re: Is patching still the right thing to do?

      Hmm, I dunno.

      To patch Heartbleed:

      RC=0 stuartl@sjl-lxc-wheezy32 /tmp/build $ apt-cache show openssl | tail
      scope::utility, security::cryptography, security::integrity,
      use::checking
      Section: utils
      Priority: optional
      Filename: pool/main/o/openssl/openssl_1.0.1e-2+deb7u13_i386.deb
      Size: 693616
      MD5sum: 45e9d2fbc92509a91469cf6f3eb99ab2
      SHA1: 98e923d7056f2a2d7f2053bf12c7d4646b501738
      SHA256: 42f1cc4125b9cef951e3eba3bdfb6b916c36f58863fba9790baca3f38eec0d00

      Time to patch: about 10 minutes including download and reboot.

      To download, install and configure an instance of Debian Wheezy: about an hour or two.
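      For reference, the actual patch step on Wheezy boiled down to something like this (a sketch; whether you restart individual services or just reboot is up to you):

      apt-get update
      apt-get install --only-upgrade openssl libssl1.0.0    # libssl1.0.0 carries the vulnerable library, openssl the tools
      reboot    # the blunt-but-sure way to make every long-running daemon pick up the fixed library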
