'Devastating' Apache bug leaves servers exposed

Maintainers of the Apache webserver are racing to patch a severe weakness that allows an attacker to use a single PC to completely crash a system and was first diagnosed 54 months ago. Attack code dubbed “Apache Killer” that exploits the vulnerability in the way Apache handles HTTP-based range requests was published Friday on …

COMMENTS

This topic is closed for new posts.
  1. Tom 38
    Alert

    Stop gap fix

    SetEnvIf Range (,.*?){5,} bad-range=1

    RequestHeader unset Range env=bad-range

    You can bump the number 5 if your clients actively use a lot of ranges. I'm jumping the gun slightly; they're still voting on the exact wording of the advisory, but that's the top mitigation until 2.2.20 is rolled.
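
    A quick way to sanity-check that the header really is being stripped (the localhost URL is just a placeholder for whatever you serve; with the workaround active Apache should answer 200, without it you get a 206 partial response):

    curl -s -o /dev/null -w '%{http_code}\n' -H 'Range: bytes=0-1,0-2,0-3,0-4,0-5,0-6' http://localhost/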

    It behaves differently on different OSes. It won't totally take down a FreeBSD server, but it will pretty quickly take down a typical Linux server - the OOM killer will just randomly start taking shit out, and once it takes out init, that's the ball game.

    Not tested Solaris or OpenBSD yet.

    1. Anonymous Coward
      FAIL

      OOM killer == Achilles' heel.

      Seriously, that shit should not even have to exist in the first place, and the fact that it does is an implicit admission of failure. Forcing a clean reboot would make ten times more sense than randomly killing you-don't-know-the-hell-what, and even then it would still be appallingly crude.

      1. Anonymous Coward
        Facepalm

        Rebooting a better approach? I think not.

        I don't think I'll agree that a complete reboot is a better approach than an OOM-killer. "Graceful degradation" and all that. Besides, if Linux's process handling and separation works as expected, then that clean reboot doesn't give you "cleaner" space than that recovered from a killed process.

        Of course, the OOM-killer is a bit of a conundrum. It exists because Linux tries to have its cake and eat it too: it overcommits memory because too many programs ask for far more memory than they ever end up using. Since it doesn't know how much is really needed, it just says "sure" to every asker and hopes for the best. If it does run out of space, well, it "solves" the problem in the most expedient way possible.

        You could argue there ought to be better ways than an OOM-killer. For example, you could envision a SIGOOM sent to every process in order of size until enough space is freed for the next operation, and only after that start killing processes.

        The real fix, of course, would be to stop asking for so much memory, and then to stop overcommitting. And then you can turn off the OOM-killer. But in the case of bugs like this one, where malformed input causes the program to eat up all the memory, that merely means you end up wedging the system anyway, and then you're back to the most unsophisticated approach of all: manually rebooting the system.
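
        For what it's worth, the overcommit behaviour is tunable via sysctl; a rough sketch (the 80% ratio is only an example figure, not a recommendation):

        sysctl -w vm.overcommit_memory=2   # refuse allocations beyond swap + overcommit_ratio% of RAM
        sysctl -w vm.overcommit_ratio=80   # the ratio used by mode 2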

    2. Colin Miller
      Thumb Up

      ways to tame the OOM-killer

      http://lwn.net/Articles/317814/

      The OOM can be turned off globally (probably a good idea if all your programs are well-behaved), but I can't see a quick way of doing it per-process.

      The kill-priority can be adjusted for each process, so important ones are more likely to survive.

      /dev/mem_notify can be select()'ed or poll()'ed by very well behaved programs to dynamically adjust their memory usage (caches etc).
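
      The kill-priority adjustment mentioned above lives under /proc; a rough sketch (the PID is made up, pick the process you care about):

      echo -17 > /proc/1234/oom_adj          # older kernels: -17 exempts the process entirely
      echo -1000 > /proc/1234/oom_score_adj  # newer kernels: -1000 does the same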

  2. bazza Silver badge
    FAIL

    Not a good day for open source

    One of the key advantages of open source is supposed to be that anyone can fix a bug and in all likelihood someone will do so quickly.

    It seems that open source communities can be as lazy as closed software companies after all. I suspect the reason this has happened is that Apache's long-standing reputation for rock-solid reliability has led people to assume there can't be any bugs worth fixing. Clearly not the case.

    So how many other severe bugs are there lurking in the source code?

    1. Solomon Grundy
      Linux

      Not Laziness

      I'm not an open source advocate by any means. By and large I think the best part of OSS is that other people are doing good work for free.

      That being said, though: every software project has tons of bugs, and decisions have to be made whether to work on improving functionality or on fixing rarely encountered issues. These decisions have to be made with any project, commercial or OSS, and I challenge you to show me a problem in any major product that went unfixed out of sheer laziness.

      The OSS guys do a pretty darn good job of producing some pretty great software for free. That's their decision so, whatever. No reason to hate on them. Just use their work and profit from it or don't use it. The OSS community overall demonstrates project management skills that almost any big company should like to emulate.

      1. bazza Silver badge

        @Solomon Grundy: Pretty good job?

        "The OSS guys do a pretty darn good job of producing some pretty great software for free."

        Monetarily free is nice for the rest of us. But just how good a job have they done in this particular instance, if a reported problem with huge consequences for a very large fraction of the internet was left unfixed for many, many years?

        The OSS people do get some soft benefits in return for their work - high reputation, consultancies, etc. This incident is a good example of how such benefits are just as vulnerable to bad news as cash flow is for a large company.

        "The OSS community overall demonstrates project management skills that almost any big company should like to emulate"

        I would dispute that. The glaring counter-example to your statement is the world of Linux. I think that the handling of Linux (not just the kernel, I mean the whole shebang) by the OSS community has been terrible, really, on an absolute scale.

        They do project management well in the sense that a bunch of guys decide to do something, and a result is delivered with enthusiasm over a reasonable timescale. The part of project management that is definitely not being done at all is deciding whether the work was necessary in the first place, or deciding (with global agreement) that the new thing will wholly replace something old.

        Take a look round the world of Linux. Fragmentation abounds as far as the eyes can see. There are umpteen different distributions, a variety of different desktops, different package management systems, etc. etc. FFS how on earth can a choice of software package management systems be a good thing? Ok, someone once decided that an improvement was needed, but why would anyone keep using the old one?

        I would say that at best Linux is a hodgepodge of competing ideas that has met some success in certain areas (servers) where this doesn't matter too much. But in the desktop arena it's in a terrible mess, and it's no wonder that most of the world's desktops and laptops run Windows or OS X. Clearly, to the average user (and thus to app developers too), consistency really does matter. Linux has gained some popularity in the mobile sector only because a big organisation (Google and their Android) has come along and imposed its ideas on a global scale.

        I think that the proprietary world is much better at wielding the knife to cut out old stuff and sticking with just one or two ways in which things are done. That's because it's expensive otherwise, and bad for sales. The same pressure is not being felt by the OSS community.

        "Every software project has tons of bugs and decisions have to be made whether to work on improving functionality or fixing rarely encountered issues."

        Clearly in this case no effective triage system was in place for assessing the criticality of issues. If there had been this would have been fixed many years ago.

        1. Eddie Edwards
          WTF?

          Is that a joke?

          "I think that the proprietary world is much better at wielding the knife to cut out old stuff and sticking with just one or two ways in which things are done."

          Is that a joke?

          1. bazza Silver badge

            @Eddie Edwards: Not a joke

            Redhat should have gotten rid of rpm a decade ago. Apt/deb is much, much better, so why do Redhat not use it? Ubuntu got rid of Gnome as their default with not much warning. How many different APIs are there for sound in Linux, and which ones are supported in every single version of every single distribution? CUPS has at least brought some consistency to printing, but it's still weird that a programme has to be able to render in PS to print and something else to display the same thing on screen. Mozilla are releasing new versions of Firefox quicker than plugin writers can cope with, and are planning on ditching version numbers as a result. These things might not matter to sysadmin types, but they do matter to developers aiming at ordinary desktop users.

            Whereas MS have said three years in advance that XP will cease to be supported in 2014. Apple gave massive warning of the cessation of Carbon. Older versions of Office are still updated, but there's a definite end of life. In short, the knife gets wielded every now and then, and the planning is often quite considerate of users' needs. 'Better' does not mean quicker.

        2. Ru
          Mushroom

          "Fragmentation abounds as far as the eyes can see"

          I know, isn't it terrible? There should be One World, One Vision, One Operating System!

          Everyone's needs are the same, and everyone's desires are irrelevant. The interests of the coders who develop this stuff for free need to be crushed and they must be forced to work instead on the One True Desktop Environment, or whatever.

          There's even more than one web browser! Did you know there is *more than one command line shell*? Worse yet, there is more than one programming language even within the same language family! These things can be compiled for machines with totally different architectures. I ask you! Where will the madness end? THIS SORT OF DISUNITY SHOULD NOT AND WILL NOT BE TOLERATED.

          1. sabroni Silver badge

            oh dear!

            Did someone point out the glaring fault in your world view?

            Or was the fact this bug has been around since 2007 a good thing?

          2. bazza Silver badge
            FAIL

            @Ru

            "I know, isn't it terrible?".

            Yes, it is if you're an app developer trying to support many users of many versions of many distributions. And how is an ordinary home user supposed to choose a Linux distro? For a start, which one's best? Which one does everything they need?

            Ask yourself why Google chose to do what they did with Android instead of just slapping a mobile-friendly shell on top of an existing distribution. Surely that would have been much easier?

            "Did you know there is *more than one command line shell*? Worse yet, there is more than one programming language even within the same language family! These things can be compiled for machines with totally different architectures."

            Great if you're a sysadmin or developer. Totally and utterly irrelevant bollocks if you're an ordinary desktop user.

            1. Destroy All Monsters Silver badge
              Thumb Down

              Holy shit, in this world you gotta make decisions?

              "And how is an ordinary home user supposed to choose a Linux distro? For a start, which one's best? Which one does everything they need?"

              GTFOffMyLawn, kid.

        3. Ian McNee
          FAIL

          @bazza: E-V-I-D-E-N-C-E

          Unfortunately you are using the logic of the Daily Wail that leads to things like every known substance being declared both a cause of and a cure for cancer.

          Actual studies based on what happens in the real world show that bugs & vulnerabilities in OSS are fixed significantly faster than in proprietary code. End of.

          And as for your bizarre statement:

          "Linux is a hodge podge of competeting ideas that has met some success in certain areas (servers) where this doesn't matter too much"

          Yeah, those servers, they don't matter much, no point them being secure and reliable, it's not like they deal with anything important like financial transactions over the internet...hey...wait a minute...

          1. bazza Silver badge
            FAIL

            @Ian McNee

            "Yeah, those servers, they don't matter much, no point them being secure and reliable, it's not like they deal with anything important like financial transactions over the internet...hey...wait a minute..."

            So you support my point then? Sysadmins can and do cope with Linux's fragmentation, and it has met with success there. Even I cope with Linux's fragmentation on a daily basis, and it's infuriating. Linux doesn't succeed on the desktop because you still have to be something of a sysadmin to run it on a desktop. For example, do you *really* expect the average desktop user to know how to install an rpm-packaged piece of software on an Ubuntu box, or to know what to do with a tarball? Get real. If the Linux world wants to succeed in the desktop arena, it's going to have to sort that kind of problem out.

            And as for servers (Linux or otherwise) being secure and reliable, it seems that if they've been running Apache these last 4 years they've been anything but that. They've all been sat there just waiting for someone to send them some dodgy http requests, and it's only luck that no one did. How many sysadmins have spent the last 4 years telling their bosses that their important Apache servers doing important financial transactions on the internet are secure, protected against denial of service attacks, etc, etc?

            "Actual studies based on what happens in the real world show that bugs & vulnerabilities in OSS are fixed significantly faster than in proprietary code. End of."

            Given the magnitude and timescales of this particular problem in Apache, perhaps those studies' findings should be revised? I mean, MS have had their share of problems, but to be in a situation where vast swathes of the internet could have been brought to their knees with only a few slightly dodgy HTTP requests, without the need even for a DDoS attack, is pretty spectacular.

            1. This post has been deleted by its author

        4. Rob Dobs
          Thumb Down

          Linux is the Kernel

          You posted "I think that the handling of Linux (not just the kernel, I mean the whole shebang) "

          This betrays your ignorance about Linux and the OSS community. Linux IS the kernel: it's called the Linux kernel, and that's all it is. There are a few versions, like current or stable (the latter with fewer changes, for reliability on servers or device hardware), but most people aren't even aware of these release versions.

          Repeat: Linux = kernel. The distribution (which can be done by anyone) is the more accurate and correct way to refer to the whole shebang, as you say. But keep in mind there are MULTIPLE shells or GUIs (graphical user interfaces) to use as well. Gnome and KDE are popular choices, and Unity is a new one as well.

          Apple's OSX is just a GUI slapped ontop of BSD (a UNIX/LINUX style kernel)

          A distribution (like Ubuntu, Fedora, Mint etc.) chooses which kernel version to use (usually the most recent stable version) and which GUI to put on top of it. Because it's free software, it's easy for others to change this. For example, Ubuntu used to use Gnome for their distribution, but as some people liked KDE better than Gnome, they released a version of Ubuntu that installed KDE instead of Gnome as the default GUI (called Kubuntu).

          You are incorrect about package management as well: there are only two major systems (with a few others competing too, but not as polished or popular). Redhat has their RPM (Red Hat Package Manager, I think), and then there is the Debian one (Debian being the distro that Ubuntu is built from), so that one is popular because Ubuntu uses it.

          Currently, for consumers, Ubuntu has switched to Unity for their GUI instead of Gnome, and a lot of users are not happy about the change, so some are switching to Kubuntu, others to Mint (a different distro), and others still are installing Ubuntu but choosing Gnome 2 or Gnome 3 (also not liked by some) as the GUI during the install.

          The theme you should be catching here is that the Linux/OSS community is all about competing products in an open market where choice is unlimited and dominates. If you don't like a version, you download another for free. You are completely wrong about the project management aspect of OSS. Any project that does not innovate and improve (unless it's perfect as is, of course), whether an application, a GUI, or even a kernel, gets booted out of the way by better choices if it does not keep up to date.

          There are actually many other open source kernels as well; it could well be that upcoming versions of Redhat Fedora or Ubuntu will run on a kernel that is not Linux, and then calling those solutions "Linux" would be even more incorrect. Let's go with OSS (open source software): it is really one of the few terms that accurately covers the many different types of open source licenses, applications, kernels etc. that exist out there today.

          The press and the mainstream do misuse this "Linux" term often enough for it to be an understandable mistake. But really it's like saying PCs are all M$ Windows machines. Really, PCs (personal computers) run Linux, OSX and other OSes as well... it's just not accurate.

          1. Tom 38

            Re: Linux is the kernel

            """

            Apple's OSX is just a GUI slapped ontop of BSD (a UNIX/LINUX style kernel)

            """

            No, really, it is not:

            - OSX uses a Mach derived kernel, which shares nothing with BSD. Its userland tools are mainly BSD.

            - BSD is not a Linux (never all caps) style kernel.

            Most of your post was correct, although you are really complaining that he said 'Linux' instead of 'GNU/Linux', which is a bit pedantic - about my sort of level of pedantry :)

            The guy you are replying to is completely wrong in everything he says though. This isn't a 4-year-old bug which is just now getting fixed; it is a bug which has been fixed within hours of being reported.

            The whole situation is muddied by that attention whore Zalewski trying to take credit for discovering this bug. He said there was a bandwidth starvation vulnerability in a related section of code; there wasn't.

            There is a memory vulnerability in this code, and because he cannot stop aggrandizing himself, he toured the sec-lists and news agencies saying that he pointed this vulnerability out 4 years ago. He didn't.

            See here for more Zalewski form:

            http://www.eweek.com/c/a/Security/Irresponsible-Bug-Disclosure/

  3. K. Adams
    Alert

    "The behaviour when compressing the streams is devastating..."

    -- Egon ServerAdmin: There's something very important I forgot to tell you...

    -- Peter SiteDeveloper: What?

    -- Egon ServerAdmin: Don't compress the streams.

    -- Peter SiteDeveloper: Why?

    -- Egon ServerAdmin: It would be bad.

    -- Peter SiteDeveloper: I'm fuzzy on the whole "good/bad" thing. What do you mean, "bad"?

    -- Egon ServerAdmin: Try to imagine all our servers as you know them stopping instantaneously, and every service running on them crashing at the speed of light.

    -- Ray HelpDeskDispatcher: Total service denial!

    -- Peter SiteDeveloper: Right, that's bad. Okay. All right. Important safety tip. Thanks, Egon...

    1. redxine

      Couldn't help but do it

      http://img6.imageshack.us/img6/2219/tcompressthestreams.jpg

  4. Orv Silver badge

    Patch only for 2.x?

    It looks like they aren't planning on -- or at least aren't promising -- a patch for 1.3. A LOT of sites are still running Apache 1.3. Might be time to upgrade?

    1. rfrovarp

      No patches for 1.3

      1.3 was EOL'd over a year ago. You should have been moving to something newer at least two years ago.

  5. Tom 38
    WTF?

    Wait, this article gets the whole thing wrong

    Whilst what Michal Zalewski reported is somewhat related to this, the actual DoS has nothing to do with what he reported - it's simply the attack vector that exposes the actual bug.

    He reported that you can get Apache to dump massive amounts of data onto the net from a simple request by requesting the entire file range over and over. His DoS threat came from setting up massive TCP windows, so that httpd keeps sending data without waiting for an ack, and then silently dropping the connection. That is a DoS attack that attempts to consume all your bandwidth. There have been no successful attacks reported using this approach.

    This exploit works in a completely different way. It repeatedly asks for tiny fragments of the file, as opposed to the entire contents of the same file.

    Due to how httpd handles byte range requests, each byte of the response ends up as an entire bucket brigade. This leads to massive memory usage for that request.

    Flood httpd with those kinds of requests, and httpd quickly consumes all the memory on the server. It is an entirely different kind of DoS attack.

    So, to clarify, the bug reported 4 years ago is not the same as the bug that leads to this DoS attack. They are slightly related in terms of attack vector, but that is it. This is not a bug reported 4 years ago that is only getting fixed now, as the article makes out.
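
    For the curious, the requests in the published script look roughly like this (localhost is a placeholder; obviously don't point it at servers that aren't yours):

    # build a header of the form "Range: bytes=0-,5-0,5-1,...,5-1299" and send it as a HEAD request
    RANGES="0-"
    for i in $(seq 0 1299); do RANGES="$RANGES,5-$i"; done
    curl -s -o /dev/null -I -H "Range: bytes=$RANGES" http://localhost/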

    1. Destroy All Monsters Silver badge
      Holmes

      "These are not the exploits your are looking for"

      Thus we are led to the conclusion that the Reg staff are absent from the office, phoning in copy larded with unnecessary hyperbole while having no exact idea what the hell is actually going on?

      Say it ain't so!

  6. Anonymous Coward
    FAIL

    "challenging conventional wisdom"

    Oh shush you. As if there aren't any long-standing bugs in commercial products. Or even how a certain widely used software thing came with a new tcp/ip stack that *reintroduced* such old exploits as the venerable ping-of-death* because apparently this (self-)proclaimed paragon of good software management couldn't be arsed to run automated regression tests. For their resources, and their installed base, that's a bit of a poor show.

    Apache is likewise bloody big and should've known better. But that doesn't automatically mean that having the source available isn't a great big enabler. I have on occasion submitted patches to an open source program and had them applied to the version in my OS distribution of choice while the official fixups were working their way downstream. You can't do that sort of thing with closed source. And that is very clearly a strength. Now if only someone would've thought to actually patch that bug in apache....

    Clearly, the fact that you can doesn't automatically mean it does happen. But that doesn't invalidate the fact that you can. I say this is a silly slip-up, possibly because nobody realised the implications of the thing. Now that it's been made clear, they're promising a 96-hour turnaround. That's pretty good for software you don't even have to pay for.

    Elsewhere you rarely get that, unless perhaps "patch tuesday" happens to be coming up in 96 hours, and even then they'll probably postpone it to the next cycle. Even if you pay doubleplusextra through the nose for gold-plated platinum with diamonds on support.

    Even if that spokeswoman was cute, Dan, that barb was a bit on the cheap-as-in-astroturf side.

    * Apropos tangentially, I also liked that closed-source 3rd party "firewall" that silently up and died upon receiving such a packet, leaving windows functional but unprotected. Remote drop-your-pants indeed.

    1. Anonymous Coward
      Anonymous Coward

      re: challenging [falsely] conventional wisdom

      re: "You can't do that sort of thing with closed source."

      Yes you can. It requires access to the source, which requires a contact and an NDA*, but I've seen it done. The fix was released commercially with a nod to the group that found it and submitted their fix for it.

      * and lots of cash.

  7. Anonymous Coward
    Thumb Down

    ɯopsıʍ ןɐuoıʇuǝʌuoɔ

    "The episode challenges the conventional wisdom repeated by many proponents of open-source software that flaws in freely available software get fixed faster than in proprietary code"

    Bull.

    This artcile challenges the convetional wisdom that Mr. Goodwin is able to tell the difference between statistics and single incidents.

    1. Anonymous Coward
      Headmaster

      sǝıboןodɐ

      for the bad spelling in the above. The keyboard's seen better days, and sometimes it shows.

  8. Benedict

    parallely?

    parallely? really?

    Unless they are foreign, that's a bit of a fail.

    1. Neil 7
      FAIL

      Not just that

      but also this: "killing of processes including but solely httpd processes."

      Eh? So just httpd then, nothing else?

      "Parallely" really does take the biscuit though.

  9. Stevie

    Bah!

    Note to self: Never use "parallel" as an adverb. It makes you sound like a primary school music teacher at sing-along time.

  10. AdamWill

    Devastating?

    An easily worked-around DoS vuln is 'devastating' now? If DoS vulns were devastating, the internet would've ground to a halt some time ago. 'You can DoS anything' is a pretty good rule of thumb for server admins.

    Please save the hyperbole for, say, remote code execution vulns, at least.

  11. eulampios
    Stop

    no panicking!

    OK, although this is a real threat, one shouldn't panic!

    Firstly, this only concerns Apache 2.2.16 or earlier. One of mine, 2.2.17 (with the freshest stable 3.0.3 Linux kernel on Ubuntu 11.04), did withstand the attack, although the CPU usage went up a little.

    Secondly, there are a million workarounds and fixes so far.

    i) As far as a non-Windows machine is concerned, there's always some leverage, called "ulimit". If your web service is not that crucial you can just restart the apache daemon from a resource-limited shell, i.e., on a Debian/Ubuntu system I did:

    ~# bash -c 'ulimit -v 507200 -m 50480; service apache2 restart'

    Tailor the values for max virtual and resident memory to your needs. A nice pair of values like the above did not let the apache2 daemon die for me, and had very little effect on the system at the same time, although it still suffers a DoS in the sense that your web service may be slow or unavailable.

    ii) A more professional fix is given here: http://bechtsoudis.com/hacking/use-mod_rewrite-to-protect-from-apache-killer/ It uses a mod_rewrite tweak.

    iii) Another recommendation is to use the very strong open source web server nginx, either standalone or as a proxy in front of Apache. It is also FOSS, like Apache, and seems to be very good and efficient, in particular at withstanding enormous loads.

    PS: nginx's market share is about 9% vs. Microsoft IIS's 18%.

    1. eulampios

      erratum

      Turned out that my 2.2.17 is indeed vulnerable (with 200 forks); however, my previous tweak with ulimit did work all right with '-v 10000 -m 4048'.

      BTW, theregister's Apache/2.2.16 (Debian) is not vulnerable. Good job, admins!

    2. eulampios

      about a mod_rewrite sol'n

      Tried this solution:

      RewriteEngine On

      RewriteLog /var/log/apache2/rewrite.log

      RewriteLogLevel 3

      RewriteCond %{HTTP:Range} bytes=0-.* [NC]

      RewriteRule .? http://%{SERVER_NAME}/ [R=302,L]

      Should be put in the <VirtualHost> section of the config file (my system has it in /etc/apache2/sites-enabled/000-default).

      Enable mod_rewrite module (on Debian:

      sudo a2enmod rewrite

      )

      Restart apache daemon

      Thanks to many smart people from the open source community.

      (got it from http://www.opennet.ru/opennews/art.shtml?num=31582)

  12. Anonymous Coward
    Trollface

    lighttpd ftw!

    LOL!

  13. dssf
    Joke

    If this is true, then is this a case of ...

    Tomahawking the Apache?

    Overlapping byte ranges? Is this a case of severe server malocclusion? Sounds like severe overbyte to me... doing a hatchet job on the server...

  14. Anonymous Coward
    WTF?

    >"including but solely"

    Spectacular typo there!

  15. Michael H.F. Wilkinson Silver badge
    Coat

    So what is the name of this Apache bug?

    Geronimo?

    Mine is the one with the Karl May books in the pocket

  16. DMahalko
    FAIL

    Now if only Squid "fixes" its HTTP range handling

    Squid is totally useless for caching ranges, so it is pretty much pointless for a K-12 school or college to try to reduce user bandwidth with Squid by caching streams from video sites, or caching any other bandwidth hog that wants to use ranges, like Windows Update.

    From the Squid Wiki:

    "Squid is unable to store range requests and is currently limited to converting the request into a full request or passing it on unaltered to the backend server. "

    HTTP ranges are now extremely common, so it's a huge limitation that Squid still doesn't handle them.
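
    The usual workaround suggested for this (directive names from the Squid docs; treat it as a sketch and test against your own traffic) is to make Squid fetch and keep the whole object whenever a range comes in, so later range requests can be served from cache:

    # append to squid.conf (path varies by distro; /etc/squid/squid.conf is an assumption)
    cat >> /etc/squid/squid.conf <<'EOF'
    range_offset_limit -1
    quick_abort_min -1 KB
    EOF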

    1. Mike G

      Use varnish instead of squid

      See this:

      http://devblog.seomoz.org/2011/05/how-to-cache-http-range-requests/

  17. Destroy All Monsters Silver badge
    Facepalm

    Woah! No information on the Apache httpd frontpage?

    Is the webmaster kicking back in the Caribbean?

  18. Antony Riley

    OOM-Killer

    It is a last resort to protect the operating system from becoming unusable; there's nothing wrong with it.

    Suggesting some sort of cooperative fail mode is silly; look what happened to cooperative multitasking.

This topic is closed for new posts.
