back to article Revoke, reissue, invalidate: Stat! Security bods scramble to plug up Heartbleed

The startling password-spaffing vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also at risk. The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running vulnerable versions of OpenSSL, and this includes email …

COMMENTS

This topic is closed for new posts.
  1. Pen-y-gors

    Has it been exploited?

    Just ask NSA/GCHQ...

    1. Anonymous Coward
      Anonymous Coward

      Re: Has it been exploited?

      "Just ask NSA/GCHQ..."

      What are their website addresses again? And is the exploit really untraceable?

      <fires up tor browsing in sandbox vm>

    2. Anonymous Coward
      Anonymous Coward

      Re: Has it been exploited?

      Moved all our websites to Server 2012 Core a while ago....A lot fewer security patches to worry about than our old Linux stack. And not affected by this :-)

      1. Real Ale is Best
        Facepalm

        Re: Has it been exploited?

        "Moved all our websites to Server 2012 Core a while ago....A lot fewer security patches to worry about than our old Linux stack. And not affected by this :-)"

        Ahh, so you are just affected by the bugs that you can't see, and can't fix even if you did know about them. Smart.

        1. Anonymous Coward
          Anonymous Coward

          Re: Has it been exploited?

          "Ahh, so you are just affected by the bugs that you can't see, and can't fix even if you did know about them. Smart."

          Sure seems to work better than the Open Source method of publish and pray...

  2. WraithCadmus
    FAIL

    All done here

    We had a bit of downtime on our (few) HTTPS services as our SSL provider took 'revoke' to mean 'delete'.

  3. Nate Amsden

    thanks citrix

    Not vulnerable on Netscaler.

    Also checked one of my former employers who I am almost certain still runs F5 gear and they are not vulnerable either.

    just another day..

    1. TRT Silver badge

      Re: thanks citrix

      F5 you say? Crap.

  4. Destroy All Monsters Silver badge
    Holmes

    "Coding flaws in a fairly new feature"

    Let me guess ... problems which could have been avoided if the language of predilection for taking relaxing mudbaths of coding wasn't "C" (with or without "lint")?

    Of very high relevance, there is an inspiring series of papers on the subject of "weird machines". Check it out:

    The Language-theoretic approach (LANGSEC) regards the Internet insecurity epidemic as a consequence of ad hoc programming of input handling at all layers of network stacks, and in other kinds of software stacks. LANGSEC posits that the only path to trustworthy software that takes untrusted inputs is treating all valid or expected inputs as a formal language, and the respective input-handling routines as a recognizer for that language. The recognition must be feasible, and the recognizer must match the language in required computation power.

    When input handling is done in an ad hoc way, the de facto recognizer, i.e. the input recognition and validation code, ends up scattered throughout the program, does not match the programmers' assumptions about safety and validity of data, and thus provides ample opportunities for exploitation. Moreover, for complex input languages the problem of full recognition of valid or expected inputs may be UNDECIDABLE, in which case no amount of input-checking code or testing will suffice to secure the program. Many popular protocols and formats fell into this trap, an empirical fact with which security practitioners are all too familiar.

    LANGSEC helps draw the boundary between protocols and API designs that can and cannot be secured and implemented securely, and charts a way to building truly trustworthy protocols and systems.
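The LANGSEC principle described above can be sketched in a few lines of C. This is purely illustrative: the message format (one type byte, a 2-byte big-endian length, then the payload) and the function name are invented for the example, not taken from any real protocol.

```c
/* Minimal illustration of the LANGSEC principle: treat valid inputs
   as a formal language and run a full recogniser over the whole
   message before any handler touches the data. Format and names are
   hypothetical. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Accepts iff buf holds exactly one well-formed message: a known
   type byte, and a claimed payload length that matches the bytes
   actually present. */
bool recognise(const uint8_t *buf, size_t len)
{
    if (len < 3)
        return false;                      /* header incomplete */
    if (buf[0] != 0x01 && buf[0] != 0x02)
        return false;                      /* type outside the grammar */
    size_t payload_len = ((size_t)buf[1] << 8) | buf[2];
    return len == 3 + payload_len;         /* no truncation, no trailing junk */
}
```

The point is that the length field is checked against reality before anything is copied or echoed, which is exactly the check Heartbleed's handler skipped.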

  5. Anonymous Coward
    Anonymous Coward

    sounds familiar

    "Simply because the source code of Open Source software can be reviewed by anyone does not mean they will know how to look for security vulnerabilities or indeed detect them."

    What is the canned response for this on El Reg? Oh yes, I remember - <clears throat> "Open source is secure because EVERYONE can look at the source code and easily spot vulnerabilities in thousands or millions of lines of code, thus the speaker is a paid shill and should be ridiculed."

    1. Anonymous Coward
      Anonymous Coward

      Re: sounds familiar

      It has taken a while for this particular bug to be found, but without the source oversight it wouldn't have been. Had this been in a closed-source product without the same robust methods of detecting such bugs, the same thing could have happened, and the number of reviewers would have been much more limited.

      It's a great shame that this didn't get spotted earlier or recognised as a security problem, but then if someone is determined to commit at 11pm on New Year's Eve, the chances that they remembered doing it and ran lots of tests subsequently are clearly reduced.

      1. Anonymous Coward
        Anonymous Coward

        Re: sounds familiar

        "Had this been in a closed-source product without the same robust methods of detecting such bugs the same thing can happen and the number of reviewers is much more limited."

        Open-source doesn't have "robust methods" of detecting such bugs, as this bug painfully points out. The only reason it was found is because it is very widely used. The more people use it, the more bugs are uncovered...

        If open-source is so robust due to so many eyes on it, then why did this bug take so long to find? After all, openssl is used everywhere and we all know open-sourcers audit every line of code before using a piece of software...

      2. maffski

        Re: sounds familiar

        A proper fuzz attack would have found it - source code or not.

        Several of the bugs in the OpenSSL changelog were found by fuzzing.
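The idea maffski raises can be shown with a toy fuzz loop. To be clear, this sketch is invented for illustration (real finds come from tools such as AFL or libFuzzer), but even a naive random-input loop exercises length fields in ways a hand-written test suite rarely would:

```c
/* Toy mutation-fuzzer sketch (all names hypothetical): hammer a
   parser with random inputs and check an invariant after every call. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Parser under test: must never report more payload than actually
   arrived (a Heartbleed-style over-read would violate this). */
size_t parse_payload_len(const uint8_t *msg, size_t msg_len)
{
    if (msg_len < 2)
        return 0;
    size_t claimed = ((size_t)msg[0] << 8) | msg[1];
    return claimed + 2 <= msg_len ? claimed : 0;  /* clamp to reality */
}

void fuzz(unsigned iterations)
{
    uint8_t msg[64];
    for (unsigned i = 0; i < iterations; i++) {
        size_t msg_len = (size_t)(rand() % (int)sizeof msg);
        for (size_t j = 0; j < msg_len; j++)
            msg[j] = (uint8_t)rand();  /* the length field is random too */
        size_t got = parse_payload_len(msg, msg_len);
        assert(got == 0 || got + 2 <= msg_len);  /* invariant: no over-read */
    }
}
```

A parser that trusted the claimed length would trip the assertion within a handful of iterations, which is why fuzzing finds this class of bug with or without source access.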

        1. Sir Runcible Spoon

          Re: sounds familiar

          The difference here is that once the bug is known about, it can be fixed in open source.

          If it was proprietary code you'd have to wait for the owners to fix the bug.

          1. Al Jones

            Re: sounds familiar

            Do you know when the bug was identified? You don't think OpenSSL.org released an updated version at the same time as they announced the vulnerability because it only took them 20 minutes to figure out and release an upgrade, do you?

            In other words, we DID have to wait for the owners of openSSL to fix the bug before we found out about it, and could protect ourselves from the impact.

      3. Al Jones

        Re: sounds familiar

        What makes you think that this bug was found by someone trawling through the code? Codenomicon (heartbleed.com) claims to have independently discovered this bug while working on their own security testing tools, rather than by poring over the source code. I haven't seen any explanation of how Neel Mehta of Google Security discovered it, but I'd be pretty surprised if it was by reviewing the code.

        It's far more likely that the presence of this sort of bug will be identified by the behavior of the compiled code than by careful examination of the uncompiled source code. And the outcome of responsible disclosure, where the parties responsible for the code (and major vendors) are notified and allowed to fix (and hopefully do some regression testing) before the world and her dog are notified, will not be much different whether the problem is in a closed source or an open source product.

  6. Anonymous Coward
    Anonymous Coward

    "Simply because the source code of Open Source software can be reviewed by anyone"....

    "does not mean they will know how to look for security vulnerabilities or indeed detect them."

    ... Ouch!

    I sincerely and truly believe we are heading into a dark age in the web-sphere. Tech is moving too fast for security and privacy. Impatient businesses (think Target etc.) are refusing to wait too, in complete ignorance of the risks... And the risks are growing, because they can now be profitably exploited and capitalised on!

    1. Destroy All Monsters Silver badge

      Re: Simply because the source code of Open Source software can be reviewed by anyone"....

      Polarization by Daniel E. Geer, Jr:

      I submit that polarization has come to cybersecurity. The best skills are now astonishingly good while the great mass of those dependent on cybersecurity are ever less able to even estimate what they don’t know, much less act on it. Polarization is driven by the fundamental strategic asymmetry of cybersecurity: the work factor for the offender is the incremental price of finding a new method of attack, but the work factor for the defender is the cumulative cost of forever defending against all attack methods yet discovered. Over time, the curve for the cost of finding a new attack and the curve for the cost of defending against all attacks to date must cross. Once they do, the offender never has to worry about being out of money. That crossing occurred some time ago.

      I don’t see the cybersecurity field solving the problem because the problem is getting bigger faster than we (here) are getting better. I see, instead, the probability that legislatures will move to relieve the more numerous incapable of the joint consequences of their dependence and their incapability by assigning liability so as to collectivize the downside risk of cyber insecurity into insurance pools. We’re forcibly collectivizing the downside risks of disease, most particularly the self-inflicted kind, into insurance pools; why would we not expect the same of cyber insecurity, most particularly the self-inflicted kind?

      1. Anonymous Coward
        Anonymous Coward

        Re: Simply because the source code of Open Source software can be reviewed by anyone"....

        And this from today's BBC news isn't helping. Look at how skewed Police priorities are. Terrorism always comes first. Yet how many acts or attempts at terrorism occur versus cyber crime? They're chasing shadows!! "Last year, the government identified five threats as priorities for police to prepare for. These are:"

        1 Terrorism

        2 Civil emergencies

        3 Organised crime

        4 Public order threats

        5 Large-scale cyber attacks"

        ....Forces 'unprepared' for cyber crime

        ....http://www.bbc.co.uk/news/uk-26963938

  7. Anonymous Coward
    Anonymous Coward

    The web is a Swiss Cheese of Holes!

    Wow, talk about schoolboy error. How was this code hole not caught by somebody?

    The article's video link makes it crystal clear: vimeo.com/91425662

    1. Destroy All Monsters Silver badge

      Re: The web is a Swiss Cheese of Holes!

      Because C, "runtime bounds checking is for lamers" and shitty protocols (imma gonna send muh size ... what a good idea)

      Or just plausible deniability.
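The missing bounds check being mocked here boils down to the following pattern. This is a simplified sketch, not the actual OpenSSL source; the function names and message layout are invented, but the attacker-controlled 16-bit length field is the real mechanism behind CVE-2014-0160:

```c
/* Simplified sketch of the Heartbleed flaw. The peer controls the
   length field but may send far fewer payload bytes than it claims. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Buggy pattern: echo back as many bytes as the peer *claims* to
   have sent. */
size_t heartbeat_buggy(const uint8_t *req, size_t req_len, uint8_t *resp)
{
    (void)req_len;  /* the bug: the real arrival length is never consulted */
    uint16_t payload_len = (uint16_t)((req[0] << 8) | req[1]);
    memcpy(resp, req + 2, payload_len);  /* can read up to ~64KB past req */
    return payload_len;
}

/* Fixed pattern (what the patch does, in spirit): silently discard
   any heartbeat whose claimed length exceeds what actually arrived. */
size_t heartbeat_fixed(const uint8_t *req, size_t req_len, uint8_t *resp)
{
    if (req_len < 2)
        return 0;
    uint16_t payload_len = (uint16_t)((req[0] << 8) | req[1]);
    if ((size_t)payload_len + 2 > req_len)
        return 0;  /* claimed more than was sent: drop it */
    memcpy(resp, req + 2, payload_len);
    return payload_len;
}
```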

    2. Anonymous Coward
      Anonymous Coward

      Re: The web is a Swiss Cheese of Holes!

      > Wow, talk about schoolboy error. How was this code hole not caught by somebody?

      You tell me. Since you are such an expert as to class this as a "schoolboy error", surely you have the experience to know how these things come to occur.

      For my part, I find it very humbling. I was mentally going through the code of another project I used to be involved in, not as high profile as OpenSSL but well known to the public nonetheless, which was driven by a bunch of seriously competent people, and our project leader was particularly paranoid. Even though our project has nothing to do with security, security is designed in on every line of code (and the code base has one of the lowest defect rates of any major project btw, as measured by various different metrics and external auditors).

      And you know what? I realise that, had we had an ever so slightly wider attack surface, we would not have caught this type of attack either. :-(

      The reason, I think, is that this may be a novel approach. Does anyone have any examples of similar vulns that you can share?

      [ Please, make sure you fully understand the attack, e.g., by reading Chris Williams' excellent article before you come back with a flippant blanket answer like "sanitise your inputs" or stuff like that. ]

      1. Anonymous Coward
        Anonymous Coward

        "to know how these things come to occur"

        Yes, in proprietary code where the authors are the ones who primarily check it, it's easy to be blind. But in such a popular Open Source security library and widely used implementation of SSL, I'd have expected to see the code passed by a variety of professionals and not just the authors before coming into the wild. I can only assume that the professionals sat on the sidelines and assumed someone else had bullet-proofed it.

        Clearly the solution going forward is to offer a bounty as the private sector does, i.e. Google / FB / Twitter etc. In short, pay bug hunters and security specialists to locate vulnerabilities, and not just assume that because the code is in the public domain we can deem it to be safe. I read the article you supplied. There wasn't anything new or unique that didn't remind me of the same class of vulnerabilities that we see in SQL Injection or XSS style attacks. In fact the comments section in the article highlights the risks and easy solutions:

        1. "perhaps as a general rule, apart from the obvious bounds checking, one should clear all memory as it becomes (re-)assigned? - or better on de-assignment. Perhaps generally these under-run or their over-run brethren should be detected and escalated as a general principle."

        2. "Is the leaked data simply the junk that was in de-assigned memory?" ... "Yeah, it appears to be dead or alive blocks of memory allocated via some malloc()-like magic. If dead, one wonders why it wasn't zeroed on release."

        3. "In theory this should never have happened because malloc should have thrown a wobbly at copying that memory. In practice, it appears that OpenSSL is using unconventional memory allocation logic."

        4. "The big issue here is simply not validating the received PDU (which could simply be random data, even if not malicious). It would be a good idea to have separate memory pools with no-access pages between them to segregate different types of in-process data from buffer overruns, but you'd pretty much have to write your own allocator for that purpose too."

        5. "It looks to me to be a way for the client to send something to the server and have it echoed back. Is there a reason why the server should be echoing back client supplied data, and in what way this (as opposed to sending back data that the client doesn't control) is a useful addition to the protocol?"
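The "zero on release" suggestion in the first two quoted comments can be sketched as follows. These are hypothetical helpers, not OpenSSL APIs; note that a plain memset before free is often deleted by the optimiser as a dead store, hence the volatile pointer:

```c
/* Sketch of "zero on release": wipe buffers before returning them to
   the allocator so recycled heap blocks never carry stale secrets.
   secure_wipe and secure_free are invented names for illustration. */
#include <stddef.h>
#include <stdlib.h>

/* Writing through a volatile pointer discourages the compiler from
   optimising the wipe away. */
void secure_wipe(void *p, size_t n)
{
    volatile unsigned char *v = p;
    while (n--)
        *v++ = 0;
}

void secure_free(void *p, size_t n)
{
    if (p == NULL)
        return;
    secure_wipe(p, n);
    free(p);
}
```

Zeroing on release wouldn't have stopped the over-read itself, but it would have limited the leak to live data rather than everything that ever passed through the heap.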


  8. Steven Davison

    Have a cookie for the XKCD Reference :- http://xkcd.com/1353/

  9. Anonymous Coward 101

    "Simply because the source code of Open Source software can be reviewed by anyone does not mean they will know how to look for security vulnerabilities or indeed detect them."

    This is how children drown in supposedly supervised swimming pools - all of the adults assume other adults are taking care of the kids.

  10. Anonymous Coward
    Anonymous Coward

    This has been used in the wild on Bitcoin exchanges

    There's been a lot of inexplicable hacks of Bitcoin / Altcoin exchanges over the past month.

    I suspect it's directly caused by this issue.

    A new exchange was launched recently but was attacked immediately, with someone managing to gain all sorts of details and no known explanation available. Code audits and masses of testing suggested some kind of system failure outside of the exchange code itself. Initially PHP was suspected as the root cause, but this vulnerability explains it all.

    1. Anonymous Coward
      Anonymous Coward

      Re: This has been used in the wild on Bitcoin exchanges

      Undoubtedly the fault of some Open Source product where everyone can play 'lets find the get root zero day'....

    2. Mystic Megabyte
      Stop

      Re: This has been used in the wild on Bitcoin exchanges

      "I suspect it's directly caused by this issue."

      bollocks!

      I suspect it's the NSA, because they are the puppets of the banks who really don't want Bitcoin to succeed.

      The ability to move money without the bank getting a cut? Preposterous!

  11. Jason 41

    Won't affect El Reg though!

    Ok I get it, reg doesn't like Yahoo! I'm not so fond myself to be honest

    But please. They patched it straight away. There are still lots of sites out there, according to the article, which still pose a problem

    Would it possibly be more useful to mention the dangerous sites rather than banging on about Yahoo!, who aren't?

    Just a thought

    1. Sandtitz Silver badge

      Re: Won't affect El Reg though!

      There's a lot more at risk than a few SSL-secured websites. Many hardware firewalls use OpenSSL libraries. Many installations are old enough to contain the older, unaffected OpenSSL libraries, but that's not the case everywhere. I'm personally waiting for Watchguard to come up with their patches.

      The problem with small businesses (without IT admins) is that they may have a firewall which was installed by a 3rd party and never updated again. Same with secured websites too, no doubt.

  12. asdf
    Trollface

    Robin Seggelmann You Asshat

    Bet it's not fun being Robin Seggelmann right about now. My guess is he has seen his name next to the word asshat more than once in the last few days.

    1. TJ1

      Re: Robin Seggelmann ^h.... Dr Stephen Henson... You Asshat

      Whilst the original author, Robin Seggelmann, wrote the code, I'd argue that the code reviewer and committer, "Dr"* Stephen Henson, has the greater responsibility for this.

      The reviewer is often the only set of 'expert' eyes that review the code before commit and as such acts as the gatekeeper on code quality and consistency.

      According to his own consultancy website biography he's been an OpenSSL core developer since 1999. If anyone is aware of the kind of code pitfalls to avoid in security programming and where to spot them in the OpenSSL codebase, it is he - I know I shan't be hiring him to advise on TLS/SSL!

      Makes me wonder how many of his other commits should be reviewed?

      * "Dr" since it is a Heartbleed.

    2. Anonymous Coward
      Anonymous Coward

      Re: Robin Seggelmann You Asshat|Not so fast!

      That's B.S.

      First of all, Seggelmann's patch was reviewed by "Dr." Stephen Henson, who actually posted it. So even the written record shows he's not in this alone.

      But where were the other project team members? The project manager? Or how about the hundreds (ok, tens...) of developers and managers on the payrolls of companies who make a living distributing products based on this code (Red Hat, Canonical, IBM, Oracle, etc)? Let's not forget all the consultants, pundits and "experts" who have conducted, signed off and been paid to audit those vulnerable systems over the last 2 years.

      So don't anyone tell us that this is all Seggelmann's fault. We're all human and are entitled to make stupid mistakes from time to time. Sure, everyone in the tech business, especially sysadmins and developers, knows every day they go to work that the branch they have to climb out on may break beneath them. That's the nature of the work. But we all also have the right (yes, right) to expect that people won't start shooting the wounded to deflect attention from their own failures.

      1. asdf

        Re: Robin Seggelmann You Asshat|Not so fast!

        I won't deny a whole lot of other people are responsible as well (though not me, except for not reviewing more open source code, I guess). Still, if you are writing code half the world uses, you really need to get it right. Any patch to a foundational open source project really needs to be reviewed very carefully. We all make mistakes, but we don't all post patches for software millions of other people use and rely on.

  13. Anonymous Coward
    Anonymous Coward

    Is this really as bad as it sounds?

    Okay, the attacker gets 64K of memory which may contain passwords and other important stuff. That's bad, sure. If the attacker then asks for another 64K is there any guarantee that they get a different chunk of un-zeroed memory?

    1. Nigel 11

      Re: Is this really as bad as it sounds?

      I should think that if they sit out there and repeat every day / hour / minute, they'll soon find out how long one has to wait on average for a new chunk of un-zeroed memory to leak. I'd expect heap management on the host system to recycle blocks of free memory fairly fast, unless it's a very lightly loaded system.

  14. El Andy

    This highlights two issues with Open Source software

    1) The whole "many eyes" thing is just a complete myth. And worryingly, the sheer belief that code is somehow under constant auditing is making developers complacent.

    2) Because the nature of O.S. code is to share widely, vulnerable code can end up in lots of places, and actually tracking them all down becomes a lot harder. We really need automated tools to scan open source codebases to find places where bits of OpenSSL code might well have ended up copy-pasted.

    The real takeaway, though, is how poor the overall quality of a lot of security-critical code is becoming these days. I notice that Microsoft have a TLS reference implementation written in F# that has been mathematically verified. Maybe applying formal proofs to key open source codebases, such as OpenSSL, is what really needs to start happening. As well as not using languages like C for this sort of thing, which we all know carry far too many risks of introducing subtle bugs.

    1. Anonymous Coward
      Anonymous Coward

      "I notice that Microsoft have a TLS reference implementation written in F# that has been mathematically verified."

      Yeah, well, who verified the model used to verify it? Or the program that ran that model? There's no perfect system. I see TTD leading coders into all sorts of complacency issues every day because they forget that the same error-prone people who write code also design and write tests.

      1. Anonymous Coward
        Anonymous Coward

        That's "TDD", not "TTD". How appropriately ironic.

        1. El Andy

          It's not TDD they've used, which could easily be lacking. It's a formal mathematical proof, which is a lot harder to do but gives a solid guarantee that it works. I would suspect the F# code isn't necessarily that performant, but that's a better problem to need to solve, IMO.

  15. This post has been deleted by its author

This topic is closed for new posts.

Other stories you might like