OpenSSL Heartbleed: Bloody nose for open-source bleeding hearts

Robin Seggelmann, the man who accidentally introduced the password-leaking Heartbleed bug into OpenSSL, says not enough people are scrutinizing the crucial cryptographic library. The Heartbleed flaw, which was revealed on Monday and sent shockwaves through the IT world all week, allows attackers to reach across the internet …

COMMENTS

This topic is closed for new posts.
  1. Thom Brown

    Exaggerated risk?

    CloudFlare have found it impossible to exploit the bug to steal keys despite their efforts:

    http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed

    1. diodesign (Written by Reg staff) Silver badge

      Re: Exaggerated risk?

      "CloudFlare have found it impossible to exploit the bug to steal keys"

      Well, steal keys from a specific Nginx setup, but I take your point - and the Cloudflare blog is linked to in the article. I note that the Cloudflare heartbleed challenge site has updated itself to "Has the challenge been solved yet? MAYBE? (verifying)". Stay tuned.

      In general, it is very tricky to steal private SSL keys (going to Vegas to put everything on red 14 seems like a better chance of success), but that doesn't stop the leaking of passwords and whatnot.

      Plus, it's a rather fun bug. Code safe, everyone.

      C.

    2. diodesign (Written by Reg staff) Silver badge

      Re: Exaggerated risk?

      "CloudFlare have found it impossible to exploit the bug to steal keys"

      Bad luck, ducky. It's utterly possible :(

      "We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits. We rebooted the server at 3:08PST, which may have contributed to the key being available in memory, but we can’t be certain."

      C.

  2. Grease Monkey Silver badge

    So is it a good idea to build the security of your entire web operation around code written by four unpaid volunteers?

    This isn't to criticize the volunteers, but rather those companies who are panicking now because they built their operations around this code and just assumed it would be secure.

    1. h4rm0ny

      Volunteers are not necessarily less (or more) competent than paid people and in any case, a lot of Open Source development is actually paid for by large companies. Open Source does not necessarily mean written by volunteers.

      Regardless, flaws happen in proprietary code and in Open Source code. I've never been convinced there's an intrinsic bias either way. There is a rather exaggerated idea about "a thousand eyes" leading to fewer flaws in Open Source software. That's never been that supportable. The advantage of Open Source is not a magic lack of vulnerabilities. The advantages are that you can check it for deliberate subversion (e.g. government backdoors), and that because it can be forked at any time, you're hopefully protected against lock-in and projects being abandoned. (Though Google do their best on the former.)

      I suppose some might say I've missed "free" off the list above, but to quote the Great Wookie himself: "Free as in speech, not free as in beer". Any serious customers are likely to go with whatever solution is best rather than cheapest.

      But the essential point is that companies like RedHat, SuSE, et al. are not small companies and it's not a pile of volunteer code. Some of it is, but it's the testing side that matters more than the development side in matters like this.

      1. Tom 13

        @ h4rm0ny

        Perhaps I'm in an atypically generous mood today, but I didn't read Grease Monkey's comment as denigrating volunteers. I read it as "do you twits now think it might be better to donate money/people/resources to this code branch since it is so critical to your business? Remember it's free as in 'speech' not free as in 'beer'."

    2. Anonymous Coward
      Anonymous Coward

      Here at Microsoft

      We only use low-paid college interns to write our code. Look:

      Companies have built entire operations around Office but due to RTF parsing bugs:

      http://technet.microsoft.com/en-us/security/bulletin/ms12-079 (2012) - Critical

      and whoopsie daisy we have deja vu

      http://technet.microsoft.com/en-us/security/advisory/2953095 (2014) - Critical

      Let's get things into perspective here: all software has bugs, and you should never assume code is 100% secure. It's what you do about them that matters.

    3. Gene Cash Silver badge

      Oh hell yeah. In my experience, unpaid volunteers turn out code far better than the paid blokes.

      Also, outside of the Linux Kernel Mailing list, has anyone ever seen a code review actually catch a problem? I sure as hell haven't.

      1. BlueGreen

        @Gene Cash

        > has anyone ever seen a code review actually catch a problem?

        yes

      2. Anonymous Coward
        Anonymous Coward

        @ Gene cash

        Here are two very expensive software vendors:

        Oracle and Adobe

        How many CVE's in the last 12 months for these 2 ?

        "Also, outside of the Linux Kernel Mailing list, has anyone ever seen a code review actually catch a problem? I sure as hell haven't."

        Oh please!

      3. John Smith 19 Gold badge
        Unhappy

        @Gene Cash

        "Also, outside of the Linux Kernel Mailing list, has anyone ever seen a code review actually catch a problem? I sure as hell haven't."

        I take it you are unaware of how IBM Federal Systems wrote the code for the Shuttle.

        Code review was the key to finding the bugs.

        But what really lowered the bug rate was using that information to identify the pattern of that bug, proactively look for other instances of that pattern, and verify they did not have the fault as well.

        That's why the software never failed in 30 years of use.

        1. The BigYin

          Re: @Gene Cash

          "But what really lowered the bug rate was using that information to identify the pattern of that bug, proactively look for other instances of that pattern, and verify they did not have the fault as well."

          Static Code Analysis. The first time you run it on your code base, have spare undergarments to hand. It's not foolproof, but it is another tool that is inexpensive to slip into your build system and automate.

          1. John Smith 19 Gold badge
            Unhappy

            Re: @Gene Cash

            "Static Code Analysis. The first time you run it on your code base, have spare undergarments to hand. It's not foolproof, but it is another tool that is inexpensive to slip into your build system and automate."

            I think in the late '70s, when they started writing the Shuttle code, it did not exist. It was all code reviews and clever grep scripts.

            1. John Hughes

              Re: @Gene Cash

              "clever grep scripts." is static code analysis.

              1. John Smith 19 Gold badge
                Happy

                Re: @Gene Cash

                ""clever grep scripts." is static code analysis."

                I think things have moved on a bit since then.

                At least, I hope so.

        2. Anonymous Coward
          Anonymous Coward

          Re: @Gene Cash

          "That's why the software never failed in 30 years of use."

          Never catastrophically failed sure, but *never* failed in any way that required a reset? You 100% sure about that? Even avionics software occasionally has the odd glitch. I'd be surprised if the software in the shuttle was any different. Certainly at least one well known rocket crash was down to faulty software: http://en.wikipedia.org/wiki/Cluster_(spacecraft)

          and then there was the Mars mission that used a mix of metric and imperial...

          1. John Smith 19 Gold badge
            Unhappy

            Re: @Gene Cash

            "Never catastrophically failed sure, but *never* failed in any way that required a reset? "

            Correct.

            The team built both the OS and the "application" software. The system was 4-way redundant (unlike Ariane 5's master/slave system) and implemented cross-checking of IO and sync pulses.

            "Certainly at least one well known rocket crash was down to faulty software: http://en.wikipedia.org/wiki/Cluster_(spacecraft)"

            Firstly, the Ariane software was not built by IBM Federal Systems (who, BTW, were the role models for the CMU Capability Maturity Model Level 5 for how software should be developed), and secondly, the failure was a failure of change control. They reused the Ariane 4 software with a policy of leaving software modules in. The module that crashed the processors was not even a core module. A5 was designed to allow much greater movements at some parts of the flight. The software (which should not have been running at that point in the flight; a failure of requirements management) thought the rocket was going haywire and crashed the master processor. The slave processor then crashed in a cascade failure.

            BTW Ariane 5's software was AFAIK written in Ada.

            In fact I'd say the Ariane5 CLUSTERf**k (as I like to think of it) was more a management than a software development failure.

            Which IBM FS were also pretty good at.

      4. The BigYin

        Every single code review. If you don't find something, you didn't look hard enough.

      5. Ian Michael Gumby

        @Gene Cash

        You must not be very experienced.

        Look, there are some very good people writing very good code, however, that's a very small subset of all of the code that is being tossed out in to the public.

        Then you have Apache, where depending on the project, its visibility, and the money tossed behind it... YMMV.

      6. Destroy All Monsters Silver badge
        Headmaster

        "Also, outside of the Linux Kernel Mailing list, has anyone ever seen a code review actually catch a problem? I sure as hell haven't."

        You could be a sushi chef for all we know, so your lack of relevant encounters might find an easy explanation.

      7. Vic

        > has anyone ever seen a code review actually catch a problem?

        Many times.

        The trick is to pick your reviewers carefully - those that hit the "Ship it" button within 5 minutes are not reviewing code.

        Vic.

        1. James Hughes 1

          Code Reviews

          Having been using a code review scheme for the last year, it has caught many many issues (not just bugs, but commenting mistakes, inefficient code etc).

          It does depend on how good the reviewer is. I'm particularly crap at it, but others can find real niggly issues that didn't show up in testing.

          Also, static analysis like Coverity, or even running in valgrind for some dynamic stuff, digs up hundreds of issues, even on code that has been around for ages. Both well worth doing. I think Coverity may well have found Heartbleed, for example.

        2. Anonymous Coward
          Anonymous Coward

          "The trick is to pick your reviewers carefully - those that hit the "Ship it" button within 5 minutes are not reviewing code."

          The problem is that a lot of the time the people who get chosen to review some code are presented with a lump of code whose function they have little idea of. So the best they can do is check for obvious syntax and logical errors, pass it, and then get on with their own work, which they're probably under pressure to get done. For proper code review you need it set down as an actual task with a specific time slot allocated, so people can get up to speed on what they're looking at - not something fitted in between other tasks if the person has a few minutes. Unfortunately that's just not the way it's done in most companies.

    4. Anonymous Coward
      Anonymous Coward

      Given the choices:

      1) Fund OpenSSL development

      2) Buy your own island

      3) Buy your own 767 and use it to reach some tropical island

      4) Buy a castle somewhere in Europe

      What do Oracle, Google, Facebook, etc. CEOs do?

      Ah, and then some executive will tell managers "get our developers to use open source code, it's free...."

      1. Destroy All Monsters Silver badge
        Childcatcher

        Re: Given the choices:

        I am pretty sure I have hit the "donate" button rather more often than $MYBOSS.

      2. Anonymous Coward
        Anonymous Coward

        Re: Given the choices:

        5) Run IIS on Windows like 1/3 of the world's web sites now do....

    5. Anonymous Coward
      Anonymous Coward

      This will hopefully be a learning experience

      It does seem ridiculous that so many mission-critical systems throughout the world have unquestioningly adopted free software without at least double-checking the code. Free Open Source software is fantastic but shouldn't be put in critical systems without extra checks.

      The business community can remedy this by jointly starting a free software security consortium whose mandate is to search and test for security holes in the free software they intend to implement. Pooling resources in an open multiparty project would save each business having to spend the money to do these tests themselves, and all would benefit as a result. A joint debugging effort would still be a lot cheaper than having to go back to writing or buying proprietary software for every component of their systems.

    6. pacman7de

      Writing secure code ..

      Shouldn't such security code be written and audited to a much higher standard than the rest of the software product? As in: once it is written, a second team expends much effort in discovering potential exploits. Only then is it posted as production code.

  3. lee harvey osmond

    "Google's Android 4.1.1 is vulnerable"

    Vulnerable to Heartbleed exploits?

    Is it? Really? Do many people run SSL/TLS-enabled servers on their mobile phones and such?

    1. BristolBachelor Gold badge

      Re: "Google's Android 4.1.1 is vulnerable"

      The keep-alive may be sent from the client to the server, or from the server to the client.

      Are you saying that Android does not make SSL connections?

      1. lee harvey osmond

        Re: "Google's Android 4.1.1 is vulnerable"

        Oh. So I could have my mobile phone connect to a TLS-enabled SMTP server such as Gmail, and in the short period that that connection is open (read the Android developer docs about battery management) those dastardly people at Google could read up to 64k of core memory from my phone, and this represents a threat to me even 0.1% as serious as some geezer in China connecting to a Gmail server, never attempting to make SMTP authentication over that TLS connection, but snatching 64k out of that server, to which lots and lots of people have connected, and where in principle the private key might be visible to go with the public cert, facilitating impersonation?

        Mmmm I don't think so. Yes, the library implementing the protocol has a flaw and there is a vulnerability, but the consequences to humanity at large of unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate) are rather less serious than those from malicious clients connecting to unsuspecting servers.

        1. Anonymous Coward
          Anonymous Coward

          Re: "Google's Android 4.1.1 is vulnerable"

          > TLS-enabled SMTP server such as Gmail,

          Alternatively you could just browse the internet with your phone.

          Whether you like it or not, Android 4.1.1 is vulnerable.

          It doesn't matter how probable it is that somebody will use the vulnerability to extract 64k from your phone, it is still vulnerable. For you it might only expose the cat videos you are watching, but others have more sensitive information.

        2. Jamie Jones Silver badge

          Re: "Google's Android 4.1.1 is vulnerable"

          "Yes, the library implementing the protocol has a flaw and there is a vulnerability, but the consequences to humanity at large of unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate) are rather less serious than those from malicious clients connecting to unsuspecting servers."

          Ummm, I don't think anyone has said the problems for clients are just as serious, however you don't seem to understand the situation.

          Are you saying you only ever visit google and your banks websites? Or maybe you use the lesser-known plugin "httpsNoWhere"?

          Any site you visit could have malicious code - even a non-https site could have embedded https stuff (with a valid certificate too - that's not relevant)

          So, you are basically trusting the honesty *and* security of every site you visit, and every third-party ad company/image broker/js-library provider they use.

        3. ElReg!comments!Pierre

          Re: "Google's Android 4.1.1 is vulnerable"

          "unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate)"

          Not necessarily; from what I gather the malicious server wouldn't have to present a valid certificate. Your point still stands: people are extracting useful info from servers by hammering them with malicious SSL requests; I can't see that happening on a phone. Remember that in the 64k you can extract at a time, most is truncated or otherwise uninterpretable garbage. Moreover, on a client machine most if not all of that garbage would be data that the malicious server previously sent to begin with (or that was sent to the malicious server by this particular client). In Chrome and Firefox, tabs are run in separate processes, so even if the attacker managed to hammer your phone with malicious requests at the right instant (extremely unlikely to begin with) they couldn't snatch your bank credentials from a concurrently-open tab.

          Not terribly scary then. Still needs patching.

    2. ElReg!comments!Pierre

      Re: "Google's Android 4.1.1 is vulnerable"

      Yeah, my thought too. If you're worried about this bug on your handset I have a personal meteorite deflecting shield you may be interested in. Heartbleed can leak some of the calling process' memory stack.

      If memory serves, both Chrome and Firefox fork processes on connection, which means that a malicious website would have access to 64k of... its own prior data exchanges with you. In other words an attacker could use your RAM as his own history. Oh noes, the end is nigh etc.

      This bug is really only a concern on massively multi-user servers, where the 64k of leaked memory could contain _someone else's_ data. A client machine typically has only "one-on-one" server-client connections, so attackers can mostly retrieve data they already have. And that is, if they can make use of the tiny time frame in which the connection is established (typically, client systems are not designed to accept out-of-the-blue SSL connections; they establish the connection for a particular need they have, say, to retrieve the list of emails in a distant mailbox, then shut it down).

      A server is vulnerable because it is designed to be listening to random connection requests, and potentially has a huge number of users connected to it. Unless I missed something, neither is happening on a client system.

  4. Anonymous Coward
    Anonymous Coward

    The real problem is C

    With great power comes great responsibility. If it is impossible to exercise responsibility then it is time to take away the power.

    It should be recognised by now that we are not using tools which are fit for purpose and are harming and putting everyone at risk through that.

    There is only one answer... ban C

    1. h4rm0ny

      Re: The real problem is C

      "There is only one answer... ban C"

      In favour of...?

      1. ItsNotMe

        Re: The real problem is C

        "In favour of...?"

        Well there are 25 other letters in the alphabet...choose one.

        1. Vic

          Re: The real problem is C

          > Well there are 25 other letters in the alphabet...choose one.

          Not Z. Please, $deity, not Z.

          ::shudders::

          Vic.

          1. Anonymous Coward
            Devil

            Re: The real problem is C

            VB

          2. John Smith 19 Gold badge
            WTF?

            @Vic

            "> Well there are 25 other letters in the alphabet...choose one.

            Not Z. Please, $deity, not Z."

            Someone has built a Z compiler?

            Impressive.

            1. Roo
              Childcatcher

              Re: @Vic

              "Not Z. Please, $deity, not Z."

              "Someone has built a Z compiler?"

              Last I heard they were (re?) animating Z specs. :P

            2. Vic

              Re: @Vic

              > Someone has built a Z compiler?

              Not quite a compiler...

              I wrote a lint tool for Z. I still have nightmares about that project...

              Vic.

              1. John Smith 19 Gold badge
                Thumb Up

                Re: @Vic

                "I wrote a lint tool for Z."

                Hmm.

                Respect.

          3. Roo

            Re: The real problem is C

            "Not Z. Please, $deity, not Z."

            I liked the idea of Z, but in practice I could never get away from the fact that I could achieve the same ends expressing the same constraints using some carefully written C++ unit tests. :)

            Note: There are things Z can do which carefully written C++ can't, and of course it's possible to have bad tests fail to detect bugs in bad code... That said, it's pretty rare that people can write Z accurately either. :(

        2. h4rm0ny

          Re: The real problem is C

          >>"Well there are 25 other letters in the alphabet...choose one."

          Well in that case, I choose D. It's a lovely language - essentially a rebuild of C++ with an "if we knew then what we know now" approach. But it may not satisfy the OP's criteria. I'm interested to see if they have an answer, or only a criticism.

          1. Ken Hagan Gold badge

            Re: The real problem is C

            "... D. It's a lovely language - essentially a rebuild of C++ with an "if we knew then what we know now" approach."

            A bit like C++11 then. Both would be perfectly reasonable replacements for the C that (inexplicably to my mind) appears to be the preferred choice for several rather important FOSS endeavours. Seriously guys, it has been a quarter of a century since we learned how to make C safer without any loss in performance (or one's ability to twiddle bits or map brain-dead structure layouts). Memory management in particular is a solved problem.

          2. Vic
            Joke

            Re: The real problem is C

            > Well in that case, I choose D.

            Nah. You want ArnoldC

            Vic.

        3. tom dial Silver badge

          Re: The real problem is C

          The real problem is an implementation design error, compounded by a coding practice error, combined with apparently inadequate source code review and prerelease testing. It appears that the packet was not expected to be inconsistent, so the protocol did not address the issue. In coding, the possibility of an inconsistent packet was overlooked and suitable action (e. g., discard the packet) was not coded. That could have been caught by a code review, and it could have been caught by rudimentary - and automated - testing of the results of invalid conditions like an internal length specifier implying a length greater than the total packet length.

          Mistakes happen, and there is no reason I know of to think that they are either more or less common with open source than closed source software. They are a result of fallible humans doing demanding work, sometimes under time and money constraints, and sometimes coming up short.
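          The design error and its fix can be sketched in a few lines of C. This is an illustrative reconstruction, not the actual OpenSSL source (all names here are invented): a heartbeat record claims a payload length, and the bug is echoing that many bytes without checking the claim against the bytes that actually arrived.

          ```c
          #include <assert.h>
          #include <stddef.h>
          #include <string.h>

          /* Illustrative sketch of the Heartbleed-style flaw, not real
           * OpenSSL code. Layout assumed: 1 type byte, 2 length bytes,
           * then payload. `rec_len` is what actually arrived on the wire.
           * Returns bytes copied into `out`, or -1 for a malformed record. */
          static int echo_payload(const unsigned char *rec, size_t rec_len,
                                  unsigned char *out, size_t out_cap)
          {
              if (rec_len < 3)
                  return -1;                           /* no room for header */
              size_t claimed = ((size_t)rec[1] << 8) | rec[2]; /* attacker-controlled */

              /* The fix: discard records whose claimed payload length is
               * inconsistent with the record actually received. */
              if (claimed > rec_len - 3 || claimed > out_cap)
                  return -1;

              memcpy(out, rec + 3, claimed);           /* now provably in bounds */
              return (int)claimed;
          }

          int main(void)
          {
              unsigned char out[64];
              /* well-formed: claims 4 payload bytes and actually carries them */
              unsigned char good[] = {1, 0, 4, 'p', 'i', 'n', 'g'};
              assert(echo_payload(good, sizeof good, out, sizeof out) == 4);

              /* malicious: claims 0x4000 bytes but carries none, so rejected */
              unsigned char evil[] = {1, 0x40, 0x00};
              assert(echo_payload(evil, sizeof evil, out, sizeof out) == -1);
              return 0;
          }
          ```

          The single added comparison is the "suitable action" described above: an inconsistent record is simply discarded instead of echoed.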

      2. Anonymous Coward
        Gimp

        Re: The real problem is C

        ADA!

        After Mistress has kicked the 10 shades of shit out of you and your miserable poor excuse for some source, she might deign to generate a binary from your code.

        1. Tom 7

          Re: The real problem is C

          ADA! Was that the one they used to blow up that rocket?

          1. John Smith 19 Gold badge
            Unhappy

            @Tom 7

            "ADA! Was that the one they used to blow up that rocket?"

            No that was what the fules who wrote the software used.

            A fool with a tool is still a fool (and probably a bit of a tool).

            1. Tom 7

              Re: @Tom 7

              I was just pointing out that the language didn't and won't help.

        2. James O'Shea

          Re: The real problem is C

          "ADA!

          After Mistress has kicked the 10 shades of shit out of you and your miserable poor excuse for some source, she might deign to generate a binary from your code."

          Only if you admit that you've been a baaaaad boy.

        3. John Smith 19 Gold badge
          Happy

          @Lis 0r

          "After Mistress has kicked the 10 shades of shit out of you and your miserable poor excuse for some source, she might deign to generate a binary from your code."

          Stop complaining.

          You know you love it.

      3. Anonymous Coward
        Joke

        Re: The real problem is C @ h4rm0ny

        Java ? :)

        1. Tom 13
          Coat

          @ rm -rf /

          No, Microsoft Basic.

          1. Tom 13

            Re: @ rm -rf /

            Or perhaps they should

            Go Forth

            and conquer. Or something like that.

      4. Boris the Cockroach Silver badge
        Pint

        Re: The real problem is C

        In favour of a language with some bounds checking built in

        But then that will ruin 90% of our fun when a m$ product turns out to have a buffer overrun flaw.................................

        again

      5. James O'Shea

        Re: The real problem is C

        ""There is only one answer... ban C"

        In favour of...?"

        Pascal. Modula-2. ADA. Hell, FORTRAN, if necessary. (Why, yes, I do have extensive experience with Pascal, Modula-2, and FORTRAN, and could probably muddle through with ADA if I had to, why do you ask?)

        1. RAMChYLD
          Trollface

          Re: The real problem is C

          Pascal?

          Pah, quiche eater...

          1. Chemist

            Re: The real problem is C

            "Pascal?

            Pah, quiche eater..."

            I think you'd be better with rösti-eater, as Niklaus Wirth is Swiss and from the German-speaking region.

            1. Mpeler
              Pint

              Re: The real problem is C

              @Chemist

              "Re: The real problem is C

              "Pascal?

              Pah, quiche eater..."

              I think you'd be better with rosti-eater as Niklaus Wirth is Swiss and from the German speaking region"

              The quiche reference was to the Datamation article titled "Real Programmers Don't Use Pascal", which referred to Pascal programmers as "Quiche-Eaters"....it extols the joys of FORTRAN and various coding tricks used back in the day (cough, cough, 1960s and onwards (yaaayyy WATFOR))...

              Now I have visions of Rösti and Quiche on a pizza...with Käsespätzle and Maultaschen....and black olives...

              (Paris as the icon but that would have been Quiche and tell....)...

      6. Destroy All Monsters Silver badge
        Headmaster

        Re: The real problem is C

        ban C

        8 downvotes

        In favour of...?

        Oh, please.

        2014, and there are still "C fast and C furious" public dangers being deposited on my information highway due to morbidly retarded curricula combined with the attitude of the unstoppable coding matador.

        And use the fucking static code checkers. Use them.

        1. Mpeler
          Holmes

          Re: The real problem is C

          Ahhh, from C to shining C. How about PL/I? Or Babbage (CW, 1993 or so)?

          "IF NOT THEN DON'T" or "GOING, GOING, GONE", "OPEN AND SHUT CASE" statements,

          or "CALL BY VALUE, CALL BY REFERENCE, CALL BY PHONE" (back in the dialup days...).

          Or, not Visual Basic, but Invisible Basic - no one will see your code, so no one can see your problems....

          Whatever happened to bounds-checking? Did it go away with the need-for-speed era in the late 80s and 90s?

          How about bounds-checking in hardware? The HP3000 had it; there were others....NX is a step in the right direction, but only a step....

          Static code checking... how about desk checking (goes back a ways)... remember the papers "Real Programmers Don't Use Pascal" and "The Story of Mel" (talk about hardware programming)... oh wait, there's always plugboards....

          1. Ken Hagan Gold badge

            Re: The real problem is C

            "How about bounds-checking in hardware?"

            To be effective in this case, it would need to have byte granularity and be capable of tracking millions of separate allocations. Hardware bounds-checking at page granularity works well for keeping processes off each other's toes. It's impractical for tracking the millions of tiny allocations that a large server might have in play at any given moment.

            On the other hand, there are languages that automate such things. They are frequently able to prove the correctness of a particular access at compile time. Where a run-time check is needed, memory latency and out-of-order execution often means that the check costs no time. Either way, these methods are practical at whatever granularity and whatever scaling you care to mention.

          2. Anonymous Coward
            Anonymous Coward

            Re: The real problem is C

            The x86 protected-mode architecture has a lot of checks (including the BOUND instruction); it's just that OSes and compilers don't use them, because the checks slow down operations...

    2. Awil Onmearse
      Joke

      Re: The real problem is C

      Yep, let's write critical infrastructure in Ruby and sit back and enjoy - hell maybe even hasten - the heat-death of the universe waiting for it to encrypt your session-key ;-)

    3. Anonymous Coward
      Anonymous Coward

      Re: The real problem is C

      Hey, I would love to chuck C as well but Heartbleed could have happened in any language. RFC6520 says return the bytes and the code returns the bytes.

      1. Michael Wojcik Silver badge

        Re: The real problem is C

        "Hey, I would love to chuck C as well but Heartbleed could have happened in any language. RFC6520 says return the bytes and the code returns the bytes."

        That's a grotesque oversimplification, which is quite an achievement, considering how simple the bug is.

        The RFC doesn't require the recipient to honor a malformed Heartbeat request, which is what Heartbleed uses. So citing the RFC is invalid.

        And a language that performed array bounds checking would catch the overrun - unless the protocol buffer was over-allocated in the first place, and in that case the hypothetical alternative language would hopefully initialize the buffer (since overrun protection doesn't do that much good unless you also have object-reuse protection).

        But, as I've noted in other Reg discussions about Heartbleed, the problem here isn't C. The problem is writing lazy code, with poorly chosen names for identifiers and direct buffer manipulation. A single, very thin layer of indirection for copying between protocol buffers could perform all the necessary range checking.

        The problem with OpenSSL isn't C; it's C culture. It's entirely possible to write safe, clean, readable code in C. Most C programmers simply can't be bothered to do so. And of course this is also true of many programmers using many other languages.

      2. John Hughes

        Re: The real problem is C

        "Hey, I would love to chuck C as well but Heartbleed could have happened in any language. RFC6520 says return the bytes and the code returns the bytes."

        No. RFC6520 (written by one Robin Seggelmann!) says to reject all requests with invalid length fields.

        A language with array bound checking would have done that automatically.

    4. AlanS
      Boffin

      The problem isn't C

      It's the spec. At this point we have a binary blob of known length B. The spec says the blob has a header byte, a two-byte payload length P, and some unspecified data which must be echoed. The intent is that the transfer mechanism can add padding so that B >= P+2+1 but there's nothing in the wire protocol to ensure this. The fault is not checking that P is appropriate to B and could occur in any language; other languages with bounds checking have been suggested in other threads but you still need a constructor - if the implementer uses "byte data[P]" rather than "byte data[B-2-1]" you have the same bug. Further, with hindsight you should use [min(P, B-2-1)] but would you think of that first?

      1. AlexV

        Re: The problem isn't C

        No, the problem is C. In a reasonable language, declaring an array of byte data[P] would result in an *empty* array of bytes. Not data hoovered out of whatever unrelated (and potentially sensitive) crap is sitting in memory at the time. Similarly, copying 64k bytes of data from an array that's 2 bytes long would result in an exception, preferably at compile time (but at worst, at runtime). Not an apparently successful copy with 2 bytes from the array and the remainder from unallocated memory.

        Writing in any language, you could have a bug where you crash out with malformed input with mis-matching lengths. The bug isn't the big deal, the big deal is that as a result of the bug C behaves in a completely unacceptable way.

        1. Mpeler
          Alien

          Re: The problem isn't C

          in other words, malloc as Moloch....

        2. Ken Hagan Gold badge

          Re: The problem isn't C

          "No, the problem is C. In a reasonable language, declaring an array of byte data[P] would result in an *empty* array of bytes."

          and that is what would have happened in OpenSSL if the writers hadn't chosen to write their own allocator. The most fascist bounds checking language out there won't help if you write your own allocator on top, particularly if you write one that permits use-after-free.

      2. John Smith 19 Gold badge
        Unhappy

        @AlanS

        "The fault is not checking that P is appropriate to B and could occur in any language; other languages with bounds checking have been suggested in other threads but you still need a constructor - if the implementer uses "byte data[P]" rather than "byte data[B-2-1]" you have the same bug. "

        I was going to suggest an assertion but if padding is allowed then that won't work.

        Pity.

      3. Androgynous Cupboard Silver badge

        Re: The problem isn't C

        Sorry, I am struggling to see how this could happen in Java. Really. There is no way (*), in a correctly functioning virtual machine, for a byte array to contain uninitialized data (or, rather, data from elsewhere in the heap). You would get zeros, an ArrayIndexOutOfBoundsException, or it would fail to compile.

        * Excluding native code malarky, but then it's not really Java.

        1. Anonymous Coward
          Anonymous Coward

          Re: The problem isn't C

          I challenge you to write an SSL implementation in Java that performs adequately.

          1. Anonymous Coward
            Anonymous Coward

            Re: The problem isn't C

            And that doesn't need its Java Runtime patched for dozens of critical vulnerabilities several times a year...

          2. Androgynous Cupboard Silver badge

            Re: The problem isn't C

            Yawn. 2005 called and wants its performance-based objection to Java back.

            Choosing performance over correctness is what got us here in the first place, no?

      4. Michael Wojcik Silver badge

        Re: The problem isn't C

        It's the spec.

        No, it's not. This issue - self-describing data from the peer - occurs all the time in comms code. Anyone who writes comms code should be aware of it and deal with it as a matter of course. Anyone who's ever written a protocol stack or a Wireshark dissector or anything along those lines should be aware of it. Blaming the specification is endorsing developer laziness.

        You cannot assume the data from the peer is well-formed. Period.

        And there's no need for a "constructor" of any sort in this case. The code simply needs to validate that the length it's copying won't run past the end of the input, which it could do explicitly before the memcpy or (better) using an abstraction that knows the size of the input buffer and the offset into it. Look at the CVS diffs at openssl.org (I posted the URL in another message, and it's easy to find). It's a trivial change.

        1. John Smith 19 Gold badge
          Unhappy

          Michael Wojcik

          "You cannot assume the data from the peer is well-formed. Period."

          Exactly

          It seems 99% of the trouble is with people incapable of understanding this simple idea.

  5. Anonymous Coward
    Anonymous Coward

    Number of people assuming that open source software is being scrutinised by other people because....well it's open, and it's OpenSSL for goodness sake, there MUST be loads of people reviewing it = millions.

    'Other people' = 0

    Not a jab at OSS here, just pointing out that it isn't free as in beer. Someone has to spend some time and effort; it doesn't all come for free. Hopefully this is a kick up the arse for companies raking in money from technologies built on OSS to put a little back.

    1. h4rm0ny

      Well it was a Google engineer who found the flaw, so some credit there at least.

      1. BristolBachelor Gold badge
        Black Helicopters

        You meant to say that the Google engineer who found it reported it to the people who could fix it.

        However, the NSA hack who found it added it to their exploit list.

    2. Lee D Silver badge

      The point of open source is not that a million people are constantly reviewing your code for free.

      It's that bugs like this can be found by code inspection, by anyone. As you rightly point out, this is both a positive and a negative.

      But the people who claim being open-source makes something secure are idiots. Sorry, they are. Nobody with brains REALLY thinks that. The difference is, once the problem is identified *I*, *you* or anyone else could knock up a patch in minutes and distribute it to the world. And that's pretty much what happened.

      If you think that open-source = security, I'm sure you also think that being honest means people won't rob you. It's the same thing.

      1. Anonymous Coward
        Anonymous Coward

        "The difference is, once the problem is identified *I*, *you* or anyone else could knock up a patch in minutes and distribute it to the world. And that's pretty much what happened."

        Oooh, soo naïve.... of course the patch is released at the same time the flaw is published - that's how proper vulnerability management works, open source or not. The only difference is you can really patch it yourself, if you know how to do it and know enough of the programming language and toolchain used to produce the final executable - something not many users do.

        "If you think that open-source = security"

        No, I never thought it, but that's what many OSS fans try to sell you.

  6. Alistair
    Mushroom

    no armageddon here thanks

    SSL libraries contain a code flaw.

    That on SERVER processes will allow one to read RANDOM blocks of 64Kb freed memory.

    This is not the internet Armageddon that it's being made out to be. Yes, possible. Yes it might leak passwords, yes it might leak credit card details. It has NOT been effectively demonstrated in the WILD. You *really* need to have a pretty damned good idea of what you're getting back to make any sense of it.

    I'm poking at my brand spanking new dsl router (yes, ssl enabled webpage, uses 1.0.1c). No server keys so far. 9 hours. Lots of HTTP headers, lots of utter garbage. Lots of (oddly) 0'ed pages. No keys.

    Even better:

    https://www.cloudflarechallenge.com/heartbleed

    Try it. take a look at the crap you get back. At *best* I've seen 3 or 4 HTTPS session keys.

    1. Anonymous Coward
      Anonymous Coward

      Re: no armageddon here thanks

      The bug means that 64k of contiguous logically addressable memory can be read from addresses following the array. The memory doesn't have to be free, just adjacent by logical address. So it could be anything.

      Why do a number of reports state that it can only be free memory that's read from? Am I missing something, or are they?

    2. Christian Berger

      Re: no armageddon here thanks

      Actually that's incorrect. Since OpenSSL has its own memory allocator, those 64k are guaranteed to belong to the OpenSSL library. So the chance that you get keys and/or passwords is pretty high.

      So an idiotic design choice inside of OpenSSL also contributed to the problem.

      1. Anonymous Coward
        Anonymous Coward

        Re: no armageddon here thanks

        Actually what's incorrect? And why?

      2. Anonymous Coward
        Anonymous Coward

        Re: no armageddon here thanks

        But the memory allocator will also get and release memory to the OS - what you get from this bug can be anything, depending on where the memory came from and what it has been used for before.

    3. Michael Wojcik Silver badge

      Re: no armageddon here thanks

      It has NOT been effectively demonstrated in the WILD.

      Yes it has. Numerous testers have been able to extract server private keys, for example. Look at heartbleed.com, or for that matter the Cloudflare blog you cited, which has now been updated to indicate they've confirmed the possibility as well.

      You *really* need to have a pretty damned good idea of what you're getting back to make any sense of it.

      No, you don't. The entire process can be automated. You have the server's public key from its certificate. You just take every byte string of appropriate length from the leaked data and test to see whether it's the corresponding private key. If you want, you can apply some heuristics to cut down the search space (for example, any string that looks like it's all valid ASCII text can probably be skipped; and you can also guess at plausible offsets).

      With a GPU farm, or even just a single GPU, you can do a lot of RSA key tests.

  7. Stuart 22

    False Positives

    Just a warning. There appears to be a proliferation of websites 'testing' for the flaw and getting it wrong. Do be careful.

    I think this is because they are only testing for the creation date (of the flawed software) and not the modified date when it was fixed. Good checkers should actually send a string and show you the return if it's worried about it.

    This is the command line return I'm typically getting from our Centos servers which confirms the fix:

    # rpm -q --changelog openssl-1.0.1e | grep -B 1 CVE-2014-0160

    * Mon Apr 07 2014 Tomáš Mráz <tmraz@redhat.com> 1.0.1e-16.7

    - fix CVE-2014-0160 - information disclosure in TLS heartbeat extension

    1. Anonymous Coward
      Anonymous Coward

      Re: False Positives

      > Good checkers should actually send a string and show you the return if its worried about it.

      And in the UK that would leave both the person offering the service and the person using it open to challenge under the Computer Misuse Act 1990. A hideous piece of sh^H^Hlegislation, like everything that comes out of Westminster, but unfortunately for the time being that's the law.

  8. Anonymous Coward
    Anonymous Coward

    Where have all the fanboyz gone

    M$ suck... Windows is full of security holes... I rebuilt our servers and installz Linux on em cause its free and open source so secure init.. and eyes also got the boss to agree to rollout ubuntu to all the corporate desktops next week, v14 is mental its even got menus now in the windows, way better than Windows 8 and even runs mega fast on a Pentium 4

    Enjoy your patching over the weekend guys.. I'm off for beer and its looking like a sunny one

    1. Anonymous Coward
      Anonymous Coward

      Re: Where have all the fanboyz gone

      Well, one goof against a mess that is so massive that patching needed weekly scheduling to ensure some bandwidth remained for actual operations - I think I'll take my chances, still.

      1. Anonymous Coward
        Anonymous Coward

        Re: Where have all the fanboyz gone

        I call bollocks.

        It's Patch Tuesday this week - and we've still got a load to do. Patching the OpenSSL implementations was a doddle that I did yesterday and last night with a glass of wine and an awful lot of screen sessions. Oh, and a Puppet module for the big stuff.

        At one point I had 15 odd ssh sessions going - try that with RDP to click on the wanky yellow shields and then wait for the reboot. I had an editor with a selection of command recipes to paste straight into the command line: update library, check for modules in RAM using lsof and restart services with older OpenSSL libraries open.

        I'm going to enjoy the weekend - you won't.

        1. RubberJohnny

          Re: Where have all the fanboyz gone

          I think the Windows boys don't see those yellow shields; they use the RDCMan tool to manage multiple RDP sessions, and I don't think 15 is considered a lot.

          And they use WSUS servers to automate the patching.

          They complained to me about the lack of opportunity to earn a bit extra at weekends.

          1. Anonymous Coward
            Anonymous Coward

            Re: Where have all the fanboyz gone

            Guess many also still haven't learned how to use Active Directory and group policies to automate patch installation, and to create groups of machines with different policies. Some critical servers are updated only after patches have been tested and approved; everything else is patched automatically - we'd rather have to pull a patch if some crappy app stops working than be compromised for lack of a patch.

    2. ItsNotMe
      FAIL

      Re: Where have all the fanboyz gone

      Hey numb-nuts...you may want to rethink that "Enjoy your patching over the weekend guys.. I'm off for beer and its looking like a sunny one"

      "Microsoft was largely unaffected by the Heart Bleed Bug. Microsoft Account and Microsoft Azure, along with most Microsoft Services and Windows’ implementation of SSL/TLS were not impacted.

      "The company said those who run Linux images in Azure Virtual Machines, or software which uses OpenSSL, could be vulnerable, however. The company is advising such customers to follow guidance from their software distribution provider. Corrective action guidance from U.S. Cert can be found here.

      http://www.sitepronews.com/2014/04/10/heart-bleed-bug-still-issue-cloud-services/

      Hope you still have a job on Monday. :-)

    3. Anonymous Coward
      Stop

      Re: Where have all the fanboyz gone

      What has Linux to do with OpenSSL?

    4. Anonymous Coward
      Anonymous Coward

      Re: Where have all the fanboyz gone

      RHEL 5 and 6 ship with OpenSSL 1.0.0 not 1.0.1 so my servers are not vulnerable. I have a few workstations with more recent distributions, but their daily updates included the patch for OpenSSL 1.0.1e. So I will be sleeping quite well this weekend thank you.

      1. phil dude
        Linux

        Re: Where have all the fanboyz gone

        yeah opensuse even backported (libopenssl1_0_0-1.0.1e-1.46.2.x86_64) to the old 12.2 release (which I am running due to the need to have a place to write this thesis...)

        P.

  9. Juan Inamillion

    This explains it

    As ever xkcd....

    http://tinyurl.com/odn4d

    1. vagabondo

      Re: This explains it

      Down vote for using an obfuscated link. Why would anyone want to click on a link without knowing where it would lead to?

      Perhaps you meant:

      http://xkcd.com/1354

      Or perhaps not -- I am not following your link.

      1. Anonymous Coward
        Anonymous Coward

        Re: This explains it

        Especially when the "shortened" link is longer than the one it refers to, LOL!

      2. steogede

        Re: This explains it

        > down vote for using obfuscated link...

        You clearly don't know that tinyurl.com allows you to preview links, otherwise you would have known his "shortening" was even more pointless - the link points to http://xkcd.com

    2. littlegreycat
      Trollface

      Re: This explains it

      Didn't you notice that the tiny URL was longer than the original?

      I mean, what kind of idiot posts a string longer than the original and full of random characters?

      Oh, wait..........

  10. Anonymous Coward
    Anonymous Coward

    Short-handed? Not bloody likely

    It simply doesn't ring true that they were as short of volunteers as they now claim. I work on a couple of open source projects which are more than twice as large in size (SLOC) and yet have user bases considerably smaller than OpenSSL (less than 1% if I had to guess). Those projects also use C and C++. We have plenty of volunteers and a core dev team which has ranged from a dozen to three dozen*. In this instance a core dev is considered to be someone sufficiently trusted and with the appropriate track record to be granted commit access. None of us receive a penny in donations or sponsorship, which leads me to my theory of why they are short of core devs. The fewer 'core' devs, the greater the share of the income ...

    * People come and go as their jobs and families allow.

    1. Ken Hagan Gold badge

      Re: Short-handed? Not bloody likely

      Perhaps working on cryptography software requires a particular (and rare) combination of skills. It's all very well pointing out that this bug is a novice error, but when it is buried within a lot of code where even fixing valgrind errors has catastrophic consequences, most of us are too aware of our own limitations to even step forward.

      1. Anonymous Coward
        Anonymous Coward

        Re: Short-handed? Not bloody likely

        There's no shortage of people who like a challenge, to prove themselves, and more importantly plenty of talented individuals who are considered experts in cryptography, mathematics and security. OpenSSL is one of those projects which PhD students love to write their doctorate on, or professionals would kill to have on their resume. Companies such as RedHat employ people full time just to assist with development of these critical infrastructure projects precisely because it's in their best interests. In fact many of them have probably already submitted patches and/or worked on closely related open source projects. No doubt had they been asked they would have happily joined the team at OpenSSL, but that's the problem, the core team have to extend an invitation before you can join their ranks.

        Therefore it's odd for the OpenSSL devs to complain that there are just four of them when that's something they seem to have arrived at by design. They've stuck a notice on the door saying "private, members only" then when the shit hits the fan they are saying "it's not our fault, no-one will help us" and "send us even more money".

        1. midcapwarrior

          Re: Short-handed? Not bloody likely

          "There's no shortage of people who like a challenge, to prove themselves,...."

          Many consider writing code to be a challenge.

          Far fewer consider code review/testing as a challenge.

          Good testers are worth their weight in gold because of this.

          1. Tom 13

            Re: Good testers are worth their weight in gold because of this.

            Actually, I'd say they're worth at least their weight in platinum. Occasionally even gem quality diamonds.

  11. GBE

    OpenSSL "blueprints"

    "[...] meaning that by making the blueprints public, flaws should be quickly spotted and fixed."

    Blueprints?

    Really?

    Afraid that readers of The Reg wouldn't know what "source code" means?

    1. Anonymous Coward
      Anonymous Coward

      Re: OpenSSL "blueprints"

      Source code ?

      Doesn't that come out of a bottle and you can put it on your chips ?

    2. diodesign (Written by Reg staff) Silver badge

      Re: OpenSSL "blueprints"

      It's an old writing habit from my tabloid days - avoid repetition, it improves your writing. So "blueprints" was used to avoid another use of source and/or code in the same sentence. That's all. I've written enough deep dives to expect Reg readers to get techy concepts.

      On that note, thanks for the article comments - good discussion all round.

      C.

    3. DropBear
      Trollface

      Re: OpenSSL "blueprints"

      Blueprints?

      Really?

      Indeed. Come on, everybody knows they're called "sketches"...!

      1. ElReg!comments!Pierre
        Coat

        Re: OpenSSL "blueprints"

        If only there were more volunteers willing to check OpenSSL's UML designs, all this wouldn't have happened.

  12. Anonymous Coward
    Anonymous Coward

    Not enough reviewers?

    Seems there is at least one too many contributor, too. According to the git commit it looks like he both wrote and reviewed the offending patch.

    Other cooks also contributed by deliberately neutering libc's buffer overrun prevention on malloc() by doing their own memory management for performance reasons for the sake of a few platforms where it made any difference.

    Fail of epic proportions.

  13. A Non e-mouse Silver badge

    More issues with OpenSSL

    The lack of proper review has other consequences with OpenSSL's code.

    Ted Unangst has two blog posts [1][2] about how OpenSSL's clever internal memory management code is actually hiding more bugs. It allows use-after-free. There are also parts of OpenSSL's code that bank on a free-then-malloc returning the exact same block of memory.

    When problems were found when OpenSSL wasn't used with its internal memory allocator (Four years ago!) the problems weren't fixed.[3]

    [1] www.tedunangst.com/flak/post/heartbleed-vs-mallocconf

    [2] www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse

    [3] rt.openssl.org/Ticket/Display.html?id=2167&user=guest&pass=guest

    1. Decade
      Facepalm

      Re: More issues with OpenSSL

      I'm going with the theory that the OpenSSL core developers don't deserve more volunteers. SSL is incredibly difficult to do right, and OpenSSL is badly written and badly maintained code. Fixing it is like throwing good money after bad. It should be replaced. The trouble is that it's hard to find another library to standardize on, and to be sure that it's correct.

      OpenSSL is written by monkeys.

      OpenSSL has exploit mitigation countermeasures to make sure it's exploitable

      The discussions also include anecdotes about how hard it is sometimes to get improvements into OpenSSL. But if OpenSSL improves drastically due to Google's involvement, then it may become a good idea to standardize on OpenSSL again.

      1. Michael Wojcik Silver badge

        Re: More issues with OpenSSL

        OpenSSL is badly written and badly maintained code.

        I have a lot of respect for Eric Young and Steve Henson and the rest, but I have to agree. Understanding cryptography and the SSL/TLS protocols and ASN.1 and the rest does not imply good software-development skills. I've spent a lot of time reading and debugging through the OpenSSL sources, and - while I've certainly seen worse - it's not good. Insufficient abstraction, poor choices in identifier names, insufficient comments, and then the infelicitous design decisions that others have noted.

        It should be replaced. The trouble is that it's hard to find another library to standardize on, and to be sure that it's correct.

        I don't think there's currently a suitable alternative. We've seen dumb bugs in GnuTLS (which has unsuitable license terms for many OpenSSL users anyway), RSA BSAFE (which costs a small fortune), and Apple's implementation in the past few months. Certicom's stack is closed-source and not cheap. Microsoft's is closed-source and only available on Windows. And so on.

        Part of the problem is that SSL/TLS is itself a godawful mess, and it employs other godawful messes like X.509 (which means ASN.1, a nasty mess in itself). Part of the problem is that cryptography is, obviously, central to SSL/TLS and hard to get right. And part of the problem is that security in general is very hard to get right, and very sensitive to flaws that would be minor in other contexts.

    2. diodesign (Written by Reg staff) Silver badge

      Re: More issues with OpenSSL

      Thanks for the links - I ran out of time and had other deadlines to hit to drop in Ted's comments. Worthwhile reading.

      C.

  14. John Smith 19 Gold badge
    Flame

    If "no one" is responsible for code review, guess what ...

    No one does it.

    FOSS only works properly if the users don't just sit back and stuff the latest release through the compiler.

    I'm especially looking at network hardware manufacturers, you lazy, greedy bunch of ba**ards who built a business on someone else's work and did f**k all to contribute.

  15. The Dude
    Pirate

    who do we sue?

    So... if this coding mistake does lead to someone being harmed, or robbed, or whatever... who gets sued?

    1. Shannon Jacobs
      Holmes

      Re: who do we sue?

      Tell it to Microsoft. This idea of no-liability software is probably their ONLY innovation.

  16. Anonymous Coward
    Anonymous Coward

    The Big Shops should pay....

    After all, they have the most to lose... They already pay bounties to bug hunters and security specialists to beat up their proprietary code, so why not expand that to include open source, especially crucial parts like SSL?

    As an aside... I thought it was interesting that Yahoo was one of the few bigger firms that was still exposed when the news broke. Whereas the Big-G + FB etc had already fixed their systems. So why were Yahoo the only ones left behind? When this happened before, it was because Google discovered the flaw and deliberately didn't tell the competition...

  17. Anonymous Coward
    Anonymous Coward

    Its mostly C ....

    and what else?

    1. M Gale

      Re: Its mostly C ....

      Probably compiler directives, looking at that one function. Ouch.

      1. Anonymous Coward
        Linux

        Re: Its mostly C ....

        Have a look at some of the other OpenSSL code - whilst it undoubtedly does what it is meant to do, there's no way in hell that it'd pass any decent code review.

        Admittedly, crypto is probably the sort of thing that's done by mathematicians rather than 'professional' developers but whoever is responsible for that monumental clusterfuck of preprocessor directives and eye-tearingly bad coding style needs taking aside, given a book on coding standards and a serious beating with a Clue Stick.

        Given how difficult it can be to get anything done with OpenSSL, I'm amazed it has gained as much traction as it has.

        If APIs could speak, OpenSSL would be screaming 'fuck you!'

        1. Ken Hagan Gold badge

          Re: Its mostly C ....

          "a serious beating with a Clue Stick"

          Would a Clue Fork do? Based on what I've learned in the last week, I wouldn't be surprised if OpenSSL wasn't the only game in town in twelve months time. They could start by fixing the bugs that prevent the use of the standard allocator.

  18. Swiss Anton
    Pirate

    So what's to stop the bad guys.

    I'm sure Mr Seggelmann is one of the good guys, but this apparent lack of scrutiny of open source must make the bad guys rub their hands with glee.

    Is there any vetting of those who maintain this stuff - no - because we rely on their code being reviewed by us. However if there is a reasonable chance that it isn't being reviewed, then if I were a bad guy, I might try my luck. (BTW - I'm not one of the bad guys, honest truth guv)

  19. Glen Turner 666

    I am paying for OpenSSL, via my Red Hat subscription

    Firstly, there are middlemen here -- Red Hat Inc, Novell and Google. I pay Red Hat for my Linux and Google for my phone software; in return they should be paying people to do whatever it takes to produce the software they are selling to me. If OpenSSL aren't getting a cheque from Red Hat/SuSE/Google then I have some questions...

    Secondly, there's the complexity of SSL/TLS itself. Whilst your article contacts the author (and kudos for that) I would be just as interested in an interview with the IETF chair of the area which published the specification in the first place. The small gain from allowing data in a response (to probe for MTU failures on non-TCP protocols) doesn't appear to me to justify the risk from a change to a security function. It's the chair's role to make that call.

    Thirdly, there's C. We desperately need a new systems programming language. We've written enough applications programming languages to know what works and what doesn't (Java, Python, Lua, etc) but those languages simply aren't deployable in the systems programming space.

    Finally, there seems to be a whole culture around security bugs which is simply broken. It's pretty much the task of the NSA to lead the response to this, and yet they seem to be the body most assumed to have known of the bug's existence but not to have told anyone. Not to mention that every bug is seen as an opportunity to sell stuff: create a consultancy, win a bug bounty, scare customers into buying products, scam the unwary, and so on.

    1. vagabondo

      Re: I am paying for OpenSSL, via my Red Hat subscription

      And your Red Hat Enterprise Linux is not affected by this vulnerability.

      If by Novell you meant Attachmate/SUSE, well the SLES and SLED distributions are also unaffected. Unless you have a Motorola phone, you have not paid Google for phone software. Your complaint should be directed to your phone supplier.

      With FOSS you have the choice. Accept it for no charge "as is" and take responsibility for yourself, or purchase support/management and expect your supplier to act responsibly.

    2. Anonymous Coward
      Anonymous Coward

      Re: I am paying for OpenSSL, via my Red Hat subscription

      Why do you believe Google based its Android/ChromeOS software on OSS - "free" - software? Because it wanted to steal your data in a way that was easy and fast; it wasn't going to invest in writing a new OS from scratch, which would have cost a good sum of the revenues it keeps stored on some paradise island.

      They will invest money if and only if it helps to move software away from a competitor they might have to pay - and if they find some free software they don't have to pay for at all, so much the better.

      And when you buy Android, all you buy is the right that your data will be stolen by Google and not someone else.

      1. M Gale

        Re: I am paying for OpenSSL, via my Red Hat subscription

        And when you buy Android, all you buy is the right that your data will be stolen by Google and not someone else.

        Which I suppose is better than paying through the nose to have your data "stolen" by Microsoft? You should be reading the news; their Scroogled campaign has now dropped from zero credibility down into negative integers.

        Everybody does it. If anything I'd call Google the more honest of the bunch, because at least they are up-front about the whole "WE ARE GOING TO ADVERTISE TO YOU" thing. I still haven't found any scandals involving Google engineers manually poring over Gmail contents to find an alleged leak, and the Google Maps wardriving incident was... really not an incident.

    3. Canecutter

      Re: I am paying for OpenSSL, via my Red Hat subscription

      "Thirdly, there's C. We desperately need a new systems programming language. We've written enough applications programming languages to know what works and what doesn't (Java, Python, Lua, etc) but those languages simply aren't deployable in the systems programming space."

      Want a language to use to replace C? Well how about the language that is most studiously ignored by the elites of software development, such as the ones responsible for OpenSSL? To what language do I refer? Why Oberon-2 of course! It IS a SYSTEMS programming language with which it is possible to implement the entire stack, from operating system all the way to the most sophisticated applications. The language has a clear, concise definition; it enables the programmer to _REASON_ about the artifacts he is generating, is type-safe, and does not rely on a preprocessor.

      Having said that, though, I really do not believe OpenSSL's problem stems from the use of C, per se. Instead it stems from the methods used to create its source code (reliance on the pre-processor, reliance on libraries that have not been proven secure, and an undisciplined style).

      The fact that the RFC in which TLS is defined is such a convoluted mess doesn't help much either.

      For a good, recent set of practical advice on avoiding the kinds of problems that produce heartbleed, check out "Mars Code" by Gerard J. Holzmann CACM DOI:10.1145/2560217.2560218 also available at:

      http://cacm.acm.org/magazines/2014/2/171689-mars-code/pdf.

      But then I just cut sugarcane for a living. What do I know.

  20. Yes Me Silver badge
    Facepalm

    Blame the programming language, not the programmer

    " Essentially, he forgot to check the size of a received message"

    No. The world took a wrong turn many years ago (in the early 1980s) and ignored the known fact that languages without strong typing, rigorously enforced at compile time, are dangerous. In particular they're subject to array overrun bugs. In the mistaken name of efficiency, we've been using sloppy languages ever since. It's perfectly possible to get efficient code out of a strongly-typed language where this class of bug is simply impossible; it's just more difficult to get your code through the compiler, because it won't let you do potentially dangerous things. This isn't a wakeup call for individual coders: it's a wakeup call for the whole industry to look again at the basics of systems programming languages.

    1. James O'Shea

      Re: Blame the programming language, not the programmer

      Somewhere upstream I suggested using Pascal, Ada, Modula-2 and (jokingly) FORTRAN instead of C. I got downvoted, possibly by some humorless git who doesn't like FORTRAN. Me, I have the ability to write an excellent FORTRAN program in any programming language I know. Even C, though that can be a challenge.

      Seriously, though, I've often thought that most people would be better served by a computing ecosystem (ugh, I hate that word) based around Modula-2 than around C.

      <exit, stage left, manfully resisting the urge to point out that C is Wirthless.>

    2. John Smith 19 Gold badge
      Unhappy

      @Yes Me

      "No. The world took a wrong turn many years ago (in the early 1980s) and ignored the known fact that languages without strong typing, rigorously enforced at compile time, are dangerous. In particular they're subject to array overrun bugs."

      I wonder if any one remembers C.A.R. Hoare's Turing lecture on the subject.

      Hoare worked for Elliott Brothers on their Algol compiler in the '60s, which proved popular back when assembler was still a common mainframe programming choice and efficiency was a very big issue. Development had been such a PITA that they had included array bounds checking by default. Once they got a working compiler they asked their customers if they'd want it switched off by default to save the performance hit.

      The customers said "no."

      The customers had worked out that what they gained on run speed they lost on developer debugging time.

      Of course processors are around a million times faster, as are main memories.

  21. Anonymous Coward
    Anonymous Coward

    Where is the buffer overrun?

    The fact that people are blaming the C language for this problem, when it is a problem of implementation rather than of the language, tells me their technical opinions aren't worth much - on programming languages, and possibly otherwise.

    If you disagree, prove me wrong. Show code implementing it the same way, and show how the language of your choice would prevent it. Also show how your language would handle it if this were the desired implementation, i.e. some kind of memory sniffing/diagnostic tool.

    1. Michael Wojcik Silver badge

      Re: Where is the buffer overrun?

      Show code implementing it the same way, and show how the language of your choice would prevent it.

      Have you ever used a language with array bounds checking? Your question appears to be based on complete ignorance.

      The Heartbleed bug works like this:

      1. Get padding length value from protocol buffer containing peer's message

      2. Copy padding-length bytes from protocol buffer containing peer's message to response message buffer

      In an environment with proper array management - whether that's a language that does it for the programmer, or an implementation on top of C's bare-bones arrays, which is what a sensible SSL/TLS implementation would use - either the protocol buffer would have been allocated or resized to the proper size (preferable), or it would be over-allocated to be at least long enough to allow the copy in part 2, and initialized. Those are the only two possibilities for an environment with proper array management.

      Then the copy would either fail (former case) or succeed but copy initialized, not reused, data (latter case). That's what would happen with, say, any JVM or .NET language. It's what would happen with any native language that implements array-boundary checks. It's what would happen if OpenSSL employed a trivial abstraction on top of protocol-buffer manipulation, as is commonly done with, for example, Wireshark dissectors (which are also written in C).

      Also show how your language would do it if this was the desired implementation i.e some kind of memory sniff/diagnostic tool.

      Now that's just dumb. You don't use a deliberate buffer overrun to inspect the contents of your process's own memory. If it's necessary to do that, you use a facility explicitly for that purpose, such as a debugging API or (depending on your needs) the appropriate mechanism for copying data to a byte array, or creating an alias representation.

      It's true that there are cases in C where the "trailing array" hack:

          struct extensible_data {
              ...
              unsigned char trailer[1]; /* may be allocated larger */
          };

      arguably makes a deliberate buffer overrun useful, though opinion is divided. But those cases are at best few and far between, and in any event that doesn't apply here.

  22. Shannon Jacobs
    Holmes

    It's the funding model, stupid!

    I've said this before, so I guess I'm wasting my time saying it again, but bad software with a good financial model wins. Look at Microsoft, Google, and Apple, just to limit it to three especially egregious examples.

    My suggestion is to fund OSS with 'charity shares' where the project will have a PLAN, a BUDGET, and sufficient TESTING. Dare I say it? There should be success criteria so the donors will know if their money went to a good cause.

    Why should small donors (like me) be treated with perfect contempt? Because the financial model stinks, that's why.

    In a twisted way, you can mostly blame Microsoft again. The key to their EVIL financial model is that no matter what happens from their most awful software, there isn't any financial liability on Microsoft. That's the only part of the financial model that applies to OSS, and look how it worked out this time.

  23. Anonymous Coward
    Anonymous Coward

    Dumb design?

    Why have a variable length heartbeat payload in the first place?

    1. Michael Wojcik Silver badge

      Re: Dumb design?

      Why have a variable length heartbeat payload in the first place?

      Read the RFC. It's for Path MTU determination, particularly for DTLS.

  24. John Deeb

    health check?

    What about a general FOSS project health check? For all core projects, insist perhaps on a certain minimum number of developers and reviewers, with some properly documented review processes? Perhaps this is just about having some standards, even when it's free and volunteer work. This is not about creating more overhead but about learning from mistakes and underlying causes in all the practices and workflows. It hardly seems an isolated incident - how many other important libraries are minimally maintained and reviewed for similar reasons?

    1. Ken Hagan Gold badge

      Re: health check?

      I'm sure that FOSS developers all over the world will be asking themselves what they can learn from this, but since it is all volunteer work there is no authority or paymaster who could perform such a review or enforce such standards.

  25. Anonymous Coward
    Anonymous Coward

    What would Lottie Dexter do now?

    Her year of code can surely fix this one.

  26. Anonymous Coward
    Anonymous Coward

    Yeah, let's dump open source and watch the interweb implode without it. If only every single line of code was written by MS. We can dream.

  27. Tim Rodger

    We can't accept untested code any longer

    It's easy to criticise after the flaws have been publicised, but the one thing that I can tell from the patch that led to this bug is that there were no tests either changed or added at that time, or at least that's how it appears to an outsider. See http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=4817504d069b4c5082161b02a22116ad75f822b1

    I can't comment on how extensive the existing test suite is, nor how often it is run, but it should be clear that manual code reviews alone cannot possibly verify code quality to the standards required for this or any other software project.

    Given the small size of the team developing OpenSSL (not to mention the product's importance), the need for a full and reliable test suite cannot be neglected. The project's maintainers and reviewers must insist that sufficient passing, automated tests are provided by developers before their code is even accepted for review. There is no other realistic means to ensure that new code doesn't break existing functionality, introduce new bugs, etc. whilst maintaining a viable development velocity.

    Likewise, teams using FOSS should insist on this when choosing software. In this way we can leverage open source's promises to our advantage. Not only can I read the code, but I can see that the tests have passed and continue to pass when patches are made. Using testing services like Travis CI gives even more visibility to this process, as I don't even need to download the code, compile it and run the tests; I can just look at the results.

  28. John Smith 19 Gold badge
    Unhappy

    Let's note a few things

    Internet protocols are complex

    They run at low levels, so VMs (and the languages that rely on them) are not a good idea.

    They should have minimal impact on what the user is doing (i.e. be "efficient" in terms of resources).

    Having said that, my instinct would be to a) write it as a state machine in one of the available tools, b) profile the code to determine where (if?) it's too slow/big, and c) hand-tweak the code carefully (re-measuring to make sure my "fixes" didn't make it go slower instead).

    I personally think that anyone who believes they know where their code is going to run slow is wrong, and IIRC everything I've read about code tuning agrees. There are probable candidates, but no more than that.

    Without re-starting the "language wars" I think there are probably better languages out there than C/C++ but it's about critical mass. C/C++ has it. In an ideal world the development community would have had some kind of (global) competition and chosen the best system development language.

    Yeah. But in this universe things work a little differently.

    To recast the old line in development terms: "I wouldn't start by learning C if I wanted to develop reliable, secure internet software." Some say the compilers for the BLISS (BCPL-like) language produced the most optimized code ever, partly because BLISS has no goto statement - but who programs in BLISS?

    Changing languages is not an option, at least for this generation of software.

    OTOH C/C++ does have security and warning features in most compilers if you use them and don't ignore them.

  29. Madboater

    Every Business that uses OpenSSL

    must share a proportion of the Blame for letting this one slip through. The blame must be proportional to how many £'s they make while using it without contributing to its development.

  30. david 12 Silver badge

    WTF generic security software FAIL

    >silently siphon passwords, crypto-keys, and other sensitive information from vulnerable systems.

    >We rebooted the server at 3:08PST, which may have contributed to the key being available in memory, but we can’t be certain."

    I'm not familiar with the OS or the applications, but isn't there a secure memory API like (on the Windows side) "SecureString" or "SecurePassword", "CryptProtectMemory", or "SecureZeroMemory"?

    So that you don't leave passwords, crypto-keys and other sensitive information in memory for generic memory-recovery attacks to harvest?

    1. Anonymous Coward
      Anonymous Coward

      Re: WTF generic security software FAIL

      Not on Linux, no. It's not yet as sophisticated as the Windows security model in many ways. For example, Linux only recently got proper ACLs with the arrival of NFS4.1 and requires bolt ons like 'SEL' to try and approach the lockdowns that are built into Windows from the ground up.

      1. h4rm0ny

        Re: WTF generic security software FAIL

        *sigh* Two instinctive downvotes, and no-one able to actually reply and show it is wrong. Windows does have a more sophisticated security model than GNU/Linux from Vista onwards. I'm familiar with both. That's not to say that Windows is overall more secure than GNU/Linux, but its model is more capable. Those who downvoted should ask themselves if they can actually justify their disagreement or if they're just voting out of preference.

        1. Vic

          Re: WTF generic security software FAIL

          > I'm familiar with both.

          I don't think you are...

          I can't tell you why you got two downvotes, but I can tell you I was thinking of downvoting you myself. Your assertions about Linux security are incorrect, and parrot the same Internet memes we keep hearing from those who aren't as familiar as they think they are.

          Vic.

          1. david 12 Silver badge

            Re: WTF generic security software FAIL

            Downvote, because you made that assertion without even attempting to demonstrate that you had any kind of knowledge about the question I asked. I don't like to downvote you, but I'd like to encourage you to do better. If you know anything about Linux security, what can you tell us?

            1. Vic

              Re: WTF generic security software FAIL

              Downvote, because you made that assertion without even attempting to demonstrate that you had any kind of knowledge about the question I asked.

              And if you'd bothered to follow the threading, you'll see I was not responding to your post, just refuting an unsubstantiated claim from someone else.

              I don't like to downvote you, but I'd like to encourage you to do better

              I'm not going to downvote you - but you should probably expect some downvotes from someone...

              Perhaps you could be encouraged to try better to follow the thread of conversation?

              Vic.

              1. david 12 Silver badge

                Re: WTF generic security software FAIL

                C'mon Vic...

                I asked a question. I got a reply "Windows security is better". You replied to the reply with: "Same to you, with knobs on it"

                So, I've followed the thread down, do you have anything to contribute?

                1. This post has been deleted by its author

              2. h4rm0ny

                Re: WTF generic security software FAIL

                >>"And if you'd bothered to follow the threading, you'll see I was not responding to your post, just refuting an unsubstantiated claim from someone else."

                "Refuting" is where you prove someone wrong. All you did was make an ad hominem post claiming I didn't know what I was talking about and threw in some snide remarks about me "parroting" things. That's not refuting, that's just stating I'm wrong because you say so. Anyway, I've seen your post now and responded, so consider that "claim" substantiated.

          2. h4rm0ny

            Re: WTF generic security software FAIL

            >> I'm familiar with both.

            >I don't think you are...

            Well then I'll demonstrate otherwise. I'm more familiar with GNU/Linux as that is what my development career has always been based on, but I'm familiar enough with Windows to make informed comparisons. Anyway, I'm always happy to back up what I say and so:

            This is what I wrote: "Windows does have a more sophisticated security model than GNU/Linux from Vista onwards. I'm familiar with both. That's not to say that Windows is overall more secure than GNU/Linux, but its model is more capable"

            The core of the security model on GNU/Linux is the user/group/world permissions system. It defines read, write and execute permissions, with a few later refinements such as the setuid bit. It's a security model that paints with extremely large strokes and results in lots of workarounds. Want to make a group a member of another group? Enter the workarounds (and occasional sed scripts to update multiple groups). I can't count the number of times I've had to call a sysadmin because they have forgotten the list of groups a particular developer needs access to and missed something. If you have something you want all the members of a group to read, but only two of the members of that group to write to, enter the bolt-ons and workarounds.

            Let's review the ACL system in Windows. Firstly, ACLs are actual lists, and they are not exclusive. Objects can have multiple owners, be in multiple groups, all with the same or different access privileges. ACEs (Access Control Entries) are sophisticated structures containing a host of privilege types. On Linux you have 'read, write, execute' (I'm kind of rolling setuid under execute, as it's really a fudge to deal with shortcomings in the model elsewhere). On Windows, as well as 'write', you get 'append' and 'delete'. You get whether attributes can be altered or read, whether extended attributes can be read or altered, whether ownership of the file can be taken... As I wrote: a lot more capable. And they're all really simple to use - which is an important point. It's no more difficult to set a file to be appendable than it is to make it writeable. On Windows, you tend to use what is appropriate, not what is easy. And this is all available by default. You can even manage it through the GUI without any real training if you want to. Just click on a file, then Properties, then the advanced permissions, and you'll see a list of checkboxes for these privileges.

            So we already have two major ways in which the Windows security model is more capable (my exact words in my OP) than GNU/Linux. These are the non-exclusive nature of access - as many owners and groups as you wish; and the far more sophisticated privileges available. Some process is writing to a log file or adding data? Give it append, not write. Want to give someone write access to a directory, but not delete from there? No problem.

            Let's continue, because there's quite a lot more. (Though I consider both my point and the fact that I'm familiar with Windows models demonstrated).

            Windows ACLs tie into both local users (by which I include services, etc) and directly into Active Directory. These are hierarchical. If you want to make one group a member of another group, just add it. This is a major advantage when it comes to administrating permissions on a system or network. If all members of the Printers group should be members of the Hardware group, or all members of the Secretaries group be members of the HR group, just add them. Then any amendments you make to the sub-group membership trickle up. Any changes to the parent group permissions, trickle down (unless you tell them not to, which you can do). It's all very intuitive to anyone with a programming background.

            Just an addendum on the tying into Active Directory: you can even distinguish between login types. Are they on the local box, or did they come in over the network? You can make use of this if you want. For example, the AD set-up can handle VPN access and you can tie ACLs to accounts in AD.

            What else? Well, you can apply everything I've listed so far not only to files and directories, but to any object with a security descriptor such as named pipes, processes, registry keys. It's nice and it introduces consistency in approach across a wide range of Windows functionality, which is good for developers and admins alike.

            Hmmmm, ACLs are inheritable. I won't go into that as I'm not a Windows admin, but obviously when you've developed sophisticated security controls, it's nice to be able to have them trickle down automatically. All this is a long way from creating something in a directory on GNU/Linux and having it copy on creation the rwx/rwx/rwx settings from its parent.

            The ACLs in Windows also have built-in auditing. If you want to set a log of access granted or access denied on a securable object (pipe, file, directory, whatever), you can just build that right into the ACE. It doesn't matter what user or process tries to access that object; it will be logged if you so wish. Want to record any denied access to a given directory or process? Easily done. Auditing is an inherent part of the Windows security model.

            So I feel I've long since demonstrated good reason to state that the security model in Windows is more capable than that of GNU/Linux. Note, I'll re-emphasize what I wrote in my original post which you replied to - this is not to say that either Windows or Linux are necessarily more secure than the other, but simply that Windows has the more capable model.

            Now let me anticipate a couple of possible attempts at shooting this down, if you are the sort of person who does not like to be called out on their false accusations (making an ad hominem argument of my not being familiar with these systems when you don't even know me). Firstly, how important is the permissions system to discussing "security models"? Well, obviously pretty core. The core, really. Any attempt to dismiss the advantage Windows has with its ACLs over GNU/Linux's default permissions system as not being relevant to which security model is most capable is absurd. This is a fundamental aspect - THE fundamental aspect when comparing models, in fact.

            Secondly, what about SELinux and ACLs on GNU/Linux? Well, first off, hardly anyone uses these. I think more people use SELinux than ACLs, but anyway, the former doesn't really make GNU/Linux more capable (which is what I said); it helps lock it down. It's good, but it's not equivalent. ACLs on Linux are used even less (I have clients that use SELinux who have never even considered using ACLs on GNU/Linux); they're optional, their implementation across different distros and file system types is fragmentary and inconsistent (are you using ACLs on ZFS? Great - that's different to on Ext3), and above all else they are rudimentary. You can add access to files and directories for non-owning users not in the owning group, and you can add a couple of basic additional permission types such as list contents and append data. It's limited in scope, it has next to no enterprise support or real management tools, and it's all but impenetrable and downright painful to work with.

            This is how you copy an entire ACE (access control entry) from one securable object to another (which, I remind you, on Windows can be anything from a registry entry to a directory) using PowerShell:

            PS C:\> Get-Acl C:\LogFileA.txt | Set-Acl -Path C:\LogFileB.txt

            Aside from the ugly Windows standard of using slashes that lean the wrong way, that's beautiful. Any Linux developer here ought to be able to read that and understand it right now. Can you say the reverse would be true?

            So there you have it. Windows has a more capable security model than GNU/Linux.

            Now as to your rather insulting and ad hominem reply to me: "and parrot the same Internet memes we keep hearing from those that aren't as familiar as they think they are."

            I'm not familiar with any "Internet memes" about how Windows has a more capable security model than Linux. Indeed, what I hear repeatedly on these forums is people parroting that Linux is inherently more secure than Windows. Something that has not been true since Windows. Now perhaps you would like to apologize for your accusatory and belittling post? I would like you to do so.

            1. h4rm0ny

              Re: WTF generic security software FAIL

              Typo: The sentence near the end should read:

              that Linux is inherently more secure than Windows. Something that has not been true since Windows Vista.

            2. Paul Crawford Silver badge
              Trollface

              Re: ACLs & OS willy-waving

              I thought I might as well come out from under my bridge to weigh in on this:

              In the beginning there was no Windows security at all, and BillyG said Lo! Make it so we don't suck! Thus Dave Cutler was employed to design a worthy OS and, being who he is, it had to be non-UNIX in every aspect, presumably due to some nasty experience at the hands of some UNIX admins at a student party or similar.

              Thus he created NT, and we saw it was good and multi-platform. Anything and everything had an ACL for security, and computer scientists around the world marvelled at how complex one could make a machine's permissions. Alas, it did not last, because those in MS's demonic marketing department decided that it had to be compatible with some legacy stuff based upon the old single-user, non-networked model of security; and speed was poor, and thus the video subsystem, and other stuff, was thrust into the ring 0 code that once was pure kernel. Then it became x86 only, until very recently when the bastard child WinRT was created.

              And darkness descended upon the Windows ecosystem as software was allowed free rein by default to do things it should not, and the tenderest parts of the user's nether regions became the favourite lunch of malware writers the world over.

              Meanwhile the old UNIX/Linux model chugged along on the basis of multi-user systems with a crude, but effective, set of permissions that were enforced by default, leading to far less trouble.

              And so, children, the lesson here is analogous to the tortoise and the hare: Windows should have been the pinnacle of security, but was let down by pesky users not knowing or caring how to use ACLs, and by the time it became a problem so much legacy software was doing it all wrong. Given you need to use a tool simply to find out what ACLs are in use, it is hardly surprising.

              Linux is indeed less sophisticated by default, but as its basic segregation of admin & user has always been enforced, software for it always played well that way, thus basic security has always "just worked".

              For ACLs on Linux you can copy this way:

              getfacl file1 | setfacl --set-file=- file2

              And yes, ACLs on Linux are not completely consistent across different file systems, but how consistent are Windows ACLs across file systems? Oh yes - they're NTFS only...

              1. h4rm0ny
                Pint

                Re: ACLs & OS willy-waving

                LOL. Genuine chuckles at that. And I don't disagree with it. The willy-waving wasn't what I intended (and I don't have a willy, for all the relevance that has, so I'd have to wave someone else's), and I was quite explicit in stating that I wasn't saying one OS or the other was more secure. (A badly maintained instance of either is worse than one run by someone who knows what they're doing.) But I was called on it and pretty much declared a liar by Vic, so I felt inclined to show the ways in which Windows has a more capable security model.

                Anyway, I don't disagree with anything you wrote; I'll just observe that most of your post was about how inferior Windows security used to be, and that's what I'm really getting at. I become tired of all these people who haven't updated their knowledge in nearly a decade spouting dangerous nonsense such as GNU/Linux being inherently more secure than Windows. The fact that Windows has a far more capable permissions system isn't meant to be a dig at GNU/Linux; it's more a wake-up call to the ignorant.

                Oh, and btw:

                >>"Oh yes, it is only NTFS..."

                *cough* ReFS. *cough* ;)

                1. Paul Crawford Silver badge

                  Re: ACLs & OS willy-waving

                  Oh I would not worry about a lack of a willy, as in decades of engineering work I have never needed to use mine in a professional capacity. Also I think you will find that waving lady-bits around will trump any willy-based competition!

                  Sorry for omitting ReFS; it's just that I have not seen it actually used yet. And it is also Windows-only!

                  1. h4rm0ny

                    Re: ACLs & OS willy-waving

                    >>"Oh I would not worry about a lack of a willy, as in decades of engineering work I have never needed to use mine in a professional capacity. Also I think you will find that waving lady-bits around will trump any willy-based competition!"

                    That tends to lose you your female friends and partners pretty quickly. And no, lecherous office males are not a substitute!!!

                    >>"Sorry for omitting ReFS, just I have not seen that actually used yet. And it is also Windows-only!"

                    It is. It's also designed more for the enterprise than the home, though that's not necessarily a negative given the context of this discussion, and there's nothing to stop someone using it at home if they choose. But it is a second file system that Windows can use its ACLs on, and I think it's going to be quite good. MS had a bunch of catching up to do with ZFS, imo, though that's not really my area.

  31. Anonymous Coward
    Anonymous Coward

    Noone Looks at Old Code

    Part of the problem is complacency with old packages. There's an assumption that code that has been running for years "must" be bug-free, so less effort goes into checking it.

    1. DropBear
      Devil

      Re: Noone Looks at Old Code

      Part of the problem is complacency with old packages

      Possibly so, but I squarely blame this one on Not Being Paranoid Enough: a classic case of the developer implicitly trusting other parties to cooperate / play along nicely (or, put another way, trusting data to be valid). And that's always, always, always an idiotic thing to do. Not assuming that every single piece of software and every other system you interact with is out to get you is simply irresponsible coding. Whether or not they actually are (and if so, why) is beside the point - assuming the worst will make your code much more robust and resilient, even if possibly somewhat less efficient; but I find that a small price to pay.
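The paranoia DropBear describes boils down to one rule: never trust a length field the peer sent you. A minimal sketch of the check Heartbleed was missing - hypothetical function and parameter names, not the actual OpenSSL code:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical heartbeat handler sketch -- NOT the real OpenSSL code.
 * 'claimed' is the payload length the peer wrote into the message;
 * 'received' is how many payload bytes actually arrived on the wire. */
static int copy_heartbeat_payload(unsigned char *out, size_t out_size,
                                  const unsigned char *payload,
                                  size_t claimed, size_t received)
{
    /* The Heartbleed bug was, in essence, trusting 'claimed' and copying
     * that many bytes, reading far past the end of the real payload.
     * The fix: refuse any request whose claimed length exceeds what
     * actually arrived (or what the output buffer can hold). */
    if (claimed > received || claimed > out_size)
        return -1;              /* malformed request: drop it silently */
    memcpy(out, payload, claimed);
    return 0;
}
```

The efficiency cost of the extra comparison is, as DropBear says, a small price to pay.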

    2. Alan Brown Silver badge

      Re: Noone Looks at Old Code

      Yup and that assumption is spot on. I get told time and again that "old code is secure, why change it?" from people who should know better.

      This, the recent X11 stuff, and other bits underscore the point I repeatedly make to wilfully deaf ears: virtually all old (legacy) code has never been run through a static code analyser, let alone properly audited. No one bothers to do it because they all assume someone else already has - an assumption proven horribly wrong by the X11 bug.

  32. Michael Wojcik Silver badge

    Not all OpenSSL 1.0.1x systems are "at risk"

    Any machine ... that uses OpenSSL 1.0.1 to 1.0.1f for secure connections is at risk, thanks to the Heartbleed bug.

    Not technically correct. OpenSSL can be compiled with Heartbeat or TLS support disabled, and TLS support can be disabled programmatically. It's likely true that the vast majority of systems running OpenSSL 1.0.1[cdef] are at risk, but it's also likely that at least some are not because they were built with a smaller feature set (typically to reduce footprint).

    And, of course, since the bug was announced, many people have simply rebuilt OpenSSL 1.0.1x with Heartbeat disabled, so while those systems are still running OpenSSL built from vulnerable source, they are not themselves vulnerable.
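For reference, the stop-gap Michael describes was the one documented in the OpenSSL security advisory at the time: rebuild with the heartbeat extension compiled out. Roughly (build steps are illustrative; adjust for your platform and install prefix):

```shell
# Rebuild OpenSSL 1.0.1x with the Heartbeat extension compiled out.
# -DOPENSSL_NO_HEARTBEATS is the switch the April 2014 advisory
# recommended for those who could not upgrade to 1.0.1g immediately.
./config -DOPENSSL_NO_HEARTBEATS
make
make install
```

A server built this way still reports a "vulnerable" version string, which is exactly why version-sniffing checkers mislead.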

    This is another reason not to trust those "am I vulnerable?" web sites, by the way. Only a test that actually tries a malformed Heartbeat message is reliable.
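What such a live test actually sends is tiny: a heartbeat request whose claimed payload length is larger than the payload it carries. A sketch of the on-the-wire bytes (the TLS 1.1 version field here is one arbitrary choice; real probes match the negotiated version):

```c
#include <stddef.h>

/* A malformed TLS heartbeat request, as a probe would send it after the
 * handshake: the record header says only 3 bytes follow, but the
 * heartbeat message inside claims a 0x4000-byte payload it never
 * supplies. A vulnerable server echoes back ~16KB of process memory. */
static const unsigned char malformed_heartbeat[] = {
    0x18,        /* TLS record type: heartbeat (24) */
    0x03, 0x02,  /* protocol version: TLS 1.1 */
    0x00, 0x03,  /* record length: 3 bytes of payload follow */
    0x01,        /* heartbeat message type: request */
    0x40, 0x00   /* claimed payload length: 0x4000 -- no payload follows */
};
```

A patched or heartbeat-disabled server drops this silently; only a server that answers with a large heartbeat response is actually vulnerable.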

  33. Stevie

    Bah!

    Heartbleed highlights a pretty important flaw in the Open Source philosophy that "enough eyes make all bugs shallow", as outlined in the article: for many, the audience for Open Source products equates in some fashion to benevolent eyes on the code, and that is simply not true, especially for the not-so-cool bags-o-code packages that form the building blocks of the digital world these days.

    It would seem that only two pairs of benevolent eyes were on this particular fragment of OpenSSL at the time it was important for the perceived truth to be actual truth. Hey, I freely admit that looking through endless lines of tedious code is not my idea of a good time, which makes me part of the problem.

    But it, along with the cries for funding, suggests that if you pay people to look at the code, you have more chance (but not a 100% chance, people being people) that it will actually get done.

    Big surprise.

    I take it as a giant sign of the maturity of the Open Source lobby that no-one has cried "what do you expect? It's free!" from the bleachers, and that says good things about the user base.

    Ah well. He that works makes many mistakes. He that does no work makes no mistakes. He that makes no mistakes gets richly rewarded and promoted, while he that makes many mistakes gets called vile names. That's life.

  34. Version 1.0 Silver badge

    And we're surprised?

    I've thought that SSL was vulnerable for a while and mentioned this some time ago. Now it's time to fix it ... but almost any encryption is vulnerable if you throw enough resources at it.

    But while we're all panicking - it's worth noting that a hell of a lot of servers are not vulnerable but nobody's discussing that.

  35. HaroldR

    The problem is testing, not coding

    A static code analyzer would have found this problem easily. The real problem is TESTING, not coding. Commercial vendors can afford high quality software testing tools. Open source developers usually don't have these resources, so they're reduced to error-prone manual code review. Good software ain't cheap. Open source software is worth every penny you pay for it.

    1. david 12 Silver badge

      Re: The problem is testing, not coding

      >Commercial vendors can afford high quality software testing tools. Open source developers usually don't have these resources,

      Coverity (whose "testing solutions are built on award-winning static analysis technology") was doing free scanning for security-related open-source projects. I would have thought OpenSSL qualified.

    2. Alan Brown Silver badge

      Re: The problem is testing, not coding

      There are a _huge_ number of OSS static code analysers out there, and LLVM's one gives any commercial offering a run for its money.

      The biggest problem is getting people interested in actually doing it. (It was the LLVM analyser which pointed out the X11 bugs.)

  36. grumpy-old-person

    Why are we beating up the volunteers?

    Volunteers have brought us masses of excellent quality open-source software.

    Will there be bugs? Yes! Why? Mistakes, of course, but also because of the hardware on which we run software.

    Astonishing that the hysteria does not occur each month when Microsoft releases the usual slew of patches. Or when Adobe has yet more "interesting" vulnerabilities. Or when Java is found to have terrible bugs.

    All done by companies with huge financial and people resources.

    The comments about companies using open source in products they sell for profit are right on!

    But what we overlook is that the hardware we use is still stuck in the distant past. Something like fitting a V8 motor to a Morris Minor - goes like hell in a straight line but cannot corner safely and is impossible to stop in any distance vaguely approaching safe.

    If hardware enforced the rules (like buffer size) we would simply avoid all this pain.

    Check out IBM's SWARD machine, designed and built when hardware was nothing like as powerful as it is today. I expect there are going to be comments about capabilities and other mechanisms being imperfect - true, but MUCH better than what we use now.

    1. M Gale

      Re: Why are we beating up the volunteers?

      Astonishing that the hysteria does not occur each month when Microsoft releases the usual slew of patches. Or when Adobe has yet more "interesting" vulnerabilities. Or when Java is found to have terrible bugs.

      "Dog bites man" is not news. "Man bites dog" is news.

This topic is closed for new posts.
