Apple Safari, Mail and more hit by SSL spying bug on OS X, fix 'soon'

Apple has admitted a bug in Mac OS X 10.9.1 allows hackers to intercept and decrypt SSL-encrypted network connections – and has promised to release a fix "very soon." Sensitive information, such as bank card numbers and account passwords, sent over HTTPS, IMAPS and other SSL-protected channels from vulnerable Mac computers …

COMMENTS

This topic is closed for new posts.
  1. Hud Dunlap
    FAIL

    This doesn't make sense

    Normally, everything is kept quiet until the fix is out. Because they did the iOS fix first, everyone got clued in that there could be a problem in OS X. Now that is confirmed.

    I can see hackers burning the midnight oil to take advantage of this before it is fixed.

    Why didn't they wait and fix both at the same time?

    1. Cliff

      Re: This doesn't make sense

      Presumably the cat was already out of the bag, so there was nothing to be gained by keeping from those at risk a secret that the attackers already knew?

    2. Anonymous Coward
      Anonymous Coward

      Re: This doesn't make sense

      Hubris

    3. Anonymous Coward
      Anonymous Coward

      Re: This doesn't make sense

      Well, the fix is already out for iPads and iPhones - hours rather than days or months, it seems. Sensible to get the fixes out as soon as possible.

      1. Phil Endecott

        Re: This doesn't make sense

        No, not hours - the CVE number was reserved in early January.

  2. Merton Campbell Crockett

    Using Safari 7.0.1 on OS X 10.9.1, I am unable to reach "https://gotofail.com". Safari reports that it is unable to establish a secure connection to the web site.

    I can use Firefox 27.0.1 running under OS X 10.9.1 to access the site. The problem is with the X.509 server certificate being used. The Common Name (CN) in the certificate indicates that it is only valid for a host named "gotofail.com"; however, a reverse DNS query for 184.173.139.237 returns the host name "ec2-184-73-139-237.compute-1.amazonaws.com".

    Clearly, the host names differ and the X.509 server certificate is not valid for the host to which the connection is established.

    I've found similar problems with web sites using an improperly constructed wildcard name as the CN. The CN *.familysearch.org is used in a server certificate for a host named "familysearch.org". That certificate is valid for any host in the familysearch.org domain, but not for the bare domain name itself. It would have worked had the PTR record returned a name of "www.familysearch.org".

    If you can establish a secure Safari connection to "https://gotofail.com", your system is misconfigured.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Merton

      FWIW Safari 7.0.1 using the default config on a Reg Mac running 10.9.1 can reach gotofail.com, and is flagged up as insecure. I included the link in the article because it's a simple test. YMMV.

      C.

    2. Fuzz

      that's not how it works

      PTR records have nothing to do with SSL certificates. I'd bet most websites either have PTR records that don't match the A record or have no PTR record at all.

      The certificate for https://gotofail.com is perfectly valid, so that site should load correctly on any computer. What the site tries to do is load a PNG from another server/vhost hosted on a different port at the same domain. That server has a deliberately broken certificate, so the image won't load if your browser works properly. The site then uses a bit of JavaScript to show/hide an element to tell you whether your browser is vulnerable or not.

  3. Anonymous Coward
    Anonymous Coward

    National Security Weakening Agency

    Sept 2012 - iOS 6.0 released

    Oct 2012 - NSA PRISM slides date the inclusion of Apple devices

    Regardless of whether the NSA had anything to do with the creation of this exploit, they have claimed in their slides that they can monitor iOS traffic 100%

    If this bug was their method of interception, the NSA has purposely held back the security of all iOS users. Protecting the state at the expense of users seems a very strange way to operate.

    1. Anonymous Coward
      Anonymous Coward

      Re: National Security Weakening Agency

      Umm, not *every* bug can be laid at the feet of the NSA.

      Of course, you're welcome to do that anyway (it's not like they don't deserve it) :)

      1. Joshua's Memory

        Re: National Security Weakening Agency

        It need not be the case that the NSA made the bug; it might be like Stuxnet, where an existing bug in Siemens equipment was taken advantage of. In that case, the vulnerability was actually patched before the virus went out, and active steps were taken to prevent Iran from getting its hands on the latest versions.

        It is not unreasonable to think that the NSA and other agencies are actively looking for flaws that might be exploited, even if they are not actively introducing them. It would then be up to the agencies involved to decide whether their interests are best served by informing vendors, or by keeping mum in case they might be able to use the vulnerability themselves.

  4. Wintermute

    Test-Driven Development

    So with this bug, we see that Apple either does not use TDD at all or lacked a test case for a mandatory part of establishing an SSL connection.

    With the proper tests in place, this bug would have been squashed before it was ever released.

    1. John H Woods Silver badge

      Re: Test-Driven Development

      Actually I can understand the lack of a test case for this - as you'd have to write the exploit as the test.

      What I can't understand is the following:

      1) why the compiler, or code coverage tool, didn't flag the final check as unreachable code, or - if it did - why didn't anyone notice how important the unreachable code was?

      2) why it wasn't noticed on visual inspection - even with the apparent failure of correct indentation (which should have been automatic), surely it's reasonably clear there's a duplicated line?

      3) why the programmer chose to write the code in this way? I'm not a programmer any more, as apparently I'm 'too expensive' and am now only allowed to 'create' using MS-Office products --- but this stuff is rubbish, who writes it? For a start, the 'fail' label is simply misleading, as this code is concerned with resource release. There are at least three ways of writing this method more elegantly. I might have used non-evaluating conjunctions or nested if statements, but I can guarantee that whatever I did, the return code would never have been set to success before all the conditions had been met. In fact, even using reasonable logging in this code would almost have forced the author to write it the correct way, as each failure condition would have to be tested:

      if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)

      would probably have to be something like

      err = SSLHashSHA1.update(&hashCtx, &signedParams);

      if (err != 0) { logger.log("failed at signed parameter check"); goto fail; }

      In other words, the principal failure here is conflation of the test with the assignment. We do not have to hand-optimize code any more, and I don't know why programmers even try: the most important thing is correctness, then comprehensibility. Everything else - compactness, hell, even performance - comes a very long way behind those two attributes.
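      For anyone who hasn't seen it, the shape of the Apple bug can be reproduced in a few lines. This is just a compilable sketch; the stub functions are hypothetical stand-ins for the SecureTransport hash and verify calls, not the real code:

```c
/* Hypothetical stand-ins for the real SecureTransport hash/verify calls. */
static int hash_update_ok(void) { return 0; }   /* hashing succeeds */
static int raw_verify_bad(void) { return -1; }  /* the signature is bogus */

/* Mirrors the control flow of Apple's SSLVerifySignedServerKeyExchange. */
static int verify_buggy(void)
{
    int err;
    if ((err = hash_update_ok()) != 0)
        goto fail;
        goto fail;              /* the duplicated line: always taken */
    err = raw_verify_bad();     /* unreachable: the signature check never runs */
fail:
    return err;                 /* still 0 from the hash step, i.e. "success" */
}
```

      The bad signature "verifies" because err still holds the 0 returned by the last hash step. Worth noting for point 1 above: -Wunreachable-code is not enabled by -Wall, which may be part of why nothing flagged it.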

      1. Adam 1

        Re: Test-Driven Development

        But Shirley you could use mocks to fake that response code? You wouldn't have to write the whole exploit, just a test that used a mock returning a valid certificate and an assertion on the error code or expected exception or however they flag it.
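        Roughly what that mock-based test could look like, assuming (hypothetically - the real code takes its dependencies directly) that the verify step were injectable:

```c
/* Hypothetical: imagine the routine took its signature check as a parameter,
   so a test could substitute a mock instead of crafting a real exploit. */
static int verify_with(int (*raw_verify)(void))
{
    int err;
    if ((err = 0) != 0)         /* hashing steps elided; assume they succeed */
        goto fail;
        goto fail;              /* the stray line under test */
    err = raw_verify();         /* never reached */
fail:
    return err;
}

/* The mock: a signature check that always reports failure. */
static int mock_bad_signature(void) { return -1; }
```

        A unit test would then assert that verify_with(mock_bad_signature) returns non-zero, and fail loudly here, because the buggy control flow returns 0 regardless of the mock.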

      2. Richard 12 Silver badge
        Facepalm

        Re: Test-Driven Development

        Yes, the code style is rather odd.

        I "assume failure" at the start of this kind of thing and only set 'success' if all steps succeed. There are lots of ways to arrange that, depending on how much detail you'd like about the failure, and no reason ever to directly return the result of any one step.

        I'd always assumed everybody else did the same. Perhaps I'm just pessimistic.
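        For what it's worth, that "assume failure" arrangement is easy to sketch; the step functions here are hypothetical stand-ins:

```c
#include <stdbool.h>

/* Hypothetical verification steps. */
static bool hash_ok(void)      { return true; }
static bool signature_ok(void) { return false; }  /* the check that matters */

/* Assume failure up front; 'ok' only becomes true if every step passes. */
static int verify_pessimistic(void)
{
    bool ok = false;
    if (hash_ok() && signature_ok())
        ok = true;
    return ok ? 0 : -1;   /* a stray goto can't conjure success here */
}
```

        The short-circuit && also gives you the same early-exit behaviour the gotos were after, without any chance of reporting success by accident.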

        1. Anonymous Coward
          Anonymous Coward

          Re: Test-Driven Development

          The code style is indeed odd.

          It has exactly the right oddness to allow this bug to look like an innocent mistake.

          How odd is that?

      3. Phil Endecott

        Re: Test-Driven Development

        Although I would not expect a compiler to warn about unreachable code there (or only to do so in a mode that also produced many false positives), I would expect both static analysis tools (e.g. Coverity) and code coverage tools to report a problem. Although I wouldn't expect such tools to be used on all code, I would expect an SSL implementation to be exactly where you apply them first.

        I would also love to see a lint-like tool that would spot the wrong indentation.

        1. Chris T Almighty

          Re: Test-Driven Development

          "I would also love to see a lint-like tool that would spot the wrong indentation."

          Or just do what Visual Studio does and have an auto-format.

      4. Anonymous Coward
        Anonymous Coward

        Re: Test-Driven Development

        The real coding failure is not having mandatory curly braces for every if.

        Curly braces for every if is part of the basics of secure coding standards and this bug is exhibit A.
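        To illustrate with hypothetical stubs: with mandatory braces, the same accidental duplication becomes harmless, because the extra line stays inside the conditional block:

```c
/* Hypothetical stand-ins for the hash and signature-check calls. */
static int step_hash_ok(void) { return 0; }
static int step_sig_bad(void) { return -1; }

static int verify_braced(void)
{
    int err;
    if ((err = step_hash_ok()) != 0) {
        goto fail;
        goto fail;            /* duplicated by accident, but now inert */
    }
    err = step_sig_bad();     /* still runs: the bad signature is rejected */
fail:
    return err;
}
```

        The hash step succeeds, the braces keep both gotos on the failure path, and the signature check still executes, so the function correctly returns an error.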

        1. Mike 16

          Re: Test-Driven Development

          My personal code standards (which I enforced dictatorially when "lead" of a massive three-person group at a telecoms company) mandate braces on all ifs. But the Linux kernel coding standards _forbid_ them for "single statements". They also mandate placing the statement, indented, on the line after the if(), thus almost guaranteeing the occasional "deception by indent".

          Lest the Linux hordes pile on me as a MSFT shill, the particular bug would have been caught by a -Wunreachable or equivalent, but when I was (briefly) doing Windows development, I found that it was rarely possible to get a "clean" (no warning) compile from Visual C if I turned on many warnings, because the system-provided headers were full of dubious constructs.

          The woodpeckers are winning.

    2. Anonymous Coward
      Joke

      Re: Test-Driven Development

      Because they're not Microsoft, they never make mistakes; their code is safe by definition because it's a *nix OS... you don't have to dirty your hands with proper code and testing, that's only for Windows developers...

      1. Frank Bough

        Re: Test-Driven Development

        Who hurt you?

  5. fnusnu

    Does this affect versions earlier than 10.9?

    1. Dan 55 Silver badge

      You can check with the link in the article (gotofail.com).

      FWIW, I'm on 10.8.5 and it's okay with both Safari 6.0.5 and Firefox 27.0.1. However if I then go to howsmyssl.com the results are better in Firefox than Safari although that's to be expected as Firefox doesn't depend on the OS for certificates or encryption.

    2. diodesign (Written by Reg staff) Silver badge

      Re: Does this affect versions earlier than 10.9?

      No. If you're running 10.8 or lower, you're good. The buggy code was introduced in OS X Mavericks.

      C.

  6. Anonymous Coward
    Anonymous Coward

    Trivial error + code release = negligence

    It is a trivial programming error, but evidence of negligent software development.

    Every intro to programming book explains that "goto" is bad. Lint probably caught the error, but no one noticed amongst its other complaints - if they ran lint at all. Decent unit testing would have caught it. Any respectable functional testing would have caught it. And that's just four off the top of my head.

    And this is security code, not in a performance hot spot. It is supposedly handled with maximum care. So what now is a responsible estimate of the quality of software development at Apple in general?

  7. Anonymous Coward
    Anonymous Coward

    Fine in Chrome

    On 10.9 you can avoid this issue by using Chrome instead of Safari. Of course, the TLS bug affects any app that uses the OS library - curl, for example.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fine in Chrome

      Or IE!

  8. RTNavy
    Trollface

    Turn of Events

    What was funny to watch was the response by the Doctors and others with Apple tech (at my Hospital). You would have thought the world was going to end and that their toys were going to steal all their credit cards and online banking.

    Hard to explain a "man in the middle" attack to people who felt they were "invulnerable" to anything like this.

    Just try to show an Apple fanboi that there are updates like these nearly every month for something on iOS or OS X (or at least for the applications running on top of those devices), and you would think the Doctor was trying to give me a stroke with his mind.

    Welcome to tech reality.

  9. Jim Wilkinson

    Glad I didn't "upgrade" to 10.9

    I dumped 10.9 the day I upgraded. Went back to 10.8 straight away. The reason - sync services were removed so I had to use a "cloud" account to sync my personal and business address books and calendars. No thanks Apple, I lost my faith in you that day.

  10. Joshua's Memory

    FWIW, I have tested fully patched versions of Safari on fully patched 10.8, 10.7, 10.6 and 10.5 using gotofail.com and all have passed. It looks like this specific vulnerability is Mavericks only. So shouldn't affect anyone who takes security seriously, since Mavericks is still in beta testing ;)

    Also, a recent report from another forum suggests that the vulnerability is still present in the latest developer release of 10.9.2.

  11. RPF

    Now patched by 10.9.2
