Security mandates aim to shore up shattered SSL system

A consortium of companies has published a set of security practices they want all web authentication authorities to follow for their secure sockets layer certificates to be trusted by browsers and other software. The baseline requirements (PDF), published this week by the Certification Authority/Browser Forum, are designed to …


This topic is closed for new posts.
  1. Anonymous Coward

    Same sort of fig leaf...

    ... as that provided by the payments industry in the form of the "PCI" requirements. It doesn't fix the underlying problems one bit; it just provides some paper and glue for the crumbling walls. That's the essence of what "due diligence" is all about: don't do anything overly stupid, and hey, we've got the papers to spell out that these "best practices" we've come up with have been decided to be "industry standard". Holey instant teflon coat, batman!

  2. Anonymous Coward

    Utter Rubbish

    Dan sounds cross that he couldn't bully large organizations into short-circuiting their approval processes for his deadline.

    Convergence and Google's proposals are steps in the right direction. They could be useful tools in the security armory.

    Dan (and the poster above) are taking easy pot-shots. "It's all broken", "We're all doomed", etc. It might pass for journalism, but no-one has even started gathering the requirements for what *exactly* we should be doing instead of using PKI with international commercial public CAs.

    1. BlueGreen

      Not sure your first line is quite fair

      however "A Mozilla official, meanwhile, said only that the requirements would be discussed among developers in online forums" is the right step before committing (or declining to), and as an added bonus the debate will be transparent, so we'll know why they did or didn't go for it. Any org that doesn't think hard about every major issue, well, it's going to cost them big.

      MS is doing the same of course; their response is just "we're considering it" written by marketeers.

  3. Charles 9 Silver badge

    Who do you trust?

    Because that in the end is what the Secure Web is all about: having the right connections to say this is who they say they are and NOT someone else trying to pose as him/her/it. But even as you get someone to vouch for him, the next immediate question becomes, "Who vouches for the voucher?" You quickly get into a "Quis custodiet ipsos custodes?" problem. But I suspect there will ALWAYS be risk, simply because in order for e-Commerce to function, we have to place at least some trust in total strangers.

  4. Anonymous Coward

    Please explain

    "Under the current SSL system, CAs get to log each visit an IP address makes to an HTTPS page protected by one of their certificates."

    Have browsers started dynamically checking for revoked certificates? Or is this something else?

  5. PyLETS

    Some combination of DNSSEC, DANE and Convergence needed.

    Once you have chosen a domain name and have invested in this as a brand it's difficult to change. So the domain registrar and TLD operator both have guaranteed continued business - a bit like the current CAs which have not lost customers despite many being incompetent.

    Even so, DNSSEC/DANE for issuing web server and email certificates suggests some improvement over the current CA system, in the sense that it provides better partitioning. But some of the registrars and TLDs (which under DNSSEC become the new CAs) still handle/control very large partitions of the Internet name space. So DNSSEC on its own is no magic bullet, e.g. if a registrar or TLD goes bad or is told to do bad things by its government. Hence the need for something like Convergence, where an additional check on the authenticity of a domain certificate becomes possible. But I suspect this would work faster using DNSSEC itself as the Convergence low-level transport, compared to Moxie Marlinspike's RESTful transport proposal for additional certificate checks based on alternate, user-selected trust-anchor notaries.
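    The DANE-style check described above can be sketched as a toy model: the domain owner publishes a hash of the server certificate in a (DNSSEC-signed) DNS record, and the client compares it against the certificate the server presents. A minimal Python sketch of the idea; the certificate bytes are hypothetical stand-ins for a real DER blob, and real TLSA records carry usage/selector/matching-type fields this ignores:

```python
import hashlib

def tlsa_matches(cert_der: bytes, record_hash_hex: str) -> bool:
    """Toy DANE-style check: compare the SHA-256 of the presented
    certificate against the hash published (and DNSSEC-signed) in DNS."""
    return hashlib.sha256(cert_der).hexdigest() == record_hash_hex

# Hypothetical certificate bytes standing in for a real DER-encoded cert.
cert = b"example-der-certificate"
record = hashlib.sha256(cert).hexdigest()  # what the domain owner publishes

print(tlsa_matches(cert, record))            # True: cert matches the record
print(tlsa_matches(b"forged-cert", record))  # False: a mis-issued cert fails
```

    The point of the partitioning argument is visible even in the toy: a rogue CA's certificate fails the check unless the attacker also controls the victim's DNS zone (or its registrar/TLD).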

  6. Vic

    Who fact-checks these articles?

    > Under the current SSL system, CAs get to log each visit an IP address

    > makes to an HTTPS page protected by one of their certificates.

    Where on earth did you get that idea from?

    The site certificate is provided by the web server to the client. The client checks its authenticity against its stored CA certificates.

    The CA gets no traffic from this.
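    The validation path Vic describes can be sketched as a toy model: the client walks the issuer chain the server sent until it reaches a root it already holds locally, with no round-trip to the CA. All names here are hypothetical, and real validation also checks signatures, dates, and extensions:

```python
# Toy model of client-side chain validation: walk subject -> issuer
# links until a locally stored root is reached. No CA is contacted.
TRUSTED_ROOTS = {"ExampleRoot CA"}          # shipped with the browser/OS

CHAIN = {                                   # subject -> issuer, as presented
    "www.example.com": "ExampleIntermediate CA",
    "ExampleIntermediate CA": "ExampleRoot CA",
}

def chain_to_trusted_root(subject: str) -> bool:
    seen = set()
    while subject not in TRUSTED_ROOTS:
        if subject in seen or subject not in CHAIN:
            return False    # loop, or chain ends before any trusted root
        seen.add(subject)
        subject = CHAIN[subject]
    return True

print(chain_to_trusted_root("www.example.com"))   # True
print(chain_to_trusted_root("evil.example.net"))  # False
```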


    1. Dan Goodin (Written by Reg staff)

      Re: Who fact-checks these articles?


      When a web user visits an SSL-protected page, most browsers will check to see if the certificate has been revoked. This database is maintained by the CA that issued the certificate. The CA gets to see the IP address of the person whose browser performs the check.

      This ability was underscored during the investigation into the DigiNotar breach. The investigators were able to determine that more than 300,000 people, mostly in Iran, encountered the fraudulently issued GMail certificate.

      I hope this answers the question you and several other readers have raised.

      1. Vic

        > When a web user visits an SSL-protected page, most browsers will check

        > the see if the certificate has been revoked.

        *Some* browsers *may* check to see if the certificate has been revoked. Not everyone actually uses OCSP.

        But OCSP is heavily-cached (and needs to be).

        So when you said "CAs get to log each visit an IP address makes to an HTTPS page protected by one of their certificates", you were incorrect.

        Should a CA log OCSP requests, it would get one log entry per IP per renewal period. This is very different from what you suggested.

        HTH, HAND, etc.
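        The "one log entry per IP per renewal period" point can be sketched with a toy cache model: the client re-queries the OCSP responder only after its cached response expires, so the CA sees one query per validity window rather than one per HTTPS request. The 1-week window is illustrative, matching the intervals discussed below:

```python
# Toy OCSP-caching simulation: the responder (CA) is queried only
# when the cached response's validity window has lapsed.
WINDOW = 7 * 24 * 3600      # illustrative 1-week thisUpdate..nextUpdate gap

responder_queries = 0       # what the CA could log
cache_expires = -1          # no cached response yet

def visit(now: int) -> None:
    global responder_queries, cache_expires
    if now >= cache_expires:            # cached response expired: ask the CA
        responder_queries += 1
        cache_expires = now + WINDOW
    # otherwise the cached "good" response is reused; the CA sees nothing

for hour in range(0, 14 * 24):          # one page visit per hour, two weeks
    visit(hour * 3600)

print(responder_queries)                # 2: one query per 1-week window
```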


        1. Dan Goodin (Written by Reg staff)

          Slight clarification

          Fair enough. Caching means not *every* HTTPS request is logged. I've updated the article to reflect this.

          Additionally, I'm adding the following response from security researcher Moxie Marlinspike, who continues to argue that under the current system, CAs "have a tremendous amount of insight into your browsing history."

          His response in full is:

          It's true that the OCSP check isn't done with *every* HTTPS request to a site, because the response is cached in the browser for a short time.

          It's more like the CA is notified once per "session". If you think about a typical visit to PayPal, it might involve several requests to the website in order to send or receive a payment. The CA will typically only be notified for the first request, not all of the subsequent ones within a session. For an average user, you can think of it in terms of a CA knowing how many times they sent or received a payment via the PayPal website, but not how many clicks it took them to do it.

          In any case, the notion that CAs have a tremendous amount of insight into your browsing history is substantially true.

          - moxie

          1. Vic

            > Caching means not *every* HTTPS request is logged

            There's more to it than that.

            Firstly, many browsers don't use OCSP at all. According to Wikipedia[1], IE only supports it from IE7 on Vista (not XP), and Safari has the protocol disabled by default until Lion.

            For those requests that are sent, the response may well have the nextUpdate field set; this does not enforce caching, but does set the parameters for that cache. One site I looked at, for example, has a 1-week difference between thisUpdate and nextUpdate, so a browser would be behaving correctly if it only checked the OCSP status once a week. Wikipedia has the same 1-week interval.

            > the response is cached in the browser for a short time.

            The response may be cached in the browser for a *long* time.

            > If you think about a typical visit to PayPal,

            PayPal also has a 1-week interval between thisUpdate and nextUpdate.

            > The CA will typically only be notified for the first request, not all of the subsequent ones

            "Notified" is putting it a bit strongly; OCSP only asks whether or not a given certificate is still valid. It says nothing about whether or not the site was actually visited, nor what URL was visited. And with these 1-week intervals we keep seeing, it is entirely possible that the responder actually gets very infrequent status requests. This is browser-dependent.


            [1] No, I can't be arsed to check.

  7. Destroy All Monsters Silver badge

    What the hell am I reading?

    "Under the current SSL system, CAs get to log each visit an IP address makes to an HTTPS page protected by one of their certificates."


    1. Anonymous Coward


      That is actually a good thing and no more of a problem than webmasters being able to store your IP address every time you visit their website.

      CRLs (certificate revocation lists) can be distributed in many ways; one of the most common is HTTP, in other words making the list available through a website. Since most clients will check CRLs to see if the certificate currently in use is still valid, you already have the environment described above.

      Hardly an issue IMO, since those CRLs are actually a key asset to safety. The whole DigiNotar debacle? It could have been easily thwarted with the use of CRLs: the very moment a CA notices that something is amiss, it can immediately revoke the signed certificate, rendering it totally invalid.

      It's actually a major advantage that the X.509 structure holds over, for example, the GPG environment. Someone can revoke their GPG key, but that does not make it immediately known to the world.
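      The client-side half of the CRL mechanism described above boils down to a membership test: fetch the CA's signed list of revoked serial numbers and refuse any certificate that appears on it. A toy Python sketch with hypothetical serials (a real CRL is a signed DER structure fetched from the CA's distribution point):

```python
# Toy CRL check: the CA publishes revoked serial numbers; the client
# refuses any certificate whose serial appears on the list.
revoked_serials = {0x1A2B, 0x9F31}   # hypothetical CRL contents

def certificate_usable(serial: int) -> bool:
    """Return True only if the certificate's serial is not revoked."""
    return serial not in revoked_serials

print(certificate_usable(0x0042))  # True: not on the list
print(certificate_usable(0x1A2B))  # False: revoked, as after DigiNotar
```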

  8. Anonymous Coward

    Rules aren't the problem

    The way they're enforced and monitored is the whole issue here.

    Take DigiNotar, for example. Investigations revealed that they didn't even use an up-to-date virus scanner on their computers, that the OS on some of them was badly outdated, and rumour has it that illegal software was also spotted in the company.

    What good are rules and regulations when you're dealing with companies like these? They're paid a lot of cash for their services and provide a more than miserable service in return. The big problem, obviously, is that customers are not aware of all the problems inside the company.

    It's not really an SSL/X.509 problem either; the technology is pretty straightforward and doesn't differ that much from other public-key structures such as GPG. The main difference is having one signer to verify authenticity versus several signers. Yet the more people you put your trust in, the more likely it is that something can go wrong.

    Solution? Same as with GPG, in my opinion: we shouldn't frown upon using self-signed certificates anymore.

    Think about it; what is the difference? Say I download a piece of software which has been signed. A readme tells me that I can download the author's public key from <keyserver> in order to verify the code.

    I can do the exact same thing using X509. Either by using the self-signed certificate to set this up or by creating sub-layers (using the self-signed certificate to sign yet another certificate which verifies the code).
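    The GPG-style workflow the poster describes amounts to pinning: you obtain the author's self-signed certificate (or its fingerprint) out-of-band, then verify that what you are handed matches it, with no CA involved. A toy Python sketch; the byte strings are hypothetical stand-ins for real certificate data:

```python
import hashlib

# Toy fingerprint pinning: the README publishes the SHA-256 fingerprint
# of the author's self-signed certificate; the downloader checks the
# certificate they actually received against that pinned value.
PINNED = hashlib.sha256(b"authors-self-signed-cert-der").hexdigest()

def verify_received_cert(cert_der: bytes) -> bool:
    return hashlib.sha256(cert_der).hexdigest() == PINNED

print(verify_received_cert(b"authors-self-signed-cert-der"))  # True
print(verify_received_cert(b"mitm-substituted-cert"))         # False
```

    The design trade-off is the one raised further down the thread: this only works if the pinned fingerprint itself reached you untampered, which is exactly the job the CA system tries to outsource.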

    Quite frankly I don't see the problem here.

    The only 'advantage' we have when using "acknowledged certificate authorities" is the knowledge that these guys paid heavily to be allowed to put their certificates into operating systems such as Windows and OS X, and into browsers like MSIE (though it uses the Windows repository) as well as Firefox, Safari and others.

    Since when is having big pockets a sign that you can put trust in an organization?

    1. Ken Hagan Gold badge

      Re: big pockets

      "Since when is having big pockets a sign that you can put trust in an organization?"

      Well you can sue them (or someone else in the chain) if something goes wrong.

      That's not tongue in cheek, by the way. For an organisation with deep pockets, the threat of being caught in the wrong and sued shirtless is generally sufficient to ensure that they at least *try* to get things right. Similarly, for an organisation with small pockets, this threat is generally sufficient to ensure that they have appropriate levels of insurance and *that* in turn means that they've had to convince someone *else* that they aren't completely clueless.

      In contrast, a self-signed certificate is worthless. There's nothing to stop anyone downloading the tools and knocking up a self-signed cert for their favourite bank. If *you* receive a self-signed cert and *independently* verify that it is authentic, then yes, you can trust it, but you've just done the job of a CA. If you've done it properly, it probably took you some time, and the vast majority of the computer-using public prefer to outsource boring jobs like that.

      The present system is far from perfect, but it is trying to solve a real problem.

    2. Anonymous Coward

      The problem:

      Basically, any yoyo could set up a fake site and a certificate for it that "proves" you're really talking to your bank, down to the right colour in the address bar and a nice "YourBank, plc" tag replacing the hostname. That is ostensibly why browsers whine so much about self-signed certificates.

      To solve that problem, x509 relies on a hierarchical structure, neatly reducing the problem from millions of unknowns to only about 650 (and counting). The upside here is that you can tell your users that because the other side's certificate is signed by one of the certificates in the root CA list in the browser it must therefore be "safe", neatly forgetting to explain just what that means. (It means you'll be protected from whomever all those 650 browser-blessed root CAs refuse to take money from, provided none of them fsck up.)

      PGP/GPG by contrast would require every tom, dick and harry to understand about the web-of-trust thing, set up a key, go to signing parties, and so on. The web of trust model doesn't rely on a large collection of single points of failure, excuse me, root CAs, but it's a model that only a nerd might understand (even so not all GPG-using nerds really do) and only a cryptonerd could really love. There's a reason Johnny (still) can't encrypt.

      Neither is the most brilliant solution to the problem. The x509 model, however, is much more commercialisable than the web-of-trust model; you can have companies take your money and harass you with requirements and give you a certificate back. After that most if not all your users' browsers shut up about untrusted certificates. For a year or so.

      So you should see that self-signed certificates are a problem in the sense that they don't guarantee anything whatsoever --unless you know beforehand what signature to expect, you don't even know you're not the subject of a MITM attack-- but also that neither web-of-trust nor root-CA trust anchoring is a very good general solution. And rules locking down some aspects of the root-CA rigmarole are just so much fig leaf on a broken system.

      Personally, I think we'd be better off going to our nearest bank branch and getting confirmation from the manager that $KEY is what they're currently using to secure their banking websites. How would you do that? Maybe a business card with a signature on it, possibly the entire key in a 2D barcode on the back. Or maybe a smart card containing the certificate, who knows.

      Currently, though, no browser really supports user-blessed certificates, never mind in a usable way. It's either browser vendor-blessed or OS vendor-blessed (that's micros~1; it seems like the same thing, but the CA store is part of the OS, not of the browser, and it'll wipe out your "don't use these CAs please" choices behind your back at the next scheduled system update or CA store check too).

      That wouldn't solve the general case, but if the certificate system supported both models interchangeably, you could have a master key for the bank with current-use certificates for things like websites and email signing, and you could have trust signatures where your bank would certify the certificates of merchants or merchant organisations they do business with. And that sort of thing needn't be limited to banks, of course. Since certificates would no longer be limited to one CA's blessing, we'd have more or less done away with the root-CA problem of being forced to trust them all lest some things stop working, even though that prevents us from distrusting some other things the same CA blesses. Now for a much better way of letting individual users choose which CAs to trust.

      So I think that neither model alone would do, and neither would disregarding all endorsements, as you'd do with self-signed certificates. In fact, even such a flexible system, while a massive improvement over the two incompatible systems we currently have, won't be enough. There are some more features we'd need, like selectively revealing who signed what, and other privacy measures. But that's well beyond the scope of this latest attempt at patching up a thoroughly broken system.

      At the end of the day, though, we'll find that the modeling is the easy part. The hard part is doing something useful with the models. For a long time now we've ignored that last bit, and so it's no surprise that it's increasingly coming apart at the seams.

      1. Anonymous Coward

        That is exactly the problem...

        Why use an encryption protocol for identity verification in the first place?

        It was meant for data encryption, not verification per se; all that stuff was added afterwards. Not to mention that many CAs push out certificates without many checks going on anyway. For example, how do you expect an American CA to know what a Dutch KvK paper (a "Kamer van Koophandel uittreksel", or Chamber of Commerce extract) looks like?

        That flaw by itself is already enough not to put that amount of trust in X.509 for identification purposes, yet the browser market heavily insists we do so. Money prevails once again.

        IMO they should stop blocking self-signed certificates the way they do now. Just make it clear that the connection is encrypted but that the identity cannot be verified. Done.

    3. jonathanb Silver badge

      The problem is that customers just want a padlock on the browser as cheaply as possible, and choosing a more secure certificate authority doesn't make your site any more secure from man-in-the-middle attacks. An attacker could have got a DigiNotar certificate for your domain regardless of which CA you chose.

  9. pdw

    Shurely some mistake

    > Under the current SSL system, CAs get to log each visit an IP address makes to an HTTPS page protected by one of their certificates.


    The client knows the root cert (installed in O/S, browser, whatever) and the server has a series of signed certs linking back to it. There is no connection to the CA.

  10. David Hicks

    Not really SSL is it?

    We're talking about the infrastructure of trust, certificates and authorities. This is not really part of the SSL/TLS protocols.

    Like so many ways that SSL has apparently been broken of late, we're talking about exploits somewhere else that affect the preconditions for starting a secure session.

    1. Anonymous Coward


      Except for the minor detail that without those bits there really is very little point left in encrypting any longer, yes.

      1. David Hicks

        Err, were it not for the fact that SSL is used in a far wider context than initiating 'secure' HTTP connections with previously unknown parties, you'd have a point.

        SSL/TLS is used for a lot of private comms between systems using private CAs, where this stuff is not an issue.

  11. Daniel Palmer

    Another reg article about "SSL" being broken..

    when it still isn't broken. Dodgy CAs != SSL broken.. how long will it take the reg to grasp that?

    1. auburnman

      SSL *System* is broken

      The SSL system is more than the internet protocols - If the entire system is not doing what it's supposed to then it is broken. In this case agents in the system which have to be trustworthy for the whole shebang to work have been shown to not be trustworthy.

      It sounds like you have interpreted the headline as El Reg claiming that the SSL protocols have been cracked, which is not what they are doing.

      1. David Hicks

        Not the only SSL System

        SSL/TLS is widely used in many circumstances other than the public CA infrastructure. There are many, many in-house systems in the world: private CAs used to secure comms within or between companies for non-public data transfer.

        *An* SSL system is broken. Perhaps you could even say the public HTTPS system is broken. But not SSL or TLS themselves.

      2. Daniel Palmer

        1. SSL/TLS are protocols that sit at particular layers of the OSI model; they are not "more than protocols". CAs are not SSL: they are trust providers. You can have SSL without trusting CAs. If you know the other party, you can verify their public keys directly; you don't need a CA to do that.

        2. SSL/TLS are not "broken" which is what every article on the register about hacked CAs suggests.

        There is still no way for a third party to intercept and decode an SSL/TLS session without exploiting one of the parties.

        3. CAs (which are not SSL) that dish out trust for money have been hacked, and shown to be useless at providing the top level of the chain of trust required to validate certificates. You can validate certs manually, face to face; people do this for PGP keys.

  12. Larry Frank

    Pretty weak as a standard

    If this is as weak as they are prepared to go in UPGRADING the security requirement, I am afraid that we haven't seen the last of the hacks and flurry of activity related to poorly operated CA/PKI systems... Just a few thoughts: US Federal policy mandated that PKIs stop issuing and relying on 1024-bit RSA in 2008. While not all comply, it would seem sensible for the new standard to have set something higher than the current (fairly weak) level of security represented by RSA 1024. While they are at it, how about standards for hash algorithms? Hashing wasn't mentioned, and there are still CAs out there who use MD5 (maybe none of this crowd?). SHA-1 is rapidly becoming less trustworthy; again, US Federal requirements are pushing to SHA-2.

    Worse, I noticed NO requirement for the strength of authentication by the RA to the CA. Wasn't the Comodo attack down to a password the hacker found online? Wouldn't it be a good idea for a PKI to use PKI to protect itself from fraudulent approval of a certificate request by an RA? I saw nothing requiring that CAs have multi-party controls for administration. At the core of the DigiNotar hack was the architecture of the CA enclave, which allowed the hacker to get into a related system and become an admin. Really too little, too late...
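    The kind of upgraded requirement being argued for here amounts to a simple policy gate on key sizes and signature hashes. A hedged Python sketch; the thresholds come from the comment above (reject 1024-bit RSA, MD5 and SHA-1), not from the actual Baseline Requirements text:

```python
# Toy certificate-policy gate reflecting the commenter's proposed
# minimums: no 1024-bit RSA keys, no MD5 or SHA-1 signatures.
MIN_RSA_BITS = 2048
BANNED_HASHES = {"md5", "sha1"}

def cert_meets_policy(rsa_bits: int, sig_hash: str) -> bool:
    """Return True only if both the key size and hash pass the policy."""
    return rsa_bits >= MIN_RSA_BITS and sig_hash.lower() not in BANNED_HASHES

print(cert_meets_policy(2048, "sha256"))  # True
print(cert_meets_policy(1024, "sha256"))  # False: weak key
print(cert_meets_policy(2048, "MD5"))     # False: broken hash
```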

