Google researchers propose fix for ailing SSL system

Security researchers from Google have proposed an overhaul to improve the security of the Secure Sockets Layer encryption protocol that millions of websites use to protect communications against eavesdropping and counterfeiting. The changes are designed to fix a structural flaw that allows any one of the more than 600 bodies …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    typical google thinking

    This paper is half-assed. It describes a protocol but not the system. Well, guys, turns out that it's the systems that crumble.

    It further suggests two terrible, guaranteed-to-fail ways to deploy it:

    1. "let's have one big centralised system", and

    2. "let's trust the CAs".

    Well, (1) that's one of the main reasons that Wave failed, guys; and (2) is the current problem. There is some very sloppy thinking in the paper claiming it would work around this, but it actually wouldn't if the CA is compromised and selectively withholds certs from the list. I can guarantee you that every browser and smartphone will continue to allow this scenario simply for backwards compatibility during the inevitable decades-long ramp-up period.

    oh, and

    3. "Something else but we couldn't think of it".

    The ONLY way to do this and leave control in the hands of certificate users, where it belongs, is with federated certificate verification. See: DANE (since we can do this NOW), and the meta-protocol for cert trust verification, Convergence.

    Thanks for trying guys but consider your paper extremely negatively peer reviewed for proposing a system with so many fragile assumptions about human and corporate behaviour.

    1. Chris 3
      Alien

      Sounded authoritative until I got to...

      "Well, (1) that's one of the main reasons that Wave failed, guys; "

      No, Wave was designed from the very start as a federated service. It failed for many reasons, but not because it was based on a centralised system. Google's Wave server was intended to be the first of many which hosted discussions in a federated manner.

      1. Anonymous Coward
        Anonymous Coward

        "No, Wave was designed from the very start as a federated service."

        I don't agree. They pitched it as such on the back of XMPP, but "designed"? It was neither designed nor built as a federated system. The most generous revision I can offer is "federation was a minor design goal of Wave that was ignored to their great detriment".

        1. corrodedmonkee

          No, he's completely right: Wave was designed to be federated, with companies running their own Wave servers, all of which could communicate with each other. The design goal was to 'be the new email'.

          Did they not also say in the article that any certificate not on the list would be completely ignored?

          1. Anonymous Coward
            Anonymous Coward

            You seem to be writing "designed" where a more accurate assessment would be "half-written draft protocol with no working implementation".

            Ignoring certificates because they're Not On The List takes us back to problems #1 and #2.

            Keep drinking that Mountain View kool-aid.

  2. Pete Spicer

    The difference between this and Convergence is that this seems like it can be rolled out incrementally across browsers and systems, as opposed to Convergence, which, from what I've read, pretty much demands that everything be upgraded at once.

    The one problem I see with this is ultimately who has control over the log(s). It seems like we're creating another entity that must implicitly be trusted, but second-signing is going to be better than the current approach, I guess.

  3. Anonymous Coward
    Anonymous Coward

    Half

    "It would require every CA being complicit in changing the way they operate, as well as every browser and every webserver"

    Surely only more than half need to agree, as the remainder will be forced to follow.

  4. Kevin McMurtrie Silver badge
    Big Brother

    Google to the rescue

    Google proposes another technical solution that can only be implemented by Google hosting massive amounts of data for customers to interact with in revealing ways. This one is even better than Google's white-space WiFi solution, where all access points must query a database (Google & friends) for unused frequencies using their GPS coordinates.

  5. Daniel B.

    Um...

    This basically would create a CA's CA. Another SPOF, and one that would be like OCSP: good if you have it, useless if you don't.

    It would be much easier for CAs to use real secure stuff like FIPS 140-2 Level 3 compliant HSMs requiring Operator Cards/Tokens to actually *sign* the stupid certs. Sure it's slower, but at least it means that some shady hacker won't get his rogue cert signed without physical access to the CA's site. Make some CA standard requiring this and blacklist non-conforming CA entities!

    The other option is simply to use the Convergence alternative.

  6. Anonymous Coward
    Anonymous Coward

    the black light on the black background?

    "all certificate authorities would be required to publish the cryptographic details of every website certificate to a publicly accessible log that's been cryptographically signed to guarantee its accuracy."

    ?

    So if you subvert the CA into issuing a dodgy cert, you would then add your dodgy cert to the large central list using the CA's subverted credentials.

    Assuming site admins don't sit all day watching for changes on this large list (if it's a genuine site to start with), how does this actually reduce the risk?
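
    For what it's worth, the watching needn't be a person. A domain owner could run something like the sketch below against whatever form the published log eventually takes; the feed URL and JSON layout here are invented purely for illustration (Python):

        import json
        import urllib.request

        MY_DOMAINS = {"example.com", "www.example.com"}   # domains we own
        KNOWN_FINGERPRINTS = set()  # SHA-256 fingerprints of certs we actually requested

        def check_log(feed_url):
            # Pull the (hypothetical) public log feed and flag any certificate
            # issued for one of our domains that we didn't request ourselves.
            with urllib.request.urlopen(feed_url) as resp:
                entries = json.load(resp)
            return [e for e in entries
                    if e["domain"] in MY_DOMAINS
                    and e["sha256"] not in KNOWN_FINGERPRINTS]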

  7. Neal 5

    methinks Google have identified the problem correctly

    Verify the certificates, yes. Brilliant idea, and it will perhaps remove the fantasy being created that SSL is broken, when it's plain to see the CAs are the problem; solve that and SSL is not broken.

    Dan obviously still has no understanding of SSL; perhaps this link will clear your head a wee bit.

    http://msmvps.com/blogs/alunj/archive/2011/10/28/1801861.aspx

  8. Anonymous Coward
    Anonymous Coward

    Sanity Optional

    If this is the best they can come up with (while they keep deflecting other plans that have more original thought behind them and a better chance of working), then we really should stop trusting Google to meddle with our internet protocols.

    At any rate, all of this is patches on a broken system. This doesn't even begin to address the real problem, and that's the arbitrariness of the "trust anchors". I don't have a good solution to that either, but at least I'm not publishing papers in which I propose fixes for a system while conveniently neglecting to address the ailments.

    Actually, for some corner cases, like your bank, it would make more sense to go there and pick up a key fingerprint, go home and check the key on the website against the card. It could be on a business card, or you could even stick it on a smart card or in some 2D barcode for convenience. Swapping keys with your bank manager and only doing business when both sides recognise each other's keys would be even better.

    Introducing trust anchors that way means you can then do business with any site your bank would do business with (and that would do business with your bank). To make that possible, though, we need to be able to do the "web of trust" thing. It also makes things more difficult, because there will be sites whose certificates you wouldn't want introduced by your bank's. And it means the end user needs to take a more active role in deciding whom they trust, but that is the path Convergence goes down, too.

    Now all we need are users who understand this, and ways to isolate yourself from other people's mistakes, which you also can't do in the current system. Time to come up with a little list of requirements, preferably more sane than the last time, and start over from scratch with those certificates.
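
    Going back to the fingerprint-on-a-card idea: a minimal sketch of that check, assuming the user types in the fingerprint printed on the card (Python, standard library only):

        import hashlib
        import ssl

        def cert_matches_card(host, fingerprint_from_card, port=443):
            # Fetch the site's certificate and compare its SHA-256 fingerprint
            # to the one picked up at the branch (colons and case ignored).
            pem = ssl.get_server_certificate((host, port))
            der = ssl.PEM_cert_to_DER_cert(pem)
            actual = hashlib.sha256(der).hexdigest()
            return actual == fingerprint_from_card.replace(":", "").lower()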

  9. Anonymous Coward
    Anonymous Coward

    Maybe this is a bit too simplistic, but...

    Why not just require that a certificate be signed by at least 3 (perhaps more) independent CAs?

    Benefits: easy to implement; the infrastructure already exists; no extraneous online component; limits the exposure when individual CAs are hacked.

    Drawbacks: Doesn't actually solve the problem, only makes it harder to hack. Higher computational costs; higher registration costs.

    Any other thoughts?
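
    For illustration, a rough sketch of how a client might enforce that rule, assuming it is handed one PEM certificate per CA for the same site key (Python with the third-party 'cryptography' package; the scheme itself is hypothetical, and "independent" is only approximated here by distinct issuer names):

        from cryptography import x509
        from cryptography.hazmat.backends import default_backend
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        def enough_independent_issuers(pem_certs, minimum=3):
            keys, issuers = set(), set()
            for pem in pem_certs:
                cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
                spki = cert.public_key().public_bytes(
                    Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
                keys.add(spki)
                issuers.add(cert.issuer.rfc4514_string())
            # One and the same public key, vouched for by `minimum` distinct CAs.
            return len(keys) == 1 and len(issuers) >= minimum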

    1. JeevesMkII

      Good luck finding 3 independent CAs

      Most of them are Verisign under various other names. Sounds like a good scheme for making you pay Verisign three times as much. They'll probably love the idea. You should patent it.

      1. Ken Hagan Gold badge

        Re: independent CAs

        Too true, but there are actually two independent problems here. The first is that we've allowed Verisign to become a near-monopoly, which in this context is synonymous with a single point of failure. The second is that when a CA is compromised, suddenly no-one knows *which* of that CA's certificates aren't valid.

        The OP's suggestion fixes the second problem in a scalable fashion. That's worthwhile even if it isn't a panacea.

        It would also provide some commercial support for truly independent CAs, which over time might address the first problem.

  10. Anonymous Coward
    Anonymous Coward

    All your Secrets are belong to

    Google!

  11. JeevesMkII

    DNSSEC says "Hi!"

    This seems like a problem DNSSEC would solve to everyone's satisfaction. Not only would it make it more difficult to conduct targeted small-site attacks on local TLS connections by changing which DNS servers clients are told to use through DHCP, it would also enable a far more sensible answer to the question "did the owners of the domain authorise the CA to issue this cert?"

    If your DNS records can be authenticated, you can then tie the certificate to the domain by having a unique key pair tied to the domain through (say) a key fingerprint in a DNS record. You could then sign SSL certificates issued to you with your domain certificate and the browser could verify that chain of trust as well as the chain to a trusted CA.

    Attackers would then have to simultaneously subvert a CA and subvert your authoritative DNS to make any headway on breaking your security through this route. Maybe not impossible, but certainly far harder than doing only one of them.

    On the "this is theoretical bullshit" side of the coin, it would require DNS sec to be ubiquitous enough to just outright reject non-authenticated DNS records. If the attacker could simply substitute regular old unsigned DNS without the browser throwing a fit, then that leg of the security would fall apart, though if the user had gone to the site before the client could at least cache the level of security it is used to and flag it up to the user if the site has had a downgrade. In any event, my idea seems much less outlandish than requiring CAs to run what amounts to yet another DNS system for looking up certs.

    1. PyLETS
      Boffin

      DNSSEC could provide a better Convergence implementation

      Moxie Marlinspike's presentation (http://www.youtube.com/watch?v=Z7Wl2FW2TcA, 45 minutes but well worth it) rightly critiques DNSSEC in comparison with Convergence for its lack of trust agility when the DNS trust chain breaks down, i.e. in the event your domain registrar or top-level domain is or becomes evil, or is compelled to do evil things by bad laws.

      In practice, with DANE you can keep your own SSL webserver certificates and fingerprints in DNSSEC records for your own zone, so you probably wouldn't need CAs for that purpose: your DNSSEC registrar or TLD operator effectively provides your CA service and trust chain along with your domain registration.

      If, based on Moxie's advice, this approach still isn't good enough, then, using a similar approach to email DNSBL black/whitelists, a number of notaries with domains in different parts of the DNS could also vouch for the reputation of website certificates using their own DNSSEC zones, much as Spamhaus acts as a notary for good/bad SMTP clients. That would enable a protocol feature-equivalent to Convergence to be implemented over the somewhat more efficient DNS transport instead of REST.

      You could still argue hypothetically about what would happen if IANA goes bad and does evil things with the root DNS zone, but this zone is a small file which changes infrequently, and I think enough people would notice and complain if that happened. Besides which, with DNSSEC you can also configure your own client trust anchors elsewhere, e.g. pointing directly to the IP addresses of your trust notaries if you prefer.
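
      A sketch of that DNSBL-style notary lookup, with entirely hypothetical notary zones: count how many notaries publish a record under a given certificate fingerprint and require a quorum, much as a mail server consults several DNSBLs (Python; dnspython assumed):

        import dns.resolver   # dnspython

        NOTARIES = ["notary1.example.org", "notary2.example.net"]   # made up

        def notary_quorum(cert_sha256_hex, required=2):
            # Query <fingerprint>.<notary-zone>; any answer counts as a
            # "seen and considered good" vote, exactly as a DNSBL lookup would.
            votes = 0
            for zone in NOTARIES:
                try:
                    dns.resolver.resolve(cert_sha256_hex + "." + zone, "A")
                    votes += 1
                except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                    pass
            return votes >= required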

  12. batfastad
    Stop

    Self-signed

    I dislike the extreme prejudice that browsers have against self-signed certificates. It assumes that I trust some faceless certificate authority more than I trust myself or our network admin... a hugely false assumption that I find completely baffling!

    Is there anything that can be done with DNS here? Have the certificate stored as a TXT record or something for verification?

    1. JeevesMkII

      You can add your home-brewed root to the list of trusted certs on your computer to fix that particular problem. Really, if your org is using self-signed certs and has a network admin, they should have done it already.

    2. Ken Hagan Gold badge

      Re: false assumption

      To put Jeeves' reply another way: if your self-signed certificate (or its root) hasn't been rolled out to your entire network by your network admin, it probably *is* a forgery and your browser is quite right to sound an alert.

      1. batfastad

        Yes I realise that on an enterprise scale you can easily roll out your own root cert to Internet Exploder. But in the SMB environment you can have people using many different devices... phones, laptops, whatevers, desktops connecting from home, Firefox or Chrome that someone has decided to install themselves etc.

        There are always some devices that you just can't get working, and yes, it's the device manufacturer's fault for not following specs/protocols properly or for selling poorly written software, but unfortunately it's my immediate problem to sort out. We've gone from using self-signed certs 5 years ago to spending a few hundred quid a year on commercial certs for our main services, purely because of the faffing required to get every conceivable device accepting the self-signed certs AND our root on a permanent basis.

        What's better, a cheap SSL cert from a budget CA?

        Or one made by a trusted/contracted individual within your own organisation, or yourself?

        I know what I'd rather have. But unfortunately the hurdles in rolling out self-signed/custom root certs to every single possible device these days will have people falling for the budget CA route. And that does no one any favours, apart from those selling the certs.

        If you're going to have a compromised certification root, is that more likely to happen to one of the limited number of CAs, a big target? Or me, a needle in a haystack?

        The whole concept of trusting the security of your traffic to a small number of organisations, none of which is yourself, needs resolving. Not exacerbating. I don't have the answers but there must be someone on the planet that does.

        1. Ken Hagan Gold badge

          Sorry to hear about your menagerie of broken devices and uncooperative users. Nevertheless, breaking my browser isn't the correct fix. Accepting "anything signed, by anyone" makes signing pointless. You'd be better off *not signing at all* and saying to your customer base: "Well, we would prefer to use a local root, but you guys aren't up to it."

  13. Adam Inistrator

    simple practical browser based solution for many cases

    The browser should simply show who, out of the hundreds of signers it trusts, signed the HTTPS cert. I say give users a little information about how that GREEN color came about.

    green hsbc.com signed by verisign .. i feel happy

    green hsbc.com signed by somedutchpersonidontknow .. suspicion raised perhaps call hsbc support for reassurance
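
    A rough illustration of pulling that information out so it can be shown next to the padlock (Python with the third-party 'cryptography' package; hsbc.com is just the example from above):

        import ssl
        from cryptography import x509
        from cryptography.hazmat.backends import default_backend

        def who_signed(host, port=443):
            # Return (subject, issuer) of the certificate the site presents.
            pem = ssl.get_server_certificate((host, port))
            cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
            return cert.subject.rfc4514_string(), cert.issuer.rfc4514_string()

        # e.g.  subject, issuer = who_signed("www.hsbc.com")
        #       print(subject, "signed by", issuer)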

  14. JanMeijer

    CA/Browser Forum baseline requirements won't fix anything either

    All the recent CA hacks had nothing to do with the security of the cryptosystem, and everything to do with sub-optimal or shoddy systems security.

    Anyone who's been involved in practical system security and in "formal" audits, risk assessments, security policies etc. knows paper is no replacement for real security. And that's where most of our CAs go wrong. They employ people to make sure the PKI side of the game is done properly: procedures, etc. Fine. But the attackers then go at the systems layer and circumvent all those procedures.

    The baseline requirements don't say anything about "you must employ at least two world-class practical security experts". It's more talk about plans, audits, etc.

  15. Anonymous Coward
    Anonymous Coward

    Typical Google solution

    Google: "Let's keep this huge centralised database of every SSL site out there" (very handy for a search engine, they mutter between each other)

    Public: That's a huge traffic load and requires massive servers, who could do such a thing for not much cost?

    Google: Well, now that you mention it, we have some free space. Just saving the Internet and all, of course.

    Public: Thanks Google, you're our hero.

  16. Anonymous Coward
    Anonymous Coward

    What about having an SSL entry as part of the DNS record?

    I'm sure that people must have discussed this before (and so it might be fundamentally flawed in one way or another), but at least to achieve a partial fix against bogus CAs issuing certs for your domain, why not include some form of a TXT entry in the DNS record that identifies a CA (or CAs) that you have authorised to issue certs for your domain? Same kind of thing as an SPF entry, only for SSL?

    That way, whenever a browser etc. sees an SSL cert, it can identify the CA on the cert and do a DNS lookup to confirm that it is authorised. If it's not authorised at the DNS level then the SSL cert is rejected.

    Heck, if the TXT field allows a sufficiently long entry you could even store a key hash.

    Obviously, this would be open to DNS poisoning attacks (or e.g. the ISP intentionally tweaking the DNS records it serves up...), but could it at least act as one layer of defense?
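
    A sketch of how such a browser-side check might look; the record format ("ssl-ca=<issuer CN>") and the whole scheme are made up for illustration (Python; dnspython and the 'cryptography' package assumed):

        import ssl
        import dns.resolver   # dnspython
        from cryptography import x509
        from cryptography.hazmat.backends import default_backend
        from cryptography.x509.oid import NameOID

        def issuer_is_authorised(host, port=443):
            # Compare the cert's issuer CN against hypothetical "ssl-ca=" TXT
            # records published for the domain, SPF-style.
            pem = ssl.get_server_certificate((host, port))
            cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
            issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
            allowed = set()
            for rdata in dns.resolver.resolve(host, "TXT"):
                txt = b"".join(rdata.strings).decode()
                if txt.startswith("ssl-ca="):
                    allowed.add(txt[len("ssl-ca="):].strip())
            return issuer_cn in allowed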

  17. batfastad

    I like this idea (see my comment above briefly asking a similar thing) and it's how DKIM already tries to validate the sender of e-mail.

    Self-signed certificates could be automatically accepted by the browser so long as the (sub)domain in the cert's CN field has a TXT record that can be validated against. So you know that the publisher of the certificate also has access to the DNS zone of the domain.

    Whether it would actually work from a technical/cryptographic standpoint, I don't know. I defer to the superior knowledge of others.

    But to me it seems like a good way to verify the relationship between the certificate publisher and the owner of the domain. Certainly better than CAs verifying that same relationship based on access to an e-mail address, which is how most budget certs are done.
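
    And a sketch of the DKIM-flavoured variant, where it's the self-signed certificate's own public key, rather than a CA signature, that gets checked against DNS; the record name and format are invented (Python; dnspython and 'cryptography' assumed):

        import base64
        import ssl
        import dns.resolver   # dnspython
        from cryptography import x509
        from cryptography.hazmat.backends import default_backend
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
        from cryptography.x509.oid import NameOID

        def self_signed_cert_matches_dns(host, port=443):
            # Accept a self-signed cert only if the key published at a
            # DKIM-like record for the cert's CN matches the live key.
            pem = ssl.get_server_certificate((host, port))
            cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
            cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
            spki = cert.public_key().public_bytes(
                Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
            live_key = base64.b64encode(spki).decode()
            for rdata in dns.resolver.resolve("default._sslkey." + cn, "TXT"):
                txt = b"".join(rdata.strings).decode()
                if txt.startswith("p=") and txt[2:] == live_key:
                    return True
            return False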

  18. scientific_linux_01
    Linux

    google needs to stay out

    I would prefer the open source community to come up with a solution; I find that most things conceived by mega-corps have big security holes.

This topic is closed for new posts.
