Mozilla plots TLS 1.3 future for Firefox

Mozilla has decided it needs to lift its HTTPS game, and will default to TLS 1.3 in next year's Firefox 52. Mozilla principal engineer Martin Thomson let developers know about the decision in an e-mail last Friday. “TLS 1.3 removes old and unsafe cryptographic primitives, it is built using modern analytic techniques to be …
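
For the impatient, recent Firefox builds already carry TLS 1.3 code behind a pref: security.tls.version.max in about:config, where 3 caps connections at TLS 1.2 and 4 allows TLS 1.3. A user.js sketch, offered as illustration rather than official guidance:

```js
// Illustrative user.js line: raise the TLS ceiling to 1.3
// (value 3 = TLS 1.2, value 4 = TLS 1.3 in current builds).
user_pref("security.tls.version.max", 4);
```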

  1. Anonymous Coward
    Anonymous Coward

    I have to take a hard look at 0-RTT here. Something seems off in the re-authentication process, although the crypto gods seem to have blessed it.

    1. Tomato42
      Boffin

      It is vulnerable to replay attacks, but the standard will include guidance on mitigations and on the kinds of data a client can send in 0-RTT.

      So yes, it's correct that your spider senses are tingling, and unless you're a real Time To First Byte junkie, you're better off not using it. Especially as browsers will need to figure out what is and isn't 0-RTT viable.
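
      For the curious, one anti-replay strategy discussed for the standard is simply recording every 0-RTT attempt the server sees inside its acceptance window and rejecting repeats. A minimal Python sketch of that idea; accept_early_data() and the binder plumbing are hypothetical hooks, not a real TLS stack:

      ```python
      # Illustrative only: the "record incoming ClientHellos" anti-replay idea.
      # A real stack would also bound memory and shard the cache by ticket.
      import time

      WINDOW_SECONDS = 10          # how long a 0-RTT attempt stays "fresh"
      _seen = {}                   # PSK binder bytes -> arrival time

      def accept_early_data(binder: bytes) -> bool:
          """True if this 0-RTT attempt was not already seen in the window."""
          now = time.monotonic()
          for b, t in list(_seen.items()):   # expire entries past the window
              if now - t > WINDOW_SECONDS:
                  del _seen[b]
          if binder in _seen:
              return False                   # replay: force a full 1-RTT handshake
          _seen[binder] = now
          return True
      ```

      Feeding the same binder twice inside the window returns False the second time, which is exactly the behaviour a replayed 0-RTT flight should trigger.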

  2. Thought About IT

    PCI Requirements

    It's all very well rushing ahead with new protocols, but it only encourages the payment card industry to insist they are implemented while major players drag their feet. For instance, Apple only implemented TLS 1.2 in its mail client a couple of months ago, and PayPal still hasn't implemented it on some of its mail servers, even though it's been a requirement for passing a PCI scan for well over a year.

    1. Tomato42
      Boffin

      Re: PCI Requirements

      Except it hasn't been. Only after 30 June 2018 will they require TLSv1.1 (not even TLSv1.2!):

      https://www.pcicomplianceguide.org/ssl-and-early-tls-new-migration-dates-announced/

      Not that you shouldn't have been on TLSv1.2 for a few years already!

    2. Anonymous Coward
      Anonymous Coward

      Re: PCI Requirements

      Sometimes, people can't be bothered. Consider the EMV requirement that was SUPPOSED to have had enforceable liability backing since the beginning of the year. And yet how many firms STILL insist on swiping? Think about it: the threat of LOSING MONEY, and they STILL won't switch.

  3. Drew 11

    Quick on TLS, dead slow on DANE.

    C'mon Mozilla! We want freedom from the CA TITSUP bug.

    1. Anonymous Coward
      Anonymous Coward

      I think that's because DANE has problems of its own: namely that DNSSEC uses old ciphers (like a signed root that uses 1024-bit RSA when the minimum standard is, I think, 4096-bit) that browsers like Chrome are trying to retire. Not worth jumping out of the frying pan only to end up in the fire.

      1. AliceWonder32

        DNSSEC does not use "old" ciphers, or any ciphers. A cipher encrypts, DNSSEC does not encrypt. Stop spreading your ignorance. DNSSEC uses cryptography for validation, not encryption. Learn the difference.

        There are two kinds of private keys in DNSSEC: a Key Signing Key (often referred to as a KSK) and a Zone Signing Key (often referred to as a ZSK).

        A KSK is typically 2048-bit; that is what the root and the TLDs I have looked at use. A KSK is generally rotated once a year, but there is no official requirement.

        A ZSK is typically 1024-bit and is usually rotated once a month (sometimes once a week) - you are not going to brute-force a 1024-bit private signing key in a month, or even in six months.

        DNSSEC also has ECDSA keys available, and once more recursive resolvers and DNS-aware applications support them, they will start to be used, providing even stronger cryptography with smaller signatures.
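
        You can see the KSK/ZSK split directly in a zone's DNSKEY RRset: a flags value of 257 marks a KSK (the Secure Entry Point bit is set), 256 a ZSK. A small sketch using the third-party dnspython package; the zone name is just an example:

        ```python
        # Sketch (dnspython 2.x, pip install dnspython): list a zone's DNSSEC keys.
        # Flags 257 = KSK (SEP bit set), 256 = ZSK.
        import dns.resolver

        for rdata in dns.resolver.resolve("ietf.org", "DNSKEY"):
            role = "KSK" if rdata.flags & 0x0001 else "ZSK"  # low bit is the SEP flag
            print(f"{role}: flags={rdata.flags}, algorithm={rdata.algorithm}")
        ```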

        DNSSEC does not have problems, nor does DANE. DANE provides higher confidence that the server you are talking to is who it claims to be than a CA-signed cert does.
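
        A rough sketch of what a DANE check boils down to, again with dnspython: fetch the TLSA record for the service and compare it against a hash of the certificate the server actually presents. Only the "DANE-EE, full cert, SHA-256" case is handled here, and the hostname is illustrative:

        ```python
        # Hedged sketch of a DANE/TLSA check (dnspython 2.x plus the stdlib).
        # Handles only usage 3 (DANE-EE), selector 0 (full cert), mtype 1 (SHA-256).
        import hashlib
        import ssl
        import dns.resolver

        host, port = "www.example.org", 443   # illustrative; must publish TLSA

        rr = next(iter(dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")))
        der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))

        if (rr.usage, rr.selector, rr.mtype) == (3, 0, 1):
            ok = hashlib.sha256(der).digest() == rr.cert
            print("TLSA match" if ok else "TLSA MISMATCH")
        ```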

        1. Charles 9

          "A ZSK is typically 1024-bit and is typically rotated once a month (sometimes once a week) - you are not going to brute force a 1024-bit private signing key in a month. Or even six months."

          Two words: Shor's Algorithm. Don't assume the State doesn't have a working high-qubit quantum computer humming away as a black project (such as under that data center in Utah).

          Given such a paranoid world, why stick with such short keys at all? Why not make 4096 the LOW end and go from there?

          1. AliceWonder32

            4096-bit keys result in much larger signatures, which would increase the bandwidth needed for RRSIG records and the cycles needed to validate the signatures.

            Sorry, but real-world considerations take priority over black-project paranoia with zero evidence behind it.

            If you want stronger signatures, ECDSA is the way to go, and that is the direction DNSSEC is headed. 4096-bit RSA is the wrong solution.
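
            The size argument is easy to check with the third-party "cryptography" package: an RSA-4096 signature is a fixed 512 bytes, while an ECDSA P-256 signature comes out around 70 bytes DER-encoded (and DNSSEC actually puts it on the wire as 64 raw bytes). A quick sketch:

            ```python
            # Quick size comparison backing the bandwidth point
            # (pip install cryptography). DNSSEC's RSASHA256 uses PKCS#1 v1.5.
            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

            msg = b"example RRset data"

            rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
            rsa_sig = rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

            ec_key = ec.generate_private_key(ec.SECP256R1())
            ec_sig = ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))

            print(len(rsa_sig), len(ec_sig))   # 512 vs roughly 70-72
            ```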
