"Somebody has used an MITM attack."
You seem to be really interested in pointing out minor grammar and spelling errors at El Reg.
The road towards phasing out the ageing SHA-1 crypto hash function is likely to be littered with potholes, security experts warn. SHA-1 is a hashing (one-way) function that converts information into a shortened "message digest", from which it is impossible to recover the original information. This hashing technique is used in …
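A minimal sketch of the idea, using Python's standard hashlib module (the input string is just an illustrative example, not anything from the article):

```python
import hashlib

# A SHA-1 "message digest" is always 160 bits (40 hex characters),
# no matter how long the input is, and the mapping cannot be run backwards.
digest = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").hexdigest()
print(digest)  # 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
```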
"The fact that SHA-2 can’t be used with older browsers and OS’s means that untrusted certificate warnings are going to become commonplace," Munro explained. "And if that happens, the danger is that many users will simply ride rough-shod over such pop-ups, potentially creating the ideal opportunity for man-in-the-middle (MitM) attacks."
Because of course 99% of the people don't do that already.
Are they suggesting an alternative here? The only one that makes sense AFAIK is that SHA-1 isn't really bust yet (just expected to be in a few years) so leave it be. As some have pointed out this is what happened with MD5 and the results weren't great.
"Are they suggesting an alternative here? The only one that makes sense AFAIK is that SHA-1 isn't really bust yet (just expected to be in a few years) so leave it be. As some have pointed out this is what happened with MD5 and the results weren't great."
But unless you drag some people kicking and screaming, they'll never switch. Sometimes, a deadline is the only motivation that works. Better to have some flak during the transition (that you can then fix) than to wait until the last minute only to find out it's one minute too late and information has been compromised.
Yes, we all know that Firefox's SHA-1 is far more secure than the Windows one...
Well, it is possible for one SHA-1 implementation to be more secure than another, under threat models that are reasonable for some applications. Even if we only consider correct implementations, an SHA-1 implementation may suffer, to a greater or lesser extent, from side-channel leakage which lets an attacker with limited access gain information about its input, for example.
In this sense the SHA-1 offered through Windows APIs (CAPI and CNG) may be more or less secure than the implementation in Firefox. CAPI/CNG support cryptographic devices such as smartcards. Ideally those are more secure against side-channel and other information-leakage attacks than normal application software, but in practice a given instance may not be.
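Side-channel behaviour is hard to demonstrate in a few lines, but one textbook example in the same family is timing leakage when comparing digests. A minimal Python sketch (this illustrates the general class of problem, not anything specific to CAPI/CNG or Firefox):

```python
import hashlib
import hmac

expected = hashlib.sha1(b"message").digest()
received = hashlib.sha1(b"message").digest()

# A naive '==' on secret-dependent bytes can short-circuit at the first
# mismatch, which in some settings leaks information through timing.
# hmac.compare_digest takes time independent of where the bytes differ.
if hmac.compare_digest(expected, received):
    print("digests match")
```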
"SHA-1 is a hashing (one-way) function) that converts information into a shortened "message digest", from which it is impossible to recover the original information."
This might suggest that inability to recover the original information is what makes a given hash function secure: it isn't. In fact, secure hash functions are actually the ones from which you might* be able to recover the original information!
A hash function is secure if it is (very) hard to create any different inputs with the same hash --- most particularly that it is very hard to manipulate the input in any manner whilst preserving the hash value.
* by using rainbow tables. e.g. if you say 5e88 4898 da28 0471 51d0 e56f 8dc6 2927 7360 3d0d 6aab bdd6 2a11 ef72 1d15 42d8, I know the original message is "password", but that doesn't mean that SHA256 is any less secure.
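That quoted digest is easy to check with Python's hashlib, if anyone wants to verify the footnote:

```python
import hashlib

# The hex string quoted above really is SHA-256 of the ASCII string
# "password"; a precomputed ("rainbow") table just stores such pairs.
print(hashlib.sha256(b"password").hexdigest())
# 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```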
"In fact, secure hash functions are actually the ones from which you might* be able to recover the original information!"
If the message digest is shorter than the actual input then there is no way you can recover all the information, because by design some of the information has been lost. It's not a compression system.
E.g.: given a number "encrypted" using the mod function N mod 10, it is simply not possible to recover the original data. The best you can do is come up with a range of possible initial values. As the input gets longer, that possible range increases exponentially.
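A toy Python illustration of that mod-10 point (the numbers are arbitrary):

```python
# N mod 10 keeps only the last decimal digit, so every tenth integer
# collides. Knowing the "digest" 7 only narrows N down to {7, 17, 27, ...},
# a candidate set that keeps growing as the inputs get bigger.
candidates = [n for n in range(100) if n % 10 == 7]
print(candidates)  # [7, 17, 27, 37, 47, 57, 67, 77, 87, 97]
```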
"by using rainbow tables."
That's not recovering the information, it's simply precomputed before-and-after values. No encryption algorithm is safe from that unless its output changes based on an internal variable unrelated to the input (e.g. time).
I was over-simplifying; however, my comment was for the benefit of those who might be misled by the statement that "...hashing (one-way) function ... from which it is impossible to recover the original information", and who might not realise, given the context, that this is true of all hashes, not just secure ones, and that the defining feature of secure hashes is actually collision resistance.
My statement that it is only really "secure" hashes where you can "recover" the input should be taken in that context, just an ironic fun point. Of course I understand that, in the absence of size restrictions, there must be an infinity of inputs that have any given SHA256 value. But the chances that the original input behind the hash I quoted was something other than 'password' are very small indeed. So although you cannot really "recover" the information from the digest, you have a much better chance of guessing the input than you do for, say, a given CRC function. (Although obviously such guessing is severely limited --- the input would have to be present in a "rainbow" table of inputs for which you have already precomputed the hash.)
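For what it's worth, the "precomputed before and after values" point can be sketched in a few lines of Python (a plain dictionary here; real rainbow tables use hash chains to trade time for space, and the word list is invented for the example):

```python
import hashlib

# Build the "before and after" pairs ahead of time...
table = {hashlib.sha256(w.encode()).hexdigest(): w
         for w in ["password", "letmein", "123456"]}

# ...then "recovering" an input is just a lookup, and it only works
# because that input happened to be precomputed.
target = "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"
print(table.get(target))  # password
```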
Sorry, but has anyone seen a collision on a SHA-1 hash?
Yes, it's theoretically possible, but I haven't seen one yet.
And SHA-1 is still viable when working with big data: it still offers security, and being a repeatable hash it always gives the same results for the same input.
So while it may be dead for some purposes... there's still a lot of life left in it.
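The big-data use being described is essentially content addressing; a minimal Python sketch (the blocks are invented):

```python
import hashlib

# Identical blocks hash to the same digest, so duplicates can be spotted
# without comparing data byte-for-byte. Accidental collisions aren't the
# worry here; the deprecation debate is about adversarial ones.
seen = {}
for block in [b"block A", b"block B", b"block A"]:
    key = hashlib.sha1(block).hexdigest()
    if key in seen:
        print("duplicate of an earlier block:", key[:12])
    else:
        seen[key] = block
```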
A hash function is secure if it is (very) hard to create any different inputs with the same hash
Your phrasing is ambiguous, and I can't tell if you're trying to describe collision resistance (find x, x' such that h(x) = h(x')) or preimage resistance (given h(x), find x' s.t. h(x') = h(x)) or in some general way both. But collision and preimage resistance are only two of the criteria for a cryptographic hash. (Technically, there's no such thing as a "secure hash". The term is not meaningful in any technical sense.)
Cryptographic hashes have to be "one-way" functions (much easier to compute than to invert); this is preimage resistance.[1] Second, given an x, it should be infeasible to find an x' s.t. h(x') = h(x); that's second preimage resistance. Third we have collision resistance: the infeasibility of finding two arbitrary domain points x, x' s.t. h(x) = h(x'). Note that a second-preimage attack is a constrained collision attack, and a preimage attack is a reduced-information alternative to a second-preimage attack. See Rogaway and Shrimpton, "Cryptographic Hash-Function Basics", section 3.
For practical use, we also want cryptographic hashes to have an output of fixed size, said size determined by the applications to which the hash is put; and we want it to be relatively inexpensive to compute for small and large inputs. There are also secondary criteria which may be implied by the primary criteria, such as the avalanche principle: changing a random bit in the input should affect, on average, half the bits in the output.
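The avalanche principle is easy to observe directly; a short Python sketch (the input string is arbitrary):

```python
import hashlib

msg = b"avalanche test"
a = hashlib.sha256(msg).digest()

# Flip the lowest bit of the first input byte and hash again.
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]
b = hashlib.sha256(flipped).digest()

# Count how many of the 256 output bits changed; expect roughly half.
diff = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
print(bin(diff).count("1"), "of 256 output bits differ")
```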
Incidentally, the article errs in claiming that a cryptographic hash's output is smaller than the input. A cryptographic hash's output is fixed-length; the input can be of any length. (Inputs are padded to a block boundary using a standard method specified by the algorithm, but that's an implementation detail that doesn't affect the size or entropy of the actual input.)
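Again easy to check for yourself, e.g. with SHA-256 in Python:

```python
import hashlib

# Output length is fixed by the algorithm, not by the input: a one-byte
# message and a megabyte of data both yield a 32-byte SHA-256 digest.
print(len(hashlib.sha256(b"x").digest()))              # 32
print(len(hashlib.sha256(b"x" * 1_000_000).digest()))  # 32
```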
In fact, secure hash functions are actually the ones from which you might* be able to recover the original information!
Your claim has nothing to do with cryptographic hashes; it's true for any deterministic function. That is, given a range point, you can perform a brute-force search of the domain to find a mapping. This is just as true for trivial hash functions as it is for cryptographic ones.
[1] Yes, by the pigeonhole principle, an information-discarding function cannot be inverted to return with probability 1 the specific domain point for every range point. That's irrelevant here; it's true for every function with a range smaller than its domain, so it's not a distinguishing characteristic of a cryptographic hash.
"a situation where two different blocks of input data throw up the same output hash. This is terminal for a hashing protocol"
Err, no it isn't. ALL hashing protocols that produce a shorter output than input will suffer this problem, and there's nothing that can be done about it. Collisions aren't the problem - the ease of finding collision values is.
Not just that. The REAL threat is the preimage (specifically second-preimage) attack: "Given a known hash, can you produce a plaintext with the same hash?" Second-preimage changes this to "Given a known plaintext, can you produce a distinct second plaintext with the same hash as the first?" If you can do these, THEN the algorithm's in trouble because now you can plant stuff without getting noticed. Thing is, the more likely a collision attack works, the closer you get to a working preimage attack.
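To make the collision-versus-preimage distinction concrete, here is a Python sketch of a birthday-style collision search against a deliberately crippled hash: just the first 4 bytes (32 bits) of SHA-1. Expected work is only around 2**16 hashes, which is exactly why real digests must be long, and why cheap collisions, not collisions per se, kill an algorithm:

```python
import hashlib
import itertools

def tiny_hash(data: bytes) -> bytes:
    # Truncating SHA-1 to 32 bits makes the birthday bound trivially small.
    return hashlib.sha1(data).digest()[:4]

seen = {}
for i in itertools.count():
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen and seen[h] != msg:
        print("collision:", seen[h], "and", msg, "->", h.hex())
        break
    seen[h] = msg
```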
> Ken Munro, a director at security consultancy Pen Test Partners, warned...
Whew. It took an effort of will to read further after a source like that was cited, and sure enough, the expected series of possible doom scenarios followed with no sensible route forward. I do wish so-called security consultants would be heavier on reality and lighter on gleefully talking up every possible negative outcome, no matter how unlikely.
Firstly, Google could easily fix the Android problem. We should just get that out the way.
Second, the XP usage numbers are pure FUD; there's no hard data backing them up. They're based on things like UA analyses rather than any sort of census - UAs can be manipulated (and extremely commonly are), so the extent to which it's true is wildly exaggerated.
Now - even if these two data points are valid, there's a third problem at play. Should the general well-being be put at risk because some people don't fancy chucking their ~14-year-old OS away? About a year ago, before all this SHA-1 weakness stuff happened, I stated in Reg comments that the XP crypto stack is completely broken for other reasons entirely (and I got about a million downvotes despite being, y'know, right). Compounding that, the state of it puts other users of networks directly at risk, because stuff we know is broken has to be kept around.
There's only one solution to this: that we kill all this crappy old support. XP users might then start getting the message that their OS shouldn't be connected to the internet.
"Firstly, Google could easily fix the Android problem. We should just get that out the way."
They have fixed it. You're free to install a later Android version. You can fix it yourself since Android is supposed to be open source, you know.
Google could have baked SHA2 support into Android but they didn't do it for whatever reason. So they should take some blame.
"There's only one solution to this: that we kill all this crappy old support. XP users might then start getting the message that their OS shouldn't be connected to the internet."
The fundamental error in your reasoning is that you didn't read the article - XP does support SHA2 hashes. It only requires the installation of Service Pack 3 - released over 7 years ago. If someone actually needs XP SP2 it is very likely that the computer in question isn't used for web browsing.
If someone actually needs XP SP2 it is very likely that the computer in question isn't used for web browsing.
Or, if it is so used while so far behind on patches, it'll be so riddled with crapware by now that the potential for hash collisions is the least of its user's problems.
"Google could have baked SHA2 support into Android but they didn't do it for whatever reason. So they should take some blame."
Computational intensity, perhaps? We're talking ARM chips here, not exactly the fastest at math-intensive operations. Not only that, consider what was state of the art at the time.
It's all very well saying "just upgrade your browser / OS", but there are also other uses of SSL certs. For example, I recently heard of an outage on an "Internet of Things" application where the client components could not handle SHA-2 certificates and these client components cannot easily be replaced. This application linked an IoT device to a back-end system using HTTPS. When the cert was renewed, it broke the client because nobody expected the new algorithm.
I think it's fine that certificate authorities use SHA-2 as the default - but refusing to issue SHA-1 certs in the future seems unnecessarily harsh. It will break quite a number of existing systems over the course of time. Sure - these systems may be exposed to some security risks, but those risks are (at the moment) marginal and that's a trade-off better put in the hands of their owners. I feel that the CAs should be called upon to allow SHA-1 certs to be issued for a longer period of time.
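Operators who want to know whether they're affected can check what a given server's certificate is actually signed with. A sketch using Python's ssl module plus the third-party cryptography package (example.com is a placeholder host):

```python
import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Fetch the server's certificate and report which hash it was signed with,
# so lingering SHA-1 certs can be found before clients start rejecting them.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
print(cert.signature_hash_algorithm.name)  # e.g. 'sha256' or 'sha1'
```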