Cryptographers have found new chinks in a widely-used digital-signature algorithm that have serious consequences for applications that sign email, validate websites, and carry out dozens of other online authentication functions. The researchers, from Macquarie University in Sydney, Australia, found a way to break the SHA-1 …
SHA-1 is dead
Long live SHA-2!
Use multiple algorithms
As I've mentioned before, it seems to me that the best course of action is to use two algorithms; for example, MD5 and SHA1. Calculate one hash using the specified input, and keep that hash. Take that hash value and append the specified input (if it's a large input such as a file, use a portion of the beginning of the input such as the first 32 bytes), then use the second algorithm to produce a second hash. Use those two hashes together to verify the data. I'm not a cryptographer, but I would assume that this method increases the number of brute-force attempts since a collision in one algorithm will almost certainly not result in a collision in the second algorithm (especially since the second algorithm's input includes more than just the specified input).
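A minimal sketch of the scheme described above, assuming the details left open in the comment (the exact concatenation order, hex encoding, and the function names are my own choices, not Chris C's spec):

```python
import hashlib

def dual_hash(data: bytes) -> tuple[str, str]:
    """Two-algorithm scheme as described: hash with MD5, then feed the
    MD5 digest plus a prefix of the input into SHA-1."""
    first = hashlib.md5(data).hexdigest()
    # For large inputs, only the first 32 bytes are appended to the hash.
    second = hashlib.sha1(first.encode() + data[:32]).hexdigest()
    return first, second

def verify(data: bytes, expected: tuple[str, str]) -> bool:
    # Accept the data only if BOTH hashes match.
    return dual_hash(data) == expected
```

A forger would then need an input that collides under MD5 *and* produces the same SHA-1 value for the combined second input, which is the intuition behind the suggestion (see the replies below for why cryptographers are unconvinced).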
For quite a while, I've wondered about these hashes and collisions. It may take a single computer many years to brute-force a collision, but that length of time would be decreased significantly if someone used a botnet to distribute the load. Because of this, we must assume that the most powerful supercomputer (or its equivalent) will be used to try to brute-force new and existing algorithms, and design the algorithms accordingly.
4.5 quadrillion is a lot
Much more manageable than 9 quintillion, but it would still require a very large amount of computing power.
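A quick sanity check on those headline figures: the new attack is estimated at 2**52 hash operations, against roughly 2**63 for the previous best-known SHA-1 collision attack.

```python
# Rough sanity check on the headline numbers.
new_attack = 2 ** 52         # this attack's estimated collision cost
previous_best = 2 ** 63      # earlier best-known SHA-1 collision attack
print(f"{new_attack:.1e}")               # ~4.5 quadrillion
print(f"{previous_best:.1e}")            # ~9.2 quintillion
print(previous_best // new_attack)       # the attack got ~2048x cheaper
```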
Use multiple algorithms - NOT!
Chris C: what you suggest is often thought of by non-cryptographers as an easy solution, but it is not. Combining two weak algorithms certainly does not magically work around the structural weaknesses of those algorithms. In particular, the differential path-finding techniques would still apply to such a naive construction.
Add Salt FTW
No, it doesn't magically fix the algorithms, but I'm not sure that's what he was implying. Whatever Chris's actual suggestion was, a file encrypted twice by two techniques does force you to break both, which adds time. Also, you don't always know if an algorithm is (or will be) broken. If the CIA know about a bug in AES, you'll be damn smart if your hard drive is encrypted with AES+Serpent or some other cipher. If it's trivial to implement multiple techniques, then it is usually better than one.
Attention cryptographers: brilliant insights available at El Reg!
Ah, how nice to see the Reg readers are as inclined as ever to contribute to the discussion, even if it's of a highly-technical subject they haven't studied. If only these bleedin' obvious ideas had been considered by actual experts in the field!
Chris C: Using two hashes does not offer a high probability of significantly improved security against most attacks, particularly when the hashes come from the same family of compression functions, as is the case with MD5 and SHA-1. As Tom Jones pointed out, you now have a construction Hnew(m) = H1(m) | H2(m). That's sometimes vulnerable to parallel attacks on both hashes. Even if it isn't, then if H1 and H2 have close to the same work factor for a given attack, then Hnew has twice that work factor - so you've effectively built a hash that's one bit longer than what you had before. Better to simply switch to SHA-256 and expand the search space by 2**96.
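The construction under discussion, written out (a sketch, using MD5 and SHA-1 as H1 and H2 per the earlier comment):

```python
import hashlib

def h_new(m: bytes) -> bytes:
    """The naive concatenation construction: Hnew(m) = H1(m) | H2(m)."""
    return hashlib.md5(m).digest() + hashlib.sha1(m).digest()

digest = h_new(b"example")
print(len(digest) * 8)  # 288 bits of output (128 + 160)...
# ...but, as noted above, the collision resistance is at best roughly
# one bit better than the stronger component alone - nowhere near what
# a genuine 288-bit hash would provide.
```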
And *of course* people assume that attackers have access to large distributed networks, for attacks that can be parallelized. Do the math. A naive birthday attack on SHA-1 requires computing around 2**80 hashes on average. This new attack requires 2**53 hashes on average to find a collision. If you have a network of a thousand machines that can each do about a million hashes (and generate the messages, and compare the results) a second, that's about 2**30 hashes per second. With that grandiose a botnet running flat out, you'd find a collision in about 100 days. Whoopie. You'd do a lot better with custom hardware - the Hash Collision FAQ suggests that custom ASICs might be able to do on the order of 10**9 hashes per second. Get around 10,000 of those going and you'd find a collision in about 15 minutes.
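Checking those numbers (attack cost from the comment; the hardware rates are the comment's own assumptions):

```python
# Back-of-envelope check on the cost arithmetic above.
attack_hashes = 2 ** 53                  # hashes needed by the new attack

botnet_rate = 1_000 * 1_000_000          # 1,000 machines at 10**6 hashes/s
days = attack_hashes / botnet_rate / 86_400
print(f"botnet: {days:.0f} days")        # roughly 100 days

asic_rate = 10_000 * 10 ** 9             # 10,000 ASICs at 10**9 hashes/s
minutes = attack_hashes / asic_rate / 60
print(f"ASICs: {minutes:.0f} minutes")   # roughly 15 minutes
```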
Of course, that's just a collision, which greatly limits your opportunities for interesting attacks. (The "two contracts" scenario is the textbook example, but you have to find some idiot willing to take you up on it.)
Bounty: This is about hash collisions, not encryption. Though the chance of your scheme offering any real additional security is pretty low, too. A successful attack against AES might well apply to the other cipher. But more importantly, attacking the cipher is very rarely how encryption is broken these days. Much easier to attack side channels, sources of key information, or people.
AC: Salting protects against precomputed-dictionary attacks and some traffic analysis ("I don't know what this message is, but I know I've seen it before"). It does nothing to resist finding hash collisions.
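To make the distinction concrete, a minimal salted-hash sketch (the function name and the use of SHA-256 are my choices for illustration):

```python
import hashlib
import os

def salted_hash(message: bytes, salt: bytes) -> str:
    # A public per-entry random salt is mixed in before hashing.
    return hashlib.sha256(salt + message).hexdigest()

# Two entries with the same content get different digests, so a
# precomputed dictionary of plain hash values is useless:
s1, s2 = os.urandom(16), os.urandom(16)
print(salted_hash(b"hunter2", s1) == salted_hash(b"hunter2", s2))  # False
# But the salt is stored in the clear alongside the digest, so anyone
# hunting for collisions simply includes it in their search - salting
# buys nothing against collision-finding attacks like this one.
```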
Some have already started moving away from SHA-1 to something stronger.