Torvalds shoots down call to yank 'backdoored' Intel RdRand in Linux crypto

Linux supremo Linus Torvalds has snubbed a petition calling for his open-source kernel to spurn the Intel processor instruction RdRand - used for generating random numbers and feared to be nobbled by US spooks to produce cryptographically weak values. Torvalds branded Kyle Condon, the England-based bloke who created the …

COMMENTS

This topic is closed for new posts.
    1. Yet Another Anonymous coward Silver badge

      Re: Read the source

      Which is perfectly reasonable.

      Use the Intel RNG if you are doing Monte Carlo simulations - use real random numbers if you are encrypting your plans to kill the president (of Belgium)

      1. Yag

        "...your plans to kill the president (of Belgium)"

        They're right next to the plans to kill the king of the USA I suppose.

  1. Doug Bostrom

    Back to the crib

    "...conspiracy theorists are terrified that RdRand is compromised. "

    Only days later and we're back to "terrified conspiracy theorists."

    How we do love a comforting story (or insult) instead of facts, eh?

    1. Don Jefe

      Re: Back to the crib

      Whether it has been compromised or not, people won't want to admit it. Nobody likes being on the side that got taken advantage of, it is embarrassing. Egos are a powerful thing. People will go far to protect them.

      It is a real problem in all this: standards, systems, processes and products that have been generally assumed to be functioning as advertised are being uncovered as fatally broken. No one really knows how deep the corruption goes, but nobody wants to come out and admit their chosen methods were also broken. It's like being an outspoken fan of a great athlete, then finding out he's basically an ambulatory large-animal pharmaceuticals storage facility and you've tattooed his jersey number on your forehead.

      I have no idea if the thing being discussed is broken or not. It is way out of my field. But I do know people and that no one likes to believe they've been taken advantage of by people they trust. From software icons to journalists all the way down to the person who sweeps the floors. It is fear of being made a fool of that is more dangerous in this than anything else.

      1. Anonymous Coward

        Re: Back to the crib

        No one really knows how deep the corruption goes but nobody would want to come out and admit their chosen methods were also broken.

        No, we all know that the corruption is very nearly complete.

        We know, for example, that the NSA were convincing Microsoft many years ago of the advantages of making their operating systems 'helpful' to the US government. We now know that since then the NSA and GCHQ have been systematically targeting every part of computing: hardware, operating systems, applications, and inter-system communications. They have coerced and manipulated untold numbers of companies and people to 'assist' them in doing this, and more recently have been legally aided and abetted in all of it by the knee-jerk reaction to a terrible attack on American soil.

        Given how many different layers of computing they have attacked, and the knowledge that they have many 'big' IT companies involved, anyone trusting anything sensitive to a computer now must be stupid.

        An entire industry compromised by fucking dickheads.

        1. Don Jefe

          Re: Back to the crib

          You're right. It is all almost certainly fucked.

          I've just been trying to be more specific in my language lately: 'know', 'suspect' and 'think' all carry more weight in this conversation than they would have had four months ago.

          Trying to discuss it, maintain awareness of it and not come off as a complete nutter, or, even worse, a complete nutter from way back who has now been proven correct, is kind of a fine line, ya know.

          1. Anonymous Coward

            @ Don Jefe

            When we were kids it was the bad guys who wanted to control everything; you saw it in all the films, on all the TV programmes, and read it in all the books. Yet somehow we seem to have found ourselves living in a world where it's the people who are supposed to be the 'good guys' who are behaving like that. I can't help feeling that somehow the plot line has got badly mixed up... if only it were a film.

  2. Anonymous Coward

    "...but it's claimed that mix is trivial (involving just an exclusive OR) and can be circumvented by g-men."

    Erm, "claimed" by whom? That statement is just wrong/stupid in so many ways it beggars belief.

    "Trivial"? Go and read drivers/char/random.c

    "just an exclusive OR"... "just"? No idea what a stream cipher is then... or how THE ONLY UNBREAKABLE cipher - a one time pad - is used.

    "can be circumvented by g-men." Really? Any chance of an elaboration/reference on that? No? Thought not.

    As long as the suspect stream is *a* source of entropy, not *the* source of entropy, and is *thoroughly* mixed into the pool of other sources (as it is), then even if it's malignant it still can't damage the overall entropy of the system, even a tiny bit.

    Simple thought illustration: I have a byte of well mixed random data derived from multiple entropy sources, I shall now inadvertently "just" XOR it against a patently malicious quasi-random stream from the NSA - eight 0s. What is my random byte now?
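
    The arithmetic is easy to check in a couple of lines of C (the byte values are arbitrary):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint8_t pool_byte = 0xA7; /* a byte of well-mixed random data */
            uint8_t nsa_bytes = 0x00; /* worst case: fully predictable "randomness" */

            uint8_t mixed = pool_byte ^ nsa_bytes;

            /* XOR with any fixed value is a bijection, so the result is exactly
             * as unpredictable as pool_byte was; with zero it is unchanged. */
            printf("before: %02x  after: %02x\n", pool_byte, mixed);
            return 0;
        }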

  3. Anonymous Coward

    Torvalds needs a paranoia transplant

    Torvalds is correct, in that if you have some random numbers in your entropy pool and XOR some new numbers into them (from the CPU), then you increase the entropy as long as those new numbers have any entropy at all (i.e. are only somewhat predictable). Even if the new numbers are completely predictable, you still do no harm.

    However, I'd disagree with Torvalds that this therefore makes it all OK. That's because you still need to estimate how much entropy you've accumulated. To produce random bytes by hashing, you need an estimate of the entropy per byte in your entropy pool, because this determines how many pool bytes you need to feed into a hash function to produce each of the random hashes you'll actually use (e.g. at an estimated half a bit of entropy per pool byte, a 256-bit output with full entropy needs at least 512 pool bytes fed into the hash).

    If the CPU is believed to be supplying most of the entropy (because it's the fastest source) but in fact it's producing a predictable sequence, then you will have far less entropy in your pool than you thought. I can see that might be a genuine cause for concern because any secure key you then generate may have less entropy than you thought too (i.e. its bits may not be independent). Yes, exploiting this might require cracking a SHA hash, but that's the sort of advantage that it's plausible for the NSA to have.

    So my approach would be to keep using RdRand but to downgrade its entropy estimate by a large factor, to reflect its now much-reduced trustworthiness.
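
    As a sketch of what that downgrade could look like - the pool layout, function names and the 1/16-bit credit rate here are all made up for illustration, not the kernel's real accounting:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        #define POOL_SIZE 64

        static uint8_t  pool[POOL_SIZE];
        static unsigned entropy_credit_x16; /* running estimate, in 1/16ths of a bit */

        /* Hypothetical mixer: XOR bytes into the pool and credit them at a
         * per-source rate. Mixing never hurts; only the *credit* differs. */
        static void mix_in(const uint8_t *buf, size_t len, unsigned credit_per_byte_x16)
        {
            static size_t pos;
            for (size_t i = 0; i < len; i++)
                pool[pos++ % POOL_SIZE] ^= buf[i];
            entropy_credit_x16 += (unsigned)len * credit_per_byte_x16;
        }

        int main(void)
        {
            uint8_t kbd[4]    = {0x13, 0x37, 0xc0, 0xde}; /* timing-derived bytes */
            uint8_t rdrand[4] = {0xaa, 0xbb, 0xcc, 0xdd}; /* suspect CPU RNG output */

            mix_in(kbd,    sizeof kbd,    8 * 16); /* trusted: 8 bits per byte */
            mix_in(rdrand, sizeof rdrand, 1);      /* distrusted: 1/16 bit per byte */

            printf("credited entropy: %u.%02u bits\n",
                   entropy_credit_x16 / 16, (entropy_credit_x16 % 16) * 100 / 16);
            return 0;
        }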

    1. Kebabbert

      Re: Torvalds needs a paranoia transplant

      Netscape mixed different random sources and introduced a pattern, so it was breakable. Donald Knuth says never to mix stuff; instead rely on a proven, mathematically strong design. Just because you cannot break your own crypto does not mean it is safe. Read my post further down.

  4. Anonymous Coward

    So the NSA got to Torvalds?

    Did they grab a family member? Hand him a bag full of cash?? (/joke off)

    Tux--in need of a penguin-sized tinfoil hat.....

  5. T. F. M. Reader

    drivers/char/random.c

    A few comments after throwing another glance at random.c in a recent version of kernel code - it's been a few years since the last time:

    * Assume that rdrand is not reliable. Yes, one can run a battery of tests on its output. Note that the best-known battery of tests comes from NIST, and it's been alleged that NSA have influenced NIST. The argument is that rdrand is mixed with other sources of entropy, so it is OK.

    * These other sources of entropy are: user input, disk seek times, and interrupt times. In servers there is no input to speak of (no keyboard or mouse attached). The randomness of disk seek times is due to the turbulence generated in the thin layer of air between the rapidly rotating magnetic disk and its enclosure. Once magnetic disks give way to SSDs this source will disappear. Interrupt times can be affected by external sources (a quick - too quick - glance at the code leads me to believe nothing in the implementation of add_interrupt_randomness() in drivers/char/random.c, or in the only call to it from handle_irq_event_percpu() in kernel/irq/handle.c, distinguishes between interrupts), e.g. if my server does a lot of networking I expect most interrupts to come from network cards, and it is at least theoretically possible to send a lot of packets at regular intervals to the server to reduce the overall randomness of this component. This is why historically network cards were excluded from the entropy pool. This last potential problem is probably mitigated to a large extent by taking only the least significant bits into account (see the sketch after this list).

    * The total amount of entropy is limited (without rdrand). It would be exhausted rather quickly if random numbers were used to encrypt everything, to run Monte Carlo simulations, etc. It would also be rather slow. However, normally the random numbers are only used to generate the seed for a PRNG (much faster). Hopefully encryption software does not use PRNGs from standard libraries (they are not very random). However, even a good PRNG is by definition deterministic if you know/guess/recover the seed. The output is statistically indistinguishable from random, but random it is not. Once you've covered the seed space the sequence is known (it's not all there is to encryption, of course, but it is a significant part).

    * Hopefully there is enough entropy for seeds even if rdrand is used. However, if rdrand is not truly random and it is a major contributor to the entropy pool, I would expect the overall randomness to be lower than estimated. This by itself is not enough to demand that Linus gets rid of it, but it is a theoretical concern. See Ted Ts'o's blurb quoted in the article.

    * I expect it should be considerably easier for NSA to break into most computers exploiting bugs in various programs than cracking somewhat weakened random sequences. I am sure they are ready to use all the attack vectors where needed.
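
    On the interrupt-timing point, here is a toy sketch of the least-significant-bits mitigation mentioned above (the helper name and the 8-bit mask are illustrative; the real logic lives in add_interrupt_randomness()):

        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        /* Keep only the low bits of an event timestamp: an attacker pacing
         * packets controls the high bits of the arrival time, but the low
         * bits are dominated by bus, cache and scheduler jitter that is much
         * harder to steer remotely. */
        static uint32_t event_entropy_sample(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint32_t)ts.tv_nsec & 0xFF; /* low 8 bits only */
        }

        int main(void)
        {
            for (int i = 0; i < 4; i++)
                printf("sample %d: %02x\n", i, event_entropy_sample());
            return 0;
        }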

    1. Charles 9

      Re: drivers/char/random.c

      There is research into alternate sources of entropy from other parts of the CPU. Given a sufficient workload, the registers and other internal workings of the CPU are volatile enough to create a source of entropy (this is the theory behind HAVEGE). Perhaps more research into other independent sources of entropy could be found (I can't think of any, though, off the top of my head that couldn't be subverted in some way).
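
      A toy illustration of the idea - timing jitter harvested from a busy loop, on the HAVEGE conjecture that cache, branch-predictor and pipeline state make the deltas hard to reproduce. This is not the actual HAVEGE library:

          #include <stdio.h>
          #include <stdint.h>
          #include <time.h>

          static uint64_t now_ns(void)
          {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
          }

          int main(void)
          {
              volatile uint32_t sink = 0;
              uint8_t sample = 0;

              /* Build one byte from the parity of eight timing deltas. */
              for (int bit = 0; bit < 8; bit++) {
                  uint64_t t0 = now_ns();
                  for (int i = 0; i < 1000; i++)
                      sink ^= (uint32_t)(sink * 2654435761u + i);
                  uint64_t delta = now_ns() - t0;
                  sample = (uint8_t)((sample << 1) | (delta & 1));
              }
              printf("jitter byte: %02x\n", sample);
              return 0;
          }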

    2. Charles Manning

      Re: drivers/char/random.c

      I call you out on two points sir:

      * "one can run a battery of tests". These tests are limited in what they can produce. They are useful for testing simulation-level randomness for mathematical modelling, not security.

      * You assume far too much when it comes to the seek times of disks being predictable and SSDs being even more predictable. SSDs have flash inside, which takes a variable amount of time to write/erase. Interrupt times have a large jitter due to other stuff happening on the system - even memory caching has an impact. Network cards still have an impact, because servicing them adds jitter (i.e. entropy) to other interrupts.

      1. Anonymous Coward

        Re: drivers/char/random.c

        I call you out on two points sir:

        * "one can run a battery of tests". These tests are limited in what they can produce. They are useful for testing simulation-level randomness for mathematical modelling, not security.

        In fairness, he was saying "one can run a battery of tests but it won't help" - which is exactly what you're saying, although your reasoning differs. Personally, I very much doubt NIST is rigging those tests in the hope of gaming the cryptography industry. Quite the reverse, in fact. Credibility is EVERYTHING in the security/subterfuge realms, and it's hard to earn. It'd be imperative to earn sufficient credibility for Trojan horses to be widely accepted, and obscure, harmless little projects and tools like those are perfect grist for the task. In the early days the NSA used to do all this itself - such as when it fucked over IBM's Lucifer... Win cred by spotting and fixing a weakness while at the same time crippling the cipher's strength - then quickly rubber-stamp it. Of course, giving with one hand while taking with the other is a bit obvious. So now we have NIST and the NSA: NIST does the giving while the NSA does the taking away. A sort of good-cop/bad-cop routine, if you like. So that's OK then - we can all just trust the "good cop" - 'cos we're complete cretins - there's no way the two US government security agencies could possibly be working in collusion.

  6. Gene Cash Silver badge

    Android

    Does anyone know what the Android code does? I know it's weak enough to have compromised Bitcoin wallets, but I haven't looked at it myself.

    1. Charles 9

      Re: Android

      Android is largely based on Linux, and its /dev/random IIRC isn't too different from its upstream counterpart. However, since most Android devices use ARM chips, they have no access to a hardware RNG. Android can draw on a number of sources of "noise", like network transmissions and user input, to help with the entropy issue, but perhaps it lacks the entropy for a more serious implementation.

  7. fortran

    drivers/char/random.c (T.F.M.Reader)

    What I learned of numerically intensive computing is that if your code needs random numbers, you go and find an RNG that suits your needs. If your RNG needs a random seed at startup, you can call /dev/random once for that seed. But making a string from the process id, the time, free space on partitions, and whatnot, and running that through something like MD5 for your seed is probably about as good. But you don't use /dev/random for general user programming. And look at the source for your RNG, to make sure it isn't using /dev/random in some way.
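
    A minimal sketch of the "call /dev/random once for the seed" pattern, with libc's rand() standing in for whatever RNG actually suits your needs:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            unsigned int seed;
            FILE *f = fopen("/dev/random", "rb"); /* may block until entropy is available */

            if (!f || fread(&seed, sizeof seed, 1, f) != 1) {
                perror("/dev/random");
                return 1;
            }
            fclose(f);

            srand(seed);                /* one seed, taken once at startup */
            for (int i = 0; i < 4; i++)
                printf("%d\n", rand()); /* the PRNG does the bulk generation */
            return 0;
        }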

  8. btrower

    Linus is correct in both form and substance.

    I too looked at the code.

    Thought experiment:

    Case 1: the stream, prior to mixing in RDRAND, has been encrypted with a secure one-time pad.

    Whether you use good RDRAND or bad RDRAND makes no difference in this case; you cannot inspect the plaintext without the one-time pad key.

    Case 2: the stream, prior to mixing in RDRAND, has been encrypted with a non-secure key.

    Use good RDRAND and it is stronger. Use bad RDRAND and it is no stronger, but it is no weaker either.

    No matter how compromised RDRAND is, the worst it can do is leave the stream as strong as it would be without RDRAND.

    Practically speaking, you can expect RDRAND to add good entropy to most things for most purposes.

    I do not trust the NSA and I think it would be foolish to *rely* upon RDRAND, but a cursory examination of the file below shows that the Linux kernel gets the benefit of any entropy there and is unharmed by any compromise, no matter how extreme:

    http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c

    1. Werner McGoole

      Re: Linus is correct in both form and substance.

      But on Linux, /dev/random is supposed to produce *true* randomness, with full entropy. Its output should be completely unpredictable by an adversary who even knows the exact state of the rest of your system and all the past output. There is no scope for pseudo-randomness or imperfect entropy in /dev/random. If you try to read random bytes and there isn't enough entropy, it must block.

      If you want a non-blocking source of randomness, you read /dev/urandom instead, which uses a pseudo-random number generator seeded from /dev/random. So the quality (true randomness) of the entropy harvested for use in /dev/random IS critically important. If the sources used don't have full entropy, you need to "condition" the data before use, which is a way of concentrating its entropy. For example, you might want to take the "random" CPU data in 1MB chunks and hash each of those down to 64 bytes. Then you could be more confident of having truly random bytes.
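
      A rough sketch of that conditioning step, assuming OpenSSL's SHA-512 for the hash (link with -lcrypto); the weak_source.bin input file is a stand-in for 1MB of "random" CPU data:

          #include <stdio.h>
          #include <stdlib.h>
          #include <openssl/sha.h>

          #define CHUNK (1024 * 1024)

          int main(void)
          {
              unsigned char *chunk = malloc(CHUNK);
              unsigned char out[SHA512_DIGEST_LENGTH];  /* 64 bytes */
              FILE *f = fopen("weak_source.bin", "rb"); /* hypothetical weak input */

              if (!chunk || !f || fread(chunk, 1, CHUNK, f) != CHUNK) {
                  perror("weak_source.bin");
                  return 1;
              }
              fclose(f);

              /* Hash 1MB of weakly random data down to 64 bytes, concentrating
               * whatever entropy the chunk actually contained. */
              SHA512(chunk, CHUNK, out);
              fwrite(out, 1, sizeof out, stdout);
              free(chunk);
              return 0;
          }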

      Let me explain why this is important. If you use a pseudo-random number generator (PRNG) to generate a key with a fixed seed, your random numbers obviously won't fill the keyspace* - because it will only ever produce one output sequence. But what people don't seem to realise is that if you seed it with "random" numbers that don't have full entropy, the output *still* won't fill the keyspace. It may look perfectly random and be unpredictable, but an adversary who understands the PRNG well enough doesn't have to search the entire keyspace equally to discover the key.
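
      A toy demonstration: seed a PRNG with only 16 bits of entropy and the resulting "key" looks perfectly random, yet an attacker has at most 65,536 candidates to try, not 2^64:

          #include <stdio.h>
          #include <stdlib.h>
          #include <stdint.h>

          int main(void)
          {
              /* "Generate" a 64-bit key from a seed with 16 bits of entropy. */
              uint16_t secret_seed = 0xBEEF; /* unknown to the attacker */
              srand(secret_seed);
              uint64_t key = ((uint64_t)rand() << 32) | (uint64_t)rand();

              /* The attacker's search: at most 2^16 candidates. */
              for (uint32_t s = 0; s <= 0xFFFF; s++) {
                  srand(s);
                  uint64_t guess = ((uint64_t)rand() << 32) | (uint64_t)rand();
                  if (guess == key) {
                      printf("recovered seed: %04x\n", s);
                      break;
                  }
              }
              return 0;
          }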

      So you need to be exceptionally paranoid about /dev/random.

      *By which I mean that the probability of each possible sequence of output bits won't be equal.

      1. Gordon 11

        Re: Linus is correct in both form and substance.

        Its output should be completely unpredictable by an adversary who even knows the exact state of the rest of your system and all the past output.

        But surely, at that point Heisenberg would point out that you cannot tell the clock-speed of the system, so cannot predict anything about its future?

      2. Gordon 11

        Re: Linus is correct in both form and substance.

        But on Linux, /dev/random is supposed to produce *true* randomness,

        Has anyone tried using the background radio white-noise to help with this? I can see lots of references to people trying to get rid of white-noise, but none for people trying to make use of it.

  9. Frumious Bandersnatch

    random numbers

    I always pick '2'. Nobody expects that.

    1. Anonymous Coward

      Re: random numbers

      Me too!

      ;D

    2. Anonymous Coward

      Re: random numbers

      I always pick "The_Spanish_Inquisition"

      Nobody expects that

    3. PJI

      Re: random numbers

      2 is the universal one for uncertainty, because it is similar to a question mark, "?" - that being the brilliant theory behind its use in population models that I was given.

      1. oolor

        Re: random numbers

        There's no such thing as two.

  10. Kebabbert

    Linus is totally wrong

    There was a famous example of... Netscape(?) doing a mix of different random sources. They used a random number generator, added the current millisecond, how much space was left on the hard drive, etc., to create a "truly" random number. But researchers succeeded in breaking it: because they knew what the building blocks were, they could infer things such as "a typical hard drive is this big", etc. So the researchers managed to discard a lot of the search space, and they could decipher everything. It was a lot of work, but it was doable. Mixing different sources does not make better randomness. The Linux kernel developers would have known this if they had studied cryptography (which I have).

    Donald Knuth has a very interesting story on this in his magnum opus "The Art of Computer Programming". He was supposed to create a random number generator many years ago, so he mixed lots of different random sources, the best he could. And Donald Knuth is a smart mathematician, as we all know. After his attempt, he analyzed it and discovered it had a very short period; it quickly repeated itself. That taught Donald Knuth that you should never try to make a random generator (or cryptosystem) yourself: just because you cannot break your own crypto or random number generator does not mean it is safe. Donald Knuth concludes in his book that it is much better to use a single well-researched random generator / cryptosystem than to make one yourself. Much better. If you start to mix different sources, you might introduce a bias which is breakable. It suffices for the adversary to be able to discard some numbers in the huge search space to be able to break it.

    So the NSA and the like would be more concerned if Linus used a proven, high-quality random generator. As Snowden said: the NSA can break cryptos by cheating. The NSA has not broken the mathematics. The math is safe, so use a mathematically proven strong random generator instead of making your own - making your own is very bad, if you have studied basic cryptography.

    The Linux kernel developers seem to have very high thoughts of themselves, without knowing the subject? Probably they would also claim that their own home-brewed cryptosystem is safe, just because it is complex and they themselves cannot break it. That would also be catastrophic. They should actually study the subject, instead of having hubris. But, with such a leader....

    1. John Gamble

      Re: Linus is totally wrong

      I know of the story you're referring to, and you're mis-stating it. First, the "mixed sources" random number generator used linear congruential generators -- no PC noise, no cryptographic hashing, and no use of Blum, Micali, and Yao's paper published in 1984 (which is referenced in the current edition of Knuth; see page 179). Knuth argued that if you're going to use an LCG random number generator, use one -- don't mix them.

      This obviously has nothing to do with the current situation, and has had nothing to do with modern cryptographic-level random number generators for twenty years now.

      Do Knuth a favor. Stop misquoting him, and buy the latest edition of his The Art of Computer Programming. It is quite worth it.

      1. Kebabbert

        Re: Linus is totally wrong

        "...I know of the story you're referring to, and you're mis-stating it. First, the "mixed sources" random number generator used linear congruential generators -- no PC noise, ..."

        No, you don't. I studied cryptography back then, and I remember that some company - was it Netscape? - used the space left on the hard disk as one of the inputs to create random numbers. They used "PC noise", that is for sure. It seems you have not read the same story as I did.

        1. Anonymous Coward

          Re: Linus is totally wrong

          You still haven't spotted that you've confused random and pseudo-random? Have another look. That alone really makes a mockery of everything you utter.

          1. Kebabbert

            Re: Linus is totally wrong

            Blah, blah. I know the difference. I did some work on group theory and pseudo-random generators. It turned out that the work was already known, but I did not know that when I started. You want to read my thesis on the subject??

            1. Anonymous Coward

              Re: Linus is totally wrong @Kebabbert

              "You want to read my thesis on the subject??"

              Yes please. Pointer to it?

              1. Anonymous Coward

                Re: Linus is totally wrong @Kebabbert

                >"You want to read my thesis on the subject??"

                >Yes please. Pointer to it?

                Why AC? He's splaffed crap here. That fact alone makes it almost inevitable that he's splaffed crap elsewhere. Why would you want to see it? The select highlights we've been treated to already are certainly enough for this AC.

        2. John Gamble

          Re: Linus is totally wrong

          No, you don't. I studied cryptography back then, and I remember that some company - was it Netscape? - used the space left on the hard disk as one of the inputs to create random numbers. They used "PC noise", that is for sure. It seems you have not read the same story as I did.

          Please don't mix and match stories. I was referring to your reference to Knuth's mixed-input RNG, and nothing else. Obviously, his conclusion, which you used repeatedly and wrongly, had to do with linear congruential generators, and nothing else.

          As for Netscape's alleged use of a bad source of randomness, no one is disputing that bad sources of randomness exist. But that has nothing to do with Knuth's example, and has even less to do with current cryptographic random number generators, except as a cautionary tale. At best you are woefully out of date on the state of current technology.

        3. Charles Manning

          Re: Linus is totally wrong

          It surely depends on how you are mixing in the sources.

          If the attacker knows one of the sources, then you can just say that source is always zero (or whatever fixed value).

          If you are mixing in sources by something as simple as an XOR, then you are XORing in zero - which has no effect.

          With the correct mixing algorithms entropy can only be increased, not decreased, by mixing in other sources.

          Where the Netscape issue came from was probably that they started off with some really crappy sources then combined them and saw a statistical spread that made them think they had a good result.

          1. btrower

            Re: Linus is totally wrong

            @Charles Manning

            Re: "With the correct mixing algorithms entropy can only be increased, not decreased, by mixing in other sources."

            Quite correct and nicely put.

          2. Michael Wojcik Silver badge

            Re: Linus is totally wrong

            Where the Netscape issue came from was probably that they started off with some really crappy sources then combined them and saw a statistical spread that made them think they had a good result.

            While Charles has the right of this argument, and Kebabbert (who ironically is lecturing other people about studying cryptography, while displaying a rather glaring ignorance of the subject) is wrong in most particulars, I have to admit I'm growing a bit annoyed at the number of people making offhand references to the Netscape crack without bothering to look up the details. Pro tip: with the help of this new-fangled Internet, it's pretty easy to find out what happened.

            Netscape's original SSL implementation was broken in 1995 or 1996 by Ian Goldberg and David Wagner. You can read their DDJ article about it, but the short version is that on UNIX systems (other platforms were even weaker) Netscape's CPRNG was seeded with the time of day in seconds and microseconds, the browser process ID (pid), and the browser parent-process ID (ppid). In many cases the last value is 1 (the browser process having been reparented to init), so it often had no entropy. The pid is trivial to extract if the attacker has access to the OS and often easy to estimate even if not, so it has little entropy at best. The time in seconds when Netscape seeded its CPRNG is easy to determine, exactly in some cases or to within a small interval, so it has at best a few bits of entropy. That leaves only the microseconds value - less than 20 bits of entropy, sometimes considerably less.

            That entropy was used to seed MD4 (after passing the values through an LCRNG which didn't do anything cryptographically useful). MD4 is probably a strong mixing function (it was superseded by the more conservative MD5), but with effectively only around 3 bytes of entropy it's trivial to reconstruct the CPRNG seed and sequence.
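
            To put a number on how small that search space is, a back-of-envelope sketch (the window sizes are illustrative assumptions, not Goldberg and Wagner's exact figures; link with -lm):

                #include <stdio.h>
                #include <stdint.h>
                #include <math.h>

                int main(void)
                {
                    uint64_t secs  = 10;      /* wall clock known to within ~10 s */
                    uint64_t usecs = 1000000; /* microsecond field: ~20 bits */
                    uint64_t pids  = 32768;   /* 15-bit pid, often guessable */
                    uint64_t ppids = 1;       /* frequently just init (pid 1) */

                    uint64_t candidates = secs * usecs * pids * ppids;
                    printf("%llu candidate seeds (~%.0f bits)\n",
                           (unsigned long long)candidates, log2((double)candidates));
                    return 0;
                }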

            The SSL 1.0 CPRNG is structurally similar to /dev/urandom. Aside from mixing entropy sources, it's not related to /dev/random. /dev/random does suffer from the potential problem of reduced entropy, but people who want to harp on about that might at least demonstrate they're familiar with some of the large corpus of literature on the subject. Like, say, RFC 1750, from 1994. Or Von Neumann's discussion of techniques for removing bias from random bit streams, from 1951 (whence also his famous "state of sin" line). This is not news, folks.

    2. Werner McGoole

      Re: Linus is totally wrong

      I agree you should use a proven algorithm rather than making your own, but I think you've missed part of the point here. A mathematical algorithm can only produce pseudo-randomness. It still needs to be initialised to a non-predictable value otherwise all computers will generate the same pseudo-random sequence (as I think Android was recently found to be doing).

      So good cryptography also depends on a source of true randomness for seeding the mathematical algorithm (and also for re-seeding it occasionally just in case someone spots the pattern). On Linux, /dev/random is the standard place to go to get that "true randomness". So you don't have a choice here. You can't rely on a mathematical formula. You have to have true randomness derived from a physical, non algorithmic source.

      1. Kebabbert

        Re: Linus is totally wrong

        Werner McGoole,

        Yes, I know all that. I studied cryptography under one of the leading experts in the world. He is world famous, and if you have studied cryptography you have surely heard of him.

        1. Solmyr ibn Wali Barad

          Re: Linus is totally wrong

          "I studied cryptography for one of the leading experts in the world"

          ...and managed to get away quite unscathed.

    3. Anonymous Coward

      Re: Linus is totally wrong

      "The Linux kernel developers seem to have very high thoughts of themselves".

      Perhaps it would do you some good to look at yourself in a mirror and also consider whether you have ever developed anything of value.

      1. Anonymous Coward

        Re: Linus is totally wrong @AC 00:53

        "Perhaps It would do you some good to look at your self in a mirror and also consider if you have ever developed anything of value."

        That has no bearing on the validity of his statement. Not sure which fallacy you're using there - straw man, maybe?

        1. Michael Wojcik Silver badge

          Re: Linus is totally wrong @AC 00:53

          "Perhaps It would do you some good to look at your self in a mirror and also consider if you have ever developed anything of value."

          That has no bearing on the validity of his statement. Not sure which fallacy you're using there - straw man, maybe?

          Argumentum ad hominem. It's a logical fallacy (using Aristotle's terminology and rhetorical scheme) because it is solely an argument about ethos - the standing of the speaker - and not about the facts of the matter. The latter would be logos, hence "logical" fallacy.

          That said, AC's argument is perfectly appropriate for the subjective portions of Kebabbert's rant, and since K has made some rather extravagant claims of expertise in this area and failed utterly to support them, ethos seems to me to be an acceptable register.

          1. oolor

            @ AC: 11th September 2013 06:23 GMT:

            >Yes please. Pointer to it (Kebabbert's thesis)?

            I did a little googling. Based on the incoherence and lack of info this is my best guess:

            http://en.wikipedia.org/wiki/Voynich_manuscript

    4. This post has been deleted by its author

    5. Anonymous Coward

      Re: Linus is totally wrong

      Factually WRONG. If you have one "good" random bitstream and one "crap" bitstream and you XOR them, the result will be at least as good as the "good" bitstream. Of course, the crap one must not be functionally dependent on the good one.

      So I assume the good Mr Knuth made a very idiotic mistake or he didn't have a single good bitstream.

      For engineering purposes: run a counter from 1 to 2^64 and perform 3DES (with some 112-bit key you get from hitting the keyboard randomly) on it. That will be sufficient for all your needs, believe me on this. Most people will even be OK with an RC4 stream.

      That's actually how people should do it if they have NSA in their security threat model.
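
      A minimal sketch of that counter construction, using OpenSSL's legacy DES API (deprecated in OpenSSL 3.x, but it illustrates the idea; link with -lcrypto). The key bytes below are placeholders, not a keyboard-mashed key:

          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>
          #include <openssl/des.h>

          int main(void)
          {
              /* Two-key 3DES (112 bits): K3 = K1. */
              DES_cblock k1 = {0x6b,0x5d,0x31,0x7a,0x9e,0x42,0x08,0xc3};
              DES_cblock k2 = {0x15,0xf0,0x83,0x2c,0xd7,0x49,0xba,0x61};
              DES_key_schedule ks1, ks2, ks3;

              DES_set_key_unchecked((const_DES_cblock *)&k1, &ks1);
              DES_set_key_unchecked((const_DES_cblock *)&k2, &ks2);
              DES_set_key_unchecked((const_DES_cblock *)&k1, &ks3);

              /* Encrypt an incrementing counter; the ciphertext is the stream.
               * In principle the counter can run all the way to 2^64. */
              for (uint64_t ctr = 1; ctr <= 4; ctr++) {
                  DES_cblock in, out;
                  memcpy(in, &ctr, sizeof in);
                  DES_ecb3_encrypt((const_DES_cblock *)&in, &out,
                                   &ks1, &ks2, &ks3, DES_ENCRYPT);
                  for (size_t i = 0; i < sizeof out; i++)
                      printf("%02x", out[i]);
                  printf("\n");
              }
              return 0;
          }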

      1. Kebabbert

        Re: Linus is totally wrong

        @Duke Arco of Bummelshausen

        "...That will be sufficient for all your needs, believe me on this. Most people will be even OK with an RC4 stream...."

        The same RC4 that NSA might have broken?

        http://www.theregister.co.uk/2013/09/06/nsa_cryptobreaking_bullrun_analysis/

  11. loneranger

    Who can tell?

    I have respect for Torvalds, but the NSA has literally hundreds or thousands of PhD mathematicians working on encryption and breaking encryption. So, weighing Torvalds' smarts against all that brainpower, computing power, and sheer money power, who can say for sure whether his random function/method is compromised or not?

    1. Kebabbert

      Re: Who can tell?

      Mixing random generators is never a good idea; it weakens everything if not done correctly. If you had studied the subject you would know this. But a mere Linux developer would of course believe he knows everything.
