Torvalds shoots down call to yank 'backdoored' Intel RdRand in Linux crypto

Linux supremo Linus Torvalds has snubbed a petition calling for his open-source kernel to spurn the Intel processor instruction RdRand - used for generating random numbers and feared to have been nobbled by US spooks to produce cryptographically weak values. Torvalds branded Kyle Condon, the England-based bloke who created the …

COMMENTS

This topic is closed for new posts.
  1. Destroy All Monsters Silver badge
    Facepalm

    > impossible to audit

    Yeah, some people haven't heard about statistical tests for randomness.
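
    For the curious, here is a minimal sketch of the simplest such test - the monobit frequency test from the NIST SP 800-22 suite - in C, reading a byte stream from stdin. The 0.01 cut-off is that suite's standard significance level.

    #include <math.h>
    #include <stdio.h>

    /* Monobit frequency test: count ones vs zeros and report the
     * NIST SP 800-22 P-value. A fair source yields P >= 0.01 about
     * 99% of the time; note a pass only rules out gross bias. */
    int main(void)
    {
        long ones = 0, bits = 0;
        int c;
        while ((c = getchar()) != EOF) {
            for (int i = 0; i < 8; i++)
                ones += (c >> i) & 1;
            bits += 8;
        }
        if (bits == 0)
            return 1;
        double s = fabs(2.0 * ones - bits) / sqrt((double)bits);
        double p = erfc(s / sqrt(2.0));     /* two-sided P-value */
        printf("%ld bits, P = %.4f -> %s\n", bits, p,
               p >= 0.01 ? "consistent with random" : "suspicious");
        return 0;
    }

    Compile with -lm and feed it, say, head -c 1000000 /dev/urandom.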

    1. asdf

      no surprise

      Randomness is a remarkably complex mathematical topic. It seems easy enough, but it is remarkably easy for the layman (like me, sadly) to screw up and misunderstand. Still, my guess is he means it might spit out verifiable random numbers today, but when some magic date is hit (around 9/11 each year, for example) it suddenly, by a slight bit, doesn't. Just trusting a black box and verifying outputs has more than once in science been accepted as dogma and held things back.

      1. This post has been deleted by its author

    2. Yet Another Anonymous coward Silver badge

      Statistical tests for randomness DO NOT tell you if somebody else knows the sequence.

      There are published books of random numbers, they pass all the tests for randomness (as does pi) but the next number in the sequence isn't a secret if you wrote the book.
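
      To see the point in miniature, a sketch in C (libc's rand() stands in for any deterministic generator; it is not cryptographic): whoever knows the seed has, in effect, written the book.

      #include <stdio.h>
      #include <stdlib.h>

      /* The same seed reproduces the same "random" sequence exactly,
       * however well that sequence scores on statistical tests. */
      int main(void)
      {
          for (int run = 0; run < 2; run++) {
              srand(12345);              /* the author's secret */
              for (int i = 0; i < 5; i++)
                  printf("%d ", rand()); /* identical both runs */
              printf("\n");
          }
          return 0;
      }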

      1. Charles Manning

        pseudo-randomness

        Quite correct. Stuff that is not at all random can often look random if you don't know the pattern.

        Such pseudo-randomness is, for example, used in the encoding of GPS signals.

    3. Androgynous Cupboard Silver badge

      Lies! Damn lies!

      I'll give you two feeds - one of random data, the other an AES256-encrypted stream of bytes from an initialization vector only I know. Let's see if you can tell them apart.
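
      A sketch of how the second feed could be made, assuming OpenSSL's EVP API (the key and IV values here are placeholders): encrypting zeros with AES-256 in CTR mode yields the raw keystream, which passes every statistical battery yet is fully determined by the key.

      #include <stdio.h>
      #include <openssl/evp.h>

      /* Emit ~1 MiB of AES-256-CTR keystream to stdout: statistically
       * indistinguishable from random unless you hold key and IV. */
      int main(void)
      {
          unsigned char key[32] = { 0x42 };  /* placeholder secret */
          unsigned char iv[16]  = { 0x24 };  /* placeholder counter */
          unsigned char zero[4096] = { 0 }, out[4096 + 16];
          int outl;

          EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
          EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv);
          for (int i = 0; i < 256; i++) {
              EVP_EncryptUpdate(ctx, out, &outl, zero, sizeof zero);
              fwrite(out, 1, outl, stdout);
          }
          EVP_CIPHER_CTX_free(ctx);
          return 0;
      }

      Pipe that and /dev/urandom through the same test suite and compare.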

    4. Christian Berger

      You cannot test for randomness, you can only test for not having certain regularities.

      Decent pseudo-random generators will look like true randomness, but they aren't.

      1. Anonymous Coward
        Anonymous Coward

        "Random"

        There is no such thing as "random", at least according to some researchers who tested the statement. "Random" is only our human inability to view a large enough sample of the event in order to perceive the pattern - there is always a pattern; it is simply that we do not understand it. (Pi comes from the universal ratio of a circle's circumference to its diameter, so the source of that mathematical construct is the universal presence of the circle in life itself.) "Unpredictability", as Wiki's entry for "random" puts it, is truly what we humans interact with: our (distinctly) human inability to understand where an event will lead in the future.

        If you understand that humans simply cannot see the "Big Picture" to break the code of "randomness", then you'll also accept the fact that events like the NSA breaking codes is inevitable: they used a large enough sample (enough computing power) to discern the patterns.

        1. Kebabbert

          Re: "Random"

          @Anonymous Coward,

          What "there is not such thing as random"? Have you read professor Chaitin's work on algorithmic information theory where they define the very concept of randomness? Read it and then come back.

          (I apologize, but it is funny how much you sound like a Linux kernel developer who thinks he knows everything, when in fact he has not studied the subject and knows nothing about it. Hubris is what all Linux kernel developers display. But I am not accusing you of this, I am just saying it sounds a bit funny.)

        2. Schultz
          WTF?

          There is no such thing as "random"

          This statement is fundamentally false. Even if we happened to live in a completely deterministic universe, the laws of quantum physics would only allow us probabilistic statements about the present and the future. In other words, it is impossible to tell from within the universe whether the universe is deterministic, and it is therefore impossible to predict the future.

          As we cannot predict the future, any measurement is probabilistic and contains true randomness in the sense of unpredictable outcomes or fluctuations. This randomness can be very small if we move towards the realm of classical physics, but the uncertainty is still there, it's just very small.

          So if you want true randomness, use a quantum-mechanical measurement, such as described here or here. I'd have to look into it, but I would guess that a random number generator based on a UV LED, a beamsplitter, and two avalanche photodiodes should be quite simple and cheap.

          1. Anonymous Coward
            Anonymous Coward

            Re: There is no such thing as "random"

            So if you want true randomness, use a quantum-mechanical measurement, such as described here or here. I'd have to look into it, but I would guess that a random number generator based on a UV LED, a beamsplitter, and two avalanche photodiodes should be quite simple and cheap.

            No need for such complexity: a simple forward-biased BJT will amplify the noise from inside its junction, which is a quantum-mechanical device. All you have to do is quantize it and remove any bias. There are lots of examples on the web. One would hope that something like this is what drives the random number generator in the Intel chips, but who knows.
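
            One classic way to do the bias removal is von Neumann's extractor: take the quantized bits in pairs, emit 01 as 0 and 10 as 1, and discard 00 and 11. A minimal sketch, assuming the raw bits are independent (which the trick requires) and arrive packed in bytes on stdin:

            #include <stdio.h>

            /* Von Neumann debiasing: biased-but-independent bits in,
             * unbiased bits out. 01 -> 0, 10 -> 1, 00/11 dropped. */
            int main(void)
            {
                int c, byte = 0, nbits = 0, prev = -1;
                while ((c = getchar()) != EOF) {
                    for (int i = 7; i >= 0; i--) {
                        int bit = (c >> i) & 1;
                        if (prev < 0) { prev = bit; continue; }
                        if (prev != bit) {       /* 01 or 10: keep */
                            byte = (byte << 1) | prev;
                            if (++nbits == 8) {
                                putchar(byte);
                                byte = nbits = 0;
                            }
                        }
                        prev = -1;               /* pair consumed */
                    }
                }
                return 0;
            }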

        3. Dagg Silver badge
          Devil

          Re: "Random"

          >There is no such thing as "random", at least according to some researchers who tested the statement.

          Incorrect: try quantum fluctuations. Quantum mechanics runs on randomness.

          The quantum noise from a diode is random and is used in random number generators. I don't know exactly what the Intel engineers used, but basing it on the random quantum noise from a semiconductor junction would appear to be the easiest method.

          However, if there is any post processing of the random number...

          1. RandSec

            Re: "Random"

            @Dagg: "The quantum noise from a diode is random and is used in random number generators. I don't know exactly what the intel engineers used, but basing it on the random quantum noise from a semi conductor junction would appear to be the easiest method."

            Noise from a reverse-biased diode in breakdown is analog, not digital. Typically, the better it is, the tinier it is, and so the harder to distinguish from amplifier noise, sampling, and A-to-D conversion. It is also not random, in that it has both an uneven statistical value distribution and long-term correlations between values. Indeed, the closer one looks, the worse the situation is. In theory it should work, but in practice it requires deep understanding for serious results.

            The investigation of a randomness device needs an unusual mindset, in the sense that simply claiming the sequence to be random is not enough, and statistical measures also are not enough. We know this because we could record any "random" sequence, and any test which would certify those results would be wrong when we re-use the sequence. But most designers will claim that their result "must" be random and stop, as they eventually must, simply to get on with life. For there is no natural end to this investigation. And the moment problems are found, and the device fixed, the investigation starts again.

            Given a physically-random RNG device which passes basic tests, the only recourse is for someone other than the designers to invest extraordinary and unrewarding effort to expose pattern in the results. And if the results actually do cause change in the design, the only recourse is to do it all over again.

            All of cryptography suffers from this. No cipher is proven secure in use. Generally speaking, all past ciphers have failed. Yet the only way to get a better cipher is to somehow, after years of work and as pure public service, find a problem in the current one. Nowadays, that would be the one approved by the US government, for why would insurance cover using anything else? And the end result would be yet another government-approved cipher.

            The way around this is to not have a standard cipher, but instead have a standard cipher INTERFACE. Allow people to use whatever cipher they want. The more ciphers the better. Do not protect all knowledge in society with a single cipher!

            Then require that 3 ciphers be used in sequence, each with an independent key, and one of the ciphers would be the current standard. This is multiciphering, with a result at least as strong as the standard cipher. Is "costly" multiciphering actually "needed"? Obviously it is, because we never should have trusted any single cipher, and certainly no longer can.

            Note that all data ciphering should take a long random value or "nonce" (n-sub-once), encrypt it under that channel key, and then use the random value as a message key for the actual ciphering. By making the random value very long, we can reduce the impact of non-randomness from a randomness generation system we inherently cannot certify.

            Realistically, though, ciphers start with plaintext and end with plaintext. It is unnecessary to "break" the cipher if one can access the plaintext, and malware bots do exactly that. Once again no tools exist for a normal computer which guarantee to detect a hiding bot. Obviously a prior-instrumented machine can see malware run and hide, but our problem is the normal machines we have, after the fact, and their problem is more about "infection" than malware running. While malware itself is encountered rarely, an infected machine runs malware on every session.

            To address infection, we need to make the equipment not accept infection. That means no current hard drives (including USB flash and SSD), because they are easily written by malware. That also means no video card, because that BIOS could be infected. And it means re-flashing the motherboard and router BIOS periodically. But all of this could be avoided with proper hardware design, and the fact that it is not, is, frankly, suspicious. My guess is that certain organizations appreciate the fact that virtually every machine in the world can be infected, and that the users can do almost nothing about it.

            "However, if there is any post processing of the random number..."

            Because physical quantum noise is tiny, it must be detected by sensitive and error-prone physical processes. A common conceit is that randomness is the goal so "random" errors in detection can be ignored. That is false.

            It is almost universal that the physics and detection mechanics combine to produce a non-flat or uneven statistical distribution from a physical RNG. If we want flat, post processing is not optional.

      2. Anonymous Coward
        Anonymous Coward

        >Decent pseudo-random generators will look like true randomness, but they aren't.

        As do (rather topically) decent ciphertexts... but they also aren't... but they are what Intel openly admits to using to obfuscate the true nature of this "random" stream. If *the source* was *random* it couldn't benefit from obfuscation with the NIST/NSA's own AES. Yet AES ciphertext it is. Obfuscation = snake oil.

        In my view, by far the most interesting, revealing and scandalous piece of this article was this snippet of TT's quote: "I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction..."

        So Intel/NSA is actively going around "pressuring" projects to use its propitiatory "random" data, and only its propitiatory "random" data, to the exclusion of all else! Nothing remotely spooky about that. :O

        1. Anonymous Coward
          Anonymous Coward

          Whoops! propitiatory != proprietary

          Think my computer must have tried to help me with that one.

        2. Anonymous Coward
          Anonymous Coward

          @AC 22:47

          "So Intel/NSA is actively going around "pressuring" projects to use its propitiatory "random" data, and only its propitiatory "random" data, to the exclusion of all else! Nothing remotely spooky about that. :O"

          Any evidence, other than the quote? Interesting how some people so worried about security will take things at face value when it says what they want to hear...

    5. Kebabbert

      As I explain further below, Netscape(?) mixed different random sources (current millisecond, space left on hard disk, etc) with a random number generator - and researchers broke it.

      As Donald Knuth explains in his Art of Computer Programming: mixing random sources is never a good idea. His own home-brewed random generator, which mixed a lot of stuff, had a remarkably short period before repeating itself. Read his book on random number generators. It is obvious that the Linux kernel developers have not, nor have they studied cryptography. Donald Knuth says it is better to rely on a proven, mathematically strong system than to make your own. Read my post further down.

    6. Anonymous Coward
      Anonymous Coward

      "statistical tests for randomness."

      You are such a cute little boy. Have a cookie poisoned by the NSA.

      They will make sure the statistical imbalance is of such a high order that you can look for aeons without finding out what they did, unless you inspect the circuit.

      Think of the following scheme:

      On CPU power-up, one of 120,000 different keys is used to start an RC4 keystream cipher. The output of that generator is then fed out through the RDRAND instruction and sold as "physical randomness".

      The nice men of the gubbermint build a machine for a few million dollars to iterate over said 120k possibilities.
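
      A sketch of what that machine would do, with textbook RC4 and a deliberately invented key-from-index derivation (nobody outside Intel knows the real construction, if any): regenerate each candidate keystream and compare it with observed RDRAND output.

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      /* Textbook RC4: key schedule, then generate 'len' keystream bytes. */
      static void rc4(const unsigned char *key, int keylen,
                      unsigned char *out, int len)
      {
          unsigned char S[256];
          int i, j = 0, t;
          for (i = 0; i < 256; i++) S[i] = i;
          for (i = 0; i < 256; i++) {
              j = (j + S[i] + key[i % keylen]) & 255;
              t = S[i]; S[i] = S[j]; S[j] = t;
          }
          i = j = 0;
          for (int k = 0; k < len; k++) {
              i = (i + 1) & 255;
              j = (j + S[i]) & 255;
              t = S[i]; S[i] = S[j]; S[j] = t;
              out[k] = S[(S[i] + S[j]) & 255];
          }
      }

      int main(void)
      {
          unsigned char observed[16] = { 0 }; /* captured RDRAND bytes */
          unsigned char ks[16];
          for (uint32_t n = 0; n < 120000; n++) {  /* whole "keyspace" */
              unsigned char key[4];
              memcpy(key, &n, sizeof n);           /* invented derivation */
              rc4(key, sizeof key, ks, sizeof ks);
              if (memcmp(ks, observed, sizeof ks) == 0)
                  printf("key index %u reproduces the stream\n", n);
          }
          return 0;
      }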

    7. JCitizen
      Coffee/keyboard

      Let's just assume...

      that the Chinese have already compromised all chips going out to the world. Now - what did you say Mr. Torvalds?

    I have a problem with FOSS where the chip design is not revealed along with all the code, for freedom lovers everywhere.

  2. DrXym

    I doubt it has much to do with randomness

    Want to compromise https? Just get a CA or two in your pocket to give you a root cert, or a bank's signed key or sign your own key for the same domain, or bribe a highly placed employee to do the same, or just plain old fashioned steal the signing certs. Then you can do man in the middle attacks to your heart's content.

    Compromising randomness is likely a far harder proposition.

    1. asdf

      Re: I doubt it has much to do with randomness

      Yeah, centralised trust in SSL certs has been known to be a big fail for at least a decade. Pretty much since it was first proposed.

    2. Yet Another Anonymous coward Silver badge

      Re: I doubt it has much to do with randomness

      Remember, when they were planning this feature the NSA presumably thought the enemy might be Russia or China, or Iran, or Belgium. It's difficult to persuade the KGB to use Google for its root CA.

      Although nowadays it's probable that both the NSA and KGB outsource to Booz-Allen

    3. Anonymous Coward
      Anonymous Coward

      Re: I doubt it has much to do with randomness

      However, a fake certificate will have to use a fake public key and that can be detected by anyone who knows the correct public key. So it's more likely that stealthy listening is done with knowledge of the private key and then the spooks don't have to bother the CA chaps.

      How to get the private key? Well Google probably just gives it to them. Otherwise, there's legal pressure, side-channel attacks, or hacking.

      1. the spectacularly refined chap

        Re: I doubt it has much to do with randomness

        How to get the private key? Well Google probably just gives it to them. Otherwise, there's legal pressure, side-channel attacks, or hacking.

        You've obviously never actually applied for a cert then. You generate your own keys and give the public key to the CA for signing. They never see your private key.

    4. Tom 13

      Re: Compromising randomness is likely a far harder proposition.

      Yes. I noticed the original article was very careful to say 'circumvented or broken' (or words to that effect). And all the articles since then have been, umm, edited for shortness. Yeah, that's the ticket! Edited for shortness, not edited to inflame and/or mislead.

  3. Vociferous

    Easiest way of compromising a random number generator...

    ...is to have it repeat its number series after a few hundred million iterations. Presumably people test for that nowadays, though.

    1. PyLETS

      Re: Easiest way of compromising a random number generator...

      Using AES256, I think you'd need 2**256 iterations before the pseudo-random sequence repeats itself, which is large in relation to the number of atoms and time quanta in the universe and its expected lifetime. Of course, if you know block X in the system you also know block X+1. But if blocks X ... X+n are used as key material, e.g. in a stream cypher where the attacker doesn't get to see any X, the sequence of key material will be unknowable by viewing ciphertext created by XORing plaintext and key. The key effectively becomes the one-time pad, generated once Alice and Bob share secret X, e.g. using Diffie-Hellman.

    2. Anonymous Coward
      Anonymous Coward

      Re: Easiest way of compromising a random number generator...

      No, the easiest way to compromise a pseudo-random number generator (a truly random number generator can never be compromised, except by perverting its correct operation) is to know both the algo and the seed. Unless you're just talking about hunting for skew and similar design failings - in which case, yes, people test for that stuff now. It's precisely what the statistical tests are designed to catch.

      For example, suppose this Intel TPM output was actually a pseudo-random number generated by adapting AES to function as a stream cipher (just as Intel's documentation describes it as being). Then it would certainly LOOK random. It would pass any statistical analyses you might care to throw at it (as it does). But would it be random? That would depend upon what it's being applied to. If it's just an exercise in futility applied to a true random stream for the sheer hell of it (just as Intel's documentation describes it as being) then it too can be considered truly random. If it was the naked stream cipher seeded perhaps by a UUID and a simple on-chip counter/clock (for example) then it would still look random, pass all the tests, differ from everyone else's, etc, but be totally predictable and derived from a vanishingly small set of data.

      So what is it? Unless you're one of a handful of trusted engineers at NSA/Intel, then your only way to know is either:

      1) Crack open a TPM chip and indulge in a spot of etching, microscopy and logic analysis while, no doubt, pitting your wits against the best obfuscation that NSA/Intel can contrive.

      2) Crack AES. Assuming it really is AES as documented.

  4. Anonymous Coward
    Anonymous Coward

    I think Torvalds is losing it

    In this case I simply don't know if he's right or wrong, and quite frankly I also don't quite care any more.

    But I do think Torvalds is really losing it. Some websites even seem to have started collections of Torvalds' outbursts; the one I came across on Paritynews also mentioned another recent outburst regarding ARM/SoC developers:

    "Ok. I still really despise the absolute incredible sh*t that is

    non-discoverable buses, and I hope that ARM SoC hardware designers all

    die in some incredibly painful accident. DT only does so much.

    So if you see any, send them my love, and possibly puncture the

    brake-lines on their car and put a little surprise in their coffee,

    ok?".

    At first I thought this comment was fake(d). The article I mentioned above linked to this entry on the Indiana LKML archive and, being unfamiliar with all the LKML archives, I started digging on lkml.org. And sure enough, the same message is present.

    I think comments like these cross a line, not to mention being dangerous.

    One of the reasons Torvalds bursts out in the way he does is because he feels there's no other way to get his point across. Apparently, according to him, there are a bunch of "stupid people" subscribed to the mailing list and the only way to get his point across is to be blunt and direct.

    I can see that, I don't agree, but each to his own.

    But if people are so "stupid" that you have to yell and rant to get your point across, then why can you trust them to understand that what is being said here is just "an opinion" or maybe even a "joke"?

    That doesn't quite add up for me. Now all of a sudden people are smart enough to understand the "subtleties"?

    As I said at the top, I don't really care that much any more, but I have to wonder how long before this really gets out of control. I hope for Torvalds' sake that no ARM/SoC developer gets himself in a car accident.

    1. asdf

      Re: I think Torvalds is losing it

      I guess we should just be glad one of the world's great coders (the speed at which he got git up and running proved that) only has some sociopathic tendencies. He could be in jail for murder like that idiot Reiser. Lol, as almost all on here know, computer skillz != people skillz.

      1. Anonymous Coward
        Anonymous Coward

        Re: I think Torvalds is losing it

        Yeah, but it still doesn't make it acceptable to behave in this way. Calling on the Internet for people to be killed is pretty poor, even if "it's a joke". We wouldn't put up with this behaviour in a pop star; it's not OK in a programmer, especially one who runs a high-profile project.

    2. sisk

      Re: I think Torvalds is losing it

      There's no doubt that Torvalds suffers from a severe lack of both social skills and tact, but the man is a very talented coder and has done a very good job as the chief maintainer of what has become the most common kernel in the world.

      That said, I've never gotten involved with kernel development partially (and only partially) because of the way he treats people. Much as I respect him as a coder and love Linux I do not and would not tolerate being spoken to in the manner that Torvalds usually speaks to people with whom he has disagreements. I've put people on my ignore lists in several forums for that sort of nonsense. It's one thing to disagree, even to disagree forcefully, but it's entirely another to start hurling insults at the first sign that you might be smarter than the person on the other side of the debate.

      1. Yet Another Anonymous coward Silver badge

        Re: I think Torvalds is losing it

        >suffers from a severe lack of both social skills and tact

        "benefits" from ..

      2. Nick Porter

        Re: I think Torvalds is losing it

        Does Torvalds actually speak to people like this face-to-face or is it only behind the safety of a keyboard? Because I find it extraordinary that no-one has given his fat face a slap so far.

        1. Yet Another Anonymous coward Silver badge

          Re: I think Torvalds is losing it

          You have to get past his legions of killer penguins (*with laser beams)

        2. Don Jefe

          Re: I think Torvalds is losing it

          Meh, the IT industry really isn't known for satisfying bruised egos with fisticuffs. People get even in different ways; a good example is that picture El Reg uses of Torvalds where he looks like an angry toilet paper salesman flipping you off for choosing a different brand. The industry has a long memory too. I kind of think it is worse than getting belted in the gob, which you get over quickly. Bruised egos take a lot longer to heal.

          Every industry has a way of dealing with assholes and to be honest, Torvalds just happens to be 'famous' and have a shitty attitude. There are many, many people in the IT industry who are nameless faces but they're known for being raging dickheads. I don't know why, but piss poor attitudes are quite prevalent in this industry.

          There are four welders in our machine shop though and I'm pretty sure they'd put a cigarette out in your eye if you talked to one of them like that. But that's their industry, they electrocute each other for fun...

          1. h4rm0ny

            Re: I think Torvalds is losing it

            "But that's their industry, they electrocute each other for fun..."

            Which, conversely, a lot of office workers or programmers would have them done for assault over. :D Different people have different ways of communicating. I kind of resent this insufferable choking culture of tissue-paper softness that is being forced down from above. I'd far rather have Linus's transparent position than a lot of the nicey-nicey double-dealing I've had to put up with from others.

        3. h4rm0ny

          Re: I think Torvalds is losing it

          "Does Torvals actually speak to people like this face-to-face or is it only behind the safety of a keyboard"

          I've seen him present (on GIT and the failings of the CVS / Subversion model). He began the presentation by saying "You can disagree with me if you want, but if you do then you're stupid and you're ugly". And you know what? It got a good laugh from the crowd. He's not only very smart, he's also quite funny. I think he may be wrong on this, but I have no problem with the way he communicates.

          1. Kebabbert

            Re: I think Torvalds is losing it

            @h4rm0ny

            "...I've seen Torvalds present (on GIT and the failings of the CVS / Subversion model). He began the presentation by saying "You can disagree with me if you want, but if you do then you're stupid and you're ugly". And you know what? It got a good laugh from the crowd...."

            You know, they would laugh at anything he said. They are his worshippers, and he is their God. He is flawless in their eyes. Even if Torvalds insulted and humiliated them, they would gladly accept being peed upon. They are brainwashed, a sect.

            No sane person would accept Torvalds' behavior, as we can see in this thread.

      3. Anonymous Coward
        Anonymous Coward

        Re: I think Torvalds is losing it @sisk 18:42

        "but it's entirely another to start hurling insults at the first sign that you might be smarter than the person on the other side of the debate."

        In his case I'd say it's more a case of the moment he thinks he's smarter. I would respect his coding skills, but his being a monumental dick tends to eclipse them. Since one of the most touted benefits of open source is that open collaboration leads to good software (in my opinion it should be "can lead to"), maybe a bunch of more level-headed people should be looking at ways to reduce dependence on him and his approval.

    3. Crazy Operations Guy

      Re: Idiots on the mailing list

      If he is tired of idiots on the mailing list, then why not just set up a whitelist for who can send to it? A lot of projects do this: only a small group is allowed to post to the mailing list, but anyone can subscribe to it. In this case, limit the people who can post to just the kernel devs themselves and maybe one or two exceptions. And if these idiots are kernel devs, what is he doing letting people like that do such critical work?

      But what do I know? I'm just a 'Masturbating Monkey'. http://article.gmane.org/gmane.linux.kernel/706950

    4. Anonymous Coward
      Anonymous Coward

      Re: I think Torvalds is losing it

      Torvalds' current pattern of actions is all too typical of men who see themselves as "successful" - it is simply called THE GOD COMPLEX. It is all too common in today's society - I am successful, so I know more about everything than you do - and it is what has poisoned society for the rest of us.

      In a nutshell: it is male egotism, multiplied by an exponent of money and/or social acknowledgement. From bankers to lawyers to politicians to businessmen to coders, we live in the shadow it casts across the land.

      We are truly doomed.

    5. Anonymous Coward
      Anonymous Coward

      Re: I think Torvalds is losing it

      I'd rather have Torvalds losing it than Ballmer

      1. John Smith 19 Gold badge
        Unhappy

        Re: I think Torvalds is losing it

        "I'd rather have Torvalds loosing it than Ballmer"

        Maybe it's a Finn thing?

    6. Don Jefe

      Re: I think Torvalds is losing it

      What you are talking about is called "Management by Intellectual Intimidation". It requires the 'manager' to be put on a pedestal by others who consider him to be intellectually superior to themselves. It is a form of hero worship/veneration.

      By exploding in a fit of temper you cause your 'subordinates' to carry your torch for you. The person you're exploding at is obviously so inferior you can't be bothered to stoop to his level and explain things. So your private assault force takes care of it for you. It is an effective form of management. I don't like it but it does make things easier on the 'manager'. His people do all the work.

      As an interesting note: This practice, or rather attempts at it, runs rampant in IT. The big difference is that others must place you on the intellectual pedestal for it to be considered a management style. If a person puts themselves on the pedestal and does this they're just being arrogant dicks. Either way it is not an accurate measure of intelligence, just people's perception of intelligence.

    7. Charles Manning

      He's right about ARM DT though

      As someone who works with this stuff on a day-to-day basis, I can tell you it is a friggin' mess.

      I probably wouldn't doctor coffee or tamper with brake lines though.

    8. The First Dave

      Re: I think Torvalds is losing it

      No doubt about it - lesson one in random numbers is that you can't improve the randomness of a pseudo-random stream by combining it with another pseudo-random stream, so WTF is Torvalds on about?

  5. Anonymous Coward
    Anonymous Coward

    Torvalds has a point... Of sorts...

    RdRand's contribution is diluted quite a bit on a modern system. So from that perspective Torvalds is right.

    From the perspective of TinFoil Assurance... When you read the architecture for the RdRand implementation, it is quite clearly specified that it is "postprocessed" to ensure that it can deliver a high-rate random stream compliant with a particular set of FIPSes. It is not a raw entropy source, which is the significant difference between it and, for example, the Via C7 implementation or some of the older hardware random generators (those you have to feed into a pseudorandom generator to produce a high-rate stream).

    In any case, I am not going to take Linus's judgement on this - I will wait and see what the Theo de Raadt tinfoil brigade does with the OpenBSD random number generator. That will say all that needs to be said about the quality of that instruction's random numbers.

    1. Charles Manning

      What de Raadt does...

      My hunch is that what de Raadt does will be motivated not just by the technical aspects but also by his ideology.

      Very few people would be able to access the inner workings of the instruction to make an informed decision on how it works. Those who reject the instruction just because they don't know how it works are acting "on principle" and likely not on a sound technical analysis.

      Where Kyle Condon's argument falls down is thinking that the magic instruction can undo entropy. Even if it was hardwired to emit 0, it would not compromise the additional entropy sources.

  6. Anonymous Coward
    Anonymous Coward

    Read the source

    I'm no C programmer but even I can read comments and verify a function does as it says. This is the header for the "arch" RNG access function. This is not the function that is used for /dev/random; it is probably relegated to a last resort for /dev/urandom, which doesn't block when it runs out of entropy as /dev/random will.

    I think the point is clear.

    /*
     * This function will use the architecture-specific hardware random
     * number generator if it is available. The arch-specific hw RNG will
     * almost certainly be faster than what we can do in software, but it
     * is impossible to verify that it is implemented securely (as
     * opposed, to, say, the AES encryption of a sequence number using a
     * key known by the NSA). So it's useful if we need the speed, but
     * only if we're willing to trust the hardware manufacturer not to
     * have put in a back door.
     */

    Cheers

    Jon

    1. Yet Another Anonymous coward Silver badge

      Re: Read the source

      Which is perfectly reasonable.

      Use the Intel RNG if you are doing Monte Carlo simulations - use real random numbers if you are encrypting your plans to kill the president (of Belgium)

      1. Yag

        "...your plans to kill the president (of Belgium)"

        They're right next to the plans to kill the king of the USA I suppose.

  7. Doug Bostrom

    Back to the crib

    "...conspiracy theorists are terrified that RdRand is compromised. "

    Only days later and we're back to "terrified conspiracy theorists."

    How we do love a comforting story (or insult) instead of facts, eh?

    1. Don Jefe

      Re: Back to the crib

      Whether it has been compromised or not, people won't want to admit it. Nobody likes being on the side that got taken advantage of, it is embarrassing. Egos are a powerful thing. People will go far to protect them.

      It is a real problem in all this, standards, systems, processes and products that have been generally assumed to be functioning as advertised are being uncovered as fatally broken. No one really knows how deep the corruption goes but nobody would want to come out and admit their chosen methods were also broken. It's like being an outspoken fan of a great athlete then finding out he's basically an ambulatory large animal pharmaceuticals storage facility and you've tattooed his jersey number on your forehead.

      I have no idea if the thing being discussed is broken or not. It is way out of my field. But I do know people and that no one likes to believe they've been taken advantage of by people they trust. From software icons to journalists all the way down to the person who sweeps the floors. It is fear of being made a fool of that is more dangerous in this than anything else.

      1. Anonymous Coward
        Anonymous Coward

        Re: Back to the crib

        No one really knows how deep the corruption goes but nobody would want to come out and admit their chosen methods were also broken.

        No, we all know that the corruption is very nearly complete.

        We know, for example, that the NSA was convincing Microsoft many years ago of the advantages of making its operating systems 'helpful' to the US government. We now know that since then the NSA and GCHQ have systematically been targeting every part of computing: hardware, operating systems, applications, and inter-system communications. They have coerced and manipulated untold numbers of companies and people to 'assist' them in doing this, and more recently have been legally aided and abetted in all of this by the knee-jerk reaction to a terrible attack on American soil.

        Given the levels of different aspects of computing that they have attacked, and the knowledge that they have many 'big' IT companies involved, anyone trusting anything sensitive to a computer now must be stupid.

        An entire industry compromised by fucking dickheads.

        1. Don Jefe
          Thumb Up

          Re: Back to the crib

          You're right. It is all almost certainly fucked.

          I've just been trying to be more specific in my language lately: know, suspect and think all carry more weight in this conversation than they would have four months ago.

          Trying to discuss it, maintain awareness of it and not come off as a complete nutter or, even worse, a complete nutter from way back who has now been proven correct is kind of a fine line, ya know.

          1. Anonymous Coward
            Anonymous Coward

            @ Don Jefe

            When we were kids it was the bad guys who wanted to control everything; you saw it in all the films, on all the TV programs, and read it in all the books. Yet somehow we seem to have found ourselves living in a world where it's the people who are supposed to be the 'good guys' who are behaving like that. I can't help feeling that somehow the plot line has got badly mixed up... if only it were a film.

  8. Anonymous Coward
    Anonymous Coward

    "...but it's claimed that mix is trivial (involving just an exclusive OR) and can be circumvented by g-men."

    Erm, "claimed" by whom? That statement is just wrong/stupid in so many ways it beggars belief.

    "Trivial"? Go and read drivers/char/random.c

    "just an exclusive OR"... "just"? No idea what a stream cipher is then... or how THE ONLY UNBREAKABLE cipher - a one time pad - is used.

    "can be circumvented by g-men." Really? Any chance of an elaboration/reference on that? No? Thought not.

    As long as the suspect stream is *a* source of entropy, not *the* source of entropy, and is *thoroughly* mixed into the pool of other sources (as it is), then even if it's malignant it still can't damage the overall entropy of the system, even a tiny bit.

    Simple thought illustration: I have a byte of well-mixed random data derived from multiple entropy sources. I shall now inadvertently "just" XOR it against a patently malicious quasi-random stream from the NSA - eight 0s. What is my random byte now?
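
    (Answer: exactly as random as it was before. A trivial C rendering of the same thought experiment; the hostile byte is zero here, but any value the attacker knows behaves identically.)

    #include <stdio.h>

    /* XOR-mixing: the result is unpredictable so long as EITHER
     * input is unpredictable. A known byte destroys nothing. */
    int main(void)
    {
        unsigned char good = 0xA7;  /* stand-in well-mixed pool byte */
        unsigned char evil = 0x00;  /* the NSA's finest: eight 0s */
        printf("pool 0x%02X ^ hostile 0x%02X = 0x%02X\n",
               good, evil, (unsigned char)(good ^ evil));
        return 0;
    }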

  9. Anonymous Coward
    Anonymous Coward

    Torvalds needs a paranoia transplant

    Torvalds is correct, in that if you have some random numbers in your entropy pool and XOR some new numbers into them (from the CPU), then you increase the entropy even if those new numbers have very little entropy themselves (i.e. are somewhat predictable). Even if the new numbers are completely predictable, you still do no harm.

    However, I'd disagree with Torvalds that this therefore makes it all OK. That's because you still need to estimate how much entropy you've accumulated. To produce random bytes by hashing, you need an estimate of the entropy per byte in your entropy pool because this determines how many pool bytes you need to feed into a hash function to produce each of the random hashes you'll actually use.

    If the CPU is believed to be supplying most of the entropy (because it's the fastest source) but in fact it's producing a predictable sequence, then you will have far less entropy in your pool than you thought. I can see that might be a genuine cause for concern because any secure key you then generate may have less entropy than you thought too (i.e. its bits may not be independent). Yes, exploiting this might require cracking a SHA hash, but that's the sort of advantage that it's plausible for the NSA to have.

    So my approach would be to keep using RdRand but to downgrade its entropy estimate by a large factor to reflect its now much-reduced trustworthiness.
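
    In code terms, a sketch of that bookkeeping with invented numbers (the kernel's real accounting is more involved): mix in everything, but credit the suspect source at a steep discount.

    #include <stdio.h>

    /* Illustrative entropy crediting, not the kernel's real policy:
     * mixing never hurts, so mix all sources; crediting is where
     * trust enters, so a suspect source earns almost nothing. */
    #define TRUSTED_BITS_PER_BYTE 7   /* e.g. interrupt timings */
    #define SUSPECT_BITS_PER_BYTE 1   /* e.g. downgraded RdRand */

    int main(void)
    {
        long credited = 0;
        credited += 64 * TRUSTED_BITS_PER_BYTE; /* 64 trusted bytes */
        credited += 64 * SUSPECT_BITS_PER_BYTE; /* 64 RdRand bytes */
        printf("128 bytes mixed, %ld bits credited\n", credited);
        return 0;
    }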

    1. Kebabbert

      Re: Torvalds needs a paranoia transplant

      Netscape mixed different random sources and introduced a pattern, so it was breakable. Donald Knuth says never to mix stuff; instead, rely on a proven, mathematically strong design. Just because you cannot break your own crypto does not mean it is safe. Read my post further down.

  10. Anonymous Coward
    Linux

    So the NSA got to Torvalds?

    Did they grab a family member? Hand him a bag full of cash?? (/joke off)

    Tux--in need of a penguin-sized tinfoil hat.....

  11. T. F. M. Reader

    drivers/char/random.c

    A few comments after throwing another glance at random.c in a recent version of the kernel code - it's been a few years since the last time:

    * Assume that rdrand is not reliable. Yes, one can run a battery of tests on its output, but note that the best-known battery of tests comes from NIST, and it's been alleged that the NSA has influenced NIST. The counter-argument is that rdrand is mixed with other sources of entropy, so it is OK.

    * These other sources of entropy are: user input, disk seek times, and interrupt times. In servers there is no input to speak of (no keyboard or mouse attached). The randomness of disk seek times is due to the turbulence generated in the thin layer of air between the rapidly rotating magnetic disk and its enclosure. Once magnetic disks give way to SSDs this source will disappear. Interrupt times can be affected by external sources (a quick - too quick - glance at the code leads me to believe nothing in the implementation of add_interrupt_randomness() in drivers/char/random.c, or in the only call to it from handle_irq_event_percpu() in kernel/irq/handle.c, distinguishes between interrupts), e.g. if my server does a lot of networking I expect most interrupts to come from network cards, and it is at least theoretically possible to send a lot of packets at regular intervals to the server to reduce the overall randomness of this component. This is why historically network cards were excluded from the entropy pool. This last potential problem is probably mitigated to a large extent by taking only the least significant bits into account.

    * The total amount of entropy is limited (without rdrand). It would be exhausted rather quickly if random numbers were used to encrypt everything, to run Monte Carlo simulations, etc. It would also be rather slow. However, normally the random numbers are only used to generate the seed for a PRNG (much faster). Hopefully encryption software does not use the PRNGs from standard libraries (they are not very random). However, even a good PRNG is by definition deterministic if you know/guess/recover the seed. The output is statistically indistinguishable from random, but random it is not. Once you've covered the seed space the sequence is known (that's not all there is to encryption, of course, but it is a significant part).

    * Hopefully there is enough entropy for seeds even if rdrand is used. However, if rdrand is randomized and it is a major contributor to the entropy pool, I would expect the overall randomness to be lower than without it. This by itself is not enough to demand that Linus gets rid of it, but it is a theoretical concern. See Ted Ts'o's blurb quoted in the article.

    * I expect it should be considerably easier for NSA to break into most computers exploiting bugs in various programs than cracking somewhat weakened random sequences. I am sure they are ready to use all the attack vectors where needed.

    1. Charles 9

      Re: drivers/char/random.c

      There is research into alternative sources of entropy from other parts of the CPU. Given a sufficient workload, the registers and other internal workings of the CPU are volatile enough to create a source of entropy (this is the theory behind HAVEGE). Perhaps more research into other independent sources of entropy could be done (though I can't think of any off the top of my head that couldn't be subverted in some way).

    2. Charles Manning

      Re: drivers/char/random.c

      I call you out on two points sir:

      * "one can run a battery of tests". These tests are limited in what they can produce. They are useful for testing simulation-level randomness for mathematical modelling, not security.

      * You assume far too much when it comes to the seek times of disks being predictable and SSDs being even more predictable. SSDs have flash inside, which takes a variable amount of time to write/erase. Interrupt times have a large jitter due to other stuff happening on the system - even memory caching has an impact. Network cards still have an impact because servicing them adds jitter (i.e. entropy) to other interrupts.

      1. Anonymous Coward
        Anonymous Coward

        Re: drivers/char/random.c

        I call you out on two points sir:

        * "one can run a battery of tests". These tests are limited in what they can produce. They are useful for testing simulation-level randomness for mathematical modelling, not security.

        In fairness, he was saying "one can run a battery of tests but it won't help" - which is exactly what you're saying, although your reasonings differ. Personally, I very much doubt NIST is rigging those tests in the hope of gaming the cryptography industry. Quite the reverse, in fact. Credibility is EVERYTHING in the security/subterfuge realms, and it's hard to earn. It would be imperative to earn sufficient credibility for Trojan horses to be widely accepted, and obscure, harmless little projects and tools like those are perfect grist for the task. In the early days the NSA used to do all this itself - such as when it fucked over IBM's Lucifer... win cred by spotting and fixing a weakness while at the same time crippling the cipher's strength, then quickly rubber-stamp it. Of course, the giving with one hand while taking with the other is a bit obvious. So now we have NIST and the NSA. NIST does the giving while the NSA does the taking away. A sort of good-cop/bad-cop routine, if you like. So that's OK then - we can all just trust the "good cop" - 'cos we're complete cretins - there's no way the two US government security agencies could possibly be working in collusion.

  12. Gene Cash Silver badge

    Android

    Does anyone know what the Android code does? I know it's weak enough to have compromised Bitcoin wallets, but I haven't looked at it myself.

    1. Charles 9

      Re: Android

      Android is based largely on Linux, and its /dev/random IIRC isn't too different from its predecessor's. However, since most Android devices use ARM, it doesn't have access to a hardware RNG. It can draw on a number of sources of "noise", like network transmissions and user input, to help with the entropy issue, but perhaps it lacks the entropy for a more serious implementation.

  13. fortran

    drivers/char/random.c (T.F.M. Reader)

    What I learned of numerically intensive computing is: if your code needs random numbers, you go and find an RNG that suits your needs. If your RNG needs a random seed to start, you can call /dev/random once for that seed. But making a string from the process id, the time, free space on partitions and whatnot, and running that through something like MD5 for your seed is probably about as good. But you don't use /dev/random for general user programming. And look at the source of your RNG to make sure it isn't using /dev/random in some way.
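
    A sketch of that pattern in C, under those assumptions: one blocking read from /dev/random for the seed, then a fast userspace generator (xorshift64 is just a stand-in for whichever RNG suits the job).

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t state;                    /* PRNG state, seeded once */

    /* Marsaglia's xorshift64: fast, fine for simulation, NOT crypto. */
    static uint64_t xorshift64(void)
    {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        return state;
    }

    int main(void)
    {
        FILE *f = fopen("/dev/random", "rb"); /* one blocking read */
        if (!f || fread(&state, sizeof state, 1, f) != 1)
            return 1;
        fclose(f);
        if (state == 0)
            state = 1;                        /* xorshift can't start at 0 */
        for (int i = 0; i < 4; i++)
            printf("%016llx\n", (unsigned long long)xorshift64());
        return 0;
    }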

  14. btrower

    Linus is correct in both form and substance.

    I too looked at the code.

    Thought experiment:

    Case 1: the stream, prior to mixing in RDRAND, has been encrypted with a secure one-time pad.

    Good RDRAND or bad RDRAND makes no difference in this case; you cannot inspect the plain text without the one-time pad key.

    Case 2: the stream, prior to mixing in RDRAND, has been encrypted with a non-secure key.

    Use good RDRAND and it is stronger. Use bad RDRAND and it is no stronger, but it is no weaker either.

    No matter how compromised RDRAND is, the worst it can do is leave the stream as strong as it would be without RDRAND.

    Practically speaking, you can expect RDRAND to add good entropy to most things for most purposes.

    I do not trust the NSA and I think it would be foolish to *rely* upon RDRAND, but a cursory examination of the file below shows that the Linux kernel gets the use of any entropy there and is unharmed by any compromise, no matter how extreme:

    http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c

    1. Werner McGoole

      Re: Linus is correct in both form and substance.

      But on Linux, /dev/random is supposed to produce *true* randomness, with full entropy. Its output should be completely unpredictable by an adversary who even knows the exact state of the rest of your system and all the past output. There is no scope for pseudo-randomness or imperfect entropy in /dev/random. If you try to read random bytes and there isn't enough entropy, it must block.

      If you want a non-blocking source of randomness, you read /dev/urandom instead, which uses a pseudo-random number generator seeded from /dev/random. So the quality (true randomness) of the entropy harvested for use in /dev/random IS critically important. If the sources used don't have full entropy, you need to "condition" the data before use, which is a way of concentrating its entropy. For example, you might want to take the "random" CPU data in 1MB chunks and hash each of those down to 64 bytes. Then you could be more confident of having truly random bytes.
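
      A sketch of that conditioning step, assuming OpenSSL's one-shot SHA512() and using /dev/urandom as a stand-in for the weak source; the 1MB-to-64-bytes ratio follows the example above.

      #include <stdio.h>
      #include <openssl/sha.h>

      /* Condition weak entropy: hash a 1 MiB chunk down to 64 bytes.
       * If the chunk holds >= 512 bits of entropy in total, the
       * digest is close to full-entropy. */
      #define CHUNK (1024 * 1024)

      int main(void)
      {
          static unsigned char raw[CHUNK];
          unsigned char digest[SHA512_DIGEST_LENGTH];

          FILE *f = fopen("/dev/urandom", "rb"); /* stand-in source */
          if (!f || fread(raw, 1, CHUNK, f) != CHUNK)
              return 1;
          fclose(f);

          SHA512(raw, CHUNK, digest);            /* 1 MiB -> 64 bytes */
          fwrite(digest, 1, sizeof digest, stdout);
          return 0;
      }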

      Let me explain why this is important. If you use a pseudo-random number generator (PRNG) to generate a key with a fixed seed, your random numbers obviously won't fill the keyspace* - because it will only ever produce one output sequence. But what people don't seem to realise is that if you seed it with "random" numbers that don't have full entropy, the output *still* won't fill the keyspace. It may look perfectly random and be unpredictable, but an adversary who understands the PRNG well enough doesn't have to search the entire keyspace equally to discover the key.

      So you need to be exceptionally paranoid about /dev/random.

      *By which I mean that the probability of each possible sequence of output bits won't be equal.

      1. Gordon 11
        Coat

        Re: Linus is correct in both form and substance.

        Its output should be completely unpredictable by an adversary who even knows the exact state of the rest of your system and all the past output.

        But surely, at that point Heisenberg would point out that you cannot tell the clock-speed of the system, so cannot predict anything about its future?

      2. Gordon 11

        Re: Linus is correct in both form and substance.

        But on Linux, /dev/random is supposed to produce *true* randomness,

        Has anyone tried using background radio white noise to help with this? I can see lots of references to people trying to get rid of white noise, but none to people trying to make use of it.

  15. Frumious Bandersnatch

    random numbers

    I always pick '2'. Nobody expects that.

    1. Anonymous Coward
      Anonymous Coward

      Re: random numbers

      Me too!

      ;D

    2. Anonymous Coward
      Anonymous Coward

      Re: random numbers

      I always pick "The_Spanish_Inquisition"

      Nobody expects that

    3. PJI

      Re: random numbers

      2 is the universal one for uncertainty, because it is similar to a question mark, "?" - that being the brilliant theory behind its use in population models that I was given.

      1. oolor

        Re: random numbers

        There's no such thing as two.

  16. Kebabbert

    Linus is totally wrong

    There was a famous example of this... Netscape(?) did a mix of different random sources. They used a random number generator and added the current millisecond, how much space was left on the hard drive, etc, to create a "truly" random number. But researchers succeeded in breaking it: because they knew what the building blocks were, they could infer things such as "a typical hard drive is this big", etc. So the researchers succeeded in discarding a lot of the search universe, and could decipher everything. It was a lot of work, but it was doable. Mixing different sources does not make better randomness. The Linux kernel developers would have known this if they had studied cryptography (which I have).

    Donald Knuth has a very interesting story on this in his magnum opus, The Art of Computer Programming. He was supposed to create a random number generator many years ago, so he mixed a lot of different random sources, the best he could. And Donald Knuth is a smart mathematician, as we all know. After his attempt he analyzed it and discovered a very short range: it quickly repeated itself. That taught Donald Knuth that he should never try to make a random generator (or cryptosystem) himself; just because you cannot break your own crypto or random number generator, it does not mean it is safe. Donald Knuth concludes in his book that it is much better to use a single well-researched random generator/cryptosystem than to make one yourself. Much better. If you start to mix different sources, you might introduce a bias which is breakable. It suffices if the adversary can discard some numbers in the huge search space to be able to break it.

    So, the NSA and the like would be more concerned if Linus used a proven high-quality random generator. As Snowden said: the NSA can break cryptos by cheating. The NSA has not broken the mathematics. The math is safe, so use a mathematically proven strong random generator instead of making your own. Making your own is very bad, if you have studied basic cryptography.

    The Linux kernel developers seem to have very high thoughts of themselves without knowing the subject. Probably they would also claim that their own home-brewed cryptosystem is safe, just because it is complex and they themselves cannot break it. That would also be catastrophic. They should actually study the subject, instead of having hubris. But with such a leader....

    1. John Gamble

      Re: Linus is totally wrong

      I know of the story you're referring to, and you're mis-stating it. First, the "mixed sources" random number generator used linear congruential generators -- no PC noise, no cryptographic hashing, and no use of Blum, Micali, and Yao's paper published in 1984 (which is referenced in the current edition of Knuth; see page 179). Knuth argued that if you're going to use an LCG random number generator, use one -- don't mix them.

      This obviously has nothing to do with the current situation, and has had nothing to do with modern cryptographic-level random number generators for twenty years now.

      Do Knuth a favor. Stop misquoting him, and buy the latest edition of his The Art of Computer Programming. It is quite worth it.

      1. Kebabbert

        Re: Linus is totally wrong

        "...I know of the story you're referring to, and you're mis-stating it. First, the "mixed sources" random number generator used linear congruential generators -- no PC noise, ..."

        No, you don't. I studied cryptography back then, and I remember that some company - was it Netscape? - used the space left on the hard disk as one of the inputs to create random numbers. They used "PC noise", that is for sure. It seems you have not read the same story as I did.

        1. Anonymous Coward
          Anonymous Coward

          Re: Linus is totally wrong

          You still haven't spotted that you've confused random and pseudo-random? Have another look. That alone really makes a mockery of everything you utter.

          1. Kebabbert

            Re: Linus is totally wrong

            Bla, bla. I know the difference. I did some work on group theory and pseudo-random generators. It turned out that the work was already known, but I did not know that when I started. Do you want to read my thesis on the subject??

            1. Anonymous Coward
              Anonymous Coward

              Re: Linus is totally wrong @Kebabbert

              "You want to read my thesis on the subject??"

              Yes please. Pointer to it?

              1. Anonymous Coward
                Anonymous Coward

                Re: Linus is totally wrong @Kebabbert

                >"You want to read my thesis on the subject??"

                >Yes please. Pointer to it?

                Why, AC? He's splaffed crap here. That fact alone makes it almost inevitable that he's splaffed crap elsewhere. Why would you want to see it? The select highlights we've been treated to already are certainly enough for this AC.

        2. John Gamble

          Re: Linus is totally wrong

          No, you don't. I studied cryptography back then, and I remember that some company - was it Netscape? - used the space left on the hard disk as one of the inputs to create random numbers. They used "PC noise", that is for sure. It seems you have not read the same story as I did.

          Please don't mix and match stories. I was referring to your reference to Knuth's mixed-input RNG, and nothing else. Obviously, his conclusion, which you used repeatedly and wrongly, had to do with linear congruential generators, and nothing else.

          As for Netscape's alleged use of a bad source of randomness, no one is disputing that bad sources of randomness exist. But that has nothing to do with Knuth's example, and has even less to do with current cryptographic random number generators, except as a cautionary tale. At best you are woefully out of date on the state of current technology.

        3. Charles Manning

          Re: Linus is totally wrong

          It surely depends on how you are mixing in the sources.

          If the attacker knows one of the sources, then you can just say that source is always zero (or whatever fixed value).

          If you are mixing in sources by something as simple as an XOR, then you are XORing in zero - which has no effect.

          With the correct mixing algorithms entropy can only be increased, not decreased, by mixing in other sources.

          Where the Netscape issue came from was probably that they started off with some really crappy sources then combined them and saw a statistical spread that made them think they had a good result.
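
          A minimal sketch of the point about mixing, in Python purely for illustration: XORing in a source the attacker fully knows - the worst case being all zeros - cannot subtract from what the good source contributes.

            import secrets

            def mix(a: bytes, b: bytes) -> bytes:
                # XOR-combine two equal-length byte strings.
                return bytes(x ^ y for x, y in zip(a, b))

            good = secrets.token_bytes(32)  # one genuinely unpredictable source
            crap = b"\x00" * 32             # worst case: a source the attacker fully controls

            # Knowing `crap` lets an attacker undo the mix and recover `good`,
            # but that just leaves them facing `good` itself - no entropy lost.
            mixed = mix(good, crap)
            assert mix(mixed, crap) == good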

          1. btrower

            Re: Linus is totally wrong

            @Charles Manning

            Re: "With the correct mixing algorithms entropy can only be increased, not decreased, by mixing in other sources."

            Quite correct and nicely put.

          2. Michael Wojcik Silver badge

            Re: Linus is totally wrong

            Where the Netscape issue came from was probably that they started off with some really crappy sources then combined them and saw a statistical spread that made them think they had a good result.

            While Charles has the right of this argument, and Kebabbert (who ironically is lecturing other people about studying cryptography, while displaying a rather glaring ignorance of the subject) is wrong in most particulars, I have to admit I'm growing a bit annoyed at the number of people making offhand references to the Netscape crack without bothering to look up the details. Pro tip: with the help of this new-fangled Internet, it's pretty easy to find out what happened.

            Netscape's original SSL implementation was broken in 1995 or 1996 by Ian Goldberg and David Wagner. You can read their DDJ article about it, but the short version is that on UNIX systems (other platforms were even weaker) Netscape's CPRNG was seeded with the time of day in seconds and microseconds, the browser process ID (pid), and the browser parent-process ID (ppid). In many cases the last value is 1 (the browser process having been reparented to init), so it often had no entropy. The pid is trivial to extract if the attacker has access to the OS and often easy to estimate even if not, so it has little entropy at best. The time in seconds when Netscape seeded its CPRNG is easy to determine, exactly in some cases or to within a small interval, so it has at best a few bits of entropy. That leaves only the microseconds value - less than 20 bits of entropy, sometimes considerably less.

            That entropy was used to seed MD4 (after passing it through an LCRNG which didn't do anything cryptographically useful). MD4 is probably a strong enough mixing function here (it was later superseded by MD5), but with effectively only around 3 bytes of entropy it's trivial to reconstruct the CPRNG seed and sequence.
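
            A back-of-the-envelope sketch of why that seed was so weak - illustrative Python only, not Goldberg and Wagner's actual code; MD5 stands in for MD4 (which Python's hashlib doesn't guarantee), and the seed layout is a guess:

              import hashlib

              def candidate_seed(seconds: int, usec: int, pid: int, ppid: int = 1) -> bytes:
                  # Hypothetical seed layout - the real format differed, but
                  # the size of the search space is what matters here.
                  return hashlib.md5(f"{seconds}:{usec}:{pid}:{ppid}".encode()).digest()

              # If the attacker knows the time to the second and guesses pids in a
              # small range, the whole space is ~1e6 microseconds x ~100 pids =
              # 1e8 candidates at worst - a trivial brute force.
              known_second = 810950400  # made-up timestamp
              candidates = (candidate_seed(known_second, usec, pid)
                            for usec in range(1_000_000) for pid in range(100, 200))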

            The SSL 1.0 CPRNG is structurally similar to /dev/urandom. Aside from mixing entropy sources, it's not related to /dev/random. /dev/random does suffer from the potential problem of reduced entropy, but people who want to harp on about that might at least demonstrate they're familiar with some of the large corpus of literature on the subject. Like, say, RFC 1750, from 1994. Or Von Neumann's discussion of techniques for removing bias from random bit streams, from 1951 (whence also his famous "state of sin" line). This is not news, folks.

    2. Werner McGoole

      Re: Linus is totally wrong

      I agree you should use a proven algorithm rather than making your own, but I think you've missed part of the point here. A mathematical algorithm can only produce pseudo-randomness. It still needs to be initialised to a non-predictable value, otherwise all computers will generate the same pseudo-random sequence (as I think Android was recently found to be doing).

      So good cryptography also depends on a source of true randomness for seeding the mathematical algorithm (and also for re-seeding it occasionally just in case someone spots the pattern). On Linux, /dev/random is the standard place to go to get that "true randomness". So you don't have a choice here. You can't rely on a mathematical formula. You have to have true randomness derived from a physical, non algorithmic source.
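
      The point is easy to demonstrate - a minimal sketch, with Python's Mersenne Twister standing in for "a mathematical algorithm" (it is not cryptographic, it's just deterministic):

        import os
        import random

        a, b = random.Random(42), random.Random(42)
        # Same algorithm, same seed: every machine gets the same "random" numbers.
        assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

        # Hence the need to seed from a physical/OS entropy source instead:
        c = random.Random(os.urandom(32))  # os.urandom draws on the kernel's pool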

      1. Kebabbert

        Re: Linus is totally wrong

        Werner McGoole,

        Yes, I know all that. I studied cryptography under one of the leading experts in the world. He is world famous, and if you have studied cryptography you have surely heard of him.

        1. Solmyr ibn Wali Barad

          Re: Linus is totally wrong

          "I studied cryptography for one of the leading experts in the world"

          ...and managed to get away quite unscathed.

    3. Anonymous Coward
      Anonymous Coward

      Re: Linus is totally wrong

      "The Linux kernel developers seem to have very high thoughts of themselves".

      Perhaps it would do you some good to look at yourself in a mirror and also consider whether you have ever developed anything of value.

      1. Anonymous Coward
        Anonymous Coward

        Re: Linus is totally wrong @AC 00:53

        "Perhaps It would do you some good to look at your self in a mirror and also consider if you have ever developed anything of value."

        That has no bearing on the validity of his statement. Not sure which fallacy you're using there - straw man, maybe?

        1. Michael Wojcik Silver badge

          Re: Linus is totally wrong @AC 00:53

          "Perhaps It would do you some good to look at your self in a mirror and also consider if you have ever developed anything of value."

          That has no bearing on the validity of his statement. Not sure which fallacy you're using there - straw man, maybe?

          Argumentum ad hominem. It's a logical fallacy (using Aristotle's terminology and rhetorical scheme) because it is solely an argument about ethos - the standing of the speaker - and not about the facts of the matter. The latter would be logos, hence "logical" fallacy.

          That said, AC's argument is perfectly appropriate for the subjective portions of Kebabbert's rant, and since K has made some rather extravagant claims of expertise in this area and failed utterly to support them, ethos seems to me to be an acceptable register.

          1. oolor

            @ AC: 11th September 2013 06:23 GMT:

            >Yes please. Pointer to it (Kebabbert's thesis)?

            I did a little googling. Based on the incoherence and lack of info this is my best guess:

            http://en.wikipedia.org/wiki/Voynich_manuscript

    4. This post has been deleted by its author

    5. Anonymous Coward
      Anonymous Coward

      Re: Linus is totally wrong

      Factually WRONG. If you have one "good" random bitstream and one "crap" bitstream and you XOR them, the result will be at least as good as the "good" bitstream. Of course, the crap one must not be functionally dependent on the good one.

      So I assume the good Mr Knuth made a very idiotic mistake or he didn't have a single good bitstream.

      For engineering purposes: run a counter from 1 to 2^64 and perform 3DES on it (with some 112-bit key you get from hitting the keyboard randomly). That will be sufficient for all your needs, believe me. Most people would even be OK with an RC4 stream.

      That's actually how people should do it if they have NSA in their security threat model.
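
      The shape of that recipe, sketched in Python with one substitution flagged: the standard library has no 3DES, so HMAC-SHA256 stands in as the keyed function applied to the counter. The structure - secret key plus counter gives keystream - is the same:

        import hashlib
        import hmac

        def ctr_stream(key: bytes, blocks: int):
            # Keystream block i = PRF_key(i), for counter i = 1, 2, ...
            for counter in range(1, blocks + 1):
                yield hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

        key = b"mashed on the keyboard, 112+ bits"  # per the post: a keyboard-mashed key
        keystream = b"".join(ctr_stream(key, 4))    # 128 bytes of pseudo-random output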

      1. Kebabbert

        Re: Linus is totally wrong

        Duke Arco of Bummelshausen

        "...That will be sufficient for all your needs, believe me on this. Most people will be even OK with an RC4 stream...."

        The same RC4 that NSA might have broken?

        http://www.theregister.co.uk/2013/09/06/nsa_cryptobreaking_bullrun_analysis/

  17. loneranger

    Who can tell?

    I have respect for Torvalds, but the NSA has literally hundreds or thousands of PhD mathematicians working on making and breaking encryption. So, weighing Torvalds's smarts against all that brainpower, computing power, and sheer money power, who can say for sure whether his random function/method is compromised or not?

    1. Kebabbert

      Re: Who can tell?

      Mixing random generators is never a good idea; it weakens everything if not done correctly. If you had studied the subject you would know that. But a mere Linux developer would, of course, believe he knows everything.

      1. Charles 9

        Re: Who can tell?

        But we just read that mixing RNGs in particular ways can't hurt and can only help. How can mixing RNGs reduce their reliability? Are you saying an adversary could create a stream designed to negate (and thus sabotage) an RN stream? Or is something else involved?

        1. Anonymous Coward
          Anonymous Coward

          Re: Who can tell?

          An adversary could create a stream designed to negate (and thus sabotage) an RN stream if (and only if) he ALREADY KNEW the RN stream. Otherwise he can only ADD entropy.

  18. John H Woods Silver badge

    Simple h/w device?

    Can't we get USB devices to produce random numbers from some kind of quantum noise - shot noise or something? Is it possible to devise a circuit that is both too simple to contain a backdoor but fast enough and random enough to act as a cryptographic RNG?

    1. Kebabbert

      Re: Simple h/w device?

      That is actually a good idea; it should have a market, indeed. For instance: a small radioactive source, a microphone, or something similar. Another idea would be to record noise from an existing microphone and extract randomness from it.

      A friend at uni had to create random numbers for a piece of software, so he took a photo with the USB camera the software already had access to, and hashed the photo to extract random numbers.
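
      Roughly what that trick looks like (the filename is hypothetical; the obvious caveat is that the seed is only as unpredictable as the sensor noise in the photo):

        import hashlib

        with open("webcam_frame.jpg", "rb") as f:     # hypothetical captured frame
            seed = hashlib.sha256(f.read()).digest()  # 32 bytes condensed from the image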

    2. Anonymous Coward
      Anonymous Coward

      Re: Simple h/w device?

      "Can't we get..."

      There used to be. http://entropykey.co.uk/ offered a rather nice USB stick of that sort, based on noise from the reverse leakage of diodes. It shipped complete with simple open drivers which fed the entropy into the kernel's /dev/random pool, thus augmenting rather than replacing everything else your system could draw on. They were damned cheap, too.

      Sadly, when I visited the website some months ago to order a handful, they'd gone. Can't help thinking the timing was most unfortunate - if only they'd clung on a little longer! There are alternatives but everything I've found is much more costly, even the much poorer designs.

      1. Homer 1
        Holmes

        Re: Entropy Key

        Still alive and kicking:

        "The Entropy Key uses P-N semiconductor junctions reverse biassed with a high enough voltage to bring them near to, but not beyond, breakdown in order to generate noise. In other words, it has a pair of devices that are wired up in such a way that as a high potential is applied across them, where electrons do not normally flow in this direction and would be blocked, the high voltage compresses the semiconduction gap sufficiently that the occasional stray electron will quantum tunnel through the P-N junction. (This is sometimes referred to as avalanche noise.) When this happens is unpredictable, and this is what the Entropy Key measures." ~ http://www.entropykey.co.uk/tech

        I've had one for years. Works well, if rather slowly.

    3. Werner McGoole

      Re: Simple h/w device?

      There are some resources here to make use of devices you may already have (like a sound card):

      http://www.vanheusden.com/aed/

  19. Alan(UK)
    Coat

    John von Neumann

    "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."

    1. agricola
      Boffin

      Re: John von Neumann

      Very good!

  20. JeffyPoooh
    Pint

    Linux vs. Windows

    "...Relying solely on the hardware random number generator which is using an implementation sealed inside a chip which is impossible to audit is a BAD idea."

    MS Windows-based software would have suggested "...Relying solely on the hardware random number generator THAT is using an implementation sealed inside a chip THAT is impossible to audit is a BAD idea."

  21. Glen Turner 666

    Host key generation is more of a risk

    The real risk is the generation of SSL host keys so early in the system's first boot that there is no source of entropy other than the hardware RNG. Best of all, these weak keys are permanent.

    1. Charles 9

      Re: Host key generation is more of a risk

      Some implementations store some of the random data on shutdown to help jumpstart the generators on next boot. That would reduce the window of vulnerability in that regard.
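
      In outline, something of this shape (paths and details are made up for illustration; distro random-seed boot scripts do the real version):

        import hashlib
        import os

        SEED_FILE = "/var/lib/example/random-seed"  # hypothetical location

        def save_seed() -> None:
            # At shutdown: stash fresh random bytes for the next boot.
            with open(SEED_FILE, "wb") as f:
                f.write(os.urandom(64))

        def restore_seed() -> bytes:
            # At boot: mix the stored bytes with whatever fresh input exists,
            # rather than trusting the possibly-observed old seed alone.
            with open(SEED_FILE, "rb") as f:
                stale = f.read()
            return hashlib.sha256(stale + repr(os.times()).encode()).digest()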

  22. carl_

    Linus is OK - just human that's all

    I think this "lack of people skills" is simply frustration.

    This man works damn hard and, don't forget, unlike most other damn-hard-working guys, he does it for all of us. His is a labour of love, and that makes a big difference to how his actions are to be taken.

    What pisses him off is wasted time. He has had enough of people who don't sacrifice their time to fully understand something, yet believe they are sure enough to push a patch or even just voice an opinion. Time after time it comes down to him to do their work for them and fix the problem.

  23. doorknobus

    >RdRand

    Don't know about this Intelly stuff but, back in the day, different models of the same architecture might implement some instructions differently for all sorts of reasons, whilst conforming to the specification.

    Might this not still apply? In which case wouldn't we need to look at a somewhat lower level than 'RdRand'?

  24. agricola
    Big Brother

    One question, one comment:

    Someone needs to dig very deeply into why INTEL is trying to strong-arm anybody involved in the software design of SECURITY implementations. No ties to the spooks, huh, Intel?

    Perhaps the EFF needs to get involved here.

    Wait...one more time; let me see if I understand correctly. A HARDWARE manufacturer is trying to INFLUENCE the DESIGN of an OPERATING SYSTEM'S SECURITY PRACTICES??!!

    ---------------------------------------------------------------

    "I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction... Relying solely on the hardware random number generator which is using an implementation sealed inside a chip which is impossible to audit is a BAD idea."

    A very, very bad idea, indeed.

  25. agricola
    Holmes

    Required Reading

    We all need to read

    “Schneier on NSA's encryption defeating efforts: Trust no one"

    by Grant Gross, IDG News Service, September 06, 2013, 4:06 PM.

    1. Anonymous Coward
      Anonymous Coward

      Re: Required Reading @agricola 01:59

      "We all need to read

      “Schneier on NSA's encryption defeating efforts: Trust no one" "

      And yet you're taking the statement by Ts'o at face value. Didn't you even get to the end of the title of your recommendation?

  26. Anonymous Coward
    Megaphone

    Linus Torvalds

    Linus Torvalds calls someone else ignorant? I mean SERIOUSLY???

    Er, reality check time Mr Torvalds.

    You will be thinking you're the Gordon Ramsay of the computer world next...

  27. totaam

    If you have a problem with jokes about death..

    then you have a problem with jokes about death.

    No one on the mailing list seems to have taken the joke seriously, and why should they? Why do you?

    (technical argument aside - and yes, Linus is right about that)

  28. Anonymous Coward
    Anonymous Coward

    Free Education Regarding Randomness

    First there is the philosophical question of whether the universe develops in random or deterministic fashion. Mr Einstein evidently believed in God determining every single fart and every single photon emission: "God does not play dice". That would put the effectiveness of ciphering directly in the hands of said deity.

    Having said that, the consensus amongst modern physicists is to ignore religion and replace it with faith in quantum physics, which definitely needs strong randomness.

    Elementary processes such as photon emission/absorption indeed cannot be explained by deterministic theories, but only by stochastic laws. It is the consensus amongst physicists that this randomness can be measured and recorded by various means.

    For example, the lattice noise (caused by the random movement of atoms, a.k.a. "temperature") can be measured via the voltage drop of a current flowing through a resistor or a diode. That voltage drop contains an AC component which sounds like white noise if strongly amplified - it's what you hear in an untuned radio. You only need to digitize that signal to have physical randomness.

    The final thing is not to extract "too many" bits per second from your randomness source. That's an art and science in itself. Another challenge is to remove "hidden bias", which is best done using a hash function.
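
    The classic de-biasing trick (von Neumann's, from 1951) is simple enough to sketch: take the input bits in pairs, emit 0 for a 01 pair and 1 for a 10 pair, and discard 00 and 11. The output is unbiased provided the input bits are independent, at the cost of throughput:

      def von_neumann(bits):
          it = iter(bits)
          for a, b in zip(it, it):
              if a != b:
                  yield a  # 01 -> 0, 10 -> 1; 00 and 11 are discarded

      biased = [1, 1, 0, 1, 0, 0, 1, 0]
      print(list(von_neumann(biased)))  # [0, 1]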

    But a capable electrical engineer or physicist can create a 100 kbit/s physical randomness source in a few weeks.

    If Intel does not release the chip circuitry for this instruction, yeah, then they probably got an NSL to fuck it up for USG.

    1. Anonymous Coward
      Anonymous Coward

      Re: Free Education Regarding Randomness

      Also regarding complicated randomness sources like "untuning a radio" or "geiger counters" - they are either unsafe or not worth the complication. An "untuned radio" could become "tuned" any time, especially if your adversary knows your scheme. Then your random numbers could suddenly become sin(w*t).

  29. Anonymous Coward
    Anonymous Coward

    Trust In US/Israeli CPUs

    Is of course a major issue. One government is known to subvert essentially everything it can, and the other one is in constant war with her neighbours - an even stronger reason to do so.

    Bottom line: a GNU CPU is needed in addition to a GNU operating system.

    Now let the howling of the Intel $hills roll in.

    (the current Intel CPU architecture has been designed in Israel)

  30. John Smith 19 Gold badge
    Unhappy

    So from the *code* comments it's faster, but use at own risk.

    Could he be p**sed off that said petitioner has not taken the 10 secs or so to read them?

    Remember, one of Linux's goals is: if you don't like the kernel version you've got, re-compile it.

    BTW, the same argument can be levelled at any processor. In principle, if you can control the design at the layout level, you can get a noisy diode and sample its output. That said, where that diode is placed on the chip could have a major impact on how "random" its bit stream is. You'd actually want a noisy power bus driving it, for example.

    In fact I think a good on-chip hardware RNG probably violates most of the key rules of robust digital logic design, because you don't want a robust, consistent output; you want completely bats**t crazy output with a varying probability of producing a 1, a 0, and runs of 1s and 0s of any length in between.

    Not quite as simple as it sounds.

  31. Dan 55 Silver badge
    Thumb Down

    A little bit too arrogant

    Here are the facts...

    - Generating random numbers is a really complicated subject.

    - It's been discovered that the tinfoil hat brigade were right.

    - This is making even security and cryptography specialists think about their fields in new ways.

    So:

    - Time must be taken to find out whether there is a problem with RdRand and whether its use must be avoided or modified.

    Not:

    - Slap someone down right away on my mailing list for my OS, because I was right, I was always right, and therefore I always will be right.

    1. Michael Wojcik Silver badge

      Re: A little bit too arrogant

      Time must be taken to find out whether there is a problem with RdRand and whether its use must be avoided or modified.

      OK. RdRand is only one source of entropy for /dev/random. Either it is not compromised, in which case its use is fine. Or it is compromised, and contributes exactly as much entropy as avoiding it would contribute (i.e. none). So there's no benefit to avoiding it.

      There, that was about 10 seconds. Was that enough time, or shall I repeat the exercise?

      1. Dan 55 Silver badge

        Re: A little bit too arrogant

        Maybe you should repeat it.

        1. Michael Wojcik Silver badge

          Re: A little bit too arrogant

          @Dan 55: I don't think you understand the piece you linked to. That attack compromised the output of RdRand. All it could do in the /dev/random situation is reduce the entropy of /dev/random by the amount of entropy that could be produced by an unaltered RdRand. It cannot reduce the entropy of other sources.

          There, repeated. Happy?

  32. Bladeforce

    OK by me!

    Top guy! I'd have a pint with this guy in the local pub any day. Can you imagine having a pint with Ballmer? You'd end up glassing yourself through boredom, and as for Bill Gates... can the guy even take a pint of beer without passing out?

    Torvalds is the man, and the world (and many companies too) revolves around this man's talent.

  33. fortran

    "Linus is wrong", worst case scenario (router/plug computer)

    Routers and plug computers don't have a hardware RNG, regardless of what particular mechanism is involved. There is no keyboard or mouse. The only randomness they have is the timing of the requests (packets) that come in, and something about the requests themselves.

    The computer caches some amount of "random data" so that it has something to start from on the next reboot. During operation, it receives some amount of new random data. One generates a Poisson deviate based on how much random data has been received since booting; this is an estimate of how much of the previous cache of random data to delete. We can't delete all of it, and there may be reasons to restrict it to less than that (an attacker could force the router to reboot over and over). So we replace some amount of previous data with some amount of new data, and on a reboot we use the time as a seed to shuffle the data we have.

    There is no sense visiting random.org for data, as the DNS may be poisoned and all you get back is zeros. Even using a bad RNG, a person can generate random dotted quads to ping (ignoring the reserved networks). And all you want to know is the time to respond to a single ping. DNS isn't involved, so hopefully nothing interferes with that. But pinging (a single ping) known sites and comparing that to previous pings would also be useful.
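
    A rough sketch of that idea (Linux ping flags; the addresses are documentation-range placeholders; this produces an entropy input to be mixed into a pool, not finished random numbers):

      import hashlib
      import subprocess
      import time

      def ping_jitter(host: str) -> int:
          # Time one ping and keep only the low-order byte of the elapsed
          # nanoseconds, which is where the network jitter lives.
          t0 = time.perf_counter_ns()
          subprocess.run(["ping", "-c", "1", "-W", "1", host],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
          return (time.perf_counter_ns() - t0) & 0xFF

      samples = bytes(ping_jitter(h) for h in ["192.0.2.1", "198.51.100.7"])
      entropy_input = hashlib.sha256(samples).digest()  # condense before use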

    I am not a crypto professional, and have no desire to be one. I do want to be able to do Monte Carlo studies (and similar) properly.

    Whether you call it blending or something else, it seems reasonable not to trust a single source of randomness. Things like routers and plug computers are, I think, the best environments in which to try to produce good random numbers (via /dev/random): if what we do works well there, it will probably not be a problem on computers which have active user inputs.

    Some hardware RNGs can make use of radioactive sources. A radioactive source emits with Poisson statistics. But not all radioactive substances have a half-life independent of external conditions. Best example: a nucleus which decays by electron capture cannot decay if there are no electrons present (deep space). If your hardware RNG has uranium in it, the spontaneous fission of U-238 is influenced by the concentration of muons.

    I saw someone talking about diodes sensitive to UV; I think this is great. UV doesn't penetrate matter very far (unlike muons or neutrinos), and if one can produce an RNG whose output can be mapped accurately to the uniform 0-1 range, I am all for that.

    Just don't use /dev/random for user programs, except possibly to get a seed value.

  34. Anonymous Coward
    Anonymous Coward

    So far in this thread we have:

    0. People who have not bothered to look at the code.

    1. People who don't understand what the code is doing.

    2. People who think they know cryptography.

    3. People who think linux programmers are morons.

    4. People who really think they know cryptography.

    5. People who misquote Knuth and attempt to weasel their way out of it.

    And now 6. People who complain about the lack of standards and competence.

    There are probably only a handful of comments worth reading. Is this the new "new" Slashdot? When did they produce a USB interface for asses?

    1. btrower

      I looked at the code.

      @AC

      I did look at the code. Maybe add a 7th to your list -- people who did not read all of the comments.

      Re: "People who really think they know cryptography"

      I doubt that anybody who is reasonably competent is anything but humble about this. Claiming the expertise makes you a target for an attack -- to take you down a peg, I guess. Anybody is vulnerable no matter how expert.

      My long term work is in something I call 'Data Packaging'. Although it is focused more on parsimony and reliability, it necessarily involves cryptography. Effective cryptography is exceedingly difficult. The more I learn about it the more pessimistic I am about securing against powerful and determined adversaries.

      I would go a step further than Schneier: Trust nothing. In designing security you need to consider every single item in the system a potential point of attack and every entity must be viewed as an attacker. I mean everything -- the sender, the receiver, the carrier, unrelated third parties, the compiler developer, the hardware designers, Linus Torvalds, every aspect of electromagnetism, even the math. The human beings who are attempting to guard secrets are particularly juicy attack vectors.

      You may not be able to imagine or defend against some things, but you can't secure it if you don't try. I am highly suspicious of the entire security structure of the Internet. Keys are unnecessarily small, for instance. Some of the least trustworthy entities are the ones signing our keys. We use algorithms sponsored by one of our adversaries. It is ridiculous.

      The NSA is, by definition, a security adversary. Why on earth would we be using designs strongly influenced by them? We should, indeed, be using hardware RNGs, but we should not be relying on a black-box instruction from Intel, and we should not place any trust in any single entity or device. Good random sources are crucial for cryptography. Where are they?

      We cannot rely upon cryptography alone, no matter how excellent. We also need to put in place laws and customs that recognize what is private, so that even if a secret escapes we minimize its impact.

      I consider my understanding of security infrastructure somewhat primitive *and* I am not a cracker. I don't spend my time breaking security systems except in my own testing. If I can see issues with our current practices, you can bet that skilled crackers see gaping holes.

      1. Michael Wojcik Silver badge

        Re: I looked at the code.

        I would go a step further than Schneier: Trust nothing.

        As a threat model, that's useless. Infinite vigilance is impossible. At some point, you run up against Descartes' "evil genius" problem: it's possible that your senses or even your mental faculties have been deranged by some outside agency. So in order to take any purposeful action whatsoever you have to provisionally trust something - and in practice a great many things. You may choose to withhold absolute trust, on principle, but even that is unavoidable to some extent, due to combinatorial explosion (you'll never get around to doubting everything you could doubt) and recursion (are you sure you doubted that idea you think you remember doubting a minute ago?).

        More often than not, when the Reg has one of these security-related stories, someone starts waving the "trust nothing" flag. Maybe they'll cite "Reflections on Trusting Trust" or something similar. It's pure ideology. The best anyone can hope to do is create a strategic threat model that prioritizes risk properly and act as a perfect Bayesian reasoner in applying it, starting with best-guess axiomatic probabilities. Even that's probably impossible to sustain in the real world - it's an asymptotic goal at best.

        1. btrower

          Re: I looked at the code.

          @Michael Wojcik

          The purpose of the 'trust nothing' notion is to invite people to look in places they would not otherwise look for vulnerabilities. You cannot assess a risk you never even thought about. Had people really looked specifically for vulnerabilities in the RNG portion of key generation, it is hard to imagine that we would have had so many key generation problems. I still, BTW, think the problem of RNG is greater than most realize.

          New ideas come from looking in new places or thinking about things differently. You *do* need to learn how to draw 'properly' before you become a Picasso (who was described as being able to 'draw like an angel'), but necessary is not sufficient.

          The thing about the unexpected is precisely that it is not expected. You need to look well outside the box if you are to anticipate novelty.

          Attackers can concentrate all of their resources on a single point of attack. Defenders have to secure the entirety of the perimeter. A single breach is all an attacker needs to win.

          You need only overlook a single weakness to lose this game. If you do not even try to get coverage, you have no hope of getting coverage at all. You dismiss a possible weakness without inspection, it would seem, when you say "it's possible that your senses or even your mental faculties have been deranged by some outside agency". Indeed. My interest is in protection against powerful adversaries. You cannot dismiss a possible line of attack entirely. You may not be able to defend against biases inserted into your thought processes, but you *can* minimize your exposure by minimizing the extent to which, for instance, you depend upon your thought processes in key generation. Case in point: passwords contain much less entropy than they should because they are thoughtfully generated by humans whose thought processes are biased toward mnemonics and trivial substitutions. People responsible for creating rules governing passwords routinely create bad rules that make it difficult to remember passwords and trivial to break them.

          You may trust the person who handed you the compiler that you used to generate your code. If you do, you have entirely exposed yourself to that line of attack. I am not that skilled at these things, but even I could create a malicious C compiler. Can, for instance, a manufacturer of a hardware RNG be compromised?

          Rainbow tables made unfeasible attacks feasible. Salts render them unfeasible again, but there are weak ways and strong ways to implement salts. If you do not even look at these things, how can you determine the extent of the weakness and likelihood of attack?
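
          For what it's worth, the strong way looks something like this sketch - a per-user random salt plus a deliberately slow iterated hash; the iteration count and sizes here are illustrative:

            import hashlib
            import hmac
            import os

            def hash_password(password: str) -> tuple[bytes, bytes]:
                salt = os.urandom(16)  # unique per user: a rainbow table built
                                       # for one salt is useless for every other user
                digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
                return salt, digest

            def verify(password: str, salt: bytes, digest: bytes) -> bool:
                candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
                return hmac.compare_digest(candidate, digest)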

          I got vigorously downvoted for slamming our existing security infrastructure and suggesting I could do better. I can hardly think how you could do anything worse. I am astonished at the pretense of security as it is. We are at war using sticks and stones against adversaries using intelligent drones armed with nuclear weapons.

          You seem to know this stuff well enough. Perhaps your comments are aimed at security professionals you assume cover all the known bases. Even in that case, it is puzzling how you think discouraging a thorough threat analysis, including things you do not expect to go wrong, makes sense. Given the comments of the readership here it is clear enough that most people reading our comments are much too trusting to have a hope of securing anything.

          As it currently stands, we have an entire Public Key infrastructure that is secured at its root by entities we cannot possibly trust. Are you seriously suggesting we just accept on faith that these untrustworthy signers will not break faith with us?

          Even with my paranoia, I did not give much thought to the notion that the NSA might actually be able to retroactively spy on me by collecting and storing such a massive amount of data that it would include me.

          In the 1980s I personally designed and built communications software that protected a billion-dollar banking institution. As of its retirement in the 1990s, it had never been breached, even though it provided access directly into the banking mainframe. Since that time, the ante has been upped considerably. That system would have little hope of standing against even a poorly funded civilian attack now, let alone attack by a well-armed adversary like the NSA.

          Here is a real-world example of what happens to security when you have blinders on:

          http://trac.filezilla-project.org/ticket/5530

          A gaping security hole has existed in a popular open source tool for literally years because the maintainers just can't accept that they have a weakness.

          I stand by my 'trust nothing' sentiment. Yes, you need to trust something eventually, but trusting it up-front without inspection is a recipe for failure. The weekly security updates on software that has been actively maintained for decades is proof enough for me that we have all been too trusting for too long.

          1. John Gamble

            Re: I looked at the code.

            Here is a real-world example of what happens to security when you have blinders on:

            http://trac.filezilla-project.org/ticket/5530

            A gaping security hole has existed in a popular open source tool for literally years because the maintainers just can't accept that they have a weakness.

            Ouch. I was completely unaware of this, and I've used FileZilla. Thank you for the heads-up.

          2. Michael Wojcik Silver badge

            Re: I looked at the code.

            You dismiss a possible weakness without inspection, it would seem, when you say "it's possible that your senses or even your mental faculties have been deranged by some outside agency".

            Sigh. All that and you apparently fail to understand a basic argument about epistemology - one that has a direct bearing on security in general and your security ideology in particular. What possible grounds do you have for arguing that I dismissed anything "without inspection"? That's the antithesis of the evil genius argument. I find it hard to view as credible an argument about epistemology (which is what the "trust nothing" stance ultimately is) from someone who so radically misunderstands Descartes' evil-genius thesis.

            The thing about the unexpected is precisely that it is not expected. You need to look well outside the box if you are to anticipate novelty.

            Spare me your lectures on egg-sucking, please. I've been working in information security for nearly two decades. I'm well aware of issues of novelty.

            If you do not even try to get coverage, you have no hope of getting coverage at all.

            Trivially false. It is obviously impossible to consider all possibilities, for several reasons, some of which I explained in my previous post. So if "coverage" means anything useful in this context, either it's a priori impossible, or this claim is incorrect.

            I don't understand why this basic point is so hard to understand. "Trust nothing" is not possible in practice. You cannot start from an empty set of axioms. Whatever you choose to question, you have to begin with some assumptions. So "trust nothing" is just a slogan, and as a security principle, it's a pretty vapid one. It's far more productive to do some actual robust threat modeling.

            In the 1980s I personally designed and built communications software that protected a billion dollar banking institution.

            Been there, done that, got the t-shirt. Mine's still in use at half a dozen customer sites.

            Yes, you need to trust something eventually, but trusting it up-front without inspection is a recipe for failure.

            OK. When was the last time you audited your food supply to make sure no one was tainting it with mind-altering drugs? When did you last confirm that none of your neighbors are spying on you? What kind of substantive audit have you done of the software running on your computers? Checked for keyloggers recently? Claiming you "inspected" all of these, and the thousands of other things you'd have to "inspect" every day, would be patently false; so what attacks have you prevented by espousing "trust nothing"? And even if you could maintain ceaseless and ubiquitous vigilance (and I really hope it is obvious to everyone that this is impossible), you still fall foul of the evil genius: you have to trust the evidence of your senses and your cognitive processes at some point, or you can't use them to inspect anything else.

  35. JCitizen
    Devil

    Spintronics...

    that's the answer - whip out your USB device and add an infinite variable to your encryption scheme. Let's see the gubbamint beat that one! HA!
