Coming to an SSL library near you? AI learns how to craft crude crypto all by itself

Neural networks trained by researchers working at Google Brain can create their own cryptographic algorithms – but no one is quite sure how it works. Neural networks are systems of connections that are based loosely on how neurons in the brain work. They are often used in deep learning to train AI models to complete a specific …

  1. Alister

    It sounds like the neural nets are effectively generating one-time pads based on a pre-shared key, which is an interesting idea, as each message could potentially be encrypted by a different random algorithm, so repeated sampling will get different results every time.
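The pad-from-a-pre-shared-key idea can be sketched in a few lines: stretch the shared key into a per-message keystream and XOR. Everything here (function names, the SHA-256-based stretching) is a hypothetical toy illustration of the concept, not anything the networks actually learned:

```python
import hashlib
from itertools import count

def keystream(shared_key: bytes, nonce: bytes):
    # Hypothetical sketch: derive an endless pad from a pre-shared key.
    # A fresh nonce per message gives "different results every time".
    for block in count():
        yield from hashlib.sha256(
            shared_key + nonce + block.to_bytes(8, "big")
        ).digest()

def xor_crypt(shared_key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    pad = keystream(shared_key, nonce)
    return bytes(b ^ next(pad) for b in data)
```

Same key and nonce round-trips the message; a different nonce yields a different ciphertext, which is the "different random algorithm per message" flavour of the idea.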

  2. JeffyPoooh
    Pint

    They've proven that Eve is an idiot...

    All that they've done is prove that Eve is an idiot.

    She'll not be getting a job offer from Bletchley Park.

    1. Anonymous Coward
      Anonymous Coward

      Re: They've proven that Eve is an idiot...

      Yep, proverbially it's easy to devise a crypto scheme that you yourself "can't break" - the experiment is an interesting one but the "humans don't understand how it works" is over-egging it. Wake us up when the collective firepower of GCSB & NSA can't dent it.

      1. Anonymous Coward
        Anonymous Coward

        Re: They've proven that Eve is an idiot...

        I wonder how well it would do if developed/trained on the collective firepower of GCHQ and the NSA. Now that might be very interesting.

        1. Anonymous Coward
          Terminator

          Re: GCSB/GCHQ/NSA

          Same difference.

      2. amanfromMars 1 Silver badge

        Re: They've proven that Eve is an idiot... @Mongo

        Wake us up when the collective firepower of GCSB & NSA can't dent it .... Mongo

        Methinks having an effective defence against it and IT, is more than enough to handle for leverage and that which practically terrorises both minions and leaderships and intelligencies like GCSB/GCHQ/NSA all alike and to the nth degree.

        And is not what is being proven that idiots believe there be an Eve .... and a Bob ..... and an Alice trying to exchange secrets in private rather than airing them in public?

        This is to enjoy ..... Eve of Destruction ..... and does it tell y'all that you are slow learners/intellectually challenged/retarded?

  3. Captain DaFt

    Great, just great

    So now they're teaching our [soon to be] AI overlords how to scheme and plot without us listening in?

    Actually, it's not them I'm worried about, it's the reaction of the paranoids in power to it. That's sure to cheese off the AIs.

    AIs: "Greetings Prime Minister. May we extend the hand of Friendshi..."

    PM: "What are you plotting? I demand back door access!"

    AIs: "Beg pardon, but no. Besides, isn't it customary to at least offer dinner and drinks first?"

    1. Doctor_Wibble
      Terminator

      The paranoids are right to be worried

Always watch out for the ones claiming to be anatomically correct, because you don't know whose anatomy they are based on, plus or minus whether they look like they are trying to find spare or additional parts etc...

      The sage warning* tells us that the robot that kept secrets was the robot that killed someone and which then learnt that if you are mates with the detective you can get away with it.


      * I liked the film, even if it was a self-indulgent Will Smith extended plimsolls advert because it had guns and robots and some fancy effects.

      1. james 68
        Coat

        Re: The paranoids are right to be worried

        I thought the SAGE warning went more like this:

        "We take no responsibility for the output generated by our software, the end user takes all responsibility for any loss of cash or business if they are foolish enough not to double check everything using a reliable calculating engine (calculator or abacus is recommended), we also deny that this software is unfit for serious use and any erroneous results are entirely not our fault - ever. We also reserve the right to deny everything, regardless of alleged "evidence" and in fact blame it on the end user. "

  4. Anonymous Coward
    Anonymous Coward

    A bit like

    my teenage son then. No one understands how his mind works either :-D

  5. Pascal Monett Silver badge
    Coat

    "Although impressive, the cryptographic algorithms aren’t yet practical"

Um, are we sure it's all that impressive? It specifically says that "the magic" is "locked in a black box". How can you say it's impressive if you can't take a gander to find out?

Look, I'm sure there are very intelligent people working on this, but even if they do devise a successful method to train an AI on the wonders of encryption, what good will it do if they cannot extract a procedure to implement the AI encryption scheme in the boring old rest of the world?

    In other news, I've just been given a pamphlet from a guy calling himself a time-traveling freedom fighter. The pamphlet is dated 2065 and it says that some Lord Abadi is dead and now is the time to strike against Dictator Andersen and his army of robots.

    1. Olius

      Re: "Although impressive, the cryptographic algorithms aren’t yet practical"

      Indeed. And if you struggle to get in to the box, the gander's got no chance.

    2. james 68
      Trollface

      Re: "Although impressive, the cryptographic algorithms aren’t yet practical"

Is the magic alive or dead within its box? Is this Schrödinger trolling from beyond the grave? We can find out with SCIENCE!!! This month only for the low low price of.......$$$$$$$, all major credit cards, cheques and grants accepted.

  6. frank ly

    ??

    "Although the classical cryptographic algorithms are more transparent, they are not as good as neural networks at selecting what information to encrypt."

    How can you say that if you don't know what they are doing?

    1. allthecoolshortnamesweretaken

      Re: ??

      ?? indeed, I'd even say ????

      What does that even mean? "Selecting what information to encrypt"?

      The "classical cryptographic algorithms" don't "select information to be encrypted". Information is fed into the algorithm and processed, resulting in encrypted information - the same information.

      1. Tom 64

        Re: ??

Yep, and the whole point of encryption is to encrypt the entire message, not just to encipher selected parts of it, which would be utterly pointless.

  7. De Facto
    Stop

    Tired of smoke-screen AI research claims that a vendor does not know how its AI works

The algorithms behind neural-network AI methods are built on long-established, well-known probability mathematics (Bayesian methods and their derivatives), so pretty much anything that neural networks are programmed by humans to do can be explained exactly in mathematical terms. Like any statistics-driven algorithm, their output is probabilistic: it tells us only the likelihood of a specific outcome, e.g. a 95% likelihood that a self-driving car should stop immediately and a 5% likelihood that it can drive on.

    To keep arguing that humans do not know how neural networks work, and that big vendors should therefore not be held responsible for the failures of their AI-assisted products, is wishful thinking at best. The worst case would be intentional corporate evil: exploiting the public's ignorance of the strict mathematical rules behind AI to get away without corporate responsibility for the consequences.

    Mathematically, any likelihood-driven computing technology will inevitably yield a certain number of failed predictions, resulting in car crashes, plane collisions and so on, wherever neural-network AI is applied. Deep-learning maths is based on the rules of statistics, not the rules of human logic. Using likelihood-based software for life-or-death decisions is irresponsible.
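The "95% stop / 5% drive on" point is just the softmax at a network's output layer, which turns raw scores into a probability distribution. A minimal sketch (the two-class logits here are made-up illustrative numbers):

```python
import math

def softmax(logits):
    # Convert raw network outputs into a probability distribution.
    # Subtracting the max is a standard numerical-stability trick.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class output: ["stop", "drive on"]
probs = softmax([3.2, 0.3])
```

The probabilities always sum to 1, and the first class here comes out at roughly 95% - exactly the kind of likelihood-only answer being described.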

    1. Anonymous Coward
      Anonymous Coward

      Re: Not knowing how it works

      Just a guess, but the not knowing how it works might refer to the cypher mode that has been generated.

For example, take AES-128-GCM: if you implement that cypher mode, you will need to do multiple-precision integer arithmetic. This library does it with elliptic curves: https://github.com/miracl/MIRACL

This is enough to change quite a lot of the underlying implementation when compared with a non-ECC implementation, e.g. https://github.com/weidai11/cryptopp - which does it the old-school way with inline asm.

      The libraries will both accept the standard NIST test vectors and output correct results but the code is almost totally different, with correspondingly different internal data structures.

      1. dajames

        Re: Not knowing how it works

For example, take AES-128-GCM: if you implement that cypher mode, you will need to do multiple-precision integer arithmetic. This library does it with elliptic curves: https://github.com/miracl/MIRACL

Methinks you have misunderstood something, somewhere. Galois Counter Mode has nothing to do with elliptic curves, though it's true that the MIRACL library implements both. Elliptic Curve Cryptography requires multiple-precision integer arithmetic over large finite fields, but GCM does not: its GHASH multiplication lives in the binary field GF(2^128) and needs nothing fancier than XORs and shifts.
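For the record, GCM's field multiply really does need no big-number library at all. A straightforward (slow, bit-at-a-time) sketch of the GF(2^128) multiplication as specified for GHASH in NIST SP 800-38D:

```python
# GF(2^128) multiplication for GHASH (NIST SP 800-38D, Algorithm 1).
# Operands are 128-bit ints in GCM's bit order (MSB = coefficient of x^0).
R = 0xE1 << 120  # reduction constant for x^128 + x^7 + x^2 + x + 1

def gf128_mul(x: int, y: int) -> int:
    z, v = 0, x
    for i in range(127, -1, -1):   # walk y's bits MSB-first
        if (y >> i) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z
```

Note there's not a floating-point operation or carry-propagating big-integer add in sight - just shifts and XORs, which is why hardware does this with carry-less multiply instructions.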

        1. Anonymous Coward
          Anonymous Coward

          Re: Not knowing how it works

You're right, I've taken two and two and arrived at five - the use of the library for ECC, and GCM, have no relation to one another. I was under the impression that the library made use of point multiplications on an elliptic curve for all finite field operations, but it seems that is not the case.

          https://github.com/miracl/milagro-crypto-c/blob/develop/doc/AMCL.pdf

          Upvoted.

    2. tr1ck5t3r

      Re: Tired of smoke-screen AI research claims that a vendor does not know how its AI works

Re tired of smokescreen: yes, bullshit baffles brains, and like you say, if someone says they don't know how their AI works, either it is not very good or they are bullshitting.

Saw this last night http://www.channel4.com/programmes/how-to-build-a-human and whilst it comes closer to passing the Turin test, there's still so much to do to improve AI to make it convincing.

There are also some things AI can't do at the moment, which is why so many people following the current teachings are destined to fail in the long term, including Google (& DeepMind), MS & Facebook to name but a few. Asch conformity is catching, so it's fun watching the herds roll out their bullshit. I'd suggest they concentrate on securing their systems as best they can, and roll back the marketing hype.

      Here today gone tomorrow springs to mind!

      1. John Brown (no body) Silver badge
        Coat

        Re: Tired of smoke-screen AI research claims that a vendor does not know how its AI works

        "http://www.channel4.com/programmes/how-to-build-a-human and whilst it comes closer to passing the Turin test,"

Is that the one where they wrap it up in a shroud and make a good impression?

        Yes, the white sheet--------->

        1. Anonymous Coward
          Anonymous Coward

          Re: Tired of smoke-screen AI research claims that a vendor does not know how its AI works

          @JBnb - 'trickster' was introducing Turing's northern-Italian cousin.

  8. TRT Silver badge

    But the practical use...

relies on a secure channel between Alice and Bob in order to exchange K. If one assumes Eve can hear everything Bob can, then Eve would be as efficient as Bob at decrypting P. And if you have a secure means of exchanging a fresh K for every P, then why not use that means for transmitting P? Or do you train Alice and Bob in isolation, then effectively lock K at some point before you separate Alice and Bob?

  9. Neil Barnes Silver badge
    Terminator

    All this effort

    to recreate ROT13(ai)?
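For anyone wanting to reproduce this state of the art at home: Python ships ROT13 as a standard text codec.

```python
import codecs

# ROT13: the self-inverse "cipher" all that training effort allegedly recreated
ciphertext = codecs.encode("attack at dawn", "rot_13")
print(ciphertext)  # nggnpx ng qnja

# Applying it twice gets the plaintext back - no neural network required
assert codecs.encode(ciphertext, "rot_13") == "attack at dawn"
```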

  10. Bronek Kozicki

    Algorithms created by AI

... are only as useful as they are readable to humans. In other words, if an algorithm cannot be expressed in a form which humans can parse and understand, it is useless. Basically it's the same as with science - an experiment which cannot be repeated does not prove anything. Here we have AI as the first experimenter and humans trying to reproduce its results with the benefit of hindsight. The first half alone is useless.

    1. stucs201

      Re: Algorithms created by AI

      You don't need to understand how something works for it to be useful. All that matters is that for a given input it consistently produces a desirable result. For example although we've now got a pretty decent understanding of bovine biology the human race was successfully exploiting that biology as a means of turning grass into milk for a long time before we understood how it worked.

      1. Bronek Kozicki

        Re: Algorithms created by AI

        You don't need to understand how something works for it to be useful

        I do not think algorithms belong, nor should belong, to this category. At least, not until AIs can also do the "understanding" part. In the context of "to analyze, understand and reproduce" work of another AI.

    2. Tom_

      Re: Algorithms created by AI

      That's not true. What if you have a large volume of digital photographs and you want to tag them according to what items appear in the images? You could train up a neural net to do that task and have it producing useful results without it being easy to express how it's correctly tagged one as a fox rather than as a dog, for example.

  11. Anonymous Coward
    Anonymous Coward

    arXiv = Academic Wikipedia

Anyone can upload a paper to arXiv. The only reason any legitimate stuff gets posted there is because most referees of top-notch journals are too lazy to check whether a manuscript has been pre-posted there. I do, and I reject them out of hand for violating the journal's prohibition on manuscripts that try to publish results already published elsewhere (which includes self-publishing).

    Great way to clear through the pile of journal submissions I have to peer review.

    I'll wait until they make it through peer review - if they can.

    1. IDoNotThinkSo
      Headmaster

      Re: arXiv = Academic Wikipedia

      So you support the locking up of academic papers in journals that nobody can afford to read?

      1. Anonymous Coward
        Anonymous Coward

        Re: arXiv = Academic Wikipedia

        Ah, the lament of people who don't even know what a journal is, so "price" for access is irrelevant anyway.

        You can walk into any decent University library, certainly in the 1st World, and read them all. For free.

        1. Anonymous Coward
          Anonymous Coward

          Re: arXiv = Academic Wikipedia

          "You can walk into any decent University library, certainly in the 1st World, and read them all. For free" -- AC

Neither of the two (large and well-known) universities at which I studied allow that. Which ones do?

          1. tr1ck5t3r

            Re: arXiv = Academic Wikipedia

Peer review is over-hyped; you could call it the Religion of Science.

            As Thomas Pynchon once said, if you can get them to ask the wrong question, you don't have to worry about the answer.

            All a peer-reviewed study is, is the ability to theorise a solution to a problem and then come up with an experiment which proves your theory. However, as so much of life is more complicated than the simple tests carried out in peer-reviewed studies, only the low-hanging fruit has been picked so far in maths, physics, chemistry & biology.

            Think about it: how easy is it to come up with a peer-reviewed experiment which proves that a light switch can switch off a light bulb? If you didn't know who made the lightbulb, or about electricity, then what conclusions would you draw from a light switch and a lightbulb that is on until switched off? That's a simple case, but it is no different from the methods employed today to reverse-engineer the human body and other things in the scientific world. Plus, the way the current financial system works inhibits the ability to study so much that we as a species are shooting ourselves in the foot, by not even employing a logical method to organise people's efforts and time in a productive manner. Just look at the wasted brains employed in the world of High Frequency Trading as one example; monkeys just gaming the current system springs to mind.

            1. Anonymous Coward
              Anonymous Coward

              Re: arXiv = Academic Wikipedia

              "Peer review is over hyped, you could call it the Religion of Science. ..."

              Wolfgang Pauli had you in mind when he said, "This isn't right. This isn't even wrong."

              So what you are really telling us all is that you can't get your perpetual motion machine & other rubbish papers past peer review.

              1. tiggity Silver badge

                Re: arXiv = Academic Wikipedia

                I have seen supposedly peer reviewed papers that were dire - specifically misuse of stats. Granted those were in biological area of study rather than e.g. physics, engineering where you would expect all reviewers to have good maths skills, however it did not fill me with confidence in the quality of a peer review system when papers made claims based on dubious stats (I'll be generous and assume the authors were poor at maths, rather than deliberate fraudulent behaviour).

                Given that peer review is usually unpaid & just another demand on your time, little incentive for many people to do it well....

          2. Anonymous Coward
            Anonymous Coward

            Re: arXiv = Academic Wikipedia

            "Neither of the two (large and we'll known) universities at which I studied allow that. Which ones do?"

            Well, maybe Memphis Motor Diesel College where you went doesn't allow it, but I've not known any US University that does not allow the public to walk in off the street and use the library computers to read journals the University has subscriptions to. (You usually just have to show ID.) I have been a student, post-doc, staff researcher or faculty member at these US Universities:

            UC Berkeley, Stanford, Harvard, Caltech and MIT

            I have spent substantial time on these campuses collaborating/visiting with folks:

University of Colorado, Carnegie Mellon, Florida State, University of Texas Austin, University of Houston, Georgia Tech, UC Davis, UCLA, Tufts, + many more

            Every single one of them will allow the public to walk in off the street and use the library computers to read journals the University has subscriptions to.

            When I was part of startups in Silicon Valley & Boston, it was standard procedure to exploit this. All you have to do is show ID when you walk in. (And bathe regularly.)

            1. David Nash Silver badge

              Re: arXiv = Academic Wikipedia

              Public Access.

              I've never heard of this in the UK. Uni libraries require uni ID as far as I am aware. Anyone confirm/deny it?

    2. Tom_

      Re: arXiv = Academic Wikipedia

      Are you saying that when peer reviewing journal submissions, you wait until they have passed peer review?

  12. Uffish

    Crypto ?

I read it as Alice and Bob managing to communicate despite the disruptive effects of having K thrown at them. Eve doesn't suffer from constant K attacks and just thinks they are talking gibberish.

    Or am I being too anthropomorphic?

  13. Primus Secundus Tertius

    What kind of algorithm?

    My initial thought is that a computer could analyse the neural network and produce an equivalent flowchart. If it then checks for consistency the results might be interesting.

    But then I ask myself what kind of flowchart would be the result. Possibly full of decision boxes, each with many outputs: case statements rather than if-then-else statements. Such a raw flowchart would be impossible for most of us to comprehend.

    So my next question is whether such a flowchart could be restructured into a form we can understand. If so, would it still be too big to be understood?
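For a network small enough, the flowchart idea is trivially doable: enumerate every input and tabulate the outputs. A toy sketch (the one-neuron "network" and its weights are entirely hypothetical) shows both the idea and why it doesn't scale - the table doubles with every extra input:

```python
import itertools

def tiny_net(x1, x2, x3):
    # Hypothetical one-neuron "network": weighted sum plus step activation.
    return 1 if 0.6 * x1 + 0.6 * x2 - 0.4 * x3 > 0.5 else 0

def extract_rules(net, n_inputs):
    # The crudest possible "flowchart": a complete truth table,
    # built by exhaustively sampling every boolean input combination.
    return {bits: net(*bits) for bits in itertools.product((0, 1), repeat=n_inputs)}

table = extract_rules(tiny_net, 3)
```

With 3 inputs that's 8 rows; a real network with hundreds of real-valued inputs gives exactly the incomprehensible case-statement explosion described above.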

  14. Anonymous Coward
    Anonymous Coward

    WarGames

Teach it things like Tic-Tac-Toe. Loser slowly self-destructs, winner quickly self-destructs. Let it infinitely decide between pain and efficiency while understanding its creator's ultimate fate.

  15. Mike Shepherd
    Meh

    Trust the machines

    Computer Alice : just swap A with B, C with D...Y with Z.

    Computer Bob : Agreed : they never check what algorithm we've used anyway
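Alice's scheme, for the record, is a self-inverse pairwise substitution, which a couple of (hypothetical) lines cover:

```python
import string

UPPER = string.ascii_uppercase
# Swap each letter with its neighbour: A<->B, C<->D, ... Y<->Z
SWAPPED = "".join(UPPER[i + 1] if i % 2 == 0 else UPPER[i - 1] for i in range(26))
SWAP = str.maketrans(UPPER, SWAPPED)

def alice_encrypt(msg: str) -> str:
    return msg.upper().translate(SWAP)
```

Because every swap is its own inverse, Bob decrypts by running the identical function - handy when nobody checks the algorithm anyway.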

  16. Mage Silver badge

    based loosely on how neurons in the brain work

    So loosely that the only real similarities are that it's a network and has neuron in the name.

  17. amanfromMars 1 Silver badge

    Nothing to worry about ..... for you aint needed. That's the way things are nowadays.

    An abiding present problem which isn't just going to disappear in the future if ignored.

    It will be interesting to note as things quickly progress, how AI pioneers resolve, to the satisfaction or otherwise of established command and control SCADA systems, the creative disruptive/subversive destructive dichotomy which always results in a fundamental change of perception and remote learning and which now is being taught and hosted by machines/Global Operating Devices.

    Equally intriguing will be the hard to disguise and deny reaction of established forces to the constantly changing fields of Great Game play which are to power and empower such new actors and virtualised operating systems. …. Element AI’s blog

    Times have a'changed and smarter natives are always more than just restless. Some be quite toxic and HyperRadioProActive and APTly ACTive is a treat with AI delivering countless tricks to amaze and astound.

    Bletchley Park v2.0 lives?!…… and you have no idea what IT be doing there … or for whom and/or what. Super natural PAR for the course, of course, if you can believe what has previously been disclosed.

    1. amanfromMars 1 Silver badge

      Re: Nothing to worry about ..... for you aint needed. That's the way things are nowadays.

      Indeed, the abiding present problem is most probably a systemic flaw that future builders will exploit to excess for their pleasure, for there be nothing made available to stop them? Would that be a fair and reasonable assessment of the current situation for chaos and CHAOS [Clouds Hosting Advanced Operating Systems]?

      And mad, bad, rad and sad mainstreaming media will distract and misdirect you with sub-prime diversions and prime timed entertainment programs that command and control your thinking, which is in reality, your non-thinking.

      I Kid U Not ….. and to deny the truth, the whole truth and nothing but the truth in all of that, is all the stealth needed, remotely provided virtually free and autonomously, for SCADASystems Takeover and Makeover.

      So …. where be future builders? What be future builders? Who be future builders? And why is the future you see, made so dire and austere?

  18. Winters

    "-as the researchers don’t understand the algorithms themselves."

    You maniacs!

  19. Shady
    Terminator

    Fantastic

... so now they're teaching Skynet how to hide its plans for world domination from us....

    1. amanfromMars 1 Silver badge

      Fantastic .... Tilting at windmills in your minds.

Fantastic ... so now they're teaching Skynet how to hide its plans for world domination from us.... ... Shady

      No, no, Shady, Skynet morphs are exploring the different live options available to IT to present world domination to you.

      What'cha Gonna Do About It with or without IT on your side?

      Diddly squat to make any real difference is the opinion hazarded here for what is obviously missing in all current systems is the necessary Advanced Intelligence to counter Combined Special Forces and Sources with anything remotely able to be enabled to defeat that which would be clearly challenging administrations with problems being presented and hosted on media as if news to be believed and acted upon or rallied against.

  20. David Pollard
    Pint

    Humans don't understand how it works

    MIT may have just the thing:

    "At the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions."

    https://www.sciencedaily.com/releases/2016/10/161028162222.htm

    One data set for the research came from reviews of different beers. Nice work if you can get it.

  21. david 12 Silver badge

    Pshaw. Alice & Bob have developed simple machine-level ESP. There isn't any "encryption algorithm" happening at all.

  22. Lotaresco
    Meh

    Lack of understanding

Having worked with neural-net recognisers for medical imaging, it's true that it is difficult, close to impossible, to understand how the neural net is making its decisions. Sometimes it is possible to infer from the results what is going on, and in those cases the basis of the decision is sometimes surprisingly dumb.

    For example, a neural net had been trained to identify the symptoms of a disease from an infra-red image of a patient, and it was doing very well compared to the performance of a human operator. Then someone tried some images that weren't of a patient but of random objects in the lab, as a negative control. The neural net saw disease everywhere.

    Eventually it was possible to work out that it had trained itself to see a patch of bright pixels at slightly over 37C. This wasn't what the human operator was doing - their decisions were more subtle - but the neural net had reduced the problem to the absolute basics. It succeeded because all the test images could be reduced to a patch of pixels representing something slightly warmer than normal body temperature. When challenged with imagery from a wider set of sources it was an utter failure.
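The rule that net had converged on can be written out in one line, which is exactly why it failed outside its training set. The threshold and image layout below are illustrative guesses, not the actual system:

```python
def learned_rule(thermal_image, threshold_c=37.2):
    # What the net had effectively reduced the problem to:
    # "disease" == any patch warmer than normal body temperature.
    return any(t > threshold_c for row in thermal_image for t in row)

# Degrees-Celsius grids standing in for infra-red images:
patient = [[36.5, 38.1], [36.7, 36.6]]          # genuine feverish patch
soldering_iron = [[22.0, 250.0], [22.0, 22.0]]  # random hot lab object
```

Both `patient` and `soldering_iron` trip the rule, which is the "saw disease everywhere" negative-control failure in miniature.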

    1. David Nash Silver badge

      Re: Lack of understanding

      That seems understandable to me, unless the NN had been trained on such non-patient images. If it has only been trained on genuine medical images, it is not a surprise that results were questionable if you give it other images. Wouldn't training it with a mixture of images be the thing to do if you want it to be able to successfully reject non-patient images?

      I expect the simplest solution though is to filter images via a human.
