Fool ML once, shame on you. Fool ML twice, shame on... the AI dev? If you can hoodwink one model, you may be able to trick many more

Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study. It's hoped the research will inform and persuade AI developers to make their smart software more robust against these transferable attacks, preventing malicious …

  1. MacroRodent Silver badge

    Datasets

    Isn't there also a tendency to train AIs with publicly available datasets like ImageNet? An attacker can improve his chances by using the same dataset to train his test adversary.
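
    That same-dataset trick is easy to sketch. Below is a minimal illustration, assuming PyTorch and a pretrained torchvision model as the attacker's surrogate; `victim_model` is a hypothetical stand-in for the target, and the attack shown is plain one-step FGSM, not anything specific from the paper.

    ```python
    # Borrow a surrogate trained on the same public dataset (ImageNet here),
    # craft an adversarial example against it, then hope it transfers.
    import torch
    import torchvision.models as models

    surrogate = models.resnet18(pretrained=True).eval()  # attacker's ImageNet stand-in

    def fgsm(model, x, label, eps=0.03):
        """One-step FGSM: nudge the input along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), label)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    x, y = torch.rand(1, 3, 224, 224), torch.tensor([285])  # dummy image and label
    x_adv = fgsm(surrogate, x, y)
    # If the attack transfers, victim_model(x_adv) misclassifies too, even though
    # the gradient came from the surrogate. (victim_model is hypothetical.)
    ```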

  2. johnrobyclayton

    AI Buster Buster Buster Buster

    Train an AI to recognise something.

    Train the next AI to fool the first AI.

    Train a third AI to recognise the attempts to fool the first AI.

    Train the fourth AI to fool the third AI.

    Train the fifth AI to recognise the attempts to fool the third AI.

    Wash, rinse, repeat

    1. Adrian 4 Silver badge

      Re: AI Buster Buster Buster Buster

      Train the first AI on a subset of the training data

      Train the second AI on a different subset

      etc.

      Obtain a consensus

      Teamwork, through diversity as well as parallelism, gives humans a better result than individuals manage.

      Unless it's a committee.
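
      For the curious, here's a minimal sketch of that subset-plus-consensus scheme, assuming scikit-learn; it's essentially bagging with a majority vote, and the sizes are arbitrary.

      ```python
      # Train each model on a different subset of the data, then take a
      # majority vote as the consensus. Essentially bagging; scikit-learn
      # and the toy dataset are assumed purely for illustration.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=3000, random_state=0)
      rng = np.random.default_rng(0)

      models = []
      for _ in range(5):
          idx = rng.choice(len(X), size=1000, replace=False)  # a different subset each time
          models.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

      # Consensus: simple majority vote across the independently trained models.
      votes = np.stack([m.predict(X) for m in models])
      consensus = (votes.mean(axis=0) > 0.5).astype(int)
      print("consensus accuracy:", (consensus == y).mean())
      ```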

      1. Michael Wojcik Silver badge

        Re: AI Buster Buster Buster Buster

        Train the first AI on a subset of the training data

        Train the second AI on a different subset

        It's been done, with assorted variations. Even with a single model, you usually have a held-out portion of the corpus for purposes such as testing and tuning parameters.

        ML is steadily moving toward more and more complex architectures anyway - certainly more complex than just "train several models on different corpora and then let them vote". Graph Network (GN) architectures currently look like the most plausibly practical generalization to me, at least for problem domains where sufficient hardware resources can be applied. GNs let you combine lots of different models in various complex ways.
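
        Incidentally, that held-out split is a one-liner in most toolkits. A minimal sketch, assuming scikit-learn; the proportions are arbitrary.

        ```python
        # Carve the corpus into train / validation / test before fitting anything:
        # the test set is never touched during training, the validation set is
        # used for tuning. Proportions are arbitrary; scikit-learn assumed.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=5000, random_state=0)

        # Hold out 20% as the final test set.
        X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        # Split the remainder into train and validation.
        X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)
        ```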

    2. TechnicalBen Silver badge

      Kinda.

      How do we, or biology in general, do it? We use multiple models on a single sense/data set, or we use multiple senses/data sets. We ask additional people, etc.

      So you could use two differently modelled AIs. This shrinks the overlap in their space of errors or exploits. You could use multiple types of sensor: IR and RGB, 3D or sound (if distinguishing an African from a European swallow).

      Finally, we could use the AI with human assistance... though that really only helps with false positives, not false negatives... though theoretically you could retrain against known exploits as the human part of the check discovers them.
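
      A minimal sketch of the two-model version, assuming PyTorch and two pretrained torchvision architectures; the `ask_human` hook is a hypothetical stand-in for the human part of the check.

      ```python
      # Two differently architected models vote; agreement is accepted,
      # disagreement is escalated to a person. Model choices are illustrative.
      import torch
      import torchvision.models as models

      model_a = models.resnet18(pretrained=True).eval()      # one architecture...
      model_b = models.mobilenet_v2(pretrained=True).eval()  # ...and a different one

      def classify_with_fallback(x, ask_human):
          """Accept only when both models agree; otherwise defer to a human."""
          with torch.no_grad():
              pred_a = model_a(x).argmax(dim=1)
              pred_b = model_b(x).argmax(dim=1)
          if torch.equal(pred_a, pred_b):
              return pred_a
          return ask_human(x)  # hypothetical hook: route to human review
      ```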

      1. Anonymous Coward
        Anonymous Coward

        Re: Kinda.

        The problem with using two different "AIs" is that today we have only one paradigm that's any good at all for bigger-than-toy problems.

        WINTER IS COMING

    3. I.Geller Bronze badge

      Re: AI Buster Buster Buster Buster

      Any one of us could be deceived. What makes you think the AI can't?

    4. Michael Wojcik Silver badge

      Re: AI Buster Buster Buster Buster

      Points for the reference to The Big Hit, but you do know you're basically describing the GAN (Generative Adversarial Network) architecture, right? We're already doing this.
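
      For anyone who hasn't met one, here's a minimal GAN training loop, assuming PyTorch; the networks and data are toys, purely to show the fooler/spotter alternation.

      ```python
      # Generator learns to fool the discriminator; discriminator learns to
      # catch it: the two-player version of the buster/buster-buster chain.
      import torch
      import torch.nn as nn

      G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake sample
      D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCEWithLogitsLoss()

      for step in range(1000):
          real = torch.randn(64, 2) + 3.0   # "real" data: a shifted Gaussian
          fake = G(torch.randn(64, 16))

          # Discriminator's turn: label real as 1, fake as 0.
          d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()

          # Generator's turn: try to make the discriminator call fakes real.
          g_loss = bce(D(fake), torch.ones(64, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()
      ```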

  3. I.Geller Bronze badge

    lexical clone

    Create a lexical clone of yourself (or someone else) and this will solve (to some degree) the problem with models: your lexical clone will contain all the necessary knowledge. That is, one would have to deceive the AI itself, not artificial and limited models.

    1. Michael Wojcik Silver badge

      Re: lexical clone

      Ah, it's always good to hear from one of the commentariat's resident kooks. Ilya, I don't think I've ever seen anyone else use the phrase "lexical clone" the way you do, but if you have a reference for some text which does, I'd like to read it.

      Is it worth pointing out that 1) humans can also be deceived, or that 2) it is not a priori obvious that there is any functional distinction between human intelligence and the universe of possible "artificial models"? Attempts to prove such a difference generally either appeal to untestable attributes or rather suspect arguments about formal power (viz. Penrose).

      (Also, I have to say that I skimmed your patent and I'm not sure I see anything very novel there, except perhaps your compatibility formula. Expanding a kernel phrase into a small corpus using synonyms and grammatical transformations is pretty well established in NLP. But I didn't look at it terribly closely.)

      1. I.Geller Bronze badge

        Re: lexical clone

        I. 2) it is not a priori obvious that there is any functional distinction between human intelligence and the universe of possible "artificial models"?

        As long as sufficiently long tuples are formed, there is no difference between humans and Lexical Clones, where in mathematics a tuple is a finite ordered list (sequence) of elements. Speaking of my Lexical Clones, I meant that our minds are sets of tuples that can somehow be fixed as sets of related patterns.

        II. There is no difference between how we humans think and how a computer thinks - if and when comprehensive tuples describing as many situations as possible are formed. That is how Virtual Assistants (a synonym for my Lexical Clones) are created.

        III. The patent office granted them.

  4. Michael Wojcik Silver badge

    Good ol' Northeaster

    Northeaster University

    As an alum of Northeaster[n] myself, I'd like to note that it's a pretty good place, and certainly better than that crummy University of Westchristmas.

    Also, this typo has inspired me to think of the old place as "Nor'easter University", a pun which until now somehow escaped my attention. (Note Northeastern is in Boston, where the regional term "nor'easter", for a strong storm coming in from the Atlantic, can be heard.) Indeed, it now strikes me as rather a shame that the university's athletic congregations are not called the Northeaster Nor'easters, which sounds a hell of a lot tougher than "Huskies". Plus huskies are the mascots of approximately a million colleges and universities in the US, including U Conn, which is practically next door.
