Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can

It doesn't take much to confuse AI image classifiers: a group from Japan's Kyushu University reckon you can fool them by changing the value of a single pixel in an image. Researchers Jiawei Su, Danilo Vasconcellos Vargas and Sakurai Kouichi were working with two objectives: first, to predictably trick a Deep Neural Network ( …

  1. Bill Posters
    Pint

    Huh

    Just proves dogs are an AI's best friend as well.

  2. foxyshadis

    So they created a badly-trained machine learning algorithm, limited it to 32x32, and then created an easy attack against it? This is the kind of spam publishing that floods the lower-tier journals. I'm not even remotely interested until it's at least tested against one of the dozens of existing commercial machine learning algorithms.

    It might have been relevant in the '90s, when algorithms actually did downsample to such an extreme just to work at all with the processing power available, but this has literally zero implications for anything today; it's pure wankery by academics way out of touch with the state of the industry.

    1. Anonymous Coward
      Anonymous Coward

      The difference between science and technology

      So they created a badly-trained machine learning algorithm, limited it to 32x32, and then created an easy attack against it? This is the kind of spam publishing that floods the lower-tier journals.

      You are confusing science (exploring and establishing the principles) with technology (making things). This manuscript is about the science of recognition - while you are asking for a sellable product, which may or may not exist in the end.

      At least to me, this appears to be an original and thought-provoking work. You are correct that the attack as it stands would not work against a modern commercial system (however, see below). That is not the point. The main message is that image recognition is basically lossy compression or hashing: you are required to convert a large dataset (an image) to its hash (the category name). If your set of training inputs does not cover the entire Hilbert space of the dataset (ie you do not include all possible nonsensical images in your training regimen - which is obviously impossible), your hash is guaranteed to mis-classify some of the perturbed images.

      What this paper shows is that the changes needed to a sensible image (which you classify/hash correctly) to modify its classification are surprisingly small (0.1% of the pixels), and it shows how to find these changes. If the fraction of the pixels needed to take the image outside of the volume covered by the training set remains the same for large images (which obviously needs to be examined - something the paper does not hide or downplay in any way), then for a respectable 5-megapixel image you might need to modify about 5 thousand pixels - something the human eye might still ignore as noise or an inconsequential blemish.

      Furthermore, the attack as it stands is already useful against modern networks trying to classify parts of a larger picture - and those parts may not be much larger than 1,000 pixels.
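
      For the curious, the paper's method of finding such a change is based on differential evolution. The following is only a sketch of the idea in Python, assuming a hypothetical black-box function predict_prob(image, label) that returns the model's confidence that the image belongs to its true class:

          # Rough sketch of a one-pixel search via differential evolution.
          # predict_prob is a hypothetical black box: it returns the model's
          # probability that the image belongs to its true class.
          from scipy.optimize import differential_evolution

          def one_pixel_attack(image, true_class, predict_prob, size=32):
              # A candidate is (x, y, r, g, b): one pixel position plus a replacement colour.
              bounds = [(0, size - 1), (0, size - 1), (0, 255), (0, 255), (0, 255)]

              def confidence_in_true_class(candidate):
                  x, y, r, g, b = candidate
                  perturbed = image.copy()
                  perturbed[int(x), int(y)] = [int(r), int(g), int(b)]
                  # Minimising this drives the model away from the correct label.
                  return predict_prob(perturbed, true_class)

              result = differential_evolution(confidence_in_true_class, bounds,
                                              maxiter=30, popsize=20)
              return result.x  # the best single-pixel change found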

      1. Anonymous Coward
        Anonymous Coward

        Re: The difference between science and technology

        Thank you AC, for a marvellous explanation.

        Would it follow that this attack illustrates a major flaw in the AI and image recognition proposed for self-driving cars? Not necessarily from a security perspective, but simply in that the ability to classify an image, and then calculate risks and responses, could be undermined by such small changes.

      2. jmch Silver badge

        Re: The difference between science and technology

        " If your set of training inputs does not cover the entire Hilbert space of the dataset (ie you do not include all posssible nonsensical images in your training regimen..."

        That excellent insight also relates to how the training of ANNs is completely different from the way humans learn. From birth, humans spend 10-18 hours a day looking at things. The vast majority of those images aren't anything in particular, so humans are VERY good at ignoring noise in a picture*. ANNs spend a paltry amount of time training compared to that, and from my (albeit limited) experience with them, the training sets are typically weighted much more towards what is meant to be recognised than towards what is not.

        So maybe one important result to be drawn from this paper is that ANNs need to be trained on sets containing many thousands of negatives for every positive (see the sketch below).

        *too good, actually: sometimes humans can filter out even glaringly obvious things if we are focusing on something else, e.g. the Invisible Gorilla experiment
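
        As a very rough illustration of what that might look like in practice (purely a sketch; the dataset names, the noise-as-negative idea and the label index are all assumptions), one could pad a training set with "none of the above" examples:

            # Sketch: pad a training set with random-noise "negatives" so the
            # network also sees plenty of things that are not the target classes.
            import numpy as np

            def add_noise_negatives(x_train, y_train, n_negatives, negative_label):
                noise = np.random.uniform(x_train.min(), x_train.max(),
                                          size=(n_negatives,) + x_train.shape[1:]).astype(x_train.dtype)
                labels = np.full(n_negatives, negative_label, dtype=y_train.dtype)
                return np.concatenate([x_train, noise]), np.concatenate([y_train, labels])

            # e.g. x_aug, y_aug = add_noise_negatives(x_train, y_train, 50000, negative_label=10)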

        1. Muscleguy

          Re: The difference between science and technology

          I'm short-sighted and a distance runner. If it is raining, or just overcast and still in the morning, and I'm running without my glasses on, I'm always mistaking green bins at bus stops for portly people waiting for the bus. That's my visual system's best guess from the information available.

          I have thus become very sceptical of such things and find them quite amusing.

          Whenever you look at something, identify it, then go 'that's not it' and properly identify it, this process is happening.

          When doing science, especially when doing things like looking down microscopes, you might be confronted with something you have never seen before, or something you have seen but in a different aspect or cut in a different plane. You have to train yourself to be properly sceptical and really LOOK at what is actually there, not what you think is there. Training students to do this takes time.

          I remember reading about a student palaeontologist in the field. They were looking for early mammal fossils, which are usually isolated teeth or small fragments of jaw with teeth in them. He described going out and scanning the dusty stone field and seeing nothing. Older, more experienced people were finding things but he couldn't see them - and then suddenly teeth started leaping out at him.

          I also remember at my first postdoc I was sitting reading papers while things got set up (as you do) and some molecular biologists behind me were trying to dissect out 8-day mouse embryos and failing to find them. I eventually gave in and offered to help. I looked down the scope and the disembodied head of an embryo was 'staring' back at me. At that stage they are covered in foamy pink uterus lining, and the embryos are like slightly smoked glass sculptures. These guys couldn't see it until I pointed it out. They all became moderately competent embryologists.

          Seeing what is there can be hard. Artists go through a similar process of learning to really see.

      3. Anonymous Coward
        Anonymous Coward

        Re: Thanks for the really well thought comments and reply.

        As said above... this probes whether an algorithm "knows" what a dog is, or whether it is just hashing a load of images and getting an idea of what the closest set of data points is for "images most likely to be a dog".

        A proper "trained" net would instead look for eyes (dark or light spots, set apart, right size etc), then ears, then fur, then colour. A "working" net would just do anything that works, even if it is easily tricked. One type can be fooled by a single wrong pixel (or a focused attack) the other cannot (though other attacks, such as a toy dog ;) can trick it).

        So next time someone adds "AI" or "neural net" to a product you are buying, hope it's not mission-, life- or pizza-delivery-critical ("Siri, order me a pizza to my home"; "OK, zebra ordered to Rome")!

  3. Neil Barnes Silver badge
    WTF?

    The adjusted pixels

    are hardly unfindable in the presented illustrations.

    1. Anonymous Coward
      Anonymous Coward

      Re: The adjusted pixels

      That was not the point. The point was not to hide the deviation but to show that a small (<0.1%) image change catastrophically destroyed the ability of the algorithm to categorise the image and made it appear to be something else.

      1. My Alter Ego
        Joke

        Re: The adjusted pixels

        "...catastophically destroyed the ability of the algorithm to catgorise the image..."

        That's a little unfair, some of us like dog pictures!

      2. Anonymous Coward
        Anonymous Coward

        Re: The adjusted pixels

        "change catastophically destroyed the ability of the algorithm to catgorise"

        Minor nit - we're talking about the ability of the *trained model* to classify the image, not a given algorithm in the normal sense. Half the point of ANNs is that the end "algorithm" is not well-defined; it is produced arbitrarily and unpredictably based on the input training set and the design of the training network.

        "It shows that these hash functions (AI) are seriously unstable, and -more importantly - don't work in an even vaguely similar way to human pattern recognition."

        Less minor nits:

        1) We're not talking about hash functions - there's naff all hashing in this

        2) The outputs of the training can be remarkably similar to human pattern detection, as far as we can tell. The following two (excellent) blogs show what comes out when you attempt to visualise what the individual feature extraction and classification layers are actually doing in a network:

        https://hackernoon.com/visualizing-parts-of-convolutional-neural-networks-using-keras-and-cats-5cc01b214e59

        http://blog.cloudera.com/blog/2017/10/understanding-how-deep-learning-learns-to-play-set/

        Of particular interest is the tendency to pick up on contrasting colours and edges, one of the most important cues in natural vision. And remember, we've not told the model to look for edges. It's decided itself that that's the most significant feature to extract.

        That's also (probably) what leads to this attack being so successful; it's only trained on a limited set of very natural photos. They've stuck a great big lump of unnatural high-contrast in the middle of the picture so the network has no idea what to do and thinks it's a bleedin' horse.

        Such disruption attacks can be prevented with either some initial data cleansing or bigger models, and the attacks are often impractical because they depend on access to (or the ability to reverse engineer) the probability distributions the model comes out with. However the attacks are still academically interesting because they allow us to quantify the sensitivity of certain approaches and design around that.
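
        For anyone who wants to poke at this themselves, here is a minimal sketch of the layer-visualisation idea (not taken from the linked posts; the model and the layer name are just illustrative assumptions) using Keras:

            # Peek at what an early convolutional layer responds to.
            import numpy as np
            from tensorflow import keras

            model = keras.applications.MobileNetV2(weights="imagenet")   # any trained CNN will do
            layer = model.get_layer("block_1_expand_relu")               # an early conv layer; names are model-specific
            probe = keras.Model(inputs=model.input, outputs=layer.output)

            img = np.random.rand(1, 224, 224, 3).astype("float32")       # stand-in for a real photo
            acts = probe(img)                                            # shape: (1, H, W, channels)
            # Each channel is one learned filter's response map; early layers
            # typically light up on edges and colour contrast, as described above.
            print(acts.shape)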

        1. Richard 12 Silver badge

          Re: The adjusted pixels

          An image classifier is really a type of hashing algorithm.

          It takes a large number of varied inputs and puts them into a small number of specified buckets, and the same input goes into the same bucket every time. That's what hashing does.

          The only real difference is the intended purpose.

          A cryptographic hash wants small input changes to give large output changes, and for it to be very difficult to find another input giving the same output.

          A hash for a hashmap wants an even spread of outputs for the input set, and to be very fast.

          Input (image) classifiers want small input changes not to change the output at all.
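
          A toy illustration of the contrast (everything here is made up for the sake of the example; 'classify' stands in for a real trained model):

              # Flip one pixel and compare a cryptographic hash with a classifier's output.
              import hashlib
              import numpy as np

              img = np.zeros((32, 32, 3), dtype=np.uint8)     # stand-in 32x32 image
              tweaked = img.copy()
              tweaked[16, 16] = [255, 0, 0]                    # change a single pixel

              # Cryptographic hash: a one-pixel change scrambles the digest, by design.
              print(hashlib.sha256(img.tobytes()).hexdigest()[:16])
              print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])

              # Classifier: we *want* the label to stay the same; the paper shows it needn't.
              def classify(image):
                  return "dog"                                 # placeholder for a real model

              print(classify(img), classify(tweaked))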

          1. LionelB Silver badge

            Re: The adjusted pixels

            An image classifier is really a type of hashing algorithm.

            Not that simplistic: google Convolutional Neural Networks.

    2. Richard 12 Silver badge
      Facepalm

      Re: The adjusted pixels

      I believe those are markers to show where they changed it, not what it was changed to.

      It shows that these hash functions (AI) are seriously unstable, and - more importantly - don't work in an even vaguely similar way to human pattern recognition.

      Human vision can be tricked fairly easily, but it usually requires a change across a significant part of the field of vision.

  4. Mage Silver badge

    Hah!

    Hot Dog!

    1. Anonymous Coward
      Anonymous Coward

      Re: Hah!

      Changed to Jumping Frog, Albuquerque

  5. maffski

    No different to guessing a password

    It seems to be an iterative attack. Essentially no different to guessing a hashed password by coming up with something that creates the same hash. If your access control isn't rate-limiting or cutting you off, then it deserves to be bypassed.

  6. This post has been deleted by its author

  7. David Roberts
    Pint

    But can it tell the difference

    Between a dog and a fox?

    Hint 5 * ->

  8. MacroRodent

    Another demo...

    Another demo that machine vision still has a long way to go. Just recently a different group demonstrated an attack where a tiny change to a physical object, like adding a suitable sticker to a traffic sign, caused an image classifier to mistake it for something else entirely, even though to a human it still was obviously the traffic sign. Nice to know we are still better than machines at something...

    1. BoldMan

      Re: Another demo...

      Yes, but what is scary about this is all the techno-illiterates who reckon that self-driving cars will be here shortly, when it's obvious the sensor technology just isn't up to scratch yet.

  9. c1ue

    Interesting work and equally good comments.

    ANNs, AI and machine learning are all the rage, but the reality is still that their capabilities are suspect.

    For one thing, human beings are continuously training - from birth to death - on noise filtering, pattern recognition, etc.

    How many images does a human being process from year 1 to year 10 vs. a repetitive set of dog vs. cat pics? Quantity does not necessarily replace quality.

    In theory, over time, the robots can keep improving - but ultimately the main problem is that their survival scores are based on income. Or in other words, they will only improve to the point where their makers expect to make money - a very low bar for acceptable success.

    Darwinian in the crapification sense as opposed to Darwinian in the "nature red in tooth and claw" sense.

    1. Anonymous Coward
      Anonymous Coward

      But another way of looking at it might be: Humans can only recognise what they themselves have observed or another has observed and described for them.

      However, an AI/machine can assimilate the information of all of them. If you have 1 million AI vehicles driving for one hour, then cumulatively they have seen more than a single human does in a lifetime.

      A language translation engine can recognise more languages than any human can, or could ever hope to. That doesn't mean it is better at recognising the translation between two distinct languages than a bilingual human is.

      1. Richard 12 Silver badge
        Gimp

        True, they have captured more images.

        However, unless there is another someone making sure they know when they got it wrong, they won't learn anything.

        In humans that role is taken up by the many other humans around, who have received different training sets and can use carefully chosen training phrases such as "Are you blind you ****wit?" to inform of errors.

        If the AIs in the cars are all synced, they all get the same training set and won't realise many mistakes.

        If they're not synced, then when crashes occur the "outdated" AI gets blamed and people get very angry.

        Which is a bit of a catch-22.

  10. Rebel Science

    Geoffrey Hinton Is Right. Backpropagation Must Go

    If your goal is AGI and you're using backpropagation, you're hopelessly lost. If you're not using spike timing, you're doing it wrong. What should replace backpropagation?

    1. LionelB Silver badge

      Re: Geoffrey Hinton Is Right. Backpropagation Must Go

      Don't entirely disagree, but that article annoyingly conflates backpropagation and supervised learning. Backpropagation is an efficient iterative algorithm used to implement supervised learning in feed-forward networks. It is not synonymous with supervised learning.

      The article argues strongly for unsupervised learning - which is fair enough - but to my mind over-eggs the pudding. "Learning" in humans (and indeed other animals) involves both supervised and unsupervised learning. Try sticking your hand in a fire; the supervisor (i.e., "the world") will send you a very powerful error signal.

      (Nor is STDP the only game in town for unsupervised learning).

      1. Anonymous Coward
        Anonymous Coward

        Re: Geoffrey Hinton Is Right. Backpropagation Must Go

        I tend to think that in any practical system there must be some supervision of learning, or why would the system learn something that you might be interested in? You'd end up with a self-driving car that ignores the traffic and is only interested in classifying the local flora and fauna, or decides that all the sensory input is boring and so it should just ignore it and think about number theory instead.

        I also tend to think that I'll be long dead before AI does something that I would really find interesting. I don't see much fundamental progress since the 1970s.

  11. Danger Mouth

    Looking at it from the other end...

    Your self-driving car uses a camera. A pixel, or a few, on the sensor die over time and suddenly the "no entry" sign becomes a "national speed limit applies" sign to the AI.

  12. anthonyhegedus Silver badge

    But AI is going to take over the world. We are all going to die.

    1. Anonymous Coward
      Anonymous Coward

      > But AI is going to take over the world. We are all going to die.

      We will be able to protect ourselves by wearing one pixel badges. Sadly a lot of dogs, frogs, horses etc are going to fall to the killbots as a result.

  13. myhandler

    So is the general rule that AI pattern recognition works off perceptual hashing?

    There must be more to it - that seems fundamentally not AI but just massive statistical guesswork.

    They should be converting to vectors, like SVG etc, then looking for patterns.

    1. Anonymous Coward
      Anonymous Coward

      These days, anything that uses a neural network is classified as AI, no matter how it is used.

      1. LionelB Silver badge

        These days, anything that uses a neural network is classified as AI, no matter how it is used.

        I think we're going to have to learn to live with that. You could argue that the term "AI" has become de facto re-defined (debased?). But since nobody really knew or agreed on what AI ought to mean in the first place*, that's hardly tragic.

        *Seems to me that a majority of Reg respondents appear to conflate "real AI" with "human-like intelligence". Okay, that's a thing, but that's not the only game in town, and, with respect to the current state of play, sets the bar impossibly high. I'd be happy, at this stage, to see more research and engineering of "insect-like intelligence", or even "bacteria-like intelligence" - we're not even there yet.

    2. LionelB Silver badge

      So is the general rule that AI pattern recognition works off perceptual hashing?

      There must be more to it - that seems fundamentally not AI but just massive statistical guesswork.

      Not sure what you mean by "statistical guesswork", but perhaps a bit closer to "hierarchical perceptual hashing" or something like that. Google "convolutional neural networks" (CNNs). Roughly, they're multi-layer feed-forward networks that apply spatial filters to patches of an image and pass the results on to the next layer, thus building up recognition of patterns at a hierarchy of spatial scales.

      They should be converting to vectors, like SVG etc, then looking for patterns.

      I doubt a vector format would work terribly well for spatial pattern search, insofar as the spatial relationships between visual elements of the image would be more difficult to extract. CNNs use spatial convolution, as that works rather well for extracting spatial "motifs" from a (raster) image.
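
      For the curious, a minimal sketch of that kind of architecture (sizes and layer counts are arbitrary, just to show the shape of the thing):

          # Stacked convolutions: each layer applies small spatial filters and
          # feeds the next, so later layers respond to patterns at larger scales.
          from tensorflow import keras
          from tensorflow.keras import layers

          model = keras.Sequential([
              keras.Input(shape=(32, 32, 3)),              # a small raster image
              layers.Conv2D(32, 3, activation="relu"),     # 3x3 filters over local patches
              layers.MaxPooling2D(),                       # downsample: next layer sees a coarser scale
              layers.Conv2D(64, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Flatten(),
              layers.Dense(10, activation="softmax"),      # e.g. ten categories
          ])
          model.summary()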

  14. Arthur the cat Silver badge

    ANPR

    I wonder if this sort of attack would work against ANPR? Probably not, given the relevant (sub)image set is small and legally required to be a certain shape and size, but if it did, the possibilities for mischief would be huge.

    1. Anonymous Coward
      Anonymous Coward

      Re: ANPR

      Maybe if you have your daughter "accidentally" put a couple of unicorn stickers on it in the right place (not even on the numbers), that'll do the trick!

  15. Anonymous Coward
    Anonymous Coward

    I guess the question is how much the pixel has to change in order to flip the image classification. In short, is it enough for a given pixel to go from #666600 to #666620, which is very unlikely to be detected by the human eye? A follow-up might be: since one image was successfully modified to be put into all nine classifications, what are the implications of using this as a means of passing secret information?

  16. Alistair
    Windows

    I'll just leave this here for the AI to work on

    But they answered: "Frighten? Why should any one be frightened by a hat?"

  17. Captain DaFt

    Not the article I wanted to read

    Especially just moments after reading this one.

    From BBC:

    "But a consequence of the design is that it behaves like a "black box".

    Its behaviour can be observed but the underlying processes remain opaque."

    From El Reg:

    "Not only that, but the boffins didn't need to know anything about the inside of the DNN – as they put it, they only needed its “black box” output of probability labels to function."

    Doesn't look so good for iPhone's new security ID tech, does it?

  18. Anonymous Coward
    Anonymous Coward

    One-dimensional, hence exceedingly fragile

    Without delving into technical details, it looks to me as though the designers of those systems have made the mistake of treating the challenge as no more complicated than the Turing Test.

    As has often been pointed out, to pass the Turing Test a computer does not have to "think", "feel", or in any way simulate the operation of a human nervous system. All it has to do is, on one single occasion for a limited time, manage to hold up its end of a conversation in such a way that its interlocutor cannot distinguish it from a human being.

    The pattern recognition systems designed so far apparently aim only to meet certain performance criteria under "normal" conditions. They do not seem to have been designed to cope with unusual or adverse conditions. They need to be subjected to the ministrations of a Tiger Team - people who will go to great lengths to make them fail. Only if they can be shown to go on working reliably regardless of such adverse conditions can they be considered as even eligible for safety-critical tasks.

    1. LionelB Silver badge

      Re: One-dimensional, hence exceedingly fragile

      That fragility sounds to me like a consequence of a crappy training regime. If you want robust behaviour you need to train your networks on noisy data, and even design deliberately confounding/deceptive training data.

      A promising and increasingly popular avenue to achieving better robustness is "adversarial networks", where you have one network trying to get better at the task at hand, while another network tries to get better at deceiving the original network.
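
      A simpler cousin of that idea is adversarial training: generate deliberately confounding inputs and train on them too. A rough sketch using the standard fast-gradient-sign trick (the function and variable names around it are illustrative assumptions, not anyone's published code):

          # Nudge each pixel in the direction that most increases the loss, then
          # train on the perturbed images alongside the clean ones.
          import tensorflow as tf

          def fgsm_perturb(model, images, labels, eps=0.01):
              images = tf.cast(images, tf.float32)          # assumes pixels already scaled to [0, 1]
              with tf.GradientTape() as tape:
                  tape.watch(images)
                  loss = tf.keras.losses.sparse_categorical_crossentropy(labels, model(images))
              grad = tape.gradient(loss, images)
              return tf.clip_by_value(images + eps * tf.sign(grad), 0.0, 1.0)

          # In the training loop, something like:
          #   adv = fgsm_perturb(model, x_batch, y_batch)
          #   model.train_on_batch(tf.concat([x_batch, adv], 0), tf.concat([y_batch, y_batch], 0))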

  19. Sssss

    Real AI

    Ohhhh, is that a yellow Labrador up there? Idiots. I know what the issue is: free-floating datasets. I don't know if they have figured out how neural nets work (hint: a good reason to avoid using them until you know what's happening. Anybody seen this in science fiction movies?).

    Now, in normal intelligence, data is anchored and cross-linked between these anchor points, represented by structures in the brain. In a naive neural mesh there are none of these structures. So, it should be possible to change one value and have that change produce a wider change, as has happened here. The AI naively understands the data as a blob, and the blob can change and move around freely under influence. Hence, the data is not really a horse or a car as we understand it. It may start as image 1 and image 2. Over time, with training, it may have parts that form an identifiable subset which it can identify with, in broad terms. The sets are not set in stone, like visual and category systems are, and are therefore malleable. The rules themselves are malleable, as they have no bounds. You are dealing with something more open than a 1.5-year-old; even if you can train them up to a 6-year-old's level, some simple coaxing produces undesirable outcomes.

    In the human brain, you have various systems to be anchored to. The visual system, at a low level (I have experienced this), understands images as various geometric shapes, builds up on top of that, and fills in with shade, texture and detail. Each of these things might have its own unique physical anchor point, forming a category that can be cross-linked with others, and also linked hierarchically with others. The strength of the bonds forms shapes, structures, and structure links/pathways into different systems in the brain. A single-point change to an image does not usually result in the object changing category, only changes in detail in related categories. The mind has proof, from multiple anchored categories of information, that it is a car, and it remains a car. You would have to retrain (brainwash, basically) to convince somebody it isn't a car. The brain sees the shape sub-characteristics and functional mechanics, which prove it is a car or a dog. A single pixel may change, but the mind sees a bonnet, windows, doors, roof and wheels that turn, or an eye, a snout like a dog's, a head like a dog's, legs to move and a tail to wag, like a dog, so it's probably a dog with a funny-looking pixel (in this case, a car with a funny-looking pixel). This is because the category sets are bounded and anchored and cross-linked ("reminds me of..."). To get this happening in computer terms, there need to be logical, reality-based separate systems to anchor to, sets and hierarchical data subsets, and discrete separate spaces/anchor sets. This can be done in software.

    This comes from some stuff I've come up with unrelated to AI, and some AI stuff I came up with in primary school to emulate the human mind. My proposed model of human thought has also matched subsequent research.

    The above will probably help. As I said on an article about Google claiming their search AI is up to the level of a 6-year-old: in recent years my search results look like they were given by a six-year-old. So the industry needs the help, I fear.
