Explain yourself, mister: Fresh efforts at Google to understand why an AI system says yes or no

Google has announced a new Explainable AI feature for its cloud platform, which provides more information about the features that cause an AI prediction to come up with its results. Artificial neural networks, which are used by many of today's machine learning and AI systems, are modelled to some extent on biological brains. …

  1. macjules

    debuggability

    Sometimes I despair at how the English language is being bastardised.

    1. Sleep deprived

      Re: debuggability

      Yet I like this concept. I sometimes have a hard time finding out the reason my non-AI code behaves the way it does.

    2. Jon Blund

      Re: debuggability

      Seriously, English has been a mongrel since day one.

      Languages evolve; debuggability is a reasonable attribute in the software development universe.

    3. Graham Dawson Silver badge

      Re: debuggability

      This is a feature of the language. Might as well complain that adjectives exist.

    4. DJV Silver badge

      Re: debuggability

      James Nicoll summed it up nicely (or not, depending on your point of view): "The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary."

      1. IceC0ld

        Re: debuggability

        English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.

        ahh, the now almost compulsory Terry Pratchett inclusion, lovely stuff

    5. Eric Olson

      Re: debuggability

      Dude, do you even English?

    6. Anonymous Coward

      Re: debuggability

      I'm sticking with Fortran IV.

  2. iron Silver badge

    Looking at that 'cat or dog?' picture I'm surprised the AI didn't pay any attention to the cat's eyes. Surely a strong indication that an animal is a cat would be it having cat eyes?

    1. Sleep deprived

      But what if it also has canine teeth?

      1. Brewster's Angle Grinder Silver badge

        Then it's a sabre-tooth tiger. Simples.

        1. Steve Knox

          Except tigers don't have cat eyes.

          1. Brewster's Angle Grinder Silver badge

            And sabre-tooth tigers aren't tigers. But as an ambush predator, smilodon probably had slit eyes.

            Let me point out that cats' pupils are circular in limited light. And while dogs may not have slit pupils, foxes do. But this turns out to be a fascinating and poorly researched rabbit hole. And according to the go-to paper, the large cats evolved circular pupils from an ancestor with slit pupils. It's not clear whether smilodon would have had slit pupils or whether its lifestyle fulfilled the conditions to evolve them - its eyes weren't as forward facing as modern cats'. But it's certainly possible.

    2. CrazyOldCatMan Silver badge

      animal is a cat would be it having cat eyes?

      Except there are other species that have very similar eyes..

      (Mind you - I'm struggling with the caption picture - how many times do you have to identify an animal that has a small sprig of vegetation on its nose?)

    3. Michael Wojcik Silver badge

      I've seen several collections of NN activation "heat maps" for image classification, and they're pretty much always surprising. That is not, I think, very reassuring.

      There have been some attempts to extract deeper explanations for these "evolved features" (that is, functions that the NN stack has generated through unsupervised learning). Explanatory techniques such as the feature ranking performed by Google AI Explanations aren't particularly useful for the deep convolutional stacks typically used for image recognition, because the evolved features aren't created by human judges and so don't make much sense to human analysts (or have useful labels like "max temp"). So researchers have turned to information theory, for example, and visualization.

      Heat maps are one approach to visualizing what signals are being extracted from images by deep convolutional NNs. Unfortunately, while this area of research has produced hundreds of really quite interesting and thought-provoking papers, we still really don't understand why deep CNN architectures evolve the features they evolve. That's one reason for the more recent push toward creating interpretable models rather than trying to explain black-box ones.
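
      For the curious, the simplest of these visualization techniques is occlusion: slide a neutral patch across the image and watch how the class score moves. A minimal sketch, assuming a hypothetical `model` callable that maps a batch of float images in [0, 1] to an array of class probabilities (not any particular library's API):

      ```python
      import numpy as np

      def occlusion_heat_map(model, image, target_class, patch=16, stride=8):
          """Slide a grey patch over the image and record how much the
          target-class probability drops; large drops mark the regions
          the model was relying on for its prediction."""
          h, w = image.shape[:2]
          base = model(image[None])[0, target_class]  # score on the intact image
          rows = (h - patch) // stride + 1
          cols = (w - patch) // stride + 1
          heat = np.zeros((rows, cols))
          for i in range(rows):
              for j in range(cols):
                  y, x = i * stride, j * stride
                  occluded = image.copy()
                  occluded[y:y + patch, x:x + patch] = 0.5  # neutral grey square
                  heat[i, j] = base - model(occluded[None])[0, target_class]
          return heat  # higher value = that region mattered more
      ```

      Note the catch this thread keeps circling: such a map tells you where the model looked (snow, trees), not why - which is exactly the explainability gap.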

      1. Bill Michaelson

        Sometimes inference is enough, sometimes almost a head-scratcher

        When I played with a classification API, the code told me the groundhog in my backyard is a bear. OK, fair enough. Then it told me a concrete bench is a skate park. Well, I suppose the latter classification is more accurate - in a way.

        I'm reminded of push-polling except that we barely control how we ask the questions.

  3. Pascal Monett Silver badge

    "AI Explainability"

    Using AI to explain AI. What could possibly go wrong?

    Start by making AI debuggable. I don't care if you need an expert to interpret the trace, so long as there is an expert around who can.

    Then make it easier.

    Stop trying to leap to user-friendly at all costs. There can be intermediate steps.

  4. Oh Matron!

    Bias....

    Whilst we have humans training models, they will never perform to their best capabilities. Take a recent wolf model... It was trained well, by a human, but performed poorly...

    The reason was that the photos used to train the model had wolves in them, but also trees and snow. And the model picked up on these instead of the doggos. We all associate wolves with snow, but there are plenty of them wandering around central and southern Europe.

    So, this explainability is key to understanding, or, ironically, predicting, how well a model will perform.

    1. CrazyOldCatMan Silver badge

      Re: Bias....

      We all associate wolves with snow

      That's because we[1] have managed to kill off, or severely hamper, the ones that live in non-snowy parts..

      [1] That's the human 'we'. And we here in the UK have managed to kill off all our large predators, leaving only foxes and wildcats to represent them..

    2. Michael Wojcik Silver badge

      Re: Bias....

      Explanation tools like these are generally used on models trained by unsupervised learning, not those trained by humans.

    3. Bill Michaelson

      Re: Bias....

      Have we tried categorizing dogs walking on Wall Street? Say, in front of the NYSE?

  5. keb

    Applying Asimov's 3 Laws of Robotics to corporations

    is one way to kickstart ethical or "responsible" AI development.

    Corporations (and other legal entities) are self-perpetuating organizations that are structurally sociopathic and have consistently demonstrated such behaviour.

    1. AndrueC Silver badge

      Re: Applying Asimov's 3 Laws of Robotics to corporations

      Asimov had one viewpoint. Frank Herbert had another.

      "In the future, mankind has tried to develop artificial intelligence, succeeding only once, and then disastrously. A transmission from the project site on an island in the Puget Sound, "rogue consciousness!", was followed by slaughter and destruction, culminating in the island vanishing from the face of the earth."

      ..and before you know it you have a Jesus Incident :-/

      I don't know if we'll ever produce true AI but the idea both amazes and scares me in equal measure.

    2. Michael Wojcik Silver badge

      Re: Applying Asimov's 3 Laws of Robotics to corporations

      When you come up with a robust mechanism for doing that, you let us know, eh?

      So far the best humanity has been able to come up with appears to be late capitalism with a moderately-strong regulatory state, and that's still so buggy Adobe would be ashamed of it.

  6. macjules

    Selling England by the pound?

    Laudable that London's Met Police are soon to be assisting in the "AI" removal of terrorist or violent videos, BUT (and it is a big but) giving that data to Facebook might be construed as dangerous in itself, let alone not informing the Home Office exactly how much footage was handed over. If this is indeed day-to-day video footage, how would the Met obtain it, by what means, and do they need ICO permission?

    1. CrazyOldCatMan Silver badge

      Re: Selling England by the pound?

      More like The Battle of Epping Forest..

  7. Nigel Sedgwick

    Tainting of Training Datasets etc

    I find it disappointing that the main article (reporting on an interview with Dr Andrew Moore of Google) gives, as its main example of the benefits of "AI Explainability", that they had detected use of a tainted training dataset (and presumably also a tainted validation dataset and a tainted evaluation dataset). These datasets were ones in which expert human annotation had been included in the images themselves, rather than just the appropriate labelling of each image.

    Such tainting of the images is something that should never happen, given proper training, validation and evaluation protocols - which were clearly inadequate in the reported case.

    "AI Explainability" is, in many applications (including most of those to do with medical diagnosis and health and safety), an additional requirement. It is especially important where the feature set is automatically generated from scratch, rather than using human-defined features that have an implied explanation of their relevance. And it should be noted that techniques extracting feature sets from training data (eg Deep Learning) do have some advantages in technical performance over feature sets drawn (merely) from expert human knowledge.

    Such "AI Explainability" is useful in avoiding such examples as the problematic wolf recognition (mentioned by commenter Oh Matron! above) being at least partly snow recognition. Also I've read of recognition of grey rather than of elephants - and battle tanks only photographed on tarmac, with lack of tank mainly being against forest background. Example problems such as these are actually failures in selection of adequate training, validation and evaluation datasets; however they are (in all fairness) more difficult to prevent than the tainting of the dataset images with expert human judgements.

    It may well be that one of the better practical approaches to "AI Explainability" is to use image recognition techniques for many subordinate feature-groups, followed by statistical pattern matching of the presence of several (but not requiring all) of these subordinate feature-groups (also in realistic geometric arrangements). For example, battle-tank recognition by requiring tracks, body, turret, gun - with likely geometric relationships and taking account of obscuration by buildings, rubble, low vegetation, trees, infantry - plus counts of such tank subordinate feature-groups. There could also be sensor fusion, eg from infra-red and visible spectrum images - again including likely geometric relationships, between the same and different sensor types. "AI Explainability" would come from listing items detected of the subordinate feature-groups and their geometric relationships.
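
    As an illustration of that subordinate feature-group idea, here is a minimal sketch (all names, boxes and thresholds hypothetical, invented for this comment): per-part detectors emit scored bounding boxes, and a simple combiner checks that enough parts are present in a plausible geometric arrangement, returning both a score and the evidence list that constitutes the "explanation".

    ```python
    from dataclasses import dataclass

    @dataclass
    class Detection:
        part: str      # e.g. "track", "body", "turret", "gun"
        box: tuple     # (x, y, w, h) in image coordinates
        score: float   # detector confidence in [0, 1]

    PART_SET = {"track", "body", "turret", "gun"}
    MIN_PARTS = 3  # tolerate obscuration: not every part need be visible

    def explainable_tank_score(detections):
        """Combine per-part confidences with a crude geometric check
        (turret above body in image coordinates). Returns a score plus
        the evidence list that 'explains' the decision."""
        parts = {d.part: d for d in detections if d.part in PART_SET}
        if len(parts) < MIN_PARTS:
            return 0.0, ["too few tank parts detected"]
        evidence = [f"{d.part} detected (p={d.score:.2f})" for d in parts.values()]
        score = sum(d.score for d in parts.values()) / len(PART_SET)
        if "turret" in parts and "body" in parts:
            if parts["turret"].box[1] < parts["body"].box[1]:  # smaller y = higher up
                score = min(score * 1.2, 1.0)
                evidence.append("turret positioned above body")
        return score, evidence

    # Example: three parts visible, gun obscured by rubble.
    dets = [Detection("track", (40, 80, 120, 20), 0.9),
            Detection("body", (45, 50, 110, 30), 0.8),
            Detection("turret", (70, 30, 40, 20), 0.7)]
    score, why = explainable_tank_score(dets)  # 'why' is the human-readable explanation
    ```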

    Best regards

    1. Martin an gof Silver badge

      Re: Tainting of Training Datasets etc

      The tanks one has been around for ages. I'd almost swear I remember being told that story back in the 1990s, when computers started to become powerful enough and cheap enough to dream that they might one day be usefully employed on this sort of task (think: automated defences).

      The way I remember it was that the system was supposedly trained to distinguish between 'friendly' and 'enemy' tanks, and when it failed in real life they discovered that all the images of friendly tanks were well-lit and uncluttered, while the 'enemy' images were grainy, often taken on dull or wet days, so the model had boiled it down (essentially) to sunny = friend, rainy = enemy. Of course, back then it wasn't called 'AI', it was an 'expert system' or somesuch.

      I wondered at the time whether getting a system to recognise a 'whole' was really the right way to do it, when recognising 'parts' might be easier and the recognition of the whole can be based on the parts recognised and their physical relationships.

      Maybe it needs additional inputs as you suggest - IR is a good start, and radar. The military already have tracking systems for missiles that use these senses in 'intelligent' ways. Combining with depth information would also provide additional data points.

      Judging by what I see in cars though, the goal seems to be to perform recognition on the least amount of information possible - often images from a single simple camera. The speed sign recognition system in my wife's car is proof positive that this approach doesn't work, even in exceptionally simple and limited use cases!

      M.

      1. Danny 2

        Re: Tainting of Training Datasets etc

        "while the 'enemy' images were grainy"

        That's a nice idea but I don't believe it. It's akin to the story of the Soviet dogs with mines on their backs, trained to attack panzers but running under the more familiar Soviet tanks. The thing is, AI has plenty of clear images of foreign tanks because they keep parading them.

        https://xkcd.com/2228/

        1. Anonymous Coward

          Re: Tainting of Training Datasets etc

          The tank example was to detect tanks in camouflage so only ‘real’ pictures were used.

        2. Martin an gof Silver badge

          Re: Tainting of Training Datasets etc

          "while the 'enemy' images were grainy"

          That's a nice idea but I don't believe it

          I said "if I remember" :-) - the point I was trying to make was that this sort of thing, i.e. of "self learning" algorithms learning the wrong thing because of poor training data, has been known about for decades. I mentioned the 1990s because in my (admittedly poor) memory I heard the story sometime between university and my first proper job and I remember thinking that getting a computer to recognise any kind of image reliably was a bit of a feat - my final year project at university had involved trying to digitise an "edge" in an image from a video camera.

          A company with Google's resources and 25+ years of research papers to build upon should have learned from other people's mistakes. Yes, it's difficult. No, there are no shortcuts.

          M.

    2. Michael Wojcik Silver badge

      Re: Tainting of Training Datasets etc

      It may well be that one of the better practical approaches to "AI Explainability" is to use image recognition techniques for many subordinate feature-groups, followed by statistical pattern matching of the presence of several (but not requiring all) of these subordinate feature-groups (also in realistic geometric arrangements).

      In the jargon of the field, that's interpretability, not explainability. The latter is analyzing models post hoc; the former is constructing models using already-understood features. See the last link in my post above.
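
      To make the jargon concrete: an interpretable model is readable by construction - say, a linear model over named, human-understood features - while explainability bolts analysis onto a black box after the fact. A toy sketch, with the feature names and functions invented here (echoing the "max temp" style labels mentioned above):

      ```python
      import numpy as np

      # Interpretable by construction: a linear model over human-defined features.
      features = ["max_temp", "day_of_week", "start_hour"]  # already-understood inputs
      weights = np.array([0.8, -0.1, 0.3])                  # learned coefficients

      def predict(x):
          return float(weights @ x)  # the weights ARE the explanation

      # Explainability, by contrast, probes an opaque model post hoc, e.g. by
      # perturbing each input and ranking how much it moves the output.
      def post_hoc_ranking(model, x, eps=1e-3):
          base = model(x)
          return {f: abs(model(x + eps * np.eye(len(x))[i]) - base) / eps
                  for i, f in enumerate(features)}

      ranking = post_hoc_ranking(predict, np.array([20.0, 3.0, 8.0]))
      ```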

  8. Anonymous Coward

    "many of them the physician had lightly marked on the slide"

    So they are using training data that hasn't been sanitised. If that's how they do all of their training, you cannot really trust anything that has been done. Also, no wonder the NNs are getting so big compared to what the size recommendations/needs used to be; if they are putting crap in, no wonder it takes so much to get it to work.

    Training should be done with quality data, not quantity. You introduce randomisation / crap data programmatically, modifying the good data, to make it more robust / less rigid.
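
    To make that concrete, a minimal augmentation sketch (assuming float images in [0, 1]; the transforms and magnitudes here are illustrative, not a recommendation): derive noisy and transformed variants from each clean, verified image rather than padding the training set with unvetted data.

    ```python
    import numpy as np

    def augment(image, rng):
        """Generate variants of one clean image: mirror, sensor noise,
        lighting change. Quality in, controlled randomisation on top."""
        variants = [image]
        variants.append(np.fliplr(image))                          # mirror
        noisy = image + rng.normal(0.0, 0.05, image.shape)         # sensor noise
        variants.append(np.clip(noisy, 0.0, 1.0))
        dimmed = np.clip(image * rng.uniform(0.6, 1.0), 0.0, 1.0)  # lighting
        variants.append(dimmed)
        return variants

    rng = np.random.default_rng(0)
    # training_batch = [v for img in clean_images for v in augment(img, rng)]
    ```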

    1. Anonymous Coward

      Quality and quantity

      Quality and quantity are still both required. In this case it does seem a seriously compromised workflow from the outset, as they simply were not using data the way the system would be expected to encounter it in operation, i.e. before human intervention.

      Overall, it's one of those issues where, despite the benefits of repeatability, testing and parameter sensitivity analysis, removing bias from pre-existing biased environments is hard: the data set is biased simply by quantity, and you begin to trade off quantity for quality by reducing your training set to a balanced sample that, sadly, does not reflect real-world situations. AI can still be useful as part of the process but needs to fit into a system that works overall. Hardly new to filter and clean your data inputs!
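
      For what that trade-off looks like in practice, a minimal sketch (names hypothetical) of the balance-by-discarding approach described above: every class is cut down to the size of the smallest one, so the set becomes balanced precisely by throwing real-world data away.

      ```python
      import random

      def downsample_balance(examples, label_of, seed=0):
          """Balance classes by discarding majority-class examples: the
          result is balanced, but real data has been sacrificed for it."""
          rng = random.Random(seed)
          by_label = {}
          for ex in examples:
              by_label.setdefault(label_of(ex), []).append(ex)
          n = min(len(group) for group in by_label.values())  # smallest class
          balanced = []
          for group in by_label.values():
              balanced.extend(rng.sample(group, n))
          rng.shuffle(balanced)
          return balanced
      ```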

      1. Il'Geller

        Re: Quality and quantity

        AI technology annotates word patterns using dictionaries and encyclopedias, which are the only texts with virtually no bias, cleansed of any kind of unnecessary information. I.e., AI makes words unique using the best source of uncontaminated, first-class information.

        As for quantity: AI technology builds chains of dictionary definitions related to the meaning of words. These chains can be 50-200 or more paragraphs long, which provides both the quantity and the quality! For example, the AI can find 2-5 websites where Google outputs tens of millions; that was used by me in NIST TREC QA, and by IBM in Jeopardy!

  9. Danny 2

    Computer says no

    HAL : I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

    (Has anyone else here worried that AI has taken over Microsoft, hence Windows 10?)

    1. Teiwaz

      Re: Computer says no

      (Has anyone else here worried that AI has taken over Microsoft, hence Windows 10?)

      W10: artificial? Yes. Intelligent? Doesn't seem like it.

      W10 users: artificial? No. Intelligent? Debatable.

  10. Cuddles

    Cycling times

    "The tool shows factors like temperature, day of week and start time, scored to show their influence on the prediction."

    Looking at the results, I can't help being surprised that accidentally diverting into a non-Euclidean geometry didn't have greater impact.
