New phone who dis? Facial recognition models more farcical despite progress

AI systems have superior abilities at recognising faces in theory, but when they're deployed in practice they often fail miserably. Machines can analyse people's faces in a way that humans cannot. They can place landmarks outlining facial features to calculate minute details like the distances between the eyes, nose and lips. …
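The landmark-distance idea described in the excerpt can be sketched in a few lines. This is a toy illustration, not any real model's pipeline: the coordinates are invented, and real detectors (such as dlib's 68-point model) emit far more points.

```python
import math

# Hypothetical 2D landmark coordinates (in pixels) for one detected face.
landmarks = {
    "left_eye":  (120.0, 95.0),
    "right_eye": (180.0, 96.0),
    "nose_tip":  (150.0, 140.0),
    "mouth":     (151.0, 175.0),
}

def distance(a, b):
    """Euclidean distance between two named landmarks."""
    return math.dist(landmarks[a], landmarks[b])

# Ratios of distances are more useful than raw pixel distances,
# since they stay the same when the face is scaled in the image.
eye_span = distance("left_eye", "right_eye")
eye_to_mouth = distance("nose_tip", "mouth")
ratio = eye_to_mouth / eye_span
```

Comparing ratios like this across two images is (very roughly) the kind of "minute detail" measurement the article means.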

  1. Pascal Monett Silver badge
    Stop

    "AI doesn't always work as well as you'd expect in real life"

    AI works exactly as the scenario says it should because the only place we have AI is in science-fiction.

In real life, we only have statistical analysis machines, and they are only as good as the data they were trained with and the humans who wrote the code - which means not very good at this point in time.

    1. Yet Another Anonymous coward Silver badge

      Re: "AI doesn't always work as well as you'd expect in real life"

Depends on the data. For things like cancer cell screening they beat humans. Oddly, for characterising galaxy images they have been beating humans since the 90s.

      1. JeffyPoooh Silver badge
        Pint

        Re: "AI doesn't always work as well as you'd expect in real life"

        YAAC suggested, "...Oddly for characterising galaxy images they have been beating humans since the 90s."

        It's then odd that humans are still involved.

        Ref. Galaxy Zoo

        So your supposed counter example is also a counter example to itself.

        The field of AI is rife with false claims. Thus it may be assumed that all AI claims are false or hyped. It's a very accurate assumption. Basically the entire field is full of morons and liars.

        1. martinusher Silver badge

          Re: "AI doesn't always work as well as you'd expect in real life"

Those classification machines are not 'intelligent' as such; they're just a very sophisticated type of filter, one whose workings we can't really quantify. Training data is going to vary depending on where we live -- I'd guess that Chinese systems are biased towards recognizing Chinese people.

          Incidentally, I'm a human that finds it difficult to recognize individual human faces, including those of close relatives. These articles treat facial recognition by humans as a fait accompli, something we all do with ease. That's just not true, and I'd guess that I'm not the only person who doesn't get faces right (but at least I admit to it...).

          1. Diogenes

            Re: "AI doesn't always work as well as you'd expect in real life"

When he was younger, my son always had kids come up to him thinking he was Harry Potter (he even has a scar on his forehead - but it doesn't look the same & is on the wrong side). There was another boy at his school that at first glance I often mistook for him. Even today, there are a few around my local shops that I need to look twice at to make sure it is not him paying us a surprise visit from his home 1000km away!

          2. Olivier2553 Silver badge

            Re: "AI doesn't always work as well as you'd expect in real life"

            These articles treat facial recognition by humans as a fait accompli, something we all do with ease. That's just not true, and I'd guess that I'm not the only person who doesn't get faces right (but at least I admit to it...).

And until we understand how humans do (or do not) recognize faces, we will not be able to create algorithms that properly mimic that ability.

      2. TechnicalBen Silver badge

        Re: "AI doesn't always work as well as you'd expect in real life"

Those are not really "AI" though. They *ARE* statistical look-up tables. For when our squidgy brains and eyes hit the limits of their ability, we can pass it on to a computer. Just as we do for, say, fine manipulation tools, or heavy construction tools. We are not replacing a human, but adding to one.

Getting a computer to check the size of a cell or star almost perfectly is simpler, more accurate, and faster than getting a human to do it. Just in the same way, getting in my car to go to the shops is quicker than running (well, depends on the runner, and the traffic! XD).
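The "check the size of a cell or star" job really is this mechanical. As a toy sketch (the tiny image, brightness threshold, and microscope calibration below are all invented for illustration), it amounts to counting bright pixels and applying a scale:

```python
# Toy object-size measurement: count the pixels of a bright blob in a
# small grayscale image and convert the count to a physical area.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
THRESHOLD = 5        # brightness cutoff separating object from background
UM_PER_PIXEL = 2.0   # assumed calibration: microns per pixel

pixel_count = sum(v > THRESHOLD for row in image for v in row)
area_um2 = pixel_count * UM_PER_PIXEL ** 2
```

A computer repeats this identically a million times; a human squinting at a screen does not - which is exactly why this kind of task, unlike open-ended face recognition in crowds, suits a machine.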

  2. Anonymous Coward
    Anonymous Coward

    It doesn't work? I'm surprised!

    Wow, so the real world is massively and unpredictably variable and not like a static photo/dataset - who knew???

    1. macjules Silver badge

      Re: It doesn't work? I'm surprised!

      "the real world is massively and unpredictably variable and not like a static photo/dataset of white pasty-faced geeks working for Facebook or Google"

      There, FTFY

  3. Stuart Grout

    So how do we correct the 'real world' so that it behaves more like the data sets?

  4. c1ue

Glasses are one variable. Then there's facial hair, tinted contact lenses, hair color, tan/no tan, makeup, lighting, growth (maturation/obesity/dieting)... the list goes on and on.

    AI is like idiot unevolved children playing with wooden blocks: they can say which blocks match but good luck when transferred to the real world where everything isn't a wooden block.

    The "but if only we had better data" is a crock because the real problem is that they don't actually know what constitutes a robust identifier system. People and animals have a system tested over millions of years in the real world; AI facial has literally a handful of years of real world exposure.

I haven't seen it, but there might be research which has attempted to quantify just how many data points are used by people to identify faces.

My bet is that it is a lot more than the relative handful present AI uses: distance between eyes, nose size, jaw angle, etc. And also a lot more quantitative: attractive people are symmetrical to the millimeter level, so clearly human pattern recognition is taking that into account, even if subconsciously.
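The symmetry point above is at least measurable in principle. A minimal sketch, using made-up landmark coordinates: mirror each left-side landmark across the face's vertical midline and see how far it lands from its right-side counterpart.

```python
# Illustrative facial-symmetry check. Each pair is (left point, right point)
# as (x, y) coordinates in millimetres, with the midline at x = 0.
# The coordinates are invented for illustration.
pairs = [
    ((-30.0, 0.0), (30.5, 0.2)),    # outer eye corners
    ((-20.0, 40.0), (20.3, 39.8)),  # mouth corners
]

def asymmetry(pairs):
    """Mean distance between each mirrored left point and its right twin."""
    total = 0.0
    for (lx, ly), (rx, ry) in pairs:
        mx, my = -lx, ly  # mirror across the midline x = 0
        total += ((mx - rx) ** 2 + (my - ry) ** 2) ** 0.5
    return total / len(pairs)

score = asymmetry(pairs)  # smaller score = more symmetric face
```

Whether any deployed system actually uses a cue like this is exactly the commenter's open question.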

    1. Richard Jones 1
      Holmes

      @c1ue

I suspect that is a large part of the issue. A human or an animal can make a multi-pass assessment of a face and still get it wrong, if my experience is anything to go by, but that is a different story. An initial human scan of a crowd will possibly result in some hits, or perhaps no hits, so on a second, third, or fourth scan, additional data will be sought, until the confidence level rises. Animals, and humans to a lesser extent, can call in other aids: a sense of smell, analysis of body language, etc. Can the expert system even check for more data points? Does it even experience moving images? Does it have any relevant experience of dealing with crowds in situations where subjects might be distracted?

With a lot more evolution these systems should improve, but my confidence in that happening is limited. I suspect that the initial inputs are all that the devices will ever get, with no option to increase either the setup ('training') data or the live field data points. I doubt that a current system could easily realise that a subject with a proverbial wooden leg, or even no leg, would not be the same as a face connected to no such 'feature'. Most humans and probably most animals would take that in automatically. Make-up and glasses are already able to fool systems (and some humans); I suspect that will continue to be the case for some time.

      1. TechnicalBen Silver badge

        Re: @c1ue

A bit of both. Some expert systems have been found to be checking texture, not shape. Thus they can check things... but again, unlike the human or animal, the reward system might be wrong, or the pre-arranged learning mechanic may be wrong (as in the example above: shape was supposed to be learned, but colour was picked up in error).

Some systems are not able to, though. Say a facial recognition system may be programmed to only take in single images, so it has no ability to link movements etc. Then later they add it in (as with fingerprint sensors checking if the finger is alive!). Whereas, as you say, humans and animals tend to not have much of a restriction at all when it comes to learning.

  5. Ian Emery Silver badge

    Humans !!

    They all look the same!!! [robotic voice]

    1. W.S.Gosset Bronze badge

      THEY'RE MADE OUT OF MEAT

      A BRILLIANT v.short scifi CLASSIC which everyone should read if they haven't already, and read again for a chuckle if they have.

      "You're asking me to believe in sentient meat."

      ...

      "You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."

  6. spold Bronze badge

    Learning point perhaps...?

    From the article I gather it's rather unfortunate if your face happens to look like the back-end of a bus

  7. Olivier2553 Silver badge

    Too large a training set?

One reason may be that the training set is too large. Humans are able to recognize the handful of persons we already know, not just anybody and everybody.

Before we are able to recognize someone, we need to first memorize that face, learn that new person. And maybe we adapt our recognition ability to the sample of persons we know: in a sense, we train the ML with the test set and not the reverse.

One telling case is twins. We develop new criteria to distinguish a particular pair of twins, because we know it is harder to tell them apart than it is for other persons.
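The enroll-then-recognize pattern this comment describes can be sketched as a toy nearest-neighbour gallery. Everything here is invented for illustration: real systems compare learned embeddings of hundreds of dimensions, not hand-written 3-D vectors, but the structure - match a probe only against a small set of known people, with a rejection threshold for "unknown" - is the same.

```python
# Toy gallery recognition: enroll a handful of known faces as embedding
# vectors, then match a probe against only those enrolled identities.
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}
THRESHOLD = 0.5  # reject matches farther than this; tuning it is the hard part

def identify(probe):
    """Return the closest enrolled name, or 'unknown' if nothing is close."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    name, d = min(((n, dist(probe, e)) for n, e in gallery.items()),
                  key=lambda t: t[1])
    return name if d <= THRESHOLD else "unknown"
```

A probe near an enrolled vector comes back as that person; one far from everything comes back "unknown" - mirroring the comment's point that we recognize a small memorized set, rather than "any and everybody".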


Biting the hand that feeds IT © 1998–2019