Silly Google's Photos app labelled black people as gorillas

Google's new Photos software automatically labelled images of black people as "gorillas". The ad giant has since apologised. Mountain View's hugely embarrassing blunder comes just one month after it launched its cloud-hosted photo storage service, and made a big deal out of its machine-learning features. Google also warned …

  1. Anonymous Coward
    Anonymous Coward

    Maybe

    Google should hire a black employee or two, for a change. This is what you get when there are no black folks working there to test this kind of rubbish app on.

    1. Anonymous Coward
      Anonymous Coward

      Re: Maybe

      Perhaps they did it deliberately in order to get massive free publicity coverage for their product?

    2. Anonymous Coward
      Anonymous Coward

      Re: Maybe

      I can't speak for Google's diversity policy, but I'm fairly sure there's a large enough sample of pictures online for almost every ethnicity you care to mention.

    3. Meerkatjie

      Re: Maybe

      Like the time we had to test an off-the-shelf IVR system that failed to recognise female voices. Was a bit of a problem for us since the majority of our callers were female.

    4. Anonymous Coward
      Anonymous Coward

      Re: Maybe

      > This is what you get when there are no black folks working there to test this kind of rubbish app on.

      Awesome ... consulting insult analysts.

      "Hey Pete, c'mere. Are you insulted by this?"

      "Nope"

      "And this?"

      "No."

      "How about this?"

      "Err ... nope."

  2. John Robson Silver badge

    Question?

    "However, the question has to be asked: why did Google release such a half-baked app for showtime in the first place?"

    That's not a question - they needed test subjects...

    1. ratfox

      Answer

      It is not half baked. It probably works as it should 99.9999% of the time. However, when you have hundreds of millions of users, that still means hundreds of cases that are wrong.

      It's like saying Google Maps is half baked because there is a street that is missing in your town. It's still damn useful.
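
      To put rough numbers on that (a minimal sketch; the 99.9999% figure and the photo count below are illustrative guesses, not anything Google has published):

        #include <stdio.h>

        /* Back-of-the-envelope: even a one-in-a-million error rate
           produces hundreds of mistakes at this scale. Both numbers
           are assumptions, for illustration only. */
        int main(void)
        {
            double error_rate = 1.0 - 0.999999; /* one label in a million wrong */
            double photos     = 500e6;          /* assumed: hundreds of millions */
            printf("expected mislabels: %.0f\n", error_rate * photos); /* ~500 */
            return 0;
        }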

      1. Jeffrey Nonken

        Re: Answer

        Ratfox, thank you for expressing thoughtful conjecture instead of just bashing Google.

    2. 142

      Re: Question?

      And indeed, it's not even half baked!

      By all accounts, the ratio of correct categorisation to mistakes is extremely good for an image detection system of this sort.

    3. Anonymous Coward
      Anonymous Coward

      Re: Question?

      "That's not a question - they needed test subjects..."

      Yep, we are all Google's Software Quality Assurance and Test (SQuAT) team.....

      Anon - I know of one organisation that really is thick enough to have a software test team with that name.

      1. Charles Osborne

        Re: Question?

        Software Quality Uh...ssurance And Test?

    4. Matt Bryant Silver badge
      Facepalm

      Re: John Robson Re: Question?

      Welcome to the World of Agile! Letting users find your embarrassing bugs in the name of "flexibility".

    5. Anonymous Coward
      Anonymous Coward

      Re: Question?

      Test "subjects" or test "monkeys"?

    6. Charles Manning

      Is the software really broken?

      The whole point of machine learning software is that it gets fed input, does a classification and generates output.

      The software is not broken, it just has not been fed with good data. It clearly needs more black people in its learning set so it can tell the difference between a gorilla and a black person. This is no different from the recent NSFW classification by, IIRC, FB that classified pictures of girly bits as butterflies.

      But no, the numpties think there's racist software that goes

      if (image_property.black_face) printf("gorilla.\n");

      These classifications come from what people type in. As black people can call other black people "nigger" without the PC alarms going off, we'll also see these classification engines generate outputs like "nigger", "bro"... and no doubt the technically illiterate will think Google added more code that says

      if (image_property.black_face) printf("nigger");

  3. A Non e-mouse Silver badge
    Holmes

    Testing

    ...the question has to be asked: why did Google release such a half-baked app for showtime in the first place?

    Because in the agile web 2.0 world, you are the alpha & beta testers.

    1. Mark 85

      Re: Testing

      I think it goes back further than the web... many companies did the risk assessment and decided to let their customers/users do the testing. It's just taken the software companies time to figure out how in-house testing affects the bottom line. The difference is that software generally won't kill or maim people, compared to, say, a car or other piece of equipment.

  4. Irongut

    "However, the question has to be asked: why did Google release such a half-baked app for showtime in the first place?"

    Come on, Kelly, you've been in IT journalism long enough to know the answer to that question. Everything Google does is a half-baked "beta" that may be cancelled at any time and with minimum notice, even services that no longer have the beta tag, like Gmail.

  5. Anonymous Coward
    Anonymous Coward

    You're not really going to put much development time into what is not classed as your target audience. Which is wrong on quite a few levels.

  6. Bob Wheeler

    AI is hard

    How do you define an algorithm to describe a chair, something to sit on? Is that algorithm good enough to correctly distinguish a dining table chair, a stool, a sofa, a park bench?

    1. Turtle

      @Bob Wheeler Re: AI is hard

      "How do you define an algorithm to describe a chair, something to sit on. Is that algorithm good enough to correctly distinguish a dining table chair, a stool, sofa, a park bench?"

      That's a good question. And what's particularly interesting is that scientists do not even know how the human mind is able to distinguish the incredible variety of things that are subsumed under the heading "chairs".

      Because intelligence of any sort, artificial or otherwise, is hard.

    2. Anonymous Coward
      Anonymous Coward

      Re: AI is hard

      Does that mean that white people are polar bears? What about people after a night on the town, are they pandas? AI is hard. I'd understand if Cruella was identified as a skunk.

      1. JoshOvki

        Re: AI is hard

        I wouldn't be offended if I was tagged as a panda. Probably be fairly accurate actually.

        1. Anonymous Coward
          Anonymous Coward

          Re: AI is hard

          Question: Why is being called a Gorilla an insult? What's wrong with Gorillas?

          And the AI appeared to do a reasonable job in identifying that the image was indeed an animal, with a black face, eyes, mouth and nose.... The mistake was just the type of animal.

          So everyone decides that this should be treated as racist, or as an insult. C'mon folks, the problem lies with the meatbags, not with the AI.

          1. Mark 85

            Re: AI is hard

            Well... there's a whole lot of racial slurs involving apes and monkeys that still show up in the US. So yes, I can see where it's an insult. Unintentional from an AI standpoint, but... unintended consequences and all that.

            1. TeeCee Gold badge
              Thumb Up

              Re: AI is hard

              Unintentional from an AI standpoint...

              Yup, the important bit about an insult is the intent. A "perceived insult" isn't actually an insult at all, just a misunderstanding.

              Trouble is that the legal system tends to reward people for acting pigshit-thick and misunderstanding as much as possible.

          2. Graham Marsden
            Boffin

            Re: AI is hard

            > Question: Why is being called a Gorilla an insult?

            Seriously? Try looking at some history and how people who don't have white skin have been constantly and repeatedly dismissed over the centuries as "less than human" and then you may find the answer to your question.

          3. Anonymous Coward
            Anonymous Coward

            Re: AI is hard

            In the absence of the context of the treatment of those of African origin - you would be quite right to point out elements of faux-outrage, or victim identification.

            But as they were often portrayed as animals/less than human, there is good reason that it may make some bridle at the inadvertent linking.

            1. Anonymous Coward
              Anonymous Coward

              Re: AI is hard

              Bananas are thrown at players on the pitch all over Europe. It's maybe not reported so much, having become de rigueur.

        2. Charlie_Manson
          Joke

          Re: AI is hard

          JoshOvki

          Is that because you eat, shoot and leave?

    3. Anonymous Coward
      Anonymous Coward

      Re: AI is hard

      Yes, AI is hard. Google excels at coming up with fast and useful answers to questions of nearly impossible scale. It's in their designs, their software, and their mentality. As a result, Googlers have trouble comprehending situations where one imperfect answer may have dire consequences.

      1. Anonymous Coward
        Anonymous Coward

        Re: AI is hard

        Many of the above comments bear out what I initially wrote... The problem is with the meatbags, not the AI.

        The AI has no concept of "insult"; it did not intentionally create this situation. The AI analysed an image and found a match for what it characterises as a gorilla. Nothing racial here.

        The only time it becomes a racial problem is when the PC brigade start shouting, because up until that point it was, and is, just a computer algorithm trying to determine the real-world equivalent of an object from a bitmap.

        Reading anything more into it than that should really make you think about your own mind's processes...

        Unless of course certain amongst the El Reg forum believe in a more biblical approach to evolution... I can hear Darwin sobbing to himself with his face cupped in his hands...

  7. Crisp

    Stuff kids say

    To be fair to Google that machine can only be a few years old, and what toddler hasn't said something inappropriate?

    1. TRT Silver badge

      Re: Stuff kids say

      My daughter reached the linguistic ability of being able to put adjectives and nouns together quite early on, but her speech was somewhat indistinct. "White car" sounded more like "Whan car"... which she shouted very loudly whilst pointing at a passing BMW with windows wound down, lowered suspension, trailing "aromatic" smoke like a Pacific 4-6-2, and during a gap between tracks 19 and 20 of the album "Murder Junkies" that had been pounding out of the ICE as it approached... One can only be grateful that the driver was probably momentarily suffering hearing loss as a result of the incredible volume.

      1. Sir Runcible Spoon
        Coat

        Re: Stuff kids say

        She sounds quite advanced to me, can't find fault with either her timing or what she said.

      2. Anonymous Coward
        Anonymous Coward

        You *sure* she meant "white car"?

        @TRT; Sounds like your daughter hit the nail on the head with "whan car". Perhaps you're just not giving her enough credit? ;-)

        1. TRT Silver badge

          Re: You *sure* she meant "white car"?

          Accuracy and not getting your dad gunned down in a drive-by for teaching his kids to 'dis the local drug dealer are two different skill sets.

      3. Dan 10

        Re: Stuff kids say

        Genius. I once blagged a ticket for a Liverpool-Fulham football match, got separated from my friend and ended up in a stand with the opposing Fulham supporters. Liverpool won 2-0, which was met with much hurling of obscenities by those around me, including the man next to me who was with his son (I know, great example!). At the end, this lad, probably about 7 years old, turned to his dad and, pointing at me, said "he's not made a sound for 90 minutes, do you think he's a Liverpool fan?" His dad said something like "No, don't be silly, be quiet", and I just thought he was one of the most observant people there. I wanted to tell his dad, but chickened out!

      4. Anonymous Coward
        Anonymous Coward

        Re: Stuff kids say

        My 2-year-old likes watching Play-Doh videos on YouTube (don't ask) and has seen me use Google Now. So the other day she was trying to tell my phone "Okay googoo.... paedo videos". Also she likes "surprise egg videos" (again, don't ask), and Google Now interprets "egg videos" as "xvideos", so that's one feature that won't be returning to my phone any time soon.

        1. Sir Runcible Spoon

          Re: Stuff kids say

          "so thats one feature that won't be returning to my phone any time soon."

          Superb :)

          I think all* products should be tested by toddlers.

          *There will obviously need to be *some* restrictions!

    2. Anonymous Coward
      Anonymous Coward

      Re: Stuff kids say

      Very true... There were no black people where I lived, so when I did talk to a black guy (I must have been 6 or 7) I asked why the palms of his hands were white (Mum and Dad's gobs wide open!!!)... He said it was due to the spray paint job he had from God: he'd had to put his hands on the wall... Fab. That must have been great. I carried on being a 6-year-old... Only a plonker would see this as racist... Work in progress... I would love to be compared to such a great animal!!

  8. TRT Silver badge

    Reminds me a bit...

    of that scene in Die Hard With A Vengeance where John McClane has to walk around Harlem with a sandwich board...

    Except Google don't have any sort of noble rationale for doing something so utterly stupid and offensive.

    1. Turtle

      @ TRT

      "Except Google don't have any sort of noble rationale behind why they are doing something so utterly stupid and offensive."

      Look, no one hates Google as much as I do, but they didn't do this intentionally. And when they say "We’re appalled and genuinely sorry that this happened" I actually believe them. And I don't believe much of what they say, I promise you.

      1. Anonymous Coward
        Anonymous Coward

        @Turtle - Re: @ TRT

        You believe in Google?!! Good, how about the Tooth Fairy?

        Sorry, pal, I simply can't stop my guffaws.

  9. Mephistro
    Happy

    On the other hand, if I am included in their images database, ...

    ... I'm probably classified as an albino silverback.

    This is clearly NOT a case of racism. It's plain stupidity without any additives! ;-)

    1. Brewster's Angle Grinder Silver badge

      Re: On the other hand, if I am included in their images database, ...

      The charge of racism is directed towards the programmers for not having used enough photos of black people. But, obviously, none of us are in a position to judge since we don't know what it was trained on.

      1. Destroy All Monsters Silver badge

        Re: On the other hand, if I am included in their images database, ...

        The charge of racism is directed towards the programmers for not having used enough photos of black people

        So "racism" now extends to using a biased training set? Oh brave new world.

        "Be offended often. It helps in not noticing the real problems."

        1. Anonymous Coward
          Anonymous Coward

          Re: On the other hand, if I am included in their images database, ...

          Implicit racism informed the selection bias so subtly that those developing the tool failed to notice that the samples fed into their tool were not representative of the extent of variation existing outside the confines of the Silicon Valley/Bay Area.

        2. Brewster's Angle Grinder Silver badge

          Re: On the other hand, if I am included in their images database, ...

          >So "racism" now extends to using a biased training set?

          Let's read the first sentence of Wikipedia's current entry on racism: "Racism consists of ideologies and practices that seek to justify, or cause, the unequal distribution of privileges, rights or goods among different racial groups."

          So we have the right---or perhaps privilege---of being recognised as an instance of H. sapiens sapiens apparently being caused to be distributed unequally via the use of a biased training set. QED

      2. roselan

        Re: On the other hand, if I am included in their images database, ...

        Don't dismiss the possibility of racist circlejerks tagging pictures in bulk to "train" the system whilst it is young.

        I doubt these idiots are capable of such insight though. Google's only mistake was probably to use only pictures from Google+ ...

        1. Triggerfish

          Re: On the other hand, if I am included in their images database, ...

          This link shows some of the ways they are teaching the image recognition software; it ends up with some surreal photos when they try to get it to interpret some images. Clouds seem to give it some problems: it sees faces and things in the patterns.

          http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html?m=1

    2. Anonymous Coward
      Anonymous Coward

      Re: On the other hand, if I am included in their images database, ...

      This is clearly NOT a case of racism. It's plain stupidity without any additives! ;-)

      That depends on who happens to be reading it at the time.

      Someone will always imply something that was never there in the first place..

      you watch.....

      1. Maty
        Headmaster

        Re: On the other hand, if I am included in their images database, ...

        'Someone will always imply something that was never there in the first place.'

        I think you mean 'infer something'.

        When you imply something, you suggest an idea without stating it directly. If you infer something, you imagine that an idea has been suggested but not directly stated. That's why when you 'infer' something you bring your own idea into what you heard or read. (Latin inferre - 'bring in')

  10. ilmari

    The problem with machine learning is that once you run out of material to teach it, you won't make any further progress. And, of course, you can never be sure what the machine has learned, exactly.

    The classical example (whether true or not) is the military attempt to teach a computer to spot tanks hidden in bushes. So they photographed lots of bushes with tanks, and lots of bushes without tanks. After some crunching, the computer was able to tell the difference.

    Real life tests, however, failed utterly. Eventually someone noticed that all the pictures with tanks were taken on sunny days, and the other pictures on cloudy days. The computer had learned to tell the difference between nice weather and cloudy weather.

    This is why you need a tremendous amount of data to train the machine with.
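
    A toy sketch of that failure mode (everything below is invented for illustration): a "classifier" that has really latched onto brightness looks perfect on the original photos and falls over in the field.

      #include <stdio.h>

      /* Hypothetical tank detector that has actually learned that
         bright (sunny) images mean "tank", because every tank photo
         in its training set happened to be taken in sunshine. */
      static const char *classify(double mean_brightness)
      {
          return mean_brightness > 0.5 ? "tank" : "no tank";
      }

      int main(void)
      {
          printf("sunny tank photo:  %s\n", classify(0.8)); /* right, by luck */
          printf("cloudy empty bush: %s\n", classify(0.3)); /* right, by luck */
          printf("cloudy tank photo: %s\n", classify(0.3)); /* wrong, in the field */
          return 0;
      }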

    1. glen waverley
      Facepalm

      AI revisited

      Along similar lines re computers "learning" the wrong thing. I once heard a story about war game software related to convoys and navy escorts*. One of the important things about a convoy is that it travels at the speed of the slowest ship in the convoy. The navy escort of course has weapons and can shoot and sink ships - to guard against attacks on the convoy by the enemy.

      In one simulation of an enemy attack, the navy ships started shooting at the slowest members of their own convoy, causing them to sink, and thus speeding up the whole convoy.

      Not quite the real world example one wants.

      *Might have been US software.

    2. Daniel Hutty

      @ilmari

      I heard a slightly different version of that story - in the version I heard, the US tried to teach an AI system to tell the difference between US and Russian tanks. However, all their images of their own tanks were taken from close up, whereas the images they had of Russian tanks were naturally taken from a distance, then enlarged; so the system learned to differentiate between high-quality and low-quality images rather than between tank types...

      1. Triggerfish

        In WW2 the Russians tried similar tricks with dogs: they would strap explosives to the dogs and train them to run under tanks, in the hope of taking out the German tanks.

        They had some issues though, as the tanks the dogs had been trained on, and so recognised first, were Russian tanks.

    3. TRT Silver badge

      Initially it was trained in a landscape of spheres, cuboids and pyramids.

  11. Anonymous Coward
    Anonymous Coward

    "Offensive"?

    Could someone who really actually finds this "offensive" (not just thinks that other people might find it "offensive") please explain why? Does your religion view gorillas as "unclean", for example?

    In my part of the world gorillas are seen rather positively: strong, noble, vegetarian, ...

    1. Anonymous Coward
      Anonymous Coward

      Re: "Offensive"?

      Hmm, you have a point. There are few things I find as objectionable as 'forced outrage', where no one is really that bothered, but people feel the need to be offended (for political / ideological reasons), even if they are not. Is this one of those times? Is anyone really that worried what some kind of image analysis software did? I'm sure it didn't mean to offend.

      1. JP19

        Re: "Offensive"?

        "but people feel the need to be offended"

        People don't feel the need to *be* offended - they feel the need to ^claim^ to be offended on someone else's behalf and so demonstrate they are more noble than anyone who claims to be offended less. This of course makes them dishonest slimeballs - that or just too stupid to understand what they are doing.

      2. Destroy All Monsters Silver badge

        Re: "Offensive"?

        THIS IS AMERICA.

        They are just now discussing the "Confederate Battle Flag" while "Gay Marriage" is higher on the menu than a burning Middle East.

        Enough said.

        1. Anonymous Coward
          Anonymous Coward

          Re: "Offensive"?

          I have always considered this "Gay Marriage thing", and now the "Confederate Flag idiocy" as issues that keep the great unwashed busy with unimportant matters whilst the governments continue to hide far more important issues...

    2. Anonymous Coward
      Anonymous Coward

      Re: "Offensive"?

      > "strong, noble, vegetarian, ..."

      Prone to violence...

    3. Anonymous Coward
      Anonymous Coward

      Re: "Offensive"?

      positively: strong, noble, vegetarian

      You mean like this

    4. Anonymous Coward
      Anonymous Coward

      Re: "Offensive"?

      In the US, blacks have been compared to monkeys and apes as an insult in the past. If you called a black man a gorilla in the US, you might as well have used the n-word as far as he's concerned.

  12. RISC OS

    well...

    ...I guess if they employ as many black people as Facebook, there were not so many test photos around with black people in them.

  13. Britt

    Ducks

    The number of young women making faces that have been misidentified as ducks has not been released.

    I'm sure it's a fair few.

    1. Destroy All Monsters Silver badge
      Coat

      Re: Ducks

      "How do we know she is a witch?"

      ".... Er... Goggle image processing algorithm identifies her as ... a duck??"

      "EXACTLY!"

  14. JimmyPage Silver badge
    FAIL

    So how behind *are* Google

    Having seen IBM's take on artificial vision (http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=PM&subtype=SP&htmlfid=YTD03119USEN/ ) at Hursley, it seems Google are still in the 1960s when it comes to processing.

    Most impressive things I saw, when they ran the network over a video of a park scene, were:

    1) Although it had never been told what a skateboard was, it correctly labelled a skateboarder in the same box as "cyclist". So it had worked out that "cyclist = human on wheels" and then re-applied that to the skateboarder.

    2) Correctly following a cyclist dismounting, and changing the label from cyclist to pedestrian.

    3) Correctly identifying a static shape (person sitting on wall) as human (technically very high probability of being a human).

    Spoiler alert: some if not all of this project is funded by the DoD ...

    1. Anonymous Coward
      Anonymous Coward

      Re: So how behind *are* Google

      "Spoiler alert: some if not all of this project is funded by the DoD ..."

      Correctly classified objects as either:

      (1) Targets,

      (2) Collateral

    2. Sir Runcible Spoon

      Re: So how behind *are* Google

      *voice* I have detailed files on human anatomy. */voice*

  15. Anonymous Coward
    Anonymous Coward

    Given that an algorithm has no sense of morality, political correctness, or apathy, I can't really fault Google for hosing this one.

    Is it wrong? Yup. Definitely not gorillas.

    Would it be just as messed up if it tagged a fat, white, moustachioed, speedo-wielding guy as a walrus? Yup. How about if it tagged an anorexic person (or the average model) as a 'stick figure' or a proctological exam photo as 'middle management?'

    I doubt the tagging engine is racist. It made a best guess, and that guess was, unfortunately, wrong. It needs to be corrected, and with luck it won't do the same thing again.

    But how do you draw the line? Should it be able to perceive the difference between a person (white or black or any other color) in a gorilla suit, vs a real gorilla? What about at a distance, where features are difficult to distinguish? How could it tell black people from gorillas, or white people from bowling pins, fat people from boulders or skinny people from scarecrows?

    Image recognition is built into our brains, and it's a skill most of us use all the time. Since machines don't 'see' the same way we do, there are bound to be issues describing what an image contains to them. Want a good example? Ask a person who was born blind to describe what a fire looks like. Or ask a lifelong deaf person to describe the sound of a distant sporting event's crowd. Or ask Paris to explain simple mathematics.....

    That the software can fairly reliably tag images correctly is impressive. That it makes mistakes shows its faults. If it learns from its mistakes, it's time to unplug it....

    Mine's the one with the bacon, ham, cheese and sardine sarney in the pocket...

    1. sisk

      a proctological exam photo as 'middle management?'

      In some cases that one would be right.

      1. Mark 85

        Only "some"??????

    2. Gene Cash Silver badge

      > Would it be just as messed up if it tagged a fat, white, moustachioed, speedo-wielding guy as a walrus?

      Don't pick on Jamie Hyneman like that...

  16. Hasham

    Google Photos labelled me Cunt

    It was an image of me on a lounger, beside the sea.

    1. Anonymous Coward
      Anonymous Coward

      Re: Google Photos labelled me Cunt

      Was it a nudist beach, or did they just place the label where they expected it to be?

    2. king of foo

      Re: Google Photos labelled me Cunt

      Was it accurate?

    3. Mark #255
      Coat

      Re: Google Photos labelled me Cunt

      It was an image of me on a lounger, beside the sea.

      And were you, perchance, surrounded by underlings who refused to believe that you couldn't order the sea to stop rising?

  17. Anonymous Coward
    Anonymous Coward

    Not surprised...

    If this is anything like the quality of the "similar" images turned up by Google's reverse image search(*), it doesn't entirely surprise me. It often gets things completely wrong.

    You can see why it thought an image was "similar", but it's often because the "similar" image has broadly the same shape layout and/or colour scheme even if the subject itself isn't similar.

    It usually matches photos of people with other people but it's no better than that; for example, photos of adult women might return "similar" images of men, children or even babies.

    I'm not even sure if that general algorithm is actually tuned for people and/or faces, or it's just passable at (e.g.) matching people with other people because they have lots of skin and the same basic structure. My hunch is that it's the latter.

    Given how creepily efficient some modern recognition software can be, it's slightly surprising that the Google reverse image search is "only" that good, if one considers it good in the first place.

    (*) i.e. upload an image, find all instances of it indexed by Google, as well as allegedly similar ones

  18. hi_robb

    Google photos has a bug?

    Well I'll be a monkey's uncle...

  19. john 103

    Monkey Bars

    Reminds me of the Old Clip Art Search functionality in MS Works

    Searching for Monkey brought back a Black family in front of a set of Monkey Bars

  20. sisk

    This isn't that surprising a mistake really. It probably uses a couple hundred reference points compared to the hundreds of thousands a human would use to determine what it's looking at. Mistakes are to be expected. Honestly, it's not like the machine is actually capable of racism. I mean it mistakes white people for dogs for crying out loud. How does that happen? At least gorillas are primates.

    Anyway, I'll bet if you shot the same people from a different angle or in different lighting it'd be able to identify them with no problem.

  21. Stevie

    Bah!

    Offense can only be legitimately taken if the assumption of intent behind the error is shown to be true.

    Until then it is an embarrassing thing to have happened, and absolutely requires correction lest the image and the label it was erroneously tagged with become widely dispersed among those who *would* use it with intent to offend or denigrate, but is in and of itself absent malice.

    Of course, this dissemination of the image and tag is more likely to happen if one takes to twitter to complain instead of contacting Google directly.

    And while I can see the point of a search engine company wanting to figure out how to index images without metadata, I can lament it doing so as another brick in the wall.

    1. <shakes head>

      Re: Bah!

      I think you will find that UK law makes it offensive if the person is offended.

    2. John Brown (no body) Silver badge

      Re: Bah!

      "Of course, this dissemination of the image and tag is more likely to happen if one takes to twitter to complain instead of contacting Google directly."

      Yes. I don't doubt the person who posted to Twitter was outraged and/or offended, but instead of complaining to Google he decided to stir up more outrage by Streisanding the incident. Now that poor woman has an unflattering picture scattered all over the place when it would normally have been seen only by her own circle of friends.

  22. Destroy All Monsters Silver badge
    Holmes

    But Jacky Alciné seems to be cooler about this than the drama-filled article or even Google's puritanical reaction suggests.

    Being a programmer has advantages.

  23. Lord Lien

    Anyone remember when....

    .... HP had the same problem with one of their products.

  24. Anonymous Coward
    Anonymous Coward

    Can a computer be racist?

    Obviously this was an innocent mistake, but it raises interesting questions, I think. Suppose one of those horrible self-checkout machines has been designed to spot suspicious activity and suspend the transaction until a human cashier can confirm it's legit. It's pretty advanced and can learn based on whether the transactions it flags prove to be fraudulent or not, it also uses a camera to read "micro expressions" of the customers. At least that's what it's supposed to be doing.

    But suppose it just so happens that most of the people trying to rip the store off are in fact Black. So the machine begins to associate African facial features with fraud. At that point, would it be fair to say the AI has become racist?
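
    For the sake of argument, here is a minimal sketch of that feedback loop (the traits and counts are invented): a learner that simply flags whichever observed trait has the higher historical fraud rate reproduces whatever bias sits in its history, with no notion of "race" anywhere in the code.

      #include <stdio.h>

      /* Hypothetical self-checkout "learner": flag rates follow the
         observed fraud rate per trait group. Any bias lives entirely
         in the training counts, not in any explicit rule. */
      struct group { const char *trait; int transactions; int frauds; };

      int main(void)
      {
          struct group g[] = {
              { "trait A", 1000, 12 }, /* invented counts */
              { "trait B",  900,  2 },
          };
          for (int i = 0; i < 2; i++) {
              double rate = (double)g[i].frauds / g[i].transactions;
              printf("%s: fraud rate %.3f -> %s\n", g[i].trait, rate,
                     rate > 0.01 ? "flag more often" : "flag less often");
          }
          return 0;
      }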

    1. Anonymous Coward
      Anonymous Coward

      Re: Can a computer be racist?

      No, it has only extrapolated correctly.

      Racism is erroneous extrapolation. And then mouthing off about it. At least, that's what they say.

  25. T. F. M. Reader

    While tagging African-Americans as gorillas is clearly unacceptable in any human/social context, especially for a company of such prominence, I am pretty sure there was nothing premeditated about it, and I suspect that proper classification by AI/ML is a very tough technical problem, including the following consideration.

    Any AI engine is fuzzy to some extent, and in this context you need to design and "train" it to discriminate (in the technical sense only, the words "discriminator" and/or "discriminant" are used in the field) between a dark primate-like shape that is a gorilla and a dark primate-like shape that is a human. I suppose one can do it rather well in most cases, but then there will inevitably be false negatives and false positives. One does not expect 100% accuracy from AI, ever.

    So suppose some rare cases are found - and mercilessly denounced in the press and social media as unspeakably and unforgivably offensive, with at least some justification - where a large African-American is tagged as a gorilla. The boffins quickly get to work, and tweak the parameters of the AI engine in the "right" direction, effectively moving the discriminator surface a bit in the parameter space, reducing the "gorilla" region and expanding the "human" region. I can easily imagine that the adjusted engine may now err by very occasionally tagging a gorilla as an African-American, which will be just as offensive for exactly the same reasons... Ouch...

    One cannot afford to err in either direction in this context, can one? I suspect this cannot be expected of AI with a 100% guarantee. Anything less than 100% will eventually offend, though.
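
    To make the boundary-shifting concrete, a minimal sketch (the scores and thresholds are invented): the discriminator emits a "human-ness" score, a cut-off divides the classes, and nudging the cut-off to fix one borderline error simply manufactures the opposite one.

      #include <stdio.h>

      /* Hypothetical scores near the decision boundary. */
      static const char *label(double human_score, double threshold)
      {
          return human_score >= threshold ? "human" : "gorilla";
      }

      int main(void)
      {
          double person  = 0.48; /* borderline photo of a person  */
          double gorilla = 0.45; /* borderline photo of a gorilla */

          /* original threshold: the person is mislabelled */
          printf("t=0.50: person -> %s, gorilla -> %s\n",
                 label(person, 0.50), label(gorilla, 0.50));

          /* threshold moved to "fix" that: now the gorilla is mislabelled */
          printf("t=0.40: person -> %s, gorilla -> %s\n",
                 label(person, 0.40), label(gorilla, 0.40));
          return 0;
      }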

  26. John H Woods Silver badge

    What neural networks think ...

    Check out this Google Research Blog where they get some insight into what ANNs (artificial neural networks or "AI") have actually learned by feeding them random noise or images of clouds and (simplifying here) "asking them" to identify buildings or animals.

    The identification is, without the G-word, one of dark skinned higher apes and, on a naive level, this is not really a failure: the gorillas, the chimpanzees and the bonobos are our closest living relatives. And by close, I mean really close, on a deep genetic level. The connotations of the word are terrible, but that is because of centuries of human racism, not because ANNs (or Google) are "racist". The reason white people aren't identified as such is because we are the mutants who lost our ability to produce large amounts of melanin, resulting in a very obvious visual difference: one which, to ANNs, can appear much more significant than it really is. In fact, it just means we can tolerate cooler climes somewhat better and intense sunlight a hell of a lot worse. They'll have pulled the ANN now, but I'll bet that a 'negative' of a group of white people would also have produced the same result.

    Where other visual indicators are more significant, the ANN picks that. Note that, despite the subject not being white, the last picture in the tweet is correctly identified as being one of a graduation.

  27. Fink-Nottle

    Calm down

    It's just a guerrilla marketing scheme that went wrong.

  28. John Lilburne

    Hubris

    Hubris. The Yahoo app on Flickr has had similar problems. You'd have thought they would have learnt from that experience:

    https://www.flickr.com/help/forum/72157653088504775/

    No doubt they'll be labelling concentration camps with 'sport' too.

  29. Anon Adderlan
    Childcatcher

    Get Smart

    Inference engines (I refuse to call them AIs) are rather likely to make inferences we may not like as they develop. But when that happens, do we accept the results and refine the feedback based on that, or do we deny the results and give the engine feedback based on our own biases?

    And while this is indeed an embarrassing result, the only way to improve things is to gather more data from users, so it has to be in play while it's making these mistakes. Ever notice the kind of conclusions a young child will make before they 'know' better (or acquire prejudices) and start watching what they say? Yeah, we're at that stage, and Google will get better at discerning man from beast as time goes on.

  30. james 68

    Maybe it got it right...

    Now hear me out, this isn't a racism thing (no, really).

    Assuming that this "machine learning" software gets its info from the interwebs, then it could have come to the conclusion that "gorilla" ≈ "attractive".

    The confusion arising from this: http://goo.gl/IcM7UI

    The more it sees the term "gorilla" equated with "attractive", the more it associates the term, but without the context to differentiate. And importantly, it has nothing to do with skin colour, just word meaning.

    Then the story becomes something else entirely. We should perhaps be more worried that a piece of software has gained enough sentience to develop preferences regarding how attractive it finds people.

  31. PNGuinn
    Facepalm

    And for the next SNAFU...

    So... we have the big G panicking. SOMETHING MUST BE DONE. QUICKLY.

    Cue frantic tweaking of algorithms.

    Cue over rapid adoption of fix.

    Cue lots of pictures of gorillas, baboons etc categorised as...

    POPCORN!

  32. Anonymous Coward
    Anonymous Coward

    So many white men explaining why this is not racist. "No, wait, listen....."

    Shaking my head and stepping away from the computer for a while.

  33. Manolo
    Joke

    Taxonomical...

    If the AI previously labelled Caucasians as seals and dogs, then at least from a biological standpoint it is improving. Evolutionarily, an African is closer to a gorilla than a Caucasian is to a seal...

  34. Anonymous Coward
    Anonymous Coward

    Well I'd date that ape!

    Really... do a pout with a dark face and expect an AI to discern like a human?

    This is embarrassing, but taking offence is du jour, and quite boring.

    They need to step up their software or pull it.

    Mind you, DO we REALLY want face recognition to get so good that we have zero privacy left?

    Can I get it to recognise my mulatto face as "dog" so that it goes unnoticed by the Thought Police?!

    She's a real looker so I don't think she'll suffer in the long run, and the racist angle is just lame.

    The AI just calls out what it thinks it sees, and it's as stupid as calling that Corgi a cat.

  35. Maty

    http://www.theguardian.com/world/shortcuts/2015/jun/29/shabani-gorilla-internet-heart-throb-japanese-women-higashiyama-zoo

    Some people find gorillas highly admirable.
