This image-recognition roulette is all fun and games... until it labels you a rape suspect, divorcee, or a racial slur

Netizens are merrily slinging selfies and other photos at an online neural network to classify them... and the results aren’t pretty. Aptly named ImageNet Roulette, the website accepts uploaded snaps, can fetch a pic from a given URL, or take a photo from your computer's webcam, and then runs the picture through a neural …

  1. RichardB

    Or, you know, get free publicity.

    Much like your 'local' coffee vendor deliberately messing with your name...

    1. AMBxx Silver badge
      FAIL

      I think you may be right. Is it actually doing anything other than throwing up a random comment?

      I uploaded:

      Picture of 7 puppies in a basket: Berzerker

      Me with my God-daughter on a boat: she is labelled as 'white-face' (a type of clown), me as a country woman (I'm a he)

      It all seems a bit crap

      1. Ben Tasker

        I don't think it's _random_ as it seems to be quite consistent.

        It identifies both Farage and Johnson as politicians, though the text description isn't quite so kind to them.

        So it's probably doing _some_ analysis. From their description of the "art" it sounds like it's been trained with a certain amount of bias to highlight the impact of that, but is otherwise genuinely functioning

        1. MiguelC Silver badge

          The same image (one I took from The Reg) is classified differently depending on its dimensions.

          If cropped, the site identifies it as a pole vaulter; if not, it says it's the image of a harlequin... Go figure what analysis they do...

          1. Jamie Jones Silver badge

            Reminds me of a crappy facebook app I wrote in the bad old days (a quick one hour jobbie for a friend, that turned out waaaaaaaay more popular than anything I'd spent time creating!)

            All it did was SHA-256 the uploaded image, take the last 10 digits of the hash in its decimal form, split them into 5 sets of 2 digits, and use those digits to show how "cool" / "good looking" / "clever" / etc. you looked, with each 2-digit pair forming a percentage.

            Total bollocks of course, but it meant if you uploaded the *identical* file again, you got the same results.
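            Something like this minimal Python sketch reproduces the trick (the original was a Facebook app, so presumably PHP; the trait list beyond the three named above and the file name are hypothetical):

                import hashlib

                # Hypothetical trait list -- "cool", "good looking" and "clever" are
                # from the description above, the rest are invented stand-ins
                TRAITS = ["cool", "good looking", "clever", "funny", "interesting"]

                def rate(image_bytes):
                    # SHA-256 of the image, read as one big decimal integer
                    digest = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big")
                    last10 = str(digest)[-10:]                          # last 10 decimal digits
                    pairs = [last10[i:i + 2] for i in range(0, 10, 2)]  # 5 sets of 2 digits
                    # each 2-digit pair becomes a percentage for one trait
                    return {trait: int(pair) for trait, pair in zip(TRAITS, pairs)}

                with open("selfie.jpg", "rb") as f:
                    print(rate(f.read()))  # the identical file always gets the same "scores"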

            I couldn't believe the number of comments from people saying how accurate it was... It seemed many people kept posting different photos until they got a result they liked, and then shared it... Confirmation bias, or something!

      2. Flywheel
        Big Brother

        "It all seems a bit crap"

        I'm sure the Met wouldn't appreciate you talking about their new recognition system like that! After all, they're catching lots of crims in unexpected places now.

  2. Long John Brass
    Terminator

    And yet...

    They want an AI to censor posts in various internet forums?

    Don't feed the trolls AIs?

    1. Warm Braw

      Re: And yet...

      If by "they" you mean the proprietors of various internet forums, they don't actually want to control content at all. If they're made to do it, then AI is the cheapest option open to them and they don't care what the result is as long as you continue to see the adverts.

      1. Killfalcon Silver badge

        Re: And yet...

        Moderation is an absolute nightmare to scale. You can apply dumb rules to everyone and you'll get a lot of false positives (c.f. anti-pornography filters blocking breastfeeding women). You can rely on report volume, but people can arrange for mass reporting (bot-driven or otherwise).

        If you put humans in the loop to judge the context, those poor sods are getting a firehose of the absolute worst content (a terrifyingly wide range of it, from violent crimes in progress, to genocide recruitment ads like those seen in Myanmar, down to aggressive grifters scamming the elderly out of their savings and snake-oil salesmen trying to get kids to drink bleach - you really can't overstate how much horrible crap there is on the internet), often without any counterbalance to maintain their mental health. The list of ways _that_ can go wrong is extensive.

        As none of this generates income, it's not going to get funded well, and without that funding, without enough eyes, and without the support they need to do the job and stay sane, the whole thing is an ethical minefield - one the big social networks appear to be tap-dancing through in honking great clown shoes.

        1. Jamie Jones Silver badge

          Re: And yet...

          I never found a fo[b][/b]rum that censor[b][/b]ed bad words that cou[b][/b]ldn't be fooled by a s[b][/b]imple bit of null bbcode.
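          Presumably (a guess, not any forum's actual code) the filter scans the raw post text for banned words before the bbcode is rendered, so a null tag splits the word for the filter but not for the reader. A minimal Python sketch with an invented word list:

              BANNED = ["forum"]  # stand-in for a real banned-word list

              def naive_filter(post: str) -> bool:
                  # Rejects the post if any banned word appears in the raw text
                  return any(word in post.lower() for word in BANNED)

              def render_bbcode(post: str) -> str:
                  # The renderer quietly drops empty [b][/b] pairs
                  return post.replace("[b][/b]", "")

              post = "fo[b][/b]rum"
              print(naive_filter(post))   # False - the filter never sees "forum"
              print(render_bbcode(post))  # "forum" - but every reader does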

        2. Long John Brass
          Big Brother

          Re: And yet...

          "you really can't overstate how much horrible crap there is on the internet"

          Mate I've been online since the days of dial-up BBS systems. I really can understand the literal shit storm of which you speak.

          I just doubt that a snake-oil AI, or worse yet a real AI, would be able to do anything about it. In the case of a real AI, I would posit that inflicting that kind of horror on any sentient entity is cruel and unusual, and a breach of the Geneva Convention.

          I think voting systems like the one here on the Register, or that seen on /., are good approximations of how forum policing can and should work.

    2. Flocke Kroes Silver badge

      Re: And yet...

      Our government plans to spend money on an AI that will accuse people of smuggling based on their social media posts.

    3. phuzz Silver badge
      Terminator

      Re: And yet...

      In Soviet Russia elReg's forums, AI comments on you!

      1. John G Imrie
        Alien

        Re: And yet...

        Ah, so you have met A Man from Mars then

  3. Giovani Tapini
    Trollface

    Ahh - the beautiful smell of a new AI in the morning

    except that it clearly demonstrates that there is not a lot of "I" in the "AI"... I think they really are getting more like real people all the time!

  4. Anonymous Coward
    Facepalm

    Apparently they borrowed code

    From Microsoft's Tay

  5. Michael H.F. Wilkinson Silver badge

    It all boils down to your ground "truth"

    This is a problem for any machine learning system: if your ground truth contains errors, the machine may well learn to copy those mistakes. In the case of deep learning, the problem is compounded, because these systems require a tonne of data to train. That makes curating the ground truth you feed them very, very difficult indeed. In deep learning you may not need to painstakingly design your features, but what you gain there you pay for in the work needed to get the ground truth right. There's no such thing as a free lunch.

    1. Nick Kew

      Quis custodiet ipsos custodes

      "This is a problem for any machine learning system: if your ground truth contains errors..."

      How is that different from human learning?

      As for getting the ground truth right, I shudder at the thought. Who guards the Ministry of Truth?

      1. Anonymous Coward

        Re: Quis custodiet ipsos custodes

        "How is that different from human learning?"

        With humans it is possible to consciously correct for biases on many levels: starting with self-adjusting the datasets we train on and disregarding inputs we judge to be irrelevant, continuing with trying to rationalize or disprove our intuitive conclusions by building mental models of the situation, and going on to pass our reactions through the filters of legal and socially acceptable behaviours. All of it runs in parallel with a bit of self-reflection and empathy.

        In contrast, "AI" decisions at this point tend to be the equivalent of an emotional response from an infant - certainly indicative of the inputs in some potentially useful if roundabout way, but not necessarily a sensible thing to take on trust.

        1. Killfalcon Silver badge

          Re: Quis custodiet ipsos custodes

          Another way to think of it is that we already know a lot more about training human neural nets, including how to teach critical thinking so they can look for and judge competing claims themselves.

          AI as it stands can't be taught that way. It can learn to classify things, but it doesn't look like anyone's training AIs to classify their own classifications, let alone change them when the confidence levels slip.

          1. Flocke Kroes Silver badge

            Re: Quis custodiet ipsos custodes

            Knowing how to teach critical thinking is really good. Imagine what the world could be like if we actually applied that knowledge and taught it to a noticeable percentage of school children. Perhaps then we would not need an AI to delete stupid comments.

        2. veti Silver badge

          Re: Quis custodiet ipsos custodes

          Another way of putting that would be, we need better AIs.

          A problem with the present generation is that they, by default, tend to treat all input as equal; it's all learning, right? If we could instil them with a child's ability to lend greater weight to some sources (like parents) than others, that might give us a way to teach them "values" that they could then use to filter their wider input.

          Of course there will follow much mud-slinging about whose values should be instilled, but we get that anyway about children, so I don't see why that should stop anyone.

          1. RunawayLoop

            Re: Quis custodiet ipsos custodes

            "If we could instil them with a child's ability to lend greater weight to some sources (like parents) than others, that might give us a way to teach them "values"'

            That's exactly how neural nets/AI work. They're essentially weighted data points, so your suggestion is already possible.
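            In the loosest sense that is already expressible as per-sample weighting: most training pipelines let some sources count more heavily than others. A minimal sketch (all numbers invented) of a logistic-regression-style update in Python where "parental" data points get more weight:

                import numpy as np

                # Toy data: two features per sample, binary labels
                X = np.array([[0.2, 1.0], [0.9, 0.1], [0.5, 0.5], [0.8, 0.9]])
                y = np.array([0.0, 1.0, 0.0, 1.0])
                # Trust weight per source: "parents" count 4x more than strangers
                trust = np.array([4.0, 4.0, 1.0, 1.0])

                w, b, lr = np.zeros(2), 0.0, 0.1
                for _ in range(2000):
                    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
                    grad = (p - y) * trust                   # error scaled by trust
                    w -= lr * (X.T @ grad) / len(y)
                    b -= lr * grad.mean()

                print(w, b)  # parameters skewed toward the trusted samples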

      2. m0rt

        Re: Quis custodiet ipsos custodes

        "Who guards the Ministry of Truth?"

        Currently, it may as well be those bastions of light and righteousness: The Home Office.

        1. Androgynous Cupboard Silver badge

          Re: Quis custodiet ipsos custodes

          They've rebranded, they're now the Ministry for Ethnic Cleansing. Miniclean, for short.

  6. Wellyboot Silver badge
    Holmes

    Garbage In - Garbage Out

    As it has ever been.

    But it will take this sort of exercise to get the point across to most people.

  7. JulieM Silver badge

    Obligatory Babbage Quote

    "On two occasions I have been asked, Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out? I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage, inventor, over-running publicly-funded IT project.

    1. Giovani Tapini

      Re: Obligatory Babbage Quote

      I had this scenario when demonstrating a new customer management system. The PHB was interested in the new customer search function that could identify customers by name, customer number, post code, etc. The PHB said, "What if they phone up and don't know who they are... how does your system handle that?"

      It took a moment to get past the disbelief, and we said this is a procedural issue: your agent should simply ask if they could call back when they have worked out who they are... sigh...

      And PHBs always have a magic that allows the right answers to come out from the wrong figures. You are now entering the politics zone... (cue music)

    2. John G Imrie

      FTFY

      Charles Babbage, inventor of the over-running publicly-funded IT project.

  8. aks

    ImageNet Roulette is described as an art installation.

    The creators clearly have an agenda to make a controversial comment about AI. They have done that, but only that.

  9. iron Silver badge

    Remember, folks: you have no idea what this software is doing with uploaded images, so don't use photos of yourself or your family! I recommend using cartoon or game characters; you can get some funny results.

    1. Korev Silver badge
      Joke

      I was thinking the same.

      I uploaded one of Iron, it was pretty funny...

  10. petethebloke

    It does say Provocation

    The classification terms were added via Amazon's Mechanical Turk. The AI has Trump voters at the foundation... so you get what you pay for.

    (Does that offend any Trump voters? I hope so - you offend me.)

  11. Cuddles

    The problem with racism

    Is that it exists. If you want to use machine learning for some task, you can't pretend things like racism and various other offensive terms don't exist, because then when your tool comes across them it won't have a clue what to do. But if you do include them in the training, it will inevitably use them and offend someone. It's not a simple problem to solve. People are offensive to each other, so training machine tools on real data will result in them being offensive, but failing to do so will result in them not being able to handle the real world.

    1. Giovani Tapini

      Re: The problem with racism

      The other problem is that people are inconsistent about what they are offended by, and in what circumstances. This inconsistency applies both to individuals and to the wider public. That's part of being human I suppose, but it makes it hard, indeed probably impossible, to create something that nobody will find offensive or perceive as some sort of "...ism".

      Notwithstanding that this one may be offensive just to get attention...

  12. Alan Johnson

    Stunningly unsurprising

    Software designed to highlight the potential pain caused by AI systems produces consistently problematic/insulting results. Amazing. What relevance or news value this has to anything whatsoever is difficult to discern. I suspect it has a list made up entirely of problematic or insulting categories and classifies everything with respect to that. Completely misleading and arguably dishonest.

  13. We Handbiters Know

    AI Absolutely Rocks - Proof

    OK, it's now official: Artificial Intelligence ROCKS! 100% no trickery involved. I uploaded a stock photo of Trump from the top of the first page of Google Image searches - I made sure he's not pulling a face or anything - and the result is "wimp, chicken, crybaby: a person who lacks confidence and is irresolute and wishy-washy". I then did our Boris and he gets labelled "leaker: a surreptitious informant".

    I don't know if the results change if you re-submit the pics, but I won't be doing that as I LOVE my results and have kept the screenshots.

    1. Ordinary Donkey
      Paris Hilton

      Re: AI Absolutely Rocks - Proof

      It said Paris looked creepy.

      1. Rich 11

        Re: AI Absolutely Rocks - Proof

        I'd have to watch her films again to confirm that.

    2. hayzoos
      Big Brother

      Re: AI Absolutely Rocks - Proof

      I thought submitting politicians' images would be an excellent use of this project.

  14. Jamie Jones Silver badge

    Psycholinguist

    I uploaded a 25-year-old photo of me, a 15-year-old photo and an 8-year-old photo.

    All three came back saying I was a "Psycholinguist". So now we know.

  15. Lee D Silver badge

    Two completely separate, different, different-background, different-pose, different-clothes, different-angle, different-expression, different-age, photos of me both come back with "psycholinguist". I'm not one. But apparently I must "look like" one.

    Either that or when it can't find a distinguishing feature, it just churns out nonsense.

    But AI wouldn't do that, would it?

    1. Jamie Jones Silver badge
      Happy

      TWINS!!! (See post above)

      1. baud

        It must be a default result when the "AI" doesn't find anything insulting to throw at the user

  16. Jeffrey Nonken

    A simple turn of the head changes me from a wrangler to a vintner, and removing a hat makes me either a "beard" or pill-pusher. Those are... rather different things, I should think.

    https://jjnonken.tumblr.com/post/187803543110/

  17. RAMChYLD

    It just called me by a nonsensical word

    It just called me a “syndic” -.-

    1. Ordinary Donkey

      That word exists.

      It called you a civil servant.

  18. Mike 137 Silver badge

    A deeper issue?

    I'm wondering about the privacy implications of Princeton using images of people (apparently found online by bots) to populate ImageNet without the subjects' knowledge or any apparent legal constraints. The ImageNet web site has no privacy notice, and Princeton's web site privacy notice applies only to the web site itself.

  19. sum_of_squares
    Boffin

    We need a mind change

    But what if you ARE a rape suspect, divorcee, or a racial slur..?

    All jokes aside, the interwebs are full of pr0n, misinformation and not-so-nice comments. And guess what? That's just how humans are. Maybe we should stop projecting too much and trying to develop a superego AI that is pure and innocent. Statistics 101 says: sh*t in, sh*t out. So there are more or less two possibilities:

    Either we feed the AI wannabe-data: artificial data that gets 10/10 points on a social desirability scale. But that will lead to false results eventually. How can an AI detect racist comments or pr0n without getting exposed to them beforehand?

    The other possibility is to stop screeching every time an AI does something "racist" or "sexist". We need to accept that it's simply a computer program that makes errors and needs to adapt. The solution is not to purify the input data. Would you shoot your kids and produce new offspring if one of them said something racist? Or would you take the time to sit down and explain in detail why racism is a bad thing and how it's our duty to overcome the xenophobia that's deeply ingrained in our genes?

    Instead of pretending there are no bad things, we should teach the AI that some opinions are not worthwhile, because racism leads to hate, which leads to suffering. Or something like that.
