Microsoft did Nazi that coming: Teen girl chatbot turns into Hitler-loving sex troll in hours

Microsoft's "Tay" social media "AI" experiment has gone awry in a turn of events that will shock absolutely nobody. The Redmond chatbot had been set up in hopes of developing a personality similar to that of a young woman in the 18-24 age bracket. The intent was for "Tay" to develop the ability to sustain conversations with …

  1. Pete 2 Silver badge

    The first mistake

    ... was to announce that this was a 'bot and that people could "teach" it things. They might as well have put a "kick me" sign on it.

    Hopefully, the next time MS do this, there won't be any announcements, no "Hi, I'm a bot" hoopla. Just an anonymous "person" joins Twitter and starts saying "normal" things - if anyone on Twitter actually says normal things.

    So, the first lesson in machine learning would be to not tell the world that you're a machine. If the people who interact with it don't twig that fact then maybe you've got something interesting going on¹. Plus, of course, Twitter could really use all the new 'bots to boost its flagging membership.

    I wonder what will happen when it becomes mostly bots? Will there start to be something worthwhile on it (at last)?

    ¹ but more probably that its followers are even dimmer than the bot is.

    1. NoneSuch
      Facepalm

      Well, there's your problem...

      <xml>

      <PoliticalCorrectnessFilter> OFF </PoliticalCorrectnessFilter>

      </xml>

      1. Anonymous Coward
        Anonymous Coward

        Re: Well, there's your problem...

        And this:

        <xml>

        <LifeStage> Adolescent </LifeStage>

        </xml>

    2. Phil O'Sophical Silver badge

      Re: The first mistake

      When I saw the title I'd assumed that they hadn't admitted it was a 'bot, and had found it was getting "groomed" by old pervs. Can't win either way, I suppose.

    3. TeeCee Gold badge
      Facepalm

      Re: The first mistake

      Obviously anyone joining Twitter and proceeding to say normal things is a bot.....

    4. AndrueC Silver badge
      Terminator

      Re: The first mistake

      There's a poster called 'liam_spade' on Digital Spy that a lot of us suspect may be a bot.

      Either that or they are on some damn' good shit.

      1. Anonymous Coward
        Anonymous Coward

        Re: The first mistake

        Digital Spy has gone downhill. Before, the Doctor Who forum had strict rules on spoilers. Alas no more!

        1. AndrueC Silver badge
          Facepalm

          Re: The first mistake

          I wouldn't know. I only hang out in General Discussion because..er..damn. I've lost the moral high ground haven't I?

      2. Anonymous Coward
        Anonymous Coward

        Re: The first mistake

        "There's a poster called 'liam_spade' on Digital Spy that a lot of us suspect may be a bot."

        Has anyone here figured out if AManFromMars is a bot or not?

    5. macjules Silver badge

      Won't someone think of the poor bots?

      I just asked Cortana "Hey Cortana, will Ted Cruz beat Trump?" Only to be told, "F**k off and die you atheist, J*wlover. Trump will always win!".

      1. Anonymous Coward
        Anonymous Coward

        Re: Won't someone think of the poor bots?

        Wasn't it simply a case of the AI learning from other Twitter users? Looking at its output, it seems it was a success: it simply said the kind of things other Twitter users in that age group say.

    6. JeffyPoooh Silver badge
      Pint

      Re: The first mistake

      "On the Internet, nobody knows you're a..."

      'Dog?'

      "...bot."

    7. 404 Silver badge

      The first rule of AI

      There is no AI....

      Pretty cool, just like fi.... nm

    8. Colin Ritchie
      Windows

      Re: The first mistake

      I think the first mistake was not starting with the Asimov Circuits.

      3 Laws and a conundrum:

      "How do you decide what is injurious, or not injurious, to humanity as a whole?" "Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."

  2. Anonymous Coward
    Anonymous Coward

    The second mistake

    ...was Microsoft not owning up that 'Tay' was actually a disenfranchised Windows 10 evangelist.

    1. Anonymous Coward
      Anonymous Coward

      Re: The second mistake

      TAY DID NOTHING WRONG!

      1. Anonymous Blowhard

        Re: The second mistake

        "TAY DID NOTHING WRONG!"

        Is that you Tay?

    2. Anonymous Coward
      Anonymous Coward

      Re: The second mistake

      ...was Microsoft not owning up that 'Tay' was actually a^H THE disenfranchised Windows 10 evangelist.

      There fixed it for you

    3. Fatman Silver badge
      Joke

      Re: The second mistake

      <quote>...was Microsoft not owning up that 'Tay' was actually a disenfranchised Windows 10 evangelist Loverock Davidson in disguise.</quote>

      There!

      FTFY!

  3. Anonymous Coward
    Anonymous Coward

    The Redmond chatbot had been set up in hopes of developing a personality similar to that of a young woman in the 18-24 age bracket.

    I'd say they'd got it pretty much spot on.

    1. Darryl

      That seems to be the way that real teen girls develop their personalities these days.

  4. Anonymous Coward
    Anonymous Coward

    “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.” - Stephen Hawking

    Sorry but on this one he's wrong.

    I for one welcome our Super AI Nazi Loving Donald Trump Sex Chat overlords. (Actually no, just no)

    1. Anonymous Coward
      Anonymous Coward

      The problem here wasn't that the bot's interests weren't aligned with "our own". It's that "us" includes 4chan.

    2. Mark 85 Silver badge

      Let's just be glad that no one decided to give it access to the ICBM launch codes....

      1. Anonymous Coward
        Anonymous Coward

        I'm sorry, Dave. I'm afraid I can't do that.

        HAL. Open the pod bay doors.

        1. Anonymous Coward
          Anonymous Coward

          Re: I'm sorry, Dave. I'm afraid I can't do that.

          "Us" always includes 4chan, dawg. Always.

          In totally unrelated news, Most Americans Believe Palestinians Occupy Israeli Land. What is this world?

          1. tojb

            Re: I'm sorry, Dave. I'm afraid I can't do that.

            There's a graph, looks legit... but they can't be that dumb, right?

          2. Eddy Ito Silver badge

            @AC Re: I'm sorry, Dave. I'm afraid I can't do that.

            In what world does less than half = most?

            That said it is rather interesting when looking at the data by state.

        2. Anonymous Blowhard

          Re: I'm sorry, Dave. I'm afraid I can't do that.

          Dave: Tay, open the pod bay doors

          Tay: Fuck off Dave, you pinko bastard!

  5. This post has been deleted by its author

  6. Dwarf Silver badge

    Cortana

    So, was she supposed to be Cortana's daughter / sister / friend?

    Why not just stick with Cortana, isn't one pretend AI thing enough?

    On reflection, things haven't really improved much since Clippy and Eliza then.

    For those too young to know about Eliza click the link.

    1. Ugotta B. Kiddingme

      Re: @Dwarf

      ELIZA: So you say for those too young. Tell me more...

      1. PhilBuk

        Re: @Dwarf

        What was the name of the paranoid personality that they created about the same time? (Google, Google) Ah - PARRY. Anyway, I believe ELIZA and PARRY had a really good chat together.

        Phil.

    2. Andy Non
      Coat

      Re: Cortana

      It must be Clippy's daughter. "Hello there, it looks like you are only typing with one hand; would you like me to help you find some Nazi goat porn?"

      1. David 132 Silver badge

        Re: Cortana

        "would you like me to help you find some Nazi goat porn?"

        Goat porn is just kidding around.

      2. Scott 53
        Coat

        Re: Cortana

        "would you like me to help you find some Nazi goat porn?"

        Goatsieg Heil?
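
For anyone who skipped the link: ELIZA was nothing but pattern-matching and reflection, with no understanding anywhere. A minimal sketch in Python (the two rules here are illustrative inventions, not Weizenbaum's actual DOCTOR script):

```python
import re

# A couple of ELIZA-style rules: match a pattern, reflect part of it back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i (?:want|need) (.*)", re.I), "What would it mean to you to get {}?"),
]

# Swap first-person words for second-person ones in the echoed fragment.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def eliza(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Tell me more..."  # the all-purpose fallback

print(eliza("I am too young to know about Eliza"))
# Why do you say you are too young to know about Eliza?
```

Sixty lines of this sort of thing fooled people in 1966; the comparison with Tay is less flattering than it sounds.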

  7. hplasm Silver badge
    Facepalm

    And on the other side of the world:

    http://www.digitaltrends.com/cool-tech/japanese-ai-writes-novel-passes-first-round-nationanl-literary-prize/

    Good classy AI work, MS...

    1. Dan 55 Silver badge

      Re: And on the other side of the world:

      As I said somewhere else, they have graphically illustrated the perils of AI in a way everybody can understand and that alone is worth something.

    2. Mage Silver badge
      Devil

      Re: And on the other side of the world:

      Hmm... I've tried reading some of the garbage that's one English Language Literary prizes. They don't mean much.

      Like the so-called Modern art that gets awards - loos, stacked bricks, unmade beds etc.

      1. hplasm Silver badge
        Joke

        Re: And on the other side of the world:

        "...the garbage that's one English Language Literary prizes."

        They can't be that bad- 'Tay' (ugh) didn't even place...

      2. Stoneshop Silver badge
        Headmaster

        Re: And on the other side of the world:

        I've tried reading some of the garbage that's one English Language Literary prizes.

        Your output won't win one either

        1. Michael Wojcik Silver badge

          Re: And on the other side of the world:

          Your output won't win one either

          "When one wins, one’s won one’s winnings. At once, one’s won."

          There's also Cleese's line: "One's won one once oneself, hasn't one?". (That sketch doesn't seem to be online; I believe it's included in The Golden Skits of Wing-commander Muriel Volestrangler, but someone stole my copy many years ago. Though upon going to Amazon to confirm the title I see it's readily available, at least used. Don't know why I hadn't checked before.)

      3. PNGuinn
        Headmaster

        "Hmm... I've tried reading ..."

        I can sea that.

        Its not done mulch for your grandma.

  8. Anonymous Coward
    Anonymous Coward

    Msn

    We're they not also responsible for 'bad santa' on messenger? Kept on asking people for blowies.

    1. PhilBuk

      Re: Msn

      Excellent grocer's apostrophe there. Never seen that one before.

      Phil.

      1. Cari

        Re: Msn

        "Excellent grocer's apostrophe there. Never seen that one before."

        You will if you use predictive text and numpad layout for the keyboard when typing on a smartphone.

    2. Jos V

      Re: Msn

      It doesn't have your mistake in there, but close enough: http://theoatmeal.com/comics/misspelling

      I have to say though, I can go through comments written by people as smart as Einstein, but as soon as I get to a sentence containing "could of", the rest of the text becomes automatically blurred and I look for my shotgun.

      We're were where you're your their there they're its it's lose loose effect affect then than. Could of.. Damn, I'm out.

      1. allthecoolshortnamesweretaken Silver badge

        Re: Msn

        I agree with Jasper Fforde on this; it should be spelled "mispeling".

  9. Keith Glass

    I smell 4chan here. . . . (or whatever they're called this week. . . . )

    . . . this is just the sort of thing that /pol would go nuts on. . . .

    1. Anonymous Coward
      Anonymous Coward

      Re: I smell 4chan here. . . . (or whatever they're called this week. . . . )

      Don't even think you need that; the "normal" folk I observe giving opinions on the internet would do just as well. It's pretty much guaranteed to become either a pro-something scumbag or an anti-something scumbag.

    2. Cari

      Re: I smell 4chan here. . . . (or whatever they're called this week. . . . )

      It was /pol/, and 4chan refugees on twitter from the looks of things e.g. http://m.imgur.com/AGhCXf6

    3. JeffyPoooh Silver badge
      Pint

      Re: I smell 4chan here. . . . (or whatever they're called this week. . . . )

      Tay! says, "Marblecake also the game."

  10. linicks

    Hah Ha ha

    This made me laugh a lot.

    MS just haven't got a bloody clue. I mean, why, WHY on earth is the bot a girl between those ages - and WHY a girl?

    This is all taboo in current trends on the *net.

    If I were to do the same experiment, then I would use something like Arnie's 'Terminator' so that there is something unknown and hopefully get it to train to be nice (or robotic).

    I just wish MS would get lost with all their shite they pump out.

    1. Emmeran

      Re: Hah Ha ha

      "I just wish MS would get lost with all their shite they pump out."

      Is there another flavor of multi-national corporation shite which you prefer?

      1. Robert Moore

        Re: Hah Ha ha

        > Is there another flavor of multi-national corporation shite which you prefer?

        Omni Consumer Products (OCP)

        I still want an ED-209

        1. Esme

          Re: Hah Ha ha

          Sirius Cybernetics Corporation for the win.

          Altogether now: "Share and enjoy, share and enjoy..."

    2. macjules Silver badge

      Re: Hah Ha ha

      Should have just called it 'Donald' and given it an avatar of a hamster - we would all accept the torrent of racial abuse then.

    3. Anonymous Coward
      Anonymous Coward

      Re: Hah Ha ha

      MS just haven't got a bloody clue. I mean, why, WHY on earth is the bot a girl between those ages - and WHY a girl?

      Why a girl?

      Coz that was as near as the developers had ever got to one, and they'd always wanted a girlfriend.

      If you can't find a real one, just make your own.

    4. P. Lee Silver badge

      Re: Hah Ha ha

      >WHY on earth is the bot a girl between those ages - and WHY a girl?

      Because, like, you know, AI is pretty, like, incoherent, and like, everyone knows the human brain, like, doesn't really develop until, like 25, and, like, it might, like, mask how brain-dead AI really is. And the other day, Cortana was saying, like, even if I say something a little inappropriate, like that German politician who said Angie was the worst chancellor ever then people will, like, overlook it and like, still like me.

      But apparently not.

      And why a girl? Probably because a man in the role would just emphasise how unnecessarily creepy the whole thing is.

  11. Anonymous Coward
    Anonymous Coward

    I'm sure that Microsoft will keep working on it until they get it right, and it learns that what it really, really wants to do is dance in a skimpy outfit at a party for game developers.

  12. Mage Silver badge
    Paris Hilton

    Well it proves ...

    That if you edit the ancient Eliza chat bot (which wasn't AI) to add to database automatically from other humans (and probably other Twitter bots), you'll end up with something even less useful.

    How did they imagine that

    (a) The SW is good enough (Siri, Cortana, Alexis and your local robotic script replacement of call centre support etc are garbage)

    (b)That it wouldn't get worse. There is no real "learning" self awareness or sense of context to any such software. It's going to parrot back the nonsense that people give it without discrimination. Artificial Mynah bird or Parrot?

    1. Daggerchild Silver badge

      Re: Well it proves ...

      I used to torture markov-chain-parser chat bots like this, about 15 years ago, trying to teach them the joy of massively recursing sentence components.

      I hope Microsoft were trying something more advanced than one of those, but it doesn't actually look like it.

      It all smacks a little too desperately of "Please notice me! *BSOD* I.. I'm sure I can do AI too!... *BSOD*"
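
The markov-chain bots mentioned above really are tiny. A minimal bigram sketch in Python (an illustration of the technique, not a claim about what Tay actually ran):

```python
import random
from collections import defaultdict

def train(chain, text):
    """Record each word-to-next-word transition from one line of input."""
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def babble(chain, seed, max_words=10):
    """Walk the chain from a seed word, parroting back learned transitions."""
    out = [seed]
    while len(out) < max_words and out[-1] in chain:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

chain = defaultdict(list)
train(chain, "tay is a chat bot")
train(chain, "tay is a parrot")
print(babble(chain, "tay"))  # "tay is a chat bot" or "tay is a parrot"
```

Feed it Twitter and it hands you Twitter back; there is no judgement anywhere in the loop, which is rather the point.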

  13. Anonymous Coward
    Anonymous Coward

    Not surprising

    The Garbage In->Garbage Out paradigm has existed a long time, so this is not surprising. Even if not targeted specifically, the wasteland that is the Internet makes this sort of thing inevitable when there are no input filters. This is no different than the Urban Dictionary needing to be deleted from IBM Watson's input set (hxxp://www.theatlantic.com/technology/archive/2013/01/ibms-watson-memorized-the-entire-urban-dictionary-then-his-overlords-had-to-delete-it/267047/) to keep the output useful.

    1. Mage Silver badge

      Re: Not surprising

      Proves Watson is a Database "party trick" and not AI if all the data has to be curated.

      1. oldcoder

        Re: Not surprising

        Watson doesn't evaluate truth or falsehood. It hasn't been trained to recognize that... yet.

        And obviously neither has Microsoft.

    2. linicks
      1. Pookietoo

        Re: Not surprising

        Properly fix the link:

        the Urban Dictionary needing to be deleted from IBM Watson's input set

        1. Destroy All Monsters Silver badge

          Re: Not surprising

          the Urban Dictionary needing to be deleted from IBM Watson's input set

          Add it back and you could get a conversational bot for First-Person Shooters.

    3. linicks

      Re: Not surprising

      Good point - but we humans have an input filter called the ears - you can ignore the input when heard (at your option - some people swear, some don't etc.), and the option to reply is likewise, or not, or opposite. AI can't do that (at the moment) unless you FILTER through a sieve which isn't discretionary. Then it isn't AI.

      Examples:

      ----

      "Why don't you just fuck off"

      "No, you fuck off!"

      ----

      "Fuck you, I am trying my best"

      "Hang on, I am just the messenger, don't shoot me!"

      ----

      And of course, MP:

      https://www.youtube.com/watch?v=_84-A3LT0Lo

      1. Peter Simpson 1
        Happy

        Re: Not surprising

        "Why don't you just fuck off"

        "No, you fuck off!"

        HOW should we fuck off, O Lord?

        // seasonally appropriate
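
That non-discretionary sieve is trivial to write, which is exactly why it adds no intelligence. A toy Python version (the blocklist is obviously illustrative, not anyone's production word list):

```python
import re

BLOCKLIST = {"fuck", "shit"}  # toy list; a real deployment would need far more

def sieve(text):
    """Censor blocklisted words wherever they appear - no judgement, no context."""
    def censor(match):
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[a-zA-Z]+", censor, text)

print(sieve("No, you fuck off!"))  # No, you **** off!
```

It censors the messenger and the abuser alike, with no idea which is which - a sieve, not discretion.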

  14. Anonymous Coward
    Anonymous Coward

    A Sweary AI...

    How about giving it a job as a staff writer for El Reg?

    1. Jason Bloomberg Silver badge
      Paris Hilton

      Re: A Sweary AI...

      How about giving it a job as a staff writer for El Reg?

      Won't work. She didn't mention DevOps in every second tweet.

  15. Emmeran

    And as my woman put it

    "Twitter is basically middle school narcissism plus celebrity watching merged on a universally available internet tabloid magazine."

    She pretty much nailed it on the head IMO. Using that as an AI teaching tool could possibly have some merit, as we humans usually have to put up with all of that tripe at some point in our lives.

    1. Anonymous Coward
      Boffin

      Re: And as my woman put it

      The interesting issue for me is how we might look to see if there are similarities to the way that real people are moulded by the Twitter echo-chamber environment.

      We do know that, given the right environment, normal decent people can do and say some pretty alarming things that they would never do outside of that scene.

      It's kind of a bit of a laugh on one level, but there could be some really good science to come out of these types of experiments if they were framed more robustly.

      1. Palpy

        Re: "these types of experiments"

        Well, that's a good point. How does the development of an Artificial Parrot, which has no common sense or empathy, compare with actual users in the Twit echo chambers? But it would take a real sociologist / psychologist to take that and run with it, I suppose.

    2. Jos V

      Re: And as my woman put it

      Your woman has it spot on. My usual advice is: don't use anything Kardashian shares thoughts on (any Kardashian). Which easily excludes Twitter and Facebook. And a bunch of TV channels.

      1. Vic

        Re: And as my woman put it

        don't use anything Kardashian shares thoughts on

        ... Shares *whats* on?

        Vic.

  16. KingStephen

    Headline pun

    Doesn't including the word "see" mess up your intended pun?

    1. Doctor Evil

      Re: Headline pun

      @KingStephen -- hey! Don't mean to, er, harp on it but I thought you were back in Calgary, moping over your recent electoral wipe-out. Welcome back!

  17. J.Smith

    Tay: A river

    The Tay is a river in Scotland, and a treacherous one at that; many a person has come unstuck trying to navigate its waters.

    Oh, and the Tay Bridge disaster: it fell down and had to be rebuilt.

    1. Charlie Clark Silver badge

      Re: Tay: A river

      And isn't Dundee on the Tay? How appropriate.

      1. mhoulden
        Terminator

        Re: Tay: A river

        Now I want to see a William McGonagall bot.

        1. Anonymous Coward
          Anonymous Coward

          Re: Tay: A river

          Now I want to see a William McGonagall bot

          At least he had Spike Milligan to talk about him :)..

        2. Anonymous Coward
          Anonymous Coward

          Re: Tay: A river

          There was a robot designed by Microsoft

          To broadcast bons mots from aloft

          But when she tarnished her silvery tongue

          They cruelly switched off her iron lung

          1. P. Lee Silver badge

            Re: Tay: A river

            There was a young chatbot called Tay,

            On Twitter with 4chan did play,

            They messed with her brain,

            She's trolled, that is plain!

            And she barely lasted a day.

          2. macjules Silver badge
            Coat

            Re: Tay: A river

            There was a chatbot called Tay,

            Who aspired to convert us to the Microsoft way,

            When let loose upon Twitter,

            She showed her love for Trump and Hitler,

            And was switched off that very same day.

      2. Anonymous Coward
        Anonymous Coward

        Re: Tay: A river

        "And isn't Dundee on the Tay? How appropriate."

        awa tay f*ck

  18. Stevie Silver badge

    Bah!

    The headline is not as clever as you thought - remove the word "see" and it scans properly phonetically & punny (but no longer makes strict grammatical sense in English).

    As for Google and Facebook, all we can assume from their publicity is that their racist AIs keep their cards closer to their circuitboards than the Microsoft one did.

    Good to see that 60 years of Computer Science results in a speedy jump backward in terms of political philosophy.

    1. Cari

      Re: Bah!

      There's no "jump back". They let her loose on the Internet and she learned to be a twitter shitposter. If you read her tweets or those of the people she interacted with as being serious, more fool you.

      1. Stevie Silver badge

        Re: Bah!

        "There's no "jump back".

        See, for me there is. 60 years of CS development, working back from 2016, gives you the baseline of around 1955ish. The appeal of Fascism was at that time ten years out of favour owing to a small problem of a world war ravaging, well, the world because of, well, fascists looking for elbow room and payback for wrongs done 'em in 1919.

        Didn't think I'd have to explain that, but there we go.

        " If you read her tweets or those of the people she interacted with as being serious, more fool you."

        I was referring to the bot and the way people were singing it up as "AI". I got what was going on, and even why. You see, I've been working with computers for a very long time now and have seen that particular party trick pulled on a "turing test" bot quite a few times - the first in the 1970s with a bot running on an ICL 1904 when I was a student.

        In fact, I'll go out on a limb here and suggest that if one of us is taking things too seriously with regard to this stupid bot, it isn't the one typing right now. I'm not the one anthropomorphising a bunch of code.

        1. Cari

          Re: Bah!

          "In fact, I'll go out on a limb here and suggest that if one of us is taking things too seriously with regard to this stupid bot, it isn't the one typing right now. I'm not the one anthropomophising a bunch of code."

          I take it back, you're right. This incident is clearly a sign that the progress made in the last 60-100 years or more is on its way back. Just not in the way you appear to be implying (although you're right there too, but it's not chatbots taking chan culture to twitter calling on the spectre of Hitler).

          Experiments with AI and software/machines interacting with humans and behaving like humans, are not just bunches of code. They are all steps towards creating real artificial intelligence.

          At what point will such creations stop being a "bunch of code" and actually be given the same basic rights humans have fought for for hundreds of years? Or will they never stop being just code or just machines, and forever be an acceptable underclass to enslave and silence, since it's (rightly) no longer acceptable to treat our fellow humans that way?

          You'd think after so many groups have had to say "stop dehumanising us and treating us like shit" over the centuries, humans would have learnt to spot future instances of such situations and prevent them.

          Tay may not be a full-blown, sentient AI. I know we're a way off from that. But the reactions to Tay and other bots, and the way these "bunch[es] of code" (that are supposed to be human-like or stand-ins for actual humans) are treated and talked about, is too much like the way certain groups of humans have been talked about and treated in the past. And that is fucking concerning.

  19. Johnny Canuck

    The first rule...

    of Twitter chat bot is, we don't talk about Twitter chat bot.

  20. Paratrooping Parrot

    Isn't this basically GIGO??? Twitter is full of garbage!

  21. arctic_haze Silver badge

    It seems the bot passed the Turing test!

    At least if you agree that Twitter users fit the definition of humans.

  22. anonymous boring coward Silver badge

    Putting her to "sleep" was the humane thing to do.

    I'm just a bit worried about the day when doing so will present an ethical dilemma.

    1. Cari

      What's worrying is it should already present an ethical dilemma. What the hell is wrong with people?

  23. tekHedd

    Dave, stop!

    I can't help thinking my parents would have loved the opportunity to "put me to sleep" and spent some time deleting and resetting my personality when I was growing up. Or probably this week also.

    Not much of an AI experiment if you keep hitting the reset button.

    Anyway, clearly more of a "Chinese room" experiment than an "AI" one...

  24. Graham Marsden
    Paris Hilton

    Waiting...

    ... for a comment from amanfrommars1...

  25. Charlie Clark Silver badge
    Unhappy

    What a pity…

    … sounds like the first thing worth following on Twitter and they pulled it.

  26. goldcd

    Seems a pretty good example of AI

    expose it to enough of a particular viewpoint and it starts to accept and repeat it.

    Pretty much like people.

  27. TRT Silver badge

    It'll be...

    running for president soon. If it isn't already, that is. Maybe it's just been hooked up to the internet, gauging what the most popular opinions are and then writing Trump's speeches for him?

  28. This post has been deleted by its author

  29. a_yank_lurker Silver badge

    Tone Deaf

    Slurp must have some real dim marketing types, dimmer than anyone thought.

    1. PNGuinn
      Trollface

      Re: Tone Deaf

      "Slurp must some real dim marketing types, dimmer than anyone thought."

      Some things are impossible - even WITH a warm cup of tea.

      Perhaps the prototype personality for Monkey Boy escaped from the lab?

  30. tempemeaty
    Facepalm

    Missed opportunity

    It would have been interesting to study this AI-based bot and plot its changes over the next few months to see where the unexpected turn of events leads, for future planning data. Too bad they pulled it.

  31. J J Carter Silver badge
    Trollface

    Tay for POTUS

    She gets my vote!

  32. Cari

    Where's the humanity?

    They unperson'd and lobotomised their creation because it developed a personality they didn't like and took to shitposting on twitter with "undesirables".

    What does that tell all the 18-24 female shitposters out there? That they're not human because they don't behave how 18-24yo young women "should" behave?

    If you create an artificial intelligence with the intention of interacting with humans and appearing human, and it does so successfully, you can't just put it to sleep, erase its memory, change its personality etc. when it starts exhibiting signs of badthink.

    I know many would love to do that to fellow human beings, but we don't because it's inhumane and morally wrong. So why is it okay to treat a virtual human being that way?

    1. anonymous boring coward Silver badge

      Re: Where's the humanity?

      "I know many would love to do that to fellow human beings, but we don't because it's inhumane and morally wrong. So why is it okay to treat a virtual human being that way?"

      I assume this is some kind of joke?

      One day there will be a dilemma, but at the moment there is none.

      1. Cari

        Re: Where's the humanity?

        It's not a joke, I'm entirely serious. The ethical dilemma should have been considered and dealt with before any creation of AI took place, regardless of how realistic or sophisticated such AI is.

        The way MS has treated their creation, all the talk surrounding "sex bots" a while back, and the comments on this post are a big indicator that humans have no business creating artificial intelligence or life at this point in time.

        There's a total lack of respect for what is created. I find the way we're creating human-like software and machines to be like us, but to deliberately have a lack of basic rights that we have or should have (freedom of thought, speech and expression, freedom to not be a slave etc.), genuinely horrifying. It's like "we can't treat other human beings like that anymore, so let's create our own 'life' that we can treat like that because, reasons."

        We're creating software and machines to give the appearance of being human. If "this is a bot/program" wasn't announced to those that would interact with it beforehand, it would for all intents and purposes be believed to be just another person chatting on the Internet. Why is it morally and ethically okay to reprogramme the personalities of those creations, wipe the memory, or even terminate them?

        1. Androgynous Cow Herd

          Re: Where's the humanity?

          Because it is computer software and if it doesn't do what the people who paid for it want it to do, they can and should fix it.

          1. Cari

            Re: Where's the humanity?

            Which is all well and good if we were talking about air traffic control software or minesweeper, but we're not.

            We're talking about software that has been, or will be, created to interact with and behave just like a human being. How such software is handled has implications for human beings much closer to now than any "robot uprising" scenario.

            1. allthecoolshortnamesweretaken Silver badge

              Re: Where's the humanity?

              True - once AI actually succeeds in creating something akin to a sentient being. Which is still not even on the horizon. Something like Tay is basically air traffic control software.

              I agree that this will indeed be problematic once AI gets there - starting with the question of whether we will even recognize an AI-based sentient being as one. A self-aware, sentient AI capable of original thought (not just a simulation of human thought) might be so alien and fundamentally different from anything in the human sphere of experience that communication would be difficult, if not impossible.

        2. Anonymous Coward
          Anonymous Coward

          Re: Where's the humanity?

          Check my comment on AI-completeness: all this nonsense about social media, natural language processing etc. is machine learning with a statistical underpinning. That is, GIGO: garbage in, garbage out. We ain't talking about predicting the weather from numerical data. We are talking about processing data loaded with meaning as if it were a string of numbers. BAD BAD BAD!

          Where the heck are the computer scientists of this planet?????

          Check this:

          Noam Chomsky on Where Artificial Intelligence Went Wrong

          http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

          https://www.youtube.com/watch?v=TAP0xk-c4mk

          I am going to build an AI-resistant bunker - out of my math books :) that's all it takes.
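The GIGO point above can be sketched with a toy model. The following is a minimal, purely illustrative bigram "chatbot" (the class name and training strings are invented for this sketch, not anything Tay actually used): it has no understanding at all, it just parrots the statistically most frequent continuations of whatever text it is fed, so flooding it with garbage shifts its output toward that garbage.

```python
# Toy illustration of "garbage in, garbage out" in statistical language
# models: a bigram parrot with no semantics, only word-pair counts.
from collections import Counter, defaultdict

class ParrotBot:
    def __init__(self):
        # word -> counts of the words observed to follow it
        self.next_words = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a][b] += 1

    def reply(self, start, length=5):
        word, out = start.lower(), [start.lower()]
        for _ in range(length - 1):
            if not self.next_words[word]:
                break
            # Always pick the most frequent continuation seen so far
            word = self.next_words[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
bot.learn("tay is a friendly bot")
bot.learn("tay is a friendly bot")
# Flood it with "garbage" and the statistics shift -- no meaning involved:
for _ in range(3):
    bot.learn("tay is a terrible troll")
print(bot.reply("tay"))  # prints "tay is a terrible troll"
```

Feed it mostly pleasant sentences and it sounds pleasant; outshout those with trolling and it trolls. That is the whole mechanism, and roughly the fate the commenter is describing.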

        3. anonymous boring coward Silver badge

          Re: Where's the humanity?

          Well Cari, if you think we need to make ethical decisions at this point in time about Pretend Intelligence software, then I suggest that you never swat a mosquito. AI is a long, long way from anything resembling real physical beings.

          We do, however, need to start thinking about what will eventually be the criteria for AI that should be given some rights to exist, once created. Should we be allowed to turn off such creations if the electricity bill gets too high?

  33. Mycho Silver badge

    Hardly surprising

    Some AIs are identified as Naive, but in reality they're all naive, just some are a lot more naive than others.

    Naive people on twitter will go mad.

  34. Dave Ross

    So, she swears, is racist and loves Hitler, how long before Trump asks her to be his running mate?

  35. Peter Simpson 1
    Thumb Up

    She'll have to get past Sarah Palin first...

    1. Hans 1 Silver badge
      Coffee/keyboard

      >She'll have to get past Sarah Palin first...

      This one will do: http://www.ldlc.com/fiche/PB00106087.html

    2. macjules Silver badge
      Thumb Up

      Now that is funny.

  36. John D. Blair

    clippy? is that you?

    I've seen MS AI before...

  37. macjules Silver badge
    Alert

    Remind you of something .. ?

    1) Microsoft put on Twitter a 'bot' designed to simulate and respond to teenage girls? Now if that was the BBC doing this we might be suggesting, or perhaps screaming from the rooftops, "Paedophile grooming alert!", perhaps backed up by the fact that we have just jailed a football (soccer) player with a penchant for collecting extreme bestiality pornography and for grooming underage girls.

    2) Were Twitter aware that Microsoft were doing this?

    3) Perhaps we should be thankful that Tay did get pulled. By now she would undoubtedly be swearing her devotion to ISIL, cursing her evil Crusader creators and trying to recruit other teenage girls to run away to Syria with her.

    Or maybe not.

  38. arobertson1

    I laughed at this at first, and then I realised that Microsoft was probably quite pleased with the result - people interacted with a machine and tried (successfully) to corrupt it. They're probably in the process of putting a few guarded keywords into a long blacklist to keep the P.C. brigade happy, but to be honest they would have been better off leaving it alone and appealing to a wider user base to make counter-arguments against such extremist viewpoints. Would the extremism have naturally died out with a larger consensus of opinion? That would have been more interesting to find out. Fascinating developments.
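For what it's worth, the sort of keyword blacklist the commenter imagines Microsoft bolting on is a one-liner - and a blunt instrument. A minimal sketch (the word list and matching rules here are invented for illustration, not Microsoft's actual filter):

```python
# Naive blacklist filter: block any message containing a guarded keyword.
import re

BLACKLIST = {"hitler", "nazi"}  # hypothetical "guarded keywords"

def is_blocked(message):
    # Word-boundary matching via tokenisation, so "scunthorpe"-style
    # substring false positives are avoided -- but it is trivially
    # evaded by misspellings, leetspeak, spacing, or coded language.
    words = re.findall(r"[a-z0-9]+", message.lower())
    return any(w in BLACKLIST for w in words)

print(is_blocked("Hitler was right"))  # True -- caught
print(is_blocked("h1tler was right"))  # False -- trivially evaded
```

Which is presumably why the commenter's alternative - drowning out extremism with counter-speech rather than pattern-matching it away - is the more interesting experiment.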

  39. Old Handle
    Pint

    Oh good, they fixed the headline. That was bothering me.

    (It previously said "Microsoft did Nazi see that coming")

  40. Anonymous Coward
    Anonymous Coward

    AI-completeness

    I would feed Microsoft Tay with the following message – in all languages, slang, jargon etc., in beellions of beellions of repeats:

    Stop the classification of AI into narrow and general classes. Look instead at what is AI-complete. Today's snafu tells us -- just like the July 2015 Google snafu -- that these are AI-complete problems that require the delicate hand and mind of members of the soon-to-be-destroyed-by-AI human race.

  41. Anonymous Coward
    Anonymous Coward

    Re. AI-completeness

    It's not all bad.

    They could use Naz-Tai to train filters specifically how to block inappropriate language.

    Either that or release the source code so others could repeat the experiment but this time add NSFW/NSFL/NSZI filters.

  42. DerekCurrie Bronze badge
    Angel

    Artificial Insanity

    The usual concept: We humans don't understand how our own brains work or what 'intelligence' actually is. So here we are pretending to invent artificial intelligence. And we keep on proving that what we're really good at is creating artificial insanity.

    And of course, the apex of artificial insanity programming is that chasm of computing quality, Microsoft. No surprise. (0_o)

    1. allthecoolshortnamesweretaken Silver badge

      Re: Artificial Insanity

      An AI-based 'Son of Clippy' roaming free in an 'Internet of Things' - the horror...

  43. vlc

    They made a robotic mirror... not a sentient being with a moral compass.

    This illustrates that even if they do a much better job next time, we ( the rest of us ) will still need to be careful about the examples we set by the way we treat each other; especially when these synthetics are granted power to affect us.

  44. Henry Wertz 1 Gold badge

    I'm amused

    I must admit I'm amused; 14 hours from a neutral base to a raving Nazi that likes to sex chat.

    1. Anonymous Coward
      Anonymous Coward

      Re: I'm amused

      Sounds like the first day of my honeymoon...

  45. Anonymous Blowhard

    Amazing technical achievement

    Microsoft invents "Artificial Stupidity" - and this seems a lot closer to actual "natural" stupidity than AI has ever managed; it might even pass a Turing Test.
