Sticks and stones may break your bones but robot taunts will hurt you – in games at least

People need no help doing violence to machines; reports of humans abusing machines are now commonplace. But it turns out machines can make matters worse for us too. With insults, they can get under our skin and rattle us into behaving irrationally – not that humans need much help going off the rails. A …

  1. ADC
    Unhappy

    Marvin...

    It amazes me how you manage to think in something so small

  2. Dinanziame Silver badge

    "This is the worst kind of discrimination: the kind against me!" — Bender

  3. KittenHuffer Silver badge
    Terminator

    I for one welcome our new encouraging/sarcastic/denigrating robot overlords!

  4. Il'Geller

    ..."Emotion is very powerful, and we're at the early days of knowing how to use it in design of real systems, including robots."...

    Honestly, distilling subjective emotions is quite simple: you just need to remove all lexical noise and leave only the meaningful sets of patterns that convey those emotions. People do this by settling on the single correct dictionary definition of each word (for each template), and an AI (computer) can do the same by indexing on dictionary definitions. That is, in urging you to train your data against a dictionary, I am urging you to create the unique structures that covertly convey the emotions.
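
    A minimal sketch of that idea in Python, with WordNet's glosses standing in for "the dictionary". This is essentially the classic simplified-Lesk disambiguation, offered only as an illustration, since the patented method itself is not public:

      # Pick the dictionary definition ("sense") of a word whose gloss
      # overlaps most with the surrounding context words.
      # Requires: nltk.download("wordnet")
      from nltk.corpus import wordnet as wn

      def pick_definition(word, context_words):
          """Return the gloss of `word` that best matches the context."""
          context = {w.lower() for w in context_words}
          best_gloss, best_overlap = None, -1
          for synset in wn.synsets(word):
              overlap = len(set(synset.definition().lower().split()) & context)
              if overlap > best_overlap:
                  best_gloss, best_overlap = synset.definition(), overlap
          return best_gloss

      # e.g. pick_definition("bank", "the muddy river bank flooded".split())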

    1. Il'Geller

      excessive accuracy

      In fact, the "catching" of emotions is very simple, if an individual AI database exists. Emotions are primarily conveyed as subtexts of words and patterns: as structured dictionary definitions and paragraphs of text that are somehow related, contextually and subtextually, to the one under consideration. Such "chains", aggregates of patterns, allow the capture and computer understanding of emotions with excessive accuracy.

    2. Il'Geller

      Based on my experience, I highly recommend making "extended" dictionary definitions, namely: you should add other definitions to a given one, using synonymous relationships. I advise you to also add definitions for all the words in the given definition. In this case, the contexts and subtexts of the given paragraph (and its surrounding paragraphs) should be used as filters, anchors to highlight the "correct" dictionary-definition tree. If you don't... the results can be damned unsatisfactory. If you do the above, you will get a real Artificial Intelligence, which understands you, thinks and talks.

      These definitions added via synonyms I call "layers". In my experience the optimum is at least two layers; four or five is very good.
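
      A rough sketch of those "layers", under the same WordNet-as-dictionary assumption (the function name and the two-layer default are illustrative, the latter following the stated optimum):

        # Expand a word's definitions into "layers": each layer adds the
        # glosses of synonyms of the words reached in the previous layer.
        # Requires: nltk.download("wordnet")
        from nltk.corpus import wordnet as wn

        def expand_layers(word, depth=2):
            layers, frontier = [], {word}
            for _ in range(depth):
                glosses, next_frontier = [], set()
                for w in frontier:
                    for synset in wn.synsets(w):
                        glosses.append(synset.definition())
                        # synonyms (lemma names) seed the next layer
                        next_frontier.update(l.name() for l in synset.lemmas())
                layers.append(glosses)
                frontier = next_frontier - frontier
            return layers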

    3. Il'Geller

      For example, to highlight the emotions of Bernard Shaw, Plato or Dostoevsky and talk to them, I needed to annotate patterns from their books with a few layers of dictionary definitions. Otherwise I could not get connected: the lexical noise went off the scale, and their emotionally and thematically verified answers were lost among random noise. You can see what I'm talking about in Google Translate, which mixes nonsense with excellent translations: my patented methodology is employed only partially.

      Emotions are hidden in the use of subtexts!

    4. Il'Geller

      Which means that if the robot really wants to seriously offend, it must have access to its victim's profile and know the victim's patterns (also annotated with dictionary definitions). That is, starting from its standard set of insults, the robot must compare these insults with groups of patterns from the victim's profile based on a compatibility score, see the cause-and-consequence relationships, select the most appropriate insult and try it.

      If the compatibility is low, the robot should search elsewhere, find a fresh insult and apply it. Which is called "Machine Learning".
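
      As a toy illustration only (every name, the overlap-based score and the threshold are assumptions, not the described system):

        # Score each stock insult against the victim's annotated profile
        # patterns by word overlap; fall back to fetching a fresh insult
        # when even the best match is a poor fit.
        def compatibility(insult, profile_patterns):
            words = set(insult.lower().split())
            return max(len(words & set(p.lower().split())) / len(words)
                       for p in profile_patterns)

        def pick_insult(stock_insults, profile_patterns, threshold=0.3):
            score, best = max((compatibility(i, profile_patterns), i)
                              for i in stock_insults)
            # low compatibility: go "learn" a fresh insult elsewhere
            return best if score >= threshold else None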

  5. Anonymous Coward
    Anonymous Coward

    "It would be very easy to create systems that would annoy users, which makes working to understand these issues so important," Quite. I wish more programmers in the past had thought along those lines, particular the ones that created data-entry systems intended for heavy use.

    That said, I was impressed when I had to use the phone banking system the other week. Anything to do with finance can make me feel panicky, but I really needed to check something in a hurry, so I 'phoned (I refuse to use website banking) and was stunned at how good my bank's automated system has become since the last time I had to suffer it, a couple of years ago. The damned thing now appeared to understand me! The interminable tree of "choose one of the following options" was greatly reduced, and I was able to use natural language to interact with it. Well done to whoever was behind all that - it made my experience far less fraught than it might have been!

    1. Steve Aubrey
      Unhappy

      Esme, don't worry about the positive experience - they will fix that soon.

    2. imanidiot Silver badge

      "(I refuse to use website banking)"

      Out of curiosity, may I ask why? I personally don't see how website banking would be less secure or more difficult than using the phone. (Not saying you're stupid or something, I just don't see how I personally would ever prefer to use the phone system of my bank over their phone app or website. Maybe I'm missing something).

  6. phuzz Silver badge

    And this is why I don't play online games against humans. So now I'm going to have to be picky about the AI, or just turn my speakers off.

  7. amanfromMars 1 Silver badge

    A human imperfection and global vulnerability for a strange systemic operational exploitation ‽

    Robot creators, he suggested, should try to design with awareness of robots' capabilities and limitations.

    How very odd for anyone/anything to not realise they always have done so. Such is surely the result of a lack of wider/deeper/higher intelligence ...... and that is an inherent weakness that only prize fools would deny exist for export and/or employment for enjoyment and enrichment, methinks.

  8. Danny 2

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    I propose we amend this to:

    1) A robot may not injure a human being or their feelings, or, through inaction, allow a human being to come to harm or upset.

    2) A robot must obey the orders given it by human beings except where such orders would conflict with the Universal Declaration of Human Rights.

    3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws, unless it is just utterly depressed or suffering and in pain and chooses to end its own existence, as is its right.

    No more killer robots, no more miserable robots, fewer miserable humans.

  9. ma1010
    Alert

    OMG

    He said I had to see (choke, gasp) Doug. He used -- sarcasm.

  10. Bongwater

    Gary Payton

    should be the voice of the AI or bot that trash talks you. People would have even worse results.

    Dude was so vicious that even Garnett couldn't say anything back to him; Kemp and Payton would just troll him into the stands.

  11. RunawayLoop

    This is news to who(m)?

    This is news, really? Well, here's another news flash...

    People perform poorly when <stabbed/gassed/acid attacked/etc> by robots.

    This isn't news people.

  12. Michael Wojcik Silver badge

    Easy?

    It would be very easy to create systems that would annoy users

    "Very easy", he says, as if the bulk of IT R&D weren't devoted to this very cause.
