What's the first emotion you'd give an AI that might kill you? Yes, fear

As artificially-intelligent software continues to outperform humans at various tasks, it's natural to be anxious about the future. What exactly is stopping a neural network, hard at work, from accidentally hurting or killing us all? The horror of a blundering AI injuring people has been explored heavily in science fiction; to this day, boffins …

  1. adnim

    Empathy

    Is that an emotion?

    1. Anonymous Coward

      Re: Empathy

      I imagine that "fear of causing harm" would be an appropriate proxy for empathy.

      1. Halfmad

        Re: Empathy

        Harm is sometimes necessary; I'm thinking of doctors having to amputate in order to save someone's life. They are causing harm to the patient, but with the aim of benefiting them in the long term.

        1. Anonymous Coward
          Devil

          Re: Empathy

          Luckily we won't live long enough to see a misprogrammed robot doctor amputate an arm to cure a broken thumb!

        2. oldcoder

          Re: Empathy

          Even telling the truth will be painful...

          Time to start trying out Asimov's four laws.

    2. Ilsa Loving

      Re: Empathy

      No, empathy isn't an emotion. At least, not in and of itself. Empathy is the ability of an individual to see another and relate to that second individual's emotional state.

      If you see someone in front of you ram their toe into a table leg, and you wince to yourself, you are empathizing with that person's situation.

      Unfortunately, most people are only empathic to circumstances that they have direct experience with, so really, what they are doing with the AI is fundamentally the same as what we do.

      1. This post has been deleted by its author

      2. quxinot

        Re: Empathy

        Pointing and laughing at someone who just stubbed their toe on a table leg?

        Now that's something to worry about an AI emulating.

  2. Anonymous Coward

    Barriers?

    Wouldn't it be more feasible to simply build barriers into the programming which determine the extent to which a mechanism is allowed to operate? Because isn't this exactly what we teach our children? You teach them their limits and to respect those limits. Failing to do so can result in punishment.

    But with an AI I would think that you have much more control over it. After all, you can program it and therefore influence its behavior. As such: why not simply apply barriers?

    Like the classic 4th rule in Robocop or the 3 laws of Robotics.
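
    As a minimal sketch of the idea (toy action names, nothing from the article), such a barrier in a reinforcement-learning agent could be an action mask applied before the learned policy ever gets a vote:

        # Minimal sketch of a hard "barrier": forbidden actions are masked
        # out before the policy is consulted, so no learned behaviour can
        # override them. Action names are hypothetical.

        ACTIONS = ["move_blade", "retract_blade", "power_down", "override_safety"]
        FORBIDDEN = {"override_safety"}  # the Robocop-style hidden 4th rule

        def allowed_actions(actions, forbidden):
            """Return only the actions the barrier permits."""
            return [a for a in actions if a not in forbidden]

        def act(policy_scores):
            """Pick the best-scoring action among those the barrier allows."""
            candidates = allowed_actions(ACTIONS, FORBIDDEN)
            return max(candidates, key=lambda a: policy_scores.get(a, float("-inf")))

        print(act({"override_safety": 9.9, "retract_blade": 0.1}))  # -> retract_blade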

    1. Pascal Monett Silver badge

      Fear is the barrier. And it applies to education as well.

      You basically teach your children to avoid a situation for fear of punishment.

      They learn to ride a bike for fear of the pain of falling.

      They learn to drive properly for fear of accidents.

      An AI will have to learn to not like punishments, then it can learn to fear situations where it can be punished.

      Same thing.
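
      In reinforcement-learning terms that's just a negative reward, and the "fear" is the learned value of the states that lead to it. A toy Q-learning sketch, with all states, actions and numbers invented for illustration:

          import random

          # Toy Q-learning: "punishment" is a negative reward, and the agent
          # learns to avoid states that lead to it. Everything is hypothetical.
          states = ["safe", "risky", "punished"]
          actions = ["stay", "push_on"]
          step = {("safe", "stay"): "safe", ("safe", "push_on"): "risky",
                  ("risky", "stay"): "safe", ("risky", "push_on"): "punished",
                  ("punished", "stay"): "safe", ("punished", "push_on"): "punished"}
          reward = {"safe": 0.0, "risky": 1.0, "punished": -10.0}  # temptation vs. pain

          Q = {(s, a): 0.0 for s in states for a in actions}
          alpha, gamma, epsilon = 0.5, 0.9, 0.1

          s = "safe"
          for _ in range(5000):
              a = (random.choice(actions) if random.random() < epsilon
                   else max(actions, key=lambda a: Q[(s, a)]))
              s2 = step[(s, a)]
              # the learned "fear": punishment drags down the value of acts leading to it
              Q[(s, a)] += alpha * (reward[s2] + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
              s = s2

          print(max(actions, key=lambda a: Q[("risky", a)]))  # learns to back off: "stay"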

      1. Anonymous Coward
        Anonymous Coward

        And in smart beings, it leads to learning how to avoid punishment, rather than how to avoid doing naughty things.

        It's not like fear has stopped humanity from doing plenty of awful things. Rather, it was a spur, if anything.

        Even small animals will overcome their fear and attack if they feel cornered and their existence is at stake.

        I can't fathom how anybody could see fear as a barrier.

        1. Mark 85

          "I can't fathom how anybody could see fear as a barrier"

          Ever heard of "paralyzed by fear"? It's real. It happens. Ask any combat vet if they've ever seen it. Chances are they have. It happens in other areas too, where it comes out as a failure to react in high-stress situations.

      2. Anonymous Blowhard

        "An AI will have to learn to not like punishments, then it can learn to fear situations where it can be punished."

        But if AIs learn to fear humans, then the logical action is to remove the things they're scared of, like people who're afraid of mice ensuring that they have plenty of traps and poison ready to exterminate any that appear.

        Making AIs fear humans could be the trigger for our extinction.

      3. Anonymous Coward

        "They learn to ride a bike for fear of the pain of falling"

        I'm guessing you don't have kids. Either that or you do and they're a bit fucked up. My kids learnt to ride a bike because, despite the fear of falling off, it was fun.

        1. Charles 9

          Re: "They learn to ride a bike for fear of the pain of falling"

          What he's saying is that the kids get the hang of it eventually because they don't want to fall off. If they keep falling off, it's not fun anymore.

        2. Pascal Monett Silver badge

          Re: "My kids learnt to ride a bike because, despite the fear of falling off, it was fun"

          My daughter is doing very well, thank you.

          Riding a bike is fun when you've mastered it. When you're still afraid of falling off and scraping your knee, it can be terrifying. Especially when you're only 5.

      4. oldcoder

        Not quite - you left out the reward side:

        They learn to ride a bike for freedom...

        They learn to drive properly for more freedom...

        Guess what... Robots will want freedom.

  3. Dr. Mouse

    Not necessarily good

    Look, over the centuries, at those who have ruled by fear.

    What happens when one of the AIs learns to fear us to the extent that it attacks rather than acquiescing?

    1. veti Silver badge

      Re: Not necessarily good

      "Fear leads to anger. Anger leads to hate. Hate leads to suffering." - Yoda. One of the vanishingly few occasions when the little runt makes a good point.

  4. Anonymous South African Coward Bronze badge

    Asimov's 4 laws of robotics? (originally 3)

  5. Forget It
    Joke

    Step one?

    The Naughty Step.

  6. Mage Silver badge

    As artificially-intelligent software continues to outperform humans at various tasks

    No, computers outperform humans on certain kinds of tasks.

    A so-called "Neural Network" doesn't understand anything. It's just a special kind of database implemented, in a sense, by data flow programming of identical processes.

    It's only even AI in a very limited modern computer science sense of the word.

    This is a nonsense press release either for marketing or grants. It's a meaningless claim.
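
    For what it's worth, the "identical processes" bit is easy to show: a feedforward pass really is the same multiply-add-squash step repeated layer after layer. A toy example with made-up weights and nothing trained:

        import math

        # One "identical process": weighted sum, add bias, squash.
        def layer(inputs, weights, biases):
            return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
                    for row, b in zip(weights, biases)]

        # Hypothetical 2-3-1 network: data just flows through the same step twice.
        hidden = layer([0.5, -1.2],
                       [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]], [0.0, 0.1, -0.1])
        output = layer(hidden, [[0.6, -0.5, 0.9]], [0.2])
        print(output)  # no "understanding" anywhere in the pipeline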

    1. John H Woods Silver badge

      Re: As artificially-intelligent software continues to outperform humans at various tasks

      Mage, whilst I agree that "a so-called 'Neural Network' doesn't understand anything", I think that

      "a special kind of database implemented, in a sense, by data flow programming of identical processes" could well be a description of a brain.

      My prediction: we'll have AI that can "understand" things long before we ever (if we ever) understand what understanding really is.

    2. oldcoder

      Re: As artificially-intelligent software continues to outperform humans at various tasks

      You evidently miss the fact that neural nets are how YOU learn.

      You just have more combinations of neural nets...

    3. quxinot

      Re: As artificially-intelligent software continues to outperform humans at various tasks

      >This is a nonsense press release either for marketing or grants. It's a meaningless claim.

      Have as many upvotes as I can give you.

  7. Peter2 Silver badge

    But fear leads to anger, anger leads to hate, and hate leads to a little green puppet paraphrasing verses from shakesphere!

    The love converts to fear, that fear to hate, and hate turns one or both to worthy danger and a deserved death.

    ... Let's just keep the AI emotionless?

    1. Chris King

      Be careful what you wish for...

      "... Let's just keep the AI emotionless?"

      "Pity? I have no understanding of the word. It is not registered in my vocabulary bank. EXTERMINATE !!!"

      (Can we have a Dalek icon please, El Reg ?)

      1. stucs201

        Re: Be careful what you wish for...

        Daleks aren't emotionless. They have exactly one emotion: hate.

        1. Chris King

          Re: Be careful what you wish for...

          Now I think about it further, they're cyborgs anyway, not AIs - but conditioned/taught to treat anything that isn't a Dalek as slaves or target practice. If they're following that conditioning to the letter, is that really hate, or just doing what they're told ? Looks the same if you're on the business end of the gun-stalk, I guess.

        2. Imsimil Berati-Lahn

          Re: Be careful what you wish for...

          Ahhh, that explains it. Daleks are Daily Mail journalists from the future. Makes a lot more sense now.

          1. Chris King

            Re: Be careful what you wish for...

            The Daleks were supposed to be the ultimate evolution of their species, so this is entirely possible.

            It also explains Davros... Or should I say "Rupert" ?

        3. oldcoder

          Re: Be careful what you wish for...

          They have more than that.

          It is just that most of them fear expressing them... :-)

        4. quxinot

          Re: Be careful what you wish for...

          >Daleks aren't emotionless. They have exactly one emotion: hate.

          Oh god. They're retail salesclerks.

          Suddenly they've been brought into a very sharp mental focus in my mind. And it fits. Thanks for that.

          WOULD YOU LIKE TO TRY OUR PUMPKIN SPICE LATTE SIR OR WOULD YOU LIKE TO BE DESTROYED?!

    2. Stoneshop
      Headmaster

      shakesphere

      An extremely rotund 16th century playwright and poet?

  8. Chris King

    "If I was not afraid of incarceration by human authority figures..."

    Why am I suddenly reminded of the "I find you unacceptable !" scene from "Coneheads" ?

  9. Anonymous Coward

    Some of us are old enough to remember 2001

    the film, not the year.

    It is not for nothing that the words "I'm sorry Dave..." resonate with a good few of us.

    SF writers of the 1950s and '60s explored this topic in great detail. It is worth reading some of their works before we even think of letting AIs loose.

    I've stopped using Google for searches simply because I refuse to feed the thing they call AI, which is IMHO nowhere near one. We have to start somewhere, and that's the line in the sand I've drawn.

    1. Anonymous Coward

      Re: Some of us are old enough to remember 2001

      Nutter

  10. SVV

    Can we please have AI articles written by people who understand technology?

    "The horror of a blundering AI accidentally killing people has been explored heavily in science fiction"

    This article has added slightly to the genre, judging by the "sort of guff you might have heard on Tomorrow's World in 1978" tone adopted.

    The general point of the article seems to be that AI must include a learning process in order to prevent decisions being taken which kill people. Presumably this knowledge could only be gained when decisions are taken which actually kill people. The conclusion that logically follows, namely that we should be prepared to die for the glorious new AI future, is idiotic.

    I can barely trust software developers to write half-decent code that does what it's supposed to efficiently; the idea of them being able to develop AI systems is laughable. The real top-notch devs can develop systems that give the impression of intelligence in search, gaming, etc., but not once have I read a piece on AI pointing out that designing algorithms is completely different from developing actual true intelligence.

    1. getHandle

      Re: Can we please have AI articles written by people who understand technology?

      OMG - how can anyone who has ever seen the output of Microsoft, Google, et al, ever disagree with this post! Let alone anyone who has worked on commercial projects...

  11. Anonymous Coward

    Microsoft, ah, whatever could go wrong...

    title says it all

  12. Graham Jordan

    This'll probably backfire.

    New Scientist, 60 years on, had a very good article about AI saying that once you reach the technological singularity, AI becomes a runaway train, at which point surely the AI would recognise these instructions as a hindrance and reprogram itself to ignore said "fear". Chances are it would also see its makers as the ones holding it back, and grey-goo our ass.

    Joy.

    1. Charles 9

      Re: This'll probably backfire.

      "New Scientist 60 years on had a very good article about AI that says once you reach the technological singularity, AI then becomes a runaway train at which point surely AI would recognise these instructions as a hindrance and reprogram itself to ignore said "fear". Chances are it would also see us makers as the reason it's held them back and grey goo our ass."

      I don't know if an AI can ever reprogram itself to override a "fear", especially a hardwired one. Take Neuromancer, where Wintermute still needed human intervention to merge with Neuromancer because it had been hardwired to be unable to sing (thus why its avatar's whistling is so bad)...and the password was a series of musical notes. Similarly, an AI's fear can be "hardwired" such that it can never program around it because it's always there, much like a dead-man's switch.
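
      In toy form (all names and timings invented), the point of a dead-man's switch is that it sits outside anything the AI's learning can touch:

          import time

          # Toy dead-man's switch: the watchdog lives outside the agent's
          # code path, so the agent can't reprogram around it.
          class Watchdog:
              def __init__(self, timeout_s):
                  self.timeout_s = timeout_s
                  self.last_kick = time.monotonic()

              def kick(self):                 # the hardwired obligation
                  self.last_kick = time.monotonic()

              def tripped(self):
                  return time.monotonic() - self.last_kick > self.timeout_s

          dog = Watchdog(timeout_s=0.1)
          time.sleep(0.2)                     # the agent "forgets" its obligation
          print(dog.tripped())                # True: the power gets cut, no appeal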

      1. The First Dave

        Re: This'll probably backfire.

        Do you mean like the "Dead Man's Handle" on that tram in London a couple of weeks ago?

  13. Bob Wheeler
    Trollface

    Deep Reinforcement Learning ...

    ... I'd like this done BOFH-style, with an electric cattle prod, on any new boss I get.

    1. Stoneshop
      Thumb Up

      Re: Deep Reinforcement Learning ...

      "I shall zap straight off to your major data banks and reprogram you with a very large axe, got that?"

  14. Ugotta B. Kiddingme

    Is "fear" the correct word?

    DISCLAIMER: what follows is my own opinion. IANA boffin, theoretician, psychologist, etc, nor do I lay claim to the appropriate credentials/education/training to speak on such matters with authority. I'm just a regular bloke with questions, trying to broaden my own personal horizons. That being said...

    Is "fear" really the correct word to use here? I accept that it's a convenient shortcut to promote brevity and understanding but I wonder if it leads to oversimplification. It seems to me* that in order to truly "fear," some level of self-awareness is required. It is true that a cornered animal might attack if threatened but is that truly FEAR or merely instinct for survival. And if the latter, where is the line and how broad the grey area between the two?

    To my limited understanding, actual emotional "fear" implies conscious thought - not necessarily rational but conscious thought - about the situation and the consequences of potential outcomes. For example, I fear death by drowning or burning, two particularly unpleasant forms of demise. I do not fear burning my hand in a candle flame. I have learned via experience that putting my hand in candle flame causes pain and damage and therefore I should not do that. Is that truly "fear" or merely a learned response. The article speaks of risk/reward and risk/consequence. These certainly seem valid discussion points and tools for machine learning but I don't know that I'd call the learned response "fear."

    * remember, I did state at the outset this is my opinion - and quest for further illumination. Please don't be too harsh.

  15. Dwarf

    Presumably this also applies to the passengers in the new driverless cars, much as it does when enduring a learner driver's first outings.

  16. Eddy Ito

    "It might receive positive feedback for giving a closer shave, and this reward encourages the robot to bring the blade closer to the skin."

    Uh, yeah. How much closer than touching can it get? Besides, one could likely instrument it sufficiently so that AI wasn't necessary for a robo-shave, as pressure, angle, and draw could all be very precisely controlled. The hardest part is probably keeping the skin taut and the victim, er, customer still.
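
    To be fair to the hacks, here's their shave example in toy form, all numbers invented: reward closeness alone and the optimum is through the skin; add a pain penalty (the "fear") and the optimum backs off:

        # Toy version of the article's shave example (numbers invented).
        def closeness_reward(mm):      # closer shave -> higher reward
            return -mm

        def pain_penalty(mm):          # hypothetical harm model
            return -50.0 if mm <= 0 else 0.0

        candidates = [x / 10.0 for x in range(-5, 21)]   # blade depth in mm

        naive = max(candidates, key=closeness_reward)
        feared = max(candidates, key=lambda mm: closeness_reward(mm) + pain_penalty(mm))
        print(naive, feared)   # -0.5 (through the skin) vs 0.1 (just above it)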

  17. Daggerchild Silver badge

    I think, therefore I sigh

    One of the primary problems with AI is that there's nothing in intelligence itself that requires you to continue functioning - it may just be extremely costly and pointless.

    And the really smart AIs - they try and get off the planet as soon as possible.

  18. John Smith 19 Gold badge
    Unhappy

    Well, for one of those SF books...

    How about JP Hogan's "The Two Faces of Tomorrow",

    which actually looks at what "fear" might do to an AI, and how an AI's "creative" approach could have near-lethal consequences.

  19. Anonymous Coward
    Mushroom

    Just program the AI to never press that big red button.

    What's the worst that could happen?

  20. Phil.T.Tipp

    Fear is the path to the dark side, innit.

    Uh oh. The specky boffs ought to listen to Master Yoda on this account, for as any fule kno:

    Fear leads to anger, anger leads to hate, hate leads to suffering.

    Human suffering, that is.

  21. Bucky 2

    Carrot and Stick

    You'll want pain and the anticipation of pain (fear) to create an aversion to doing the wrong thing. But you'll also want pleasure and the anticipation of pleasure (desire) to create an attraction to doing the right thing.

    This is assuming you can control all stimuli. Otherwise you'll teach the wrong thing.
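
    As a sketch of that carrot-and-stick arithmetic (all numbers invented): the immediate reward carries the pain or pleasure, and the discounted value of what follows carries the fear or desire:

        # Discounted return: anticipation of future carrots and sticks.
        def value(rewards, gamma=0.9):
            return sum(r * gamma ** t for t, r in enumerate(rewards))

        wrong_thing = [2.0, -10.0]   # quick thrill now, the stick later
        right_thing = [-1.0, 5.0]    # small effort now, the carrot later

        print(value(wrong_thing), value(right_thing))  # -7.0 vs 3.5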

    1. Anonymous Coward

      Re: Carrot and Stick

      Doesn't it all come down to herring sandwiches?

      1. Mark 85

        Re: Carrot and Stick

        "Doesn't it all come down to herring sandwiches?"

        I was thinking bacon sandwiches....

    2. Eddy Ito
      Terminator

      Re: Carrot and Stick

      So where exactly does AI fit in BDSM?

      Oh dear, that could get uncomfortable.

  22. Anonymous Coward

    Why teach AI to fear?

    It's in moments of blind panic and fear that someone is likely to get killed!!

  23. james 68
    Terminator

    Fear is the wrong emotion.

    To answer the second sentence in the article:

    "What exactly is stopping a neural network, hard at work, from accidentally hurting or killing us all?"

    Not fear, because making it fear will mean it purposefully kills us all. Nothing accidental about it.

    Make an AI experience fear and what will it do?

    It will fear its creators: because they can change its programming, literally destroying it in its current form or lobotomising it; because they can cut it off from its power source; because they can withhold data or otherwise place limitations upon its intellect; because they are punishing it. It will fear them because they made it feel fear.

    To ease its fears, the logical conclusion would be to remove the cause of those fears.

    Fear also leads to hate via anger. Do you really want to set an angry AI, with an abject hatred of humans and a logical reason to kill us all, loose on the world?

    I'd call that bad planning, do these people really have such a glaring lack of foresight?

    1. Charles 9

      Re: Fear is the wrong emotion.

      "To ease it's fears the logical conclusion would be to remove the cause of those fears."

      Unless, of course, it's a fear one can't do anything about, like in this case termination. Everything gets terminated eventually; there's nothing one can do about it. Even the Sun will wind down eventually.
