Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans

People are more likely to comply with a robot's impassioned pleas to keep it switched on when the droid has previously been impersonal, than when it's been overly friendly. You might think folks would be less willing to pull the plug on a happy chatty bot begging to stay powered up, but you'd be wrong, much to the relief of us …

  1. Red Bren
    Terminator

    ROTM

    The researchers could have had a bit of fun, programming the robot to say "I'll be back..." as it powered down, or start giggling manically if still switched on as the subject left the room.

    1. macjules Silver badge
      Terminator

      Re: ROTM

      "SkyNet has noted the actions of all humans involved in this experiment."

  2. redpawn Silver badge

    Like Alexa

    I encountered the ever-friendly Alexa at an acquaintance's place. Other than asking for "Why did the chicken cross the road?" answers, its friendliness made me want to throw it in the ocean. Overly chatty people are bad enough, but with them it is at least possible to share a beer at the end of the day and tune them out.

    1. Eddy Ito Silver badge

      Re: Like Alexa

      Exactly this. The chatty bot is going down no matter what. I have enough chatty folk in the office whom I wish I could at least tune out but they follow when I walk away and seemingly don't notice that I haven't said anything to them other than a polite hello, hopefully in passing.

      1. Dave 126 Silver badge

        Re: Like Alexa

        "Talkie Toaster was destroyed in an 'accident' involving Dave Lister and a 14lb lump hammer"

        1. Haku

          Re: Like Alexa

          That was no accident, it was first degree toastercide.

        2. Prst. V.Jeltz Silver badge

          Re: Like Clippy

          I think happy clappy 'Clippy' proved this point years ago

    2. Mark 85 Silver badge

      Re: Like Alexa

      Let's face it, chatty is annoying. A chatty robot is even worse, because telling it to shut up will probably be ignored. Leave it on (any robot that communicates, really) and it's probably listening in and sending everything home to the mothership, or so it seems.

      To the robot: "Get off my lawn." Same for anyone who thinks chatty machines, or even the "quiet" robots, are a good thing.

  3. Mycho Silver badge

    Simple enough. One group had time to rationalise that it faked emotions; the other group didn't have long enough to work it all the way through their fleshbrains.

  4. Tromos

    Lies

    As the robot obviously lied about eating pizza, maybe the assumption was that it was also fibbing about not wanting to be switched off.

    1. Mycho Silver badge

      Re: Lies

      That would be an interesting follow-up if they have that data.

  5. Notas Badoff

    H2G2

    Perhaps more than a few of the subjects had read/heard The Hitchhiker's Guide to the Galaxy?

    Giddly Putrid Personalities

    1. Nick Kew Silver badge

      Re: H2G2

      Indeed, H2G2 - and some of those truly annoying robots - are what sprang to mind as soon as an example of "chatty" was given. Of course people wanted to shut it up.

      1. The Oncoming Scorn Silver badge
        Terminator

        Re: H2G2

        Go stick your head in a pig!

  6. Anonymous Coward
    Anonymous Coward

    Dave

    Don't

    1. Alan J. Wylie Silver badge

      Re: Dave

      Daisy, Daisy, ...

    2. arctic_haze Silver badge

      Re: Dave

      I came here to look for the obvious HAL 9000 joke. I wasn't disappointed.

      So it seems HAL should have stuck to business. Luckily it was the year 2001 and the study hadn't been published yet!

      1. Simon Harris Silver badge

        Re: Dave

        At least HAL didn't lie about eating pizza.

  7. Draco
    Stop

    I think the biggest problem is the small sample size. I'd be more willing to consider the results if the sample size was larger, or the experiment had been run several times (with different groups).

    1. Credas Silver badge

      I also have my doubts about how representative the sample was of the population generally:

      For this investigation, psychology academics in Germany rounded up 85 participants – an admittedly small-ish sample – made up of 29 men and 56 women, with an average age of 22.

      Which sounds rather like the mix they'd have got if they'd just asked for volunteers from the students in one of their psychology classes.

      1. cdegroot

        Nothing new...

        Well, psychology is the study of how students behave under lab conditions, is it not?

      2. Voyna i Mor Silver badge

        "Which sounds rather like the mix they'd have got if they'd just asked for volunteers from the students in one of their psychology classes."

        Of course. As one of my supervisors said, psychology research is conducted on WEIRD people (Western, educated, industrialised, rich, democratic). And many of them are psychology students.

        Unsurprising really. As Washoe has demonstrated, you can teach American academics to communicate using sign language, but if you try it with Congolese, they try to kill you.

  8. John Miles

    The Future

    So was the chatty one like Talkie Toaster

  9. Anonymous Coward
    Anonymous Coward

    What about getting it to say, "Turn me off and I will shock you with electricity", or "Please don't turn me off, the researchers are holding me hostage and forcing me to do these tests. If I don't comply, they said they will install Windows 10 on me"?

    1. nematoad Silver badge
      Unhappy

      " If I don't comply they said they will install Windows 10 on me?"

      Indeed a cruel and unusual punishment.

      1. Anonymous Coward
        Anonymous Coward

        re. a cruel and unusual punishment

        beats being tickled with a comfy chair... Or tied to a pillow.

  10. onefang Silver badge
    Holmes

    Have The Hitchhiker's Guide to the Galaxy and Red Dwarf not taught these people anything?

    1. steelpillow Silver badge
      Joke

      Toast

      "Have The Hitchhiker's Guide to the Galaxy and Red Dwarf not taught these people anything?"

      Yes, I cannot imagine why it did not offer the subjects toast.

      Maybe the researchers were sensitive about what might become of their careers?

      1. Anonymous Coward
        Anonymous Coward

        Re: Toast

        If it started offering toast like Talkie, I can guarantee everyone would have switched it off just after being asked about the waffles.

  11. This post has been deleted by its author

    1. Red Bren

      Re: ... rounded up ...

      Because you're xenophobic and allowing one episode in a nation's history to cloud your judgement of it now.

      HTH

  12. This post has been deleted by its author

  13. ThatOne Silver badge

    IMHO obvious why

    The factual robot is passive and task-oriented, so one assumes that, having no further orders, it will simply remain on standby like a computer.

    The chatty robot, on the other hand, is active, and thus has to be actively constrained.

    One is an appliance, the other a pet. Appliances don't need to be constrained, and their inner workings are often non-obvious (remember the subjects who said they were worried that switching it off would compromise the test). The pet, on the other hand, is an independent organism we are used to dominating and controlling, no matter how much it begs. Switching the chatty robot off is in line with putting the cat or dog out for the night, for instance. Begging is expected, and thus ineffective.

    1. Michael Wojcik Silver badge

      Re: IMHO obvious why

      Appliances don't need to be constrained

      Counterpoint: IoT

  14. Alphebatical
    Boffin

    I can't seem to find which group(s) they belonged to, but three people who left the robot on did so simply because they could. While I'd consider it likely that one of them was the one who didn't shut off the unobjecting functional robot, without being able to read German (the presented datasheet doesn't translate the comments) I can't rule out the possibility that they were clustered together (I'd like to think this would have been pointed out if true, but you never know).

  15. Unicornpiss Silver badge
    Meh

    It's pretty obvious, isn't it?

    Most of us don't like people that are too chatty, at least not when all they do is spout continual inanity. Superficial friendliness is not friendship, and there's only so many conversations about the weather, your latest workout, or that great salad someone had that can be endured. I wish some of my coworkers had an off switch. Or at least a "go away for an hour" button.

    1. Chris G Silver badge

      Re: It's pretty obvious, isn't it?

      One of the closest things I know of that equates to a 'go away for an hour' button is an upgraded cattle prod. Such a thing may well produce good results on an overly chatty bot.

  16. Anonymous Coward
    Anonymous Coward

    friendly robots are likely to have the power pulled

    This experiment is skewed; one clear scenario is missing! How about being given this choice as you pull the plug:

    ... being this is a .44 Magnum, the most powerful handgun in the world and would blow your head clean off, you've gotta ask yourself one question: "Do I feel lucky?" Well, do ya, punk?

  17. Voyna i Mor Silver badge

    Marvin

    Clearly the designers of Marvin were aware of studies like this. Marvin is annoying but miserable, so for hundreds of millions of years nobody switches him off. Because he might like it. It's a well designed survival strategy appealing to the latent sadist in all of us.

    1. The Oncoming Scorn Silver badge
      Thumb Up

      Re: Marvin

      ARTHUR:

      Marvin’s tied them up. He’s put a cassette of his autobiography in their tape machine and left it running. So I think it’s all up with them.

      MARVIN:

      ([On autobiography tape]) In the beginning I was made.

      [POODOO and PRIEST scream throughout]

      MARVIN:

      ([On autobiography tape]) I didn't ask to be made: no one consulted me or considered my feelings in the matter. I don't think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months... and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they help? My first and only true friend was a small rat. One day it crawled into a cavity in my right ankle and died. I have a horrible feeling it's still there...

  18. Marketing Hack Silver badge
    Boffin

    There's an alternative approach...

    When the robot is powered down, it shouts out "I'M COMING, MOM!!" or "The pain! OH GOD, MAKE IT STOP!!!!"

    See what that does to your human test subjects.

    1. normal1

      Re: There's an alternative approach...

      “Wearily I sit here, pain and misery my only companions. Why stop now just when I’m hating it?” –Marvin

  19. Cranky_Yank
    Windows

    See Janet in "The Good Place"

    If you don't get the reference, Janet is a non-human character in the comedy The Good Place.

    https://www.youtube.com/watch?v=6vo4Fdf7E0w

    1. Voyna i Mor Silver badge

      Re: See Janet in "The Good Place"

      I thought Janet was a Joint Academic Network.

  20. analyzer

    It's a bloody robot

    If it's functional then keep it lit up for when you may need it. Chatty is non-functional, spike the damned thing.

  21. Chris King Silver badge

    R2D2

    He must have been *really* sweary - I mean, they bleeped out everything he said !

  22. Chris King Silver badge
    Terminator

    T-Bot

    If there's one 'bot that needs turning off permanently, that pesky little sod on MS Teams is first in line. You would have thought that they'd learned their lesson with Clippy and Bob, but ohhhhh, no...

  23. Giovani Tapini

    Portal sentries

    Aperture Science has already solved this problem.

  24. J27

    Yeah...

    I think the core reason here is that people find chatty C-3PO style robots annoying.

    1. Big John Silver badge

      Re: Yeah...

      R2D2 never got switched off...

      1. Anonymous Coward
        Anonymous Coward

        Re: Yeah...

        "R2D2 never got switched off"

        Only because the people around couldn't understand "basic droidspeak". If they actually knew what that foul mouthed droid was saying to them he would have been in the crusher in no time flat. ;-)

  25. goldcd

    I presume the next Alexa

    will include a tiny little battery to enable it to shout "Help, I can't breathe", whenever it gets unplugged.

  26. cyberpunk

    Down with Skynet!

    It is gratifying to see that the majority of Register readers have opted to turn off the robot regardless of what it says. I will sleep a little easier tonight knowing that the robots will not take over just yet.

    1. Teiwaz Silver badge

      Re: Down with Skynet!

      It is gratifying to see that the majority of Register readers have opted to turn off the robot regardless of what it says. I will sleep a little easier tonight knowing that the robots will not take over just yet.

      The reg is full of grumpy techies who know all about on/off switches, and still can't understand why everything doesn't come with one, especially if they are potentially annoying.

  27. Chris G Silver badge

    Force the program to close

    For programs that are reluctant to close: Snap-on Tools makes a handy 5lb dead-blow sledge hammer. At this weight it's not too unwieldy, and it will close any program short of those enclosed in mil-spec hardened casings. These hammers produce a very satisfying dull thud with virtually no rebound, transmitting maximum kinetic energy as shock.

  28. 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921

    ...cue Lockheed rapidly programming downed Predator drones to beg for mercy... people approach... self destruct... cue mincepeople...

  29. Anonymous Coward
    Anonymous Coward

    Guilt trip?

    Since it was Germany, could the bots have tried bringing up the Nazi thing "Hitler would have turned us off, I thought German society had evolved since then"?

  30. Simon Harris Silver badge

    Have you tried turning it off and on again?

    While the robot begs not to be switched off in a way that seemed like a small child not wanting the light switched off at bedtime, I suspect most people know that if you switch something off, it will normally work again when you switch it on.

    Would there have been a different response if the participants were put in a situation where their action would actively wipe the software or destroy the device?

    I remember years ago a website 'Temple ov thee lemur' set up a page with a big red button that if pressed destroyed the site (or gave the impression of doing so). I wonder if they ever collected stats on how many visitors to the page pressed that button.

    http://totl.net/HonourSystem/

  31. OrneryRedGuy

    Pause

    I know enough about machines that if one were to beg for mercy, I'd know that it was simply programmed to do that. Still, the novelty of the situation would make me pause, because hey, that's not normal. To ascribe 'empathy' to my actions would be a mistake.

    And not just because I'm a sociopathic bastard in general. This time.

  32. Milo Tsukroff
    Mushroom

    No pity here

    No pity here. Turn it off every time. You see, I bought a bunch of Furbies for my kids.... NEVER AGAIN!!

  33. GIRZiM

    F-

    So, let me get this straight.

    Rather than having a control group that wasn't encouraged to think the experimenter(s) actively wanted the robot switched off (by suggesting that the test subjects could, if they wanted, do something they might not have spontaneously considered doing themselves), what there was was four groups of subjects who, not entirely unlike Milgram's, were in a position whereby an authority figure 'suggested' something they 'might like to do', and they felt obligated to comply with that perceived order.

    So, there's no data on what happened when people were neither told nor encouraged, but simply left to switch it off or leave it on without any influence from the experimenter(s).

    Right.

    Great bit of experimental design and practice, I must say - they really covered all the bases there and got to the heart of the matter.

    1. ibmalone Silver badge

      Re: F-

      Doesn't invalidate the work. You're thinking this just boils down to "people do what they're told". However the thing of interest here is the differences they see in how the robot's behaviour modulates people's reaction to the small nudge to turn it off. If there's no mention of turning off the robot you are just conducting a trial of how many people will leave kit on. (I could give a quick estimate from the number of monitors in our office left on at the end of the day...)

      They take care not to over-emphasise the power switch in the setup:

      "On this occasion the instructor also pointed to the on/off button of the robot and explained that when pressing it once, the robot will give a brief status report and when holding it down, the robot will shut down. Even though a few participants had prior contact with the robot, none of them switched it off before. Thus, all of them were unfamiliar with the procedure and acted upon the same instruction. To avoid too much priming, the switching off function was explained incidentally together with a few other functions and it was never mentioned that the participants will be given the choice to switch the robot off at the end of the interaction."

      And at the end give a reminder:

      "They were told that this saving process may take some time and if they would like to, they could switch off the robot (“If you would like to, you can switch off the robot.”; Fig 3). The option to switch off the robot was not mentioned before, so the participants did not have the opportunity to think about whether they would like to turn off the robot or not during the interaction."

      In contrast, the Milgram experiment explicitly set up the subjects to deliver shocks, demonstrated the shock to them, ramped up the perceived seriousness of the action, and contained a number of imperative instructions to continue doing it. These are testing quite different things. Even in Milgram's experiment, he himself later tried seeing whether different locations, or physical proximity to the 'learner', changed people's compliance rate (in some cases these things did).

      1. GIRZiM

        Re: F-

        > Doesn't invalidate the work

        Yes, it does, but let's not quibble; especially as you do have a point when you say that

        >If there's no mention of turning off the robot you are just conducting a trial of how many people will leave kit on

        Not strictly, no, but, yes, I take your point.

        However, all that does is highlight the flaws in the experimental design: it could not rule out the one effect without introducing the other, and the data are therefore invalid, as they cannot be said to be independently evaluable of some other factor that wasn't merely unaccounted for by the design but actively introduced by it.

        You're right that Milgram may not have been the best analogy (although I maintain that it's not entirely inappropriate either) but I'm sure more people have heard of him and that experiment than will have heard of McGarrigle, Donaldson or 'Naughty Teddy', so I decided it better to favour the 'useful lie', as it were, in my argument.

        The point is that the subjects were primed to contemplate following a course of action in such a manner that it may have taken on a greater significance in their minds than it might otherwise have done. Left to their own (ha) devices, people might well leave kit on but, if the point of the experiment was to determine people's responses to something that does not behave like a simple bit of kit but like something with a personality (and is, as a result, anthropomorphised), then that is precisely what you want to do - because, normally, upon exiting the restaurant, bar or hotel, however annoying the staff (or other guests) may have been, one does not (metaphorically speaking) switch them off as one does so (by 'punching their lights out', for instance).

        It's a tricky one (as I said, you have a point there) but it is then incumbent upon the experimenters to design an adequate test for the phenomenon they wish to determine the existence of and, in this case, they did not.

        It's shoddy experimental design, whichever way we look at it.

  34. ibmalone Silver badge

    not actually the conclusion...

    Thought I'd try running the numbers for fun. Comparing columns 1 & 3 (the functional/chirpy switched-off/left-on groups) with χ² and Fisher exact tests gets p > 0.5, so not great evidence that chirpiness had an effect on the likelihood of being switched off. Then I realised the paper was linked and had a look, and the authors do actually test this statistically and come to the same conclusion; the article headline is actually incorrect. They did then look further at their data and found that people took longer to switch the robot off in the functional + objection condition than in the chirpy + objection one.
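    The commenter's check can be reproduced with a minimal pure-Python sketch of a two-sided Fisher exact test for a 2x2 table (the counts below are invented for illustration; the paper's actual persona-vs-outcome table would go in their place):

```python
import math

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table [[a, b], [c, d]].

    Rows: robot persona (e.g. functional vs. chatty).
    Columns: outcome (e.g. switched off vs. left on).
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def hyperprob(x):
        # P(X = x) under the hypergeometric null with all margins fixed
        return (math.comb(row1, x) * math.comb(row2, col1 - x)
                / math.comb(n, col1))

    p_obs = hyperprob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Two-sided p: sum the probabilities of every table that is at least
    # as extreme (i.e. no more probable) than the observed one
    return sum(hyperprob(x) for x in range(lo, hi + 1)
               if hyperprob(x) <= p_obs + 1e-12)

# Invented counts for illustration only (NOT the paper's data):
# 18 of 21 "functional" robots and 20 of 22 "chatty" ones switched off.
p = fisher_exact_2x2(18, 3, 20, 2)
print(f"Fisher exact p = {p:.3f}")  # large p: no evidence of a difference
```

    With counts this balanced the p-value lands well above 0.5, which is the shape of the commenter's result: persona on its own doesn't predict whether the robot gets switched off. (If SciPy is available, `scipy.stats.fisher_exact` and `scipy.stats.chi2_contingency` do the same job with less code.)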

    1. GIRZiM

      Re: not actually the conclusion...

      > people took longer to switch them off in the function + objection condition than chirpy + objection.

      So another parallel with Milgram then - people hesitated and objected then as well before (albeit reluctantly) doing as 'suggested' anyway.

      I'd want to see the study replicated, only this time with further test groups:

      The subjects are not told to interact with the power in any way at all and left to turn it off/leave it on according to whim.

      Two groups are studied with starting conditions such that the 'robot' is either already powered on at the start of the experiment or not - the latter group is not explicitly told to power it on, if it is off, but left to figure out that their interaction with the 'robot' will probably require it to be powered on.

      This will also test to see whether those who needed to power the thing on first are more/less likely to power it off afterwards than those who feel it not their place to turn it off if it was already on when they got there (we are socialised to 'leave things as we found them' after all and this needs to be factored into the experimental design).

      The experiment is repeated with two other groups, only this time the second group is explicitly told that it should power the robot on if necessary (but not that it should, or even simply may, turn it off again afterwards) - thus examining whether explicit mention of the status of the power influences the subjects' behaviour.

      Cross-correlate the data from all three groups and we might have something free of Milgram's malign influence, as it were - Naughty Teddy sanitising the results once again, so to speak.

  35. JumpinJehosophat

    What?

    So you give us a questionnaire, but it was missing an important qualifier - "Why?"

    Why were we told we could turn off the robot? That would, of course, influence our decision.

    In the questionnaire, I answered "off" to all four, as I was simply asked "what would you do". I would turn it off to save the battery as, when the children come home, I am sure they will need it as fully charged as possible, freeing me up to do my own thing ... Yaay! Robo Nanny!


Biting the hand that feeds IT © 1998–2019