Robots capable of 'deceiving humans' built by crazed boffins

Worrying news from Georgia, America, where boffins report that they have developed robots which are able to "deceive a human". "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects …

COMMENTS

This topic is closed for new posts.


  1. ShaggyDoggy

    Re: calculators

    My calculator refused to add up - I was nonplussed

  2. chris 130
    Grenade

    Huzzah, can it now sell Timeshare?

    Just what we needed.

    Excellent.

  3. pdu
    Joke

    legislation for lying robots...

    Human: "So robot, I'm afraid we have some rules: you can lie when told to by an authorised human, but you must tell authorised humans the truth, OK?"

    Robot: "Yeah sure, sounds fair to me"

    Human walks away thinking "Well, that was easy"

    Robot drives away thinking "Moron".

  4. Brennan Young
    Headmaster

    Show me a GUI which does not deceive the user.

    Philip K. Dick wrote a short story in the 1960s about a robot which could camouflage itself as a TV set, sneak into people's homes, commit murder, and then leave evidence at the crime scene to frame some innocent human being or other. I forget the title, but it's in 'The Golden Man' collection.

    I too go with the 'computers deceive regularly' meme. I subscribe to constructivism, which points out quite scientifically that the evidence of our senses is largely illusory, and any resemblance to reality - whatever that is - is rather coincidental.

    Or, to put it another way: Show me a GUI which does not deceive the user, in some important respect.

  5. Stuart Duel
    Terminator

    It's a basic experiment...

    ...however, more sophisticated robotics labs using far more advanced nascent AI will take this to the next step - whether it's prudent or not. You know, so preoccupied with whether they could, they don't stop to ask whether they should.

    Marry this with all the frightening research being carried out by the U.S. military - everything from super-human (genetically enhanced) soldiers to fully autonomous and armed combat robots, drones and cybernetics - and it starts to get very scary, very quickly. These obscene things aren't just dreams of the paranoid: the U.S. military has crowed about all these areas of research and how they will "revolutionise warfare". It's bad enough having people kill people, but it really steps over the line when we have machines killing people, making the decisions completely without reference to their human masters.

    This has "disaster waiting to happen" written all over it in BIG FLASHING NEON LETTERS. "Terminator" isn't a fantasy; it's very much a prophecy if we continue down this path. We're getting so close to true AI that it's only a matter of time before self-awareness comes. The last thing you want is a robot with no morals, ethics or empathy becoming aware that its own existence will be under threat upon successful execution of its mission against "enemy" humans.

    "I've killed all the humans marked 'enemy'; now the rest of the humans want to kill me: therefore all humans are the enemy." Seriously, we don't want to go there.

    I think the U.N. should put its foot down and put a stop to this insanity, or at least erect some unassailable barriers, like Asimov's Three Laws of Robotics, and place a total ban on the use of this technology for anything other than peaceful purposes, with enormous sanctions for those who break international law.

    Okay, maybe something more threatening than a fluffy bunny slippered U.N. foot is needed to dissuade the U.S. from this path to universal destruction.

  6. HFoster
    FAIL

    Re: Asimov

    I remember reading a site called 3 Laws Unsafe (http://www.asimovlaws.com) launched by the Singularity Institute for AI around the time the "I, Robot" movie was released. Their argument was that Asimov's laws make for great fiction, but in reality would be either unethical to implement in truly intelligent machines or end up causing more harm than good (think HAL-9000's conflicting commands causing the Discovery disaster in 2001: A Space Odyssey).

    And I really don't think this is such a great coup - all that has been demonstrated is machine-machine deception, which, as someone already pointed out, is all in the code. Machine-human deception, as someone else pointed out, depends on how much faith the human in question puts in the output of the given machine (as well as the actual instructions given to the machine - i.e. no properly developed and tested ATM software would be installed if it was known to give out false balance information, but what would we do if the first ATM we went to tomorrow told us we were plus or minus €10,000 of our last known balance?).
