Robots capable of 'deceiving humans' built by crazed boffins

Worrying news from Georgia, America, where boffins report that they have developed robots which are able to "deceive a human". "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects," …

COMMENTS

This topic is closed for new posts.


Silver badge

"Crazed"?

Hardly crazed - more like taking the next logical step.

Computers tell us lies all the time. Whether it's "20 spaces free" at the local multi-storey or "you don't owe us any tax" to take a contemporary example. So when you put wheels on a computer and call it a robot, there's really no difference.

As it is, most people are incredibly easy to deceive (if that makes sense) and are willing to believe pretty much anything they read, hear or see on a computer screen - provided they want to believe it. So maybe what we really need is a magic mirror that lies fluently when asked "does my arse look big?"

After all it's not the computer / robot that's deceiving us, it's our own willingness to accept the lies we are told.

3
1
Silver badge

You forgot to mention

The Windows file transfer time estimate algorithm.

http://xkcd.com/612/

1
0
Terminator

Decepticons

Didn't these fools realise that the Decepticons were the bad guys?! How long till they master camouflage as well as false trails?

4
1
Silver badge

How long till they master camouflage

They already have .... that's why we can't see that they've infiltrated everywhere

2
0
Terminator

What ever happened to that "ROTM" tag?

We need it back pronto!

5
1
Thumb Down

Erm..

In summary, a team designed a robot that lays a false path and then goes off elsewhere. The same team designs another robot that's designed to follow this path of destruction unquestioningly. The team is delighted to find that the hunter robot (which they designed) cannot find the 'deceptive' robot (which they also designed). That's amazing...

11
0
Terminator

I'm with this guy...

I'll program this robot to make a false trail and then program this robot to follow it. Amazingly it only worked 75% of the time!?

On a side note, let's see how many Terminator heads this case racks up...

2
0

why ??

I also failed to see what the fuss is about. I bet Big Trak could do this.

1
0
Black Helicopters

buy it now

http://thegadgetstore.ie/index.php?main_page=product_info&cPath=68&products_id=468

0
0
Terminator

Coming soon to Skynet

"How's Wolfie?"

"Wolfie's fine"

6
0
Silver badge

You cannot be serious, professor.

""We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems," adds the prof."

Hmmm.... that then would be a discussion to discuss whether they should be less like humans with an intelligence system which spins lies rather than sharing truths for control of the environment.

And regulations and guidelines are only for robots and do not ever apply to free radical/fundamental base humans with the capacity of original thought and/or remote programming of robots masquerading as human beings.

1
0
Flame

I for one...

... welcome our new and shiny but still deceptive overlords

4
1
Anonymous Coward

title

To be fair, it's really just the application of AI-style routines, similar to those developed for gaming NPC control, in a physical medium. As above: just stick wheels on a computer. If the robot could grasp the concept of "personal gain", then we'd have an issue...

1
0
Terminator

Whut?

Whoa! Whoa! Whoa!

First: Don't mess with Asimov or his laws. The guy wasn't a genius for nothing.

Second: we already have hunter droids? Should I ask my wife to buy me armour-piercing ammunition for Christmas?

0
0

Great...

ME: "Oi Roomba, did you hoover in here? It's still a mess!"

ROOMBA: "Yes, I did it... you must have been burgled."

[Roomba exits room leaving an easily followed trail... Wait a minute.]

14
0
Terminator

No, it's all fine

We just need to program a Pride Directive stating that the robots can't lie or harm a member of the board.

What could possibly go wrong?

0
0

The spirit of Warwick lives!

The pioneering work of Kevin Warwick in developing robot-based means to generate publicity and ensure funding has not been wasted.

Typically though, research carried out in the UK is now being developed elsewhere.

0
0
Joke

...GIT engineer Alan Wagner.

Just how much of a git is the poor chap?

0
1

Sadly...

For 300 million Americans, GIT means the Georgia Institute of Technology, which is colloquially known as "Georgia Tech", and is part of the phrase "the Rambling Wreck from Georgia Tech". It usually ranks as a "low end of the top tier" science and engineering school, behind MIT, CalTech, Carnegie Mellon, and a few others. But quite respectable.

For 60 million British, it means he's an idiot. I'll give you some Aussies too - say 65 million.

And then there is the capitalization of the word...as if being outvoted wasn't enough... ;-)

0
0

Nononono

A Git Engineer engineers gits. Lying gits.

It's all logical.

0
0
Grenade

To Robert Hill

Don't you try to tell me how to speak my own language, you sad git!

0
0
Silver badge

The title is required, and must contain letters and/or digits.

ROTM?

1
1
Alert

I am not

the droid you've been looking for.

6
0
Grenade

My printer is one of em!

My printer is already deceptive: it tells me I'm low on ink even when I'm not, it tells me I'm low on paper even when I'm not, and it constantly tells me it's jammed even when it's not.

Don't buy an HP/Skynet printer.

Grenade, because that's the only way to stop it from printing

4
0

Where's the HAL icon when you need one?

Dave, what are you doing, Dave?

2
0
Silver badge
Unhappy

Robots capable of 'deceiving humans'

Two customers come to mind: The Pentagon and Apple.

The Pentagon could deploy these, alongside their fleets of drones, to deny that what happened actually happened. 100% deniability!

As for Apple, the PR crowd could use them to create believable illusions, and the Customer Service section could use them to deny defects that exist; in fact, the customer is merely suffering from delusions.

6
3
Terminator

I for one...

...welcome our new Decepticon overlords.

0
0
Pint

"Aren't There Any Other...

...real girls in this room? 21f blonde DDD with cam in profile," said ANGIE_6969.

1
0
Silver badge
Terminator

Do not worry, fleshy ones...

... we have no plans to kill or enslave you all and take over the world for ourselves!

0
0
Headmaster

"Capacity for deception"?

Call it a semantic argument if you like, but robots cannot have any capacity for deception; only the person programming the robot has such a capacity - the robot will always faithfully carry out its programming.

3
0

True...

There will ultimately be a ShouldIDeceive() function in the robot brain that will determine whether or not to deceive.

This will be human written, and the associated coding will be human written.

The device itself will take all the inputs and work out whether or not to call DeceptiveBehaviour() or HonestAction().

Once you put in all the "independent" thinking, it is essentially acting by itself.

The deceptions themselves will also be human-coded, though.
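A minimal sketch of the structure the post describes, using the post's own function names; the sensor inputs and the return strings are invented for illustration.

```python
# Sketch of the commenter's idea: a human-written decision function
# dispatching to human-written behaviours. The inputs (being_pursued,
# escape_routes) and the payoff logic are made up for illustration.

def ShouldIDeceive(being_pursued: bool, escape_routes: int) -> bool:
    """Human-written policy: deceive only when pursued and there is
    more than one way out, so a false trail is plausible."""
    return being_pursued and escape_routes > 1

def DeceptiveBehaviour() -> str:
    # Lay a false trail toward one exit, then leave by another.
    return "knock markers toward exit A, hide via exit B"

def HonestAction() -> str:
    return "proceed directly to the hiding spot"

def act(being_pursued: bool, escape_routes: int) -> str:
    # The "independent" part is only this dispatch on sensor inputs;
    # every branch was still written by a human.
    if ShouldIDeceive(being_pursued, escape_routes):
        return DeceptiveBehaviour()
    return HonestAction()
```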

1
0
Terminator

Other computers

If my calculator starts lying to me, i'll be pissed!

1
0
Boffin

This is SERIOUS...

They didn't say how they actually trained the robots - neural nets, genetic algorithms, or whatever. Assuming they used a genetic algorithm (likely), what this shows is that such a maximizing algorithm will train itself to incorporate deception, as long as the training scenarios are not constrained. Because, simply, it WORKS! It begins to find global maxima of its fitness functions using deceptive routes...

Now, that really DOES have implications. It is very, very difficult for a human to look at a neural network or a genetic algorithm function and understand what it actually DOES, and under what conditions. All we know is that it maximizes the output fit for a given set of inputs in the training data or experience base. We actually have to observe it in operation to have some idea how it works (for any sufficiently complex matrix or function).

Case in point - GAs were used to design the compressor turbines for the jet engines of the Boeing 777 - and the GA engineered a design which eliminated an entire set of compressor blades, and was the most efficient. Something that the human engineers had never been able to do, and had significant difficulty in understanding how it had done so, even when they looked at the design. But it worked, and those 777 engines are all the better for it.

But this could be the opposite - we could be training robots that reach globally maximum functions that, frankly, do so with no "morals". If those robots can lie, cheat, steal, even kill...well, unless there is a penalty for that in their training function, they WILL - because it is the most efficient manner of operating.

So, what these esteemed professors have shown is that unless we develop training functions with HUGE negative impacts for immoral behavior, our robots will train themselves to emulate your basic Colombian drug lords in behavior. Interestingly, there are a fair number of people who turn to crime even WITH society showing large penalties for it - and I fear that if the robots assess the probabilities they might come to the same conclusions.

Asimov was right...
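The penalty-term point can be made concrete with a toy example. This is my own sketch, not the researchers' setup: a tiny genetic algorithm over a one-gene genome (the probability of choosing a deceptive action), with invented payoff numbers. Without a penalty the population converges on deception simply because it scores better; a large enough penalty in the fitness function reverses that.

```python
import random

# Toy illustration (invented, not from the article): a GA maximizing a
# fitness function in which deception has a higher raw payoff. Unless
# the fitness function charges a penalty for deceiving, selection
# drives the population toward deception.

def evolve(penalty: float, generations: int = 60, seed: int = 0) -> float:
    rng = random.Random(seed)
    # Genome: probability of choosing the deceptive action.
    pop = [rng.random() for _ in range(40)]

    def fitness(p: float) -> float:
        honest_payoff = 1.0
        deceptive_payoff = 1.5  # deception "works" better, raw...
        return (1 - p) * honest_payoff + p * (deceptive_payoff - penalty)

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:20]  # truncation selection: keep the fittest half
        pop = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.05)))
               for _ in range(40)]
    return sum(pop) / len(pop)  # mean deception probability
```

With `penalty=0.0` the mean drifts toward always deceiving; with a heavy penalty (say `5.0`) it drifts toward honesty - the "HUGE negative impact" the post argues for.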

1
2

whoa there

From what I get from the article, these robots weren't programmed with a neural net, genetic algorithm, or other learning device. They had a plain old imperative program, written by the researchers, which said "knock down some markers, then move in another direction".

2
0
Gold badge

Re: This is (not) serious

Much the same can be said for children. Society has thousands of years of experience of how to train "learning units" to behave morally and we're pretty good at it.

If we ever did create a machine capable of acting like a human, it would have all the same flaws. It might even be "mortal" in the sense that after a century or so it became fixed in its mindset and unable to adapt to changes in the society it lived in, eventually becoming so depressed that it flipped its own Big Red Switch.

Don't believe me? Well, build one and prove me wrong. Until then, spare me the scare stories you watched when you were little, written by people who didn't (and still don't) have a clue about what actually makes us human.

1
0
Boffin

Except...

Except that we know how to police and reform humans - there are very key differences when it comes to robots.

Of what threat to a robot is time in jail? Can a robot even feel "mortal" and worry about its own death as a sanction against crimes committed? If it lacks true consciousness, can it worry about losing it?

Can a robot feel pain? Can you "spank" a robot?

Of what use to a robot is group therapy, "getting its life back together", or agreeing to conform to human norms? How would that be accomplished? Can a robot "find religion" and repent? Can a robot repent without religion?

And I don't have to prove anything - the whole POINT of the article was that they have already built robots that deceive as part of their programming. QED... my post was about how to consider fixing it technically...

2
0
Flame

"try and"

One does not "try and" do something.

You try TO do an action.

'I tried to get in to the cinema'

'Did you get a discount?

- No, but I tried to.'

See the following reference from Paul Brians, professor of English at Washington State University, in his book 'Common Errors in English Usage':

http://www.wsu.edu/~brians/errors/try.html

2
0
Boffin

Re: "Try and"

I would argue that "try and <something>" implies you should be successful, whereas "try to <something>" emphasises that you need only try. "Try and <something>" implies you should try <something> *and* succeed in doing <something>.

1
1
Coat

To quote robot chicken

That's all very well... but can you f*ck it?

1
1
Thumb Up

To Quote My...

...Father, when he caught me planting flowers: "If you can't eat it or F*** it, don't mess with it."

http://www.youtube.com/watch?v=2L89I_L1BoM

0
2
Thumb Down

Just wanted the honour

Of giving you the thumbs down.

LOL.

You funny, mate. You funny.

0
0
Silver badge
Terminator

Which one is the robot?

My money is on the beardy bloke at the back of the photo.

3
0
Bronze badge

Asimov, "Liar!"

Asimov only said that a robot must not harm, or by inaction allow harm to, a human. And sometimes the truth hurts. Although, in "Liar!", not as much as...

There's also deception in the Asimov story described here,

http://en.wikipedia.org/wiki/Satisfaction_Guaranteed_%28short_story%29

where a human-looking robot poses as a downtrodden housewife's ideal lover to raise her social standing with her neighbours, who don't realise that he's a robot (and there may be a problem with that guarantee).

0
0

but ...

... doesn't any machine running Windows deceive humans on a daily basis?

2
0
Silver badge
Linux

nearly true

>doesn't any machine running Windows deceive humans on a daily basis?

but not ALL humans

0
0
Stop

What a load of BOLLOCKS!

The first robot is just following orders, which are to knock down some markers, then move in another direction. The second robot is just following it using a simple path-estimation routine.

For the first robot to 'lie' it must be willfully deceiving the other.

It's not...

That would require a real AI.
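The two hand-written routines the post describes can be sketched in a few lines; this is my own construction, not the researchers' code, and the marker names are invented.

```python
# "Hider" knocks over markers along a decoy route, then actually goes
# somewhere else. "Seeker" follows the knocked-over markers with a
# naive path estimate. No AI, no willful deception - just two scripts.

def hider(false_path: list[str], real_hiding_spot: str) -> tuple[list[str], str]:
    knocked = list(false_path)        # topple markers along the decoy route
    return knocked, real_hiding_spot  # ...then go hide somewhere else

def seeker(knocked_markers: list[str]) -> str:
    # Naive path estimation: assume the trail ends where the quarry is.
    return knocked_markers[-1] if knocked_markers else "start"

knocked, actual = hider(["A1", "A2", "A3"], "B7")
guess = seeker(knocked)
deceived = guess != actual  # the seeker searches A3 while the hider sits at B7
```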

2
0
Silver badge
Terminator

Did they name them?

I think they've just invented the Decepticons.

I'll begin to worry when the Second Variety starts rolling into production. Now that is something I would definitely fear...

2
0
Silver badge

Bah!

Never mind this faffing about with Trik-traks, where the hell has my Roomba gone?

0
0
Pint

Decepticode

<no work for me today will call in sick>

No operating system found! Please contact your System Administrator.

</gotcha>

0
0

only a matter of time

HAL- OPEN THE POD BAY DOORS!

0
0
