The South Korean government is working on a set of guiding principles to outline how people and robots should interact. A five-member panel, which includes a science fiction writer, is drafting the Robot Ethics Charter and is due to report back later this year. The charter could well mirror Isaac Asimov's three laws of …
Asimov's laws don't work
Asimov himself showed how the 3 laws don't actually work in a number of short stories. It was a romantic but unworkable notion. It is a parlour game amongst sci-fi buffs to propose situations where the 3 laws lead to the wrong result.
I might also mention that obedience to the 3 laws is a function of the firmware. Asimov resorted to magic to ensure that one simply didn't download a soldier bot program into a domestic robot. We don't have the luxury of that comforting magic.
Android robots are the dumbest idea ever invented by man; essentially the design of our replacement, without designing a replacement for the need to eat. Robots will always be expensive, being assemblies of many moving parts (even if robots build robots). They will belong to the rich, and will replace the working classes. They will be soldiers and police for their owners, the rich, in the riots that result from the economic displacement they cause. And yet hundreds of engineers beaver away trying to make them work. It's embarrassing.
What, no robo-takeover conspiracy theories?
Err, hello, this is El Reg, right? Where's the spin saying how this action proves that robots are already intelligent, and are using our own legal systems to ensure their permanent superiority?? Has El Reg been infiltrated too??
The ROTM(tm) is over, they've already won!!
Asimov his own fiercest critic
The major point of most of "I, Robot" was that the "Three Laws" were so full of holes as to be largely useless, no matter how lovely they sounded at first statement. Define "human". Define "harm". Both somewhat tricky.
Lots of guff
These don't look like laws to govern robotic behaviour so much as a framework defining how and why robots should be used, and how they should be interacted with.
The famous 3 laws were an inherent property of the robot's 'positronic brain' - built in by the mathematical systems that lay down the initial pathways, and into the presumably fractal formulae which control the behaviour of the robot (programming by any other name).
As someone above pointed out, we don't have that capability at the moment; anyone could programme a system deliberately free from constraints, allowing a robot to shoot a person or to stand by while one was in danger.
Since "the three laws" are going to remain a complete fairy tale for the foreseeable future, we would do well to change society in such a way that robots aren't placed in a position where they would have to make that decision. The same kinds of decisions were made with the introduction of every wide-reaching technological breakthrough - for example, changes in the law so that a facsimile document can't be considered a legal copy unless its provenance can be proven. You can look back at all the big technological leaps and see similar changes to society have taken place - people who throw their hands up now and declare the apocalypse because we've "designed our replacement" are basically talking crap.
UltiMate Games .......... are Real Enough 42 Control Mind Games Research Interest
"It is a parlour game amongst sci-fi buffs to propose situations where the 3 laws lead to the wrong result."
It may be/is a boudoir game amongst Pi sci fi buffs/buffers to propose situations where the 3 laws lead to the right results. A Paradigm Shift in EmPhasis..... http://www.reghardware.co.uk/2007/03/08/intel_pram_production_plan/
I've been working on HP T5000 boxes all day .. I'm sure they are eating and replacing users but I can't prove it .. Obviously South Korea has already been taken over so the robots can sneak into North Korea and pinch nuclear weapons .. I don't know about some kind of robot-hugging policy but we do need a robot extermination policy
Don't forget Dave Langford's alternative Three Laws of Robotics:
(1) A robot will not harm authorized government personnel but will terminate intruders with extreme prejudice.
(2) A robot will obey the orders of authorized personnel except where such orders would conflict with the Third Law.
(3) A robot will guard its own existence with lethal anti-personnel weaponry, because a robot is bloody expensive.
Surely you jest. Any decent Asimov reader knows that there are 4 Laws of Robotics: the Zeroth Law was postulated as "A robot may not harm humanity, or, through inaction, allow humanity to come to harm."
As for "what is human, what is humanity" - this was also discussed by R. Daneel Olivaw as a failure point of the Law (and in fact exploited by him/it to order another robot about).
Yes, it is true that the 4 Laws leave a lot of jiggle-room for writers to play with. Asimov himself said that, having written the Laws, he now had to find ways around them for his stories. Personally, I think those made for his greatest Robot stories.