A new set of laws has been proposed to govern operations by killer robots. The ideas were floated by John S Canning, an engineer at the Naval Surface Warfare Centre, Dahlgren Division – an American weapons-research and test establishment. Mr Canning's “Concept of Operations for Armed Autonomous Systems” presentation can be …
Make more law!
While we are at it, why not introduce a law for politicians, binding them to care about their constituency and forbidding them from taking bribes, and for criminals to present themselves to law enforcement after committing a crime, and for Pi to equal 3.0?
And who do we get to sue?
Since there's no point in hauling a robot before the court, the law needs to specify whose neck is on the line when it transgresses. The owner? The programmer? The manufacturer? The country that pays the private-sector contractor that deploys it? If you can't make someone responsible, then it's a waste of time fussing over what they are and aren't allowed to do.
The three laws are *fictional*.
Why do people discuss the Three Laws as if they have any bearing on the real world? They aren't real! At the very most, you could call them an interesting philosophical exercise, but they have no relevance to real life.
Discussing changing the Three Laws in a real life combat situation is about as relevant as deciding which Harry Potter spells should be allowed and disallowed in Iraq. Should we ban the use of the Cruciatus curse for extracting information from captured insurgents? Or would it be too useful as a torture technique that does no actual physical damage?
These truly are questions we *must* consider.
This is exactly like ice skating
And about just as useful. Come on, does anyone think that killer robots are going to be programmed to avoid killing humans? We can go on and on about how to make war a more pleasant experience, but the stark reality is that war is hell and, if you're in one, the only way to win is to kill the enemy before they kill you.
If you can't handle that truth, then don't start a war!
As for these namby-pamby rules, don't make me laugh. In a world as competitive as the weapons industry, where the norm is defined by what makes the most mayhem the most efficiently, the first company to market killer 'bots that DO target humans as well is going to reap the rewards, and the others will be reduced to vying for a spot in the United Nations' pseudo-Security Force.
Enough to make a 'bot worthy of the name want to fry its own circuits.
War Robots and friendly fire
I agree with Tom and Pascal - the idea that combat robots should differentiate between targets is just silly.
It's like saying anti-tank weapons should have been designed to aim at the tracks to avoid hurting the occupants.
If you use robots to fight, then people get killed - that's what the 'war' part of 'war robot' means.
And they might even be a lot better at avoiding friendly fire incidents because they'll actually rigidly follow rules of engagement and take notice of incoming data (like those big orange 'friendly' panels on the target vehicles).
The big issue for me is who controls autonomous combat robots and what they are used for. Robots programmed to round up dissidents or shoot at protestors will be very marketable in many parts of the world. And they won't have a conscience, and won't be recording their activities on video for their mates.
But hey - let's not be paranoid - surely nobody in the defence community would be that cynical?...
Smart WMDs too?
Maybe next they will make nukes that only toast those who don't put their hand up and surrender? Now that's a thought, the robo-killers could be programmed to detect palms using existing fingerprint reading technology. If there are no palms the robots go weapons free!
Tik-Tok (Sladek not Baum)
So all we need is for them to decide that the "laws" (whatever they end up being) are a collective delusion, and we will elect them to public office.