A London-based anti landmine and cluster bomb charity has now widened its remit and is calling for a moratorium on the use of killer robots. "Our concern is that humans, not sensors, should make targeting decisions," said Richard Moyes of Landmine Action, quoted by New Scientist. "We don't want to move towards robots that make …
Automated systems for defence
It simply is not possible to have humans making targeting decisions about some types of threats, e.g. an incoming anti-ship missile. By the time a human has assimilated the threat information and made a decision, the weapon would most likely have already hit the ship. For this reason point defence and area defence systems tend to be completely autonomous - or they would not work.
However, it is commonplace for such systems to have the means for human intervention to abort an engagement.
Computer says no
"A robot could well, in fact, be better at applying rules of engagement which may feel - may actually be - suicidal if followed strictly by human operators."
Those rules come from a human, no?
That human wrote the rules in the past, no?
That human tried to predict the future in order to determine those rules, no?
Do humans see into the future? No!
That human sees with eyes, not sensors, no?
So that human writes the rules based on his 'hearsay' interpretation of the sensors available to the robots, no?
Computer says no.
The problem here is that the anti-landmine campaign has been too successful for its own good. Pretty much everyone who is ever going to listen agrees that landmines and cluster bombs aren't such a good idea, so if they don't find another target to rail against they'll have to shut up shop - adapt or die.
It's a bit like all the rabid anti-smoking fascists who have now started attacking those of us who like the odd pint or 10 on a Friday night...
Re: The Bootnote
Which is entirely the problem. You can have all the treaties in the world, but if no-one ratifies them - or bothers sticking to them - they're entirely ineffective. Plus, even if every country, city-state and principality in the world had signed up to it, it doesn't affect stateless terrorists, who'll get better and better armed as we artificially limit our military development.
What they should be doing is trying to impose minimum standards - out of a sample of 1000 different people - 999 civvies and 1 threat - it should be able to notice (and eliminate) the one threat. Conversely, out of 999 threats and 1 civilian, it should be able to notice (and NOT eliminate) the non-threat.
Or just deploy them in places where their environment can be controlled - with checkpoints, for example, anyone who enters a certain boundary at speed could be classed as a threat and shot/shelled/microwave-ray-gunned/tasered/whatever.
Or the government could have bluetooth "VIOLENT CRIMINAL" or "NON CRIMINAL" things in place in its ID cards- if it detects a VIOLENT CRIMINAL, it'll apprehend them. With guns.
Whatever happens, I think we can agree that we need little-girl-holding-ice-cream detecting sensors before we start rolling out Sgt Bash's bigger brothers. And intra-ice-cream weapons sensors in case she's hiding something in there, of course.
Black Helicopter cos those guys are probably already churning out the T-850s...
Old sea mines are a bad example...
as they had a tendency to float off their anchors and blow up civilian ships, mostly Red Cross and hospital ships. While human operators also tend to fire at civilians because they just want to kill them, the number of such cases is relatively low. One of the main reasons is that the level of intelligence tends to be higher in humans than in automated systems. A robot can only operate correctly if there are no neutral (civilian) targets in its area. While this is also a problem for humans, they usually have more 'common sense', which robots are lacking.
Anti-shipping missiles are different in the degree of civilian casualties they are likely to cause. Most shipping is going to be classed as a legitimate target - until a cruise liner gets hit, anyway. I think the worry here is the armed robotic vehicles (ARV). There is stuff like this www.lockheedmartin.com/data/assets/12822.pdf and others from BAE http://www.baesystems.com/ProductsServices/l_and_a_gs_black_knight.html which are designed for supporting troops.
I predict the first civilian will be killed by the robotic transport vehicles that automatically defend themselves. These are the aim of the DARPA Grand Challenge, aren't they?
@Lewis Hitler Page
"We here on the Reg killer-robot desk maintain a guardedly open mind regarding autonomous lethal systems"
Yet that is not the tone of this article. You're a cold heartless person.
I question the points you raise about the flaws in human beings due to their emotions on the battlefield etc. To deprive another human being of their life takes a lot more than it does not to.
And how can you morally justify a machine killing someone? Even if it is a soldier taking someone's life under the most inexcusable of circumstances, surely this at least has some moral ground in the soldier fighting for his/her own survival.
What can be said about the automated death trap?
And the more we head toward automated killing, the more it abstracts away any moral obligation of the people responsible. Would the person who arms the auto-kill machine pull the trigger in its place? Perhaps if they were trying to save themselves, sure. But otherwise maybe not. And that is important, even taking into account mistakes that might be made by real people.
And you seem to be saying that just because there are already forms of automated killing machines in existence, we should do nothing to curb the trend.
And to argue that we should refine the killing process, by making the robots more sensitive to data, is surely pushing the limits of even your own coldness?
Killing machines should not exist. We should not be attempting to make them more efficient. Yes a more optimized killing algorithm eliminates the "mistakes", but it also ensures the fatalities of more targets. Is that really a good thing Lewis?
Robots should be left to kill other robots.
Would you, Lewis, take a baby animal, and for its whole life keep it in a box or cage, not allowing it enough space to turn around, to see its parents, to walk anywhere? To torture it, basically, for its whole life?
I doubt you would. (Well, I doubt most decent people would.) But that is what happens in another industry, where we let the automation of things take over, and abstract away moral responsibilities.
(Yeah, I'm talking about factory farming and stuff.)
Things that are clearly wrong and morally abhorrent are allowed to slip by in the name of efficiency, because there is no human there responsible for pulling the trigger. Yes, this example isn't the same, I know, but I'm sure you can see the moral argument I am making. It is a lot more moral for people to kill their own meat. That way at least they have to deal with the consequences and confront the moral decision.
Yes, I know you're a bomb so-and-so, and have served in this place and that place. I don't care.
(Paris, because I'd like to impregnate her)
is now my favourite word.
PS Auser: "While this is also a problem for humans, they usually have more 'common sense', that robots are lacking."
Should I make the obvious comment about "common sense" when talking about the American military, who can't tell that a bloody great Union Jack or Canadian flag means that a vehicle etc is friendly?
Straight from "The Day Today"?
> "a robot arms race that will be difficult to stop ... I can imagine a little girl being zapped because she points her ice cream at a robot to share".
I can just see Chris Morris spouting off in a Paxman-esque style with this kind of stuff, although he'd probably say something like "having her face zapped off".
Am I the only one who laughed at this mental image?
"I can imagine a little girl being zapped because she points her ice cream at a robot to share"
I mean it sounds a lot like a scene from any movie Simon Penn would make about killer robots.
I've heard of far more humans shooting the wrong targets than automated weaponry.
Guns don't kill people. People kill people and robots kill people and automated defence systems kill people and guns kill people; oh bugger!
Paris, cos she could kill a man. It would just take a little time.
Would they be American robots?
Would they be friendly robots firing at people?
In Robocop the Part Man Part Robot was far more effective than the pure machine. Except for the criminal Man Robot cop with a drug addiction - he was worse than the pure machine.
But then who would put criminals in positions of authority in real life?
Switching on the Fan ..... Stand clear or you'll get covered.
"Paris, cos she could kill a man. It would just take a little time." .... By Anonymous Coward Posted Wednesday 2nd April 2008 09:55 GMT
No shortage of volunteers for that crusade, AC, I'll wager. :-) And if Paris is any Good, she'll ensure that it takes a long time and best forever.
Spookily enough, how absurd and perverse is it of those who profess and are embraced by a religious order/crazy sect, which pimps "Thou shalt not kill", and then orders Wars? Whatever is in their heads, it surely ain't brains.
Put down your ice cream....you have 20 seconds to comply....
Has someone been watching a bit too much Sarah Connor lately?
@AC - Am I the only one who laughed ...?
No you most certainly are not the only one
The reason land mines ought to be banned is that after the conflict has ended, the civilian survivors like to go back to the blackened shell hole they used to call home and try to get on with their lives. Stumbling on a burnt out jeep in your field is a bit of an annoyance. Stumbling on a landmine is a bit more of an issue.
Landmines remain just as dangerous after the conflict has ended, whereas I think we can presume the Americans will take their extremely expensive killer robots home with them. The killer robots will also likely have *some* kind of AI to differentiate between hostile and non-hostile targets. Anti-personnel mines tend to differentiate between small mammals and humans and leave it at that.
Finally, the current batch of killer robots are either aerial or ground support. If you're standing around with an ice cream in the middle of a combat zone where missiles and large amounts of automatic fire are being thrown about, some robot confusing your ice cream for a hand gun is the very least of your worries.
Loss of accountability and discretion
Given the outstanding record of government for flawless and secure IT projects why not give them control over a growing arsenal of robot killing machines?
Just imagine... it won't just be muddled appointments and incorrect tax demands that civil servants could avoid accountability for... "sorry, there's a problem with the computers today"
Hum, technology and war
Back in the First World War around 1 civilian was killed for every 100 soldiers; these days it's closer to 1 soldier for every 100 civilians. How technology has changed things, aye Lewis.
I really wish I could understand this good people vs bad people dying business, but I honestly can't see any difference between wicked Saddam killing his own people, and good old Blair killing the same people who are not his own (whatever that was supposed to mean). As a non-politician my people are all other non-politicians, whichever nationality the accident of birth made them.
Perhaps we should extend the concept a little further and develop a 100% efficient way of doing things, that just kills all the people who disagree with those who create or buy the technology. After all it appears that international agreements are only relevant when they suit the Anglo American empire anyway. Then it might dawn on you that perhaps the technology to support this new fascism isn't quite so cool.
Where you been at, bro?! GLC said it, loud and clear: "Guns don't kill people. Rappers do!"
loss of check and balance
Today any government that wants to wage war or repress its people has to have the cooperation, or at least acceptance, of a fairly sizable chunk of the population (the armed forces and, to some extent, their friends and families).
This creates a check and balance on central government, especially as access to global media becomes more and more widespread.
A robot army completely removes that check and balance and puts total control in the hands of a small group of people that we hope are not incompetent and not evil... do you think those are rare traits amongst politicians?
If US soldiers can't differentiate between hostile and non-hostile targets (because of the targeting system no less, ie the stuff that would power the AI, not even because of plain stupidity) why the bloody hell should we expect their robots to?
I'm aware that this has been bubbling under for years; when I was at Reading Uni, Prof Kevin "Captain Cyborg" Warwick spent a lot of time saying that there should be legislation put in place as soon as possible to regulate the use of AI, especially when linked to military systems.
At the time a lot of people thought he had been watching too much Terminator 2, but in retrospect he was probably quite ahead of the game.
As Ross pointed out, price is a huge differentiating factor.
The major problem with landmines is that they aren't economically worth removing once they've served their purpose, unless someone sees some real financial value in reducing subsequent civilian casualties.
Unless/until autonomous weaponry gets into the 'disposable' price range, it's generally going to be treated rather more conservatively.
First of all, and most important, HAL never said a thing about 20 seconds to comply. That was ED-209.
2nd, Jesus Puncher? You are seriously taking the position that it's somehow more morally justified if a Human kills a Human than if a Robot kills a Human?
In both cases, a central command sets ROEs, Rules of Engagement. People who meet certain criteria get shot. People who do not, do not.
One of the main differences? Speaking as a former soldier who has operated under a set of ROEs where the right of self defense was specifically denied, our response was "Better to be judged by 12 than carried by 6". Yep, we specifically decided, beforehand, that we would not obey this order.
Another is that sensors and subroutines have to be good enough to identify the targets properly, but that's an implementation detail that can be worked out. In either case, the leadership that sets the ROE is ultimately responsible.
You lose the safety check where the individual soldier is called upon to disobey unlawful orders, but since the ROEs for robots would be decided in an Air-Conditioned building in Tampa, Florida, with the word CENTCOM over the door, well, no illegal orders would be sent.
0 - Yes, I know, a Court Martial doesn't actually fit this statement, but the sentiment is the same.
1 - Well, that's a likely place for Leftpondians to make that decision. Fill in appropriate locations for your own military leadership.
Are Humans better than Robots?
As long as it's only an ice cream, not a table leg then? And as long as she's not Brazilian?
Why not just enforce the laws against murder?
The long-term solution (and the only one) is to abolish war. If a civilian kills someone (whether a soldier or not) it is murder; ditto if a soldier kills a soldier from another country without a war being declared. Why is murder suddenly allowed and encouraged when there is a war on?
Just the same as with war crimes, orders from above are no excuse - if you kill or maim a person, you will go to jail / the electric chair / whatever... Of course, if an enemy soldier is trying to kill you, that is self-defense and acceptable.
Robots could capture the enemy soldiers and/or break their weapons, but not kill them...
Killing a robot - that's not murder, so war would become basically an economic exercise in who can best keep up the production of robot cannon fodder.
If all (or most) countries ratified and actually enforced this, soldiers on all sides would think twice about pulling the trigger, and eventually no country would be able to go to war offensively.
Yes I know it will need a sea change in society and civilization. But look at what happened to duelling - up to 150 years ago it was still acceptable, now it is considered pretty stupid in most places.
No, I'm not smoking anything. Perhaps I should be :-) Yes, I know it won't happen soon. But sometime in the next hundred years machines of war will be so efficient that another solution to international problems will have to be found...
"2nd, Jesus Puncher? You are seriously taking the the position that it's somehow more morally justified if a Human kills a Human then if a Robot kills a Human?"
Yes 100 percent I am.
The human at least has to confront some morals (whether the same as yours or mine or not), whereas the robot will indiscriminately kill. Lewis is keen to point out that a robot will not care if its failure to kill results in its own demise; well, why not point out that the robot also will not care whether it kills the thing in front of it? It takes no more 'thought' on the part of the robot to kill or not. It makes no difference. It is an indiscriminate killer following an algorithm.
A person has to make a judgement. A person has to justify it to themselves which is an important moral barrier to stop indiscriminate killing. It is easier for a person to NOT kill another person. Which is a good thing.
Or do you believe that the 'morals' or 'killing algorithms' implemented by whoever programs the robot will be the same as yours?
Would you trust the algorithms made by Boeing or NASA or whatever to make an accurate judgement call on whether to end someone's life?... Would you trust these algorithms at any point? No matter how advanced? Ever? With your kids?
Go Soldier Boy! Let The Bodies Hit The Floor. Waaaaa....
Quite an unpleasant film that seems appropriate in this case...
@ Jesus Puncher
"2nd, Jesus Puncher? You are seriously taking the the position that it's somehow more morally justified if a Human kills a Human then if a Robot kills a Human?"
Yes 100 percent I am.
Have you ever risked your life (or anything) for someone else? Have you ever done anything remotely heroic? From your hippy-speak it sounds as if the answer to all these questions is a big No. Otherwise you might not feel the same way.
If "our side" gets to use robots to kill the enemy (who have been slated to die anyway) and lessen the chance for loss of life by 50% (our guys are safe in a bunker) it's a win-win situation. The bad guys get killed and our guys don't. This is good. Any argument in which humans must fight humans means that you actually support more people getting killed or harmed - sort of weird logic if you ask me.
Comparing Lewis Page to Hitler is like comparing apples to donuts - it's just silly.
We'll need em now! Sonny hit Bruno Bin Laden tree a clock last night.
Weapons like this are natural selection of those too stupid to make them, are they not?
@ Jesus Puncher
Your argument just basically rehashes dilemmas that have been going on with war since the first guy who ever bashed someone in the head with a rock came up with the brilliant insight that if he THREW the rock instead he might be able to bash someone in the head from 10 yards away.
Personally, I think remote human operators are prone to all sorts of errors, since they do not have first-hand information on the conditions at the site. Second, communications between the remote operator and the machine may be susceptible to jamming, hacking or other interruptions, which may create dangerous situations with the robot. Third, what if the robo-soldier hesitates when it has a chance to kill a target, and that target then goes on to kill a number of people who would otherwise have lived - is that outcome morally superior to the robot acting and killing the target based on embedded programming? Finally, what if you have an actual human in the battle and he just freaks out and kills the wrong people? Is that morally superior to a robot doing the same thing?
Killing robots is wrong
Killing robots is wrong.
Paris wouldn't do it.
Does Solomon Grundy dream of electric sheep?
"Have you ever done anything remotely heroic?"
So... are you saying that sending a killing robot into some country hundreds of miles away from your home to protect your "freedom" is heroic??
Well you're obviously some sort of hero that I couldn't possibly be.
"From your hippy-speak it sounds as if they answer to all these questions is a big No"
Look. Let me get this straight. I would happily tear off a person's head with my bare hands in front of their smiling children if it was to protect someone I loved. But to do that, I would have to know that it was indeed to protect someone I loved.
To have some automated mechanical killing thing deliver me a report of the death would not sit well on my conscience. I do not and will not ever trust a machine with the responsibility for something like taking life, because I do not and will not trust anyone to come up with a decent enough algorithm to describe the implications of it in 0s and 1s.
"Comparing Lewis Page to Hitler is like comparing apples to donuts - it's just silly."
Yeah, but, come on, artistic licence.
And what if "their side" is the one dispatching autonomous killing machines in your village whilst "they" watch the football game?
If you and them are busy fighting, hurting, and dying, then you are more likely to reach a diplomatic compromise which is more likely to be fair to both parties. If you send in a team of slaughter-bots then you won't very well care what they have to say.
...But yeah, I know, it's a war... Do what you can to win... Well sure, so why not nuke the Middle East now? I mean, I'm sure babies born there are statistically more likely to become suicide bombers anyway, so who cares? Psht. Morals schmorals. It's all about winning. If I can press a button to automate the whole thing, then so much the better. Gives me more time to go sunbathing.
How long until there is an El Register article on the mayhem caused by one of these machines when it completely "goes wrong" and kills someone blatantly innocent whilst on autopilot mode?
And what will happen to the programmer responsible for the algorithm? Jail? A fine? No, probably nothing.
Why don't we spend the death-tech money on taking religion out of the classrooms across the world, stopping kids from inheriting their parents' xenophobia?
(That's a rhetorical question, but I wish it was the case)
AC: "...If all (or most) countries ratified and actually enforced this, soldiers on all sides would think twice about pulling the trigger, and eventually no country would be able to go to war offensively..."
Is that it? Sounds so simple. The fastest way is to decapitate the regimes that threaten war, and then rebuild trust of the power(s) that pulled the trigger. Kinda like the U.S., Iraq, and the U.N. today. Or, we could take other risks through less aggressive action, and support the growth and permanence of civil society in chaotic parts of the world. This will work if the extreme poles of the earth's societies finally decide they can get along. Enlightened peoples will have to accept the oppression of women, for example, in some parts of the world until those areas turn from backward religion. That is the sad price of peace. Is there another answer? I don't see one. Love, anger, and shame, expressed through carrots, sticks, travel, trade, and communication are the nonviolent tools we have.
just like stupid @sshats who think that because they get away with kicking cops and American soldiers, they should be able to get away with abusing wild animals (tigers) without consequence. Or think they can abuse police/military in other countries like they do here without something really bad happening. Singaporean caning for "taggers" anyone?
Here's a thought-keep your children away from the weapons bristling, tank treaded, cybernetic killing machine! Do *not* be retarded and offer it a f**king ice cream! Don't offer your ice cream to rabid dogs or aggravated horses either.
When the robot tells you to back the f**k off, do it. Don't try some hippy-dippy "rights" speech. Get out of its immediate range. Works the same with automated manufacturing systems and process management hardware. You don't argue with machines. Or Mother Nature. When the rock's falling, get out of the way. Don't complain about your "right" to stand there and "The Man" pushing you around.
But, the real spin is to keep Western technology from becoming the force multiplier necessary to keep up with the massive overpopulation of expendable humans the Chinese are preparing to deploy. Or the increase of breeding angry little human automatons for suicide missions. Robotic combat systems allow the playing field to be level, and that's something the Middle East and Asia simply cannot allow.
Throwing in some bullsh*t about "think of the children" is the attempt to bypass logic and reason with some knee-jerk emotional response.
Who Has It
Dropping the atom bomb on Hiroshima saved the lives of a lot of U.S. soldiers. Using robots to fight wars will also save lives, if they're used by a country that is defending against an aggressor. The problem with the atom bomb was that while we defeated Hitler, we didn't get rid of Stalin before he was able to have the atom bomb too.
So, if we manage to change all the non-democratic regimes in the world before any of them are able to build robot soldiers too, there would be no problem? That would mean a lot more wars, which I can understand most people would be against. But, of course, with any weapon, we can't prevent the other guy from building it first.
someone had to say it
personally i welcome our new robo overlords
(they sound more humane than my manager anyway)
Your arguments start off well-meaning - that is, killing people is abhorrent, which no sane person could argue with - but end in ridiculous conclusions.
No, killing a person is not better or more morally correct when done by humans. Personally I would far prefer to be killed by machine rather than human - for a start it'll probably be done far more efficiently and therefore much less painfully. But in the end it's not really much consolation to the deceased's loved ones, knowing that some idiot with a twitchy trigger finger accidentally killed their father/son/husband as opposed to a programming error. In fact it wouldn't be any consolation knowing that the killing was in some small way justified, that if it hadn't been done their loved one would have murdered 1000s of innocents himself. I have personal experience of this. No, my father would not have killed 1000s of innocents, but he was killed due to an error of judgment. Believe me, knowing that it was a human error doesn't make his death any more bearable. It might just as well have been some killer robot, and yes, thinking about it like that makes me laugh too.
So basically shut the fuck up. The argument for machines that discriminate, regardless of the threat to themselves, is a good one. The idea that good decisions will always be made by people under threat or a variety of other emotional or stressful conditions is absurd. Humans are as prone to error as any machine - but at least the machine will only make the errors it has been told to make.
And if you really want to get philosophical about it, how about this? Machines, or the machines that make machines, are all human creations. Therefore ultimately a human was responsible for pulling the trigger. A human deployed the machine, a human programmed the machine and a human decided what conditions were correct for the machine to take life. The only thing the human didn't do is push the final button - but he might as well have done so, because in essence all he did was left it on an egg timer with a goto loop and an if/else subroutine.
Killing an innocent by pulling the trigger yourself or programming the trigger to fire after all the conditions you can think of have been met, tell me which is better.
The fact is they are identical. On the one hand the computer (brain) is calculating the conditions it's been programmed with (orders), and makes a decision on whether to kill. On the other, you've programmed your gun to fire when all the conditions (orders) have been met, only this time you won't mis-identify those conditions due to fear or poor eyesight, rather you'll do it by poor programming.
You're not addressing a few very significant points made by Jesus Puncher and others:
1. People find it easier to abuse the rights of others when they are removed from the situation in which those rights are abused, so having machines kill people under remote control, or worse, based on some pre-written code removes the safety-check of peoples' consciences.
2. Governments currently need the support of a lot of other people if they are to abuse peoples' rights. Being human, it's possible to appeal to those people (eg the police). Machines do not listen to reason (see Speed Cameras).
3. Legal accountability: decisions made by a machine will be legally attributed to a corporation. Bad decisions might be punishable by a fine (insignificant to the corporation), but essentially they are unaccountable. It's bad enough with the police getting away with murdering an innocent person on the tube without repercussions.
4. Machines are easier to centralise than human beings (armies, police). This makes it much easier for a small group of corrupt individuals to wield a lot of physical power.
I'd have expected an article like this in the Telegraph, not the Reg, but then things are changing around here.
anon because my tinfoil has run out.
maim, not kill
A bigger concern might be that the killer robots purposely try to maim rather than kill. I hear that a badly injured soldier is more of a drain on his military than a dead one. Consider a robot rifleman that shoots only at legs, arms and lower torso. A robot would be good at this because it has good aim, it has no qualms about being cruel, and it doesn't really care if its selective shooting isn't the best way to guarantee its own survival.
Mind you, this isn't anything new with robots. Other technologies have been designed for maim-factor -- landmines, fragmentation grenades, hollow-point ammunition, diseased cows in a catapult etc.
Quick! Get the little ice-cream girl a tourniquet!
@Anon Coward, Andy Bright
Thank you for clarifying some of these thoughts, which I admit have come out a little incoherently in some of what I have written.
Yes accountability. If a soldier walks into a village and kills all the children, he will face punishment, retribution, his own guilt, and things will happen to him to deter others from making that same decision.
If a robot does so, it will be turned off, and the programmer will get a slap on the wrists.
If the programmers / robot-construction people had the same accountability that a gun-carrying soldier does, then the trend for efficient killing machines would stop.
"but at least the machine will only make the errors it has been told to make"
So the machines are 'just following orders'. Sure; then the person responsible for giving those orders needs to bear the same responsibility as if he carried out the deaths himself, absolutely, with no softening of punishment. If ten people write the code for a robot that gets manufactured thousands of times, and in the various battlefields 50 innocent people get killed, then all ten of the programmers need to be punished for murder, manslaughter, or the equivalent, for 50 lives.
Won't somebody think of the children?
I'm referring to those children who grow up to be soldiers - but are still somebody's son or daughter. Those are the people who - whether we wanted them to or not - have been putting their lives on the line in Afghanistan and Iraq. Anything that makes their lives safer I'm inclined to believe is a good thing.
And please don't argue that these people are volunteers. They volunteered to fight for their country, not to die. Until we get to the perfect world where we do not need soldiers, someone has to risk their lives to defend us. And I'm all in favour of making that job as safe as possible - I don't think anyone mourns a trashed machine in the same way as a son, sister, or spouse.
Damn you beat me to it!
I also welcome our new killer robot overlords. We are unworthy, inferior meatbags and deserve our destruction.
ALL HAIL THE MIGHTY KILLER AUTOMATONS!
shouldn't this article be filed under ROTM?
>We here on the Reg killer-robot desk maintain a guardedly open mind
> regarding autonomous lethal systems"
>"Yet that is not the tone of this article. You're a cold heartless person."
Not strictly - the Reg killer-robot desk is, as it says, manned by killer robots such as the Lewis-Page-9000 model. I believe the Cold and Heartless settings can be adjusted by the editor if required.
"We don't want to move towards robots that make decisions about combatants and noncombatants."
Nor me. I'd stand very, very still, or at least move into cover.
Mine's the kevlar lined one, thanks.
The bombs were dropped weeks after Japan started negotiating the terms of its surrender. The war had been extended in order to show the Russians just what a devastating weapon America had developed, and it started the Cold War and the nuclear arms race.
Perhaps if the military had any honour they would go back to fighting the old-fashioned, honourable way. You know, where they all meet up in a field somewhere and slaughter each other away from the wife and kids.
In fact that might be the way to do it: both sides create killbots, then round up anyone who has ever said "Those people are risking their lives to save your x". No need to train or arm them - they just have to form an orderly queue at the enemy's killbot. Then whichever side runs out of lemmings first loses. You could even sell the TV rights and have it on pay-per-view.
The culture of death... snuck in with the cultural revolution
Robot wars... Remote-controlled devices are not, by definition, robots. Whether one accepts that or not, it is indisputable in fact that the mechanisation of war and genocide through remote-control devices, including cluster bombs and landmines, has done nothing to make the world safer - rather the opposite. Nothing will stop the killing of humans in war, because revenge is a human emotion. Someone on the site said robots will not be vulnerable to emotional errors... but neither will they be able to exercise pity, kindness or discernment based on inbuilt human detection systems developed over thousands of years. Soldiers acting like robots in Iraq prove that case beyond question.
I suppose an unfortunate consequence of computer games is that a substantial part of this generation is even sillier than the last one, but with numbed brains when it comes to the culture of death.
Vietnam produced the hippie era, and then came the post-80s lack of musical talent and the profound amateurism and ugliness of 'rock entertainment', which became the reference point; with the death of culture came the culture of death as a solution to all problems. In fact TV is now at the point where suicide is a preferred option to watching the American propaganda and stupidity which controls it... if one cannot simply turn it off.
The collusion between politicians and the military against protesters became entrenched as an accepted, if not incontestable, tragedy in the emergence of the new feudal system... the New World Order, seriously under construction in the 1980s, when spying upon every one of the world's citizens who used electronic devices such as computers, card machines, then electronic faxes, emails and mobile phones became a reality through Echelon and Inslaw's 'PROMIS'.
As soon as I press the send button this email will be perused and spied upon by Echelon using keyword priority. If the keyword, or my flagged address, is there, it will be recorded. If you pay a bill by card or over the phone today, the entire process, the situation and your locality will be recorded and made available to 'selected' paying customers, including your own government, that of Israel, the USA and your tax office, for example.
Resisters to regimes, as shown perhaps by their movement records in PROMIS and Echelon, are then termed 'terrorists' by such vile regimes as invaded Korea, Vietnam and Iraq, to mention but three planetary disasters, and hunted down and tortured and/or killed, along with selected associates.
Assassination is a culture which has, through Mossad and the CIA alone, done away with scores of thousands of dissenters since the mid-1980s... forgetting the millions killed in Iraq, Rwanda, Sudan, Bosnia, El Salvador... so many places... ordinary innocent people slaughtered... and here we have recommendations for even less personal killings. The thought that this would do away with the killing of humans is ludicrous... the use of robots will not stop that... it will, however, be another link in the denial of responsibility for war crimes.
In closing, what would be preferable would be a move away from all automated war, and the fulfilment of an old adage: send the politicians who desire war to be the front-line combatants.
Only one thing can save this planet: good example. Force can change nothing; in fact it exacerbates problems. The invasion of Iraq, even though Hussein offered to exile himself, has been the ultimate proof of the failure of that option.
It is difficult for me to accept how readily long-range killing, and the accompanying media lies and political lies on a grand scale, have been accepted by the present generation as a part of life. It seems to me the future lies in a return to the better aspects of post-WWII morality, not where we are going at the moment.