Engineers, coders – it's down to you to prevent AI being weaponised

Debate has raged for months over various internet giants' forays into providing next-generation technology for war. For example, in March, dissenters at Google went to the press about the web goliath's contract with the US military's Project Maven, which aims to fit drones with object-detecting AI among other things. This US …

  1. Voland's right hand Silver badge

    Bollocks

    and is for non-offensive uses only.

    One word: Bollocks.

    1. The Man Who Fell To Earth Silver badge
      FAIL

      Re: Bollocks

      Google AI folks & the like can stroke their egos & delude themselves all they want, but the only way they can avoid contributing to AI military weapons development is to get out of the AI business entirely.

      Anything else, like signing petitions, is just intellectual masturbation and everyone knows it.

  2. Pascal Monett Silver badge

    AI principles, yeah

    I followed that link, and found exactly what I expected: a nice, touchy-feely, heartfelt list of things goody-two-shoes Google promises to do and not do with AI. Nice to see they have found the light.

    But I'm sure they did it with the best of intentions.

    1. annodomini2

      Re: AI principles, yeah

      Until they set up another business unit under Alphabet and transfer knowledge and tech; "but it's not Google!"

  3. This post has been deleted by its author

    1. Pascal Monett Silver badge

      That it may not have, but it curtailed the hell out of computers, to the point where they had to invent mentats - human computers (because human, it was okay).

      Because advanced civilizations will always need computers, whatever the form.

  4. Alan J. Wylie

    Dual use is hard.

    Many years ago, I worked on computer-aided mapping: semi-automated line following. Measuring the boundaries of all the woodland in the UK to calculate the total area, better 1:1250 maps with accurate buried utilities to stop backhoes cutting fibre optic cables - what could possibly be wrong with that? Then came the Falklands war. Digitise the contours and produce a wire-frame perspective of Mount Tumbledown as viewed from Port Stanley, please.

    A few years later, I worked on CNC blade tip grinders to make jet engines more fuel efficient. Making 747s greener is great. But what if the US Navy want some for their fighters? Or the Army for an AGT1500 turbine in an M1 Abrams tank?

    1. John Sager

      Re: Dual use is hard.

      +1. This stuff is going to get used for military purposes, just like all the other technical advances going back into pre-history. The better we know what AI can and can't do, the better we understand what potential adversaries have available. And it'll also get used by 'our side' - for better or worse it's the politicians who have to make the judgement calls, though no doubt there'll be a lot of screaming and virtue signalling going on in the process.

      FWIW I think AI is pretty shit at a lot of this stuff currently but it will get better. The Met's fiasco of a face recognition trial demonstrates that, but the Chinese seem to be getting much better at it.

      1. James 51
        Big Brother

        Re: Dual use is hard.

        I doubt the Chinese are much better at it; they're just willing to pour more resources into it, and no number of false positives is too many as long as there are no false negatives.
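
        A minimal sketch of that trade-off in Python, with purely illustrative score distributions (no real system's numbers): push the decision threshold down and false negatives vanish while false positives balloon.

        ```python
        import numpy as np

        # Hypothetical match scores: 99,000 innocent faces, 1,000 genuine targets.
        rng = np.random.default_rng(0)
        innocent = rng.normal(0.3, 0.1, 99_000)
        genuine = rng.normal(0.6, 0.1, 1_000)

        for threshold in (0.5, 0.4, 0.2):
            fp = int((innocent >= threshold).sum())  # innocents flagged
            fn = int((genuine < threshold).sum())    # real matches missed
            print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
        ```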

      2. jmch Silver badge

        Re: Dual use is hard.

        "And it'll also get used by 'our side' "

        The use of the phrase 'our side' is one of the big problems here. Dividing into us and them is what makes it easy for a half-trained army operative to press the red button. The AI said that it's 'them', and that's that. Being fed over-hyped claims about AI capabilities doesn't help, of course.

        Yes, any technology developed for whatever reason can and will be used as a weapon. But ultimately there is a guy pulling the trigger, a guy telling the guy to pull the trigger, and a guy* further up making damn well sure that triggers get pulled on command, OR ELSE. And these guys specialize in creating an us vs them. Classic divide and rule. Just don't believe the arseholes peddling this shit.

        *It's almost invariably a guy

    2. Korev Silver badge

      Re: Dual use is hard.

      My physics teacher worked on technology used in head-up optics; when footage came back from the first Gulf War, it brought home the realities of what he was working on, and he changed careers.

    3. Anonymous Coward
      Anonymous Coward

      Re: Dual use is hard.

      Reminds me of when, as an idealistic student many years ago, I put "prefer not to work on military projects" in the "any other comments" section of a job application form .... and got tackled on this in a first interview by the interviewer: "So you don't want to work on military projects - like missiles etc ... so what about working on a database system that gets used to ensure those missiles get deployed to the right places with all the right parts - would you want to work on that?" Don't think I got much further with that application!

    4. nijam Silver badge

      Re: Dual use is hard.

      Yes, better knives meant better swords, better engineering meant better guns, ...

      Always has, always will. It's human nature.

      1. Alan J. Wylie

        Re: Dual use is hard.

        better engineering meant better guns

        Sir Joseph Whitworth's rifle

  5. James 51
    Black Helicopters

    It's only a matter of time before that kind of pattern analysis gets applied for notionally non-military ends such as advertising and searching for terrorists/subversives/members of non-ruling political parties/not in the right clique of ruling parties etc etc with equally incompetent results. Why have 1984 or Brave New World when you can have both with a side of Brazil as garnish?

  6. Dan 55 Silver badge
    Mushroom

    See also...

    - "Engineers, coders – it's down to you to prevent IoT exploits"

    - "Engineers, coders – it's down to you to make better UIs"

    - "Engineers, coders – it's down to you to increase software quality"

    Oh dear.

    1. deive

      Re: See also...

      Sounds about right - managers get all the extra money for the "responsibility", then shirk it at every opportunity, leading to https://www.theregister.co.uk/2015/10/10/vw_boss_engineers_blame/

  7. Multivac

    ...... and physicists

    It's down to you to stop atoms being weaponized, and chemists, you do the same for chemicals!

    1. AdamWill

      Re: ...... and physicists

      "It's down to you to stop atoms being weaponized and chemists you do the same for chemicals!"

      Well...uh...quite. Physicists and chemists have been struggling with this for decades/centuries (respectively). Haven't you *read* about how Oppenheimer and the rest of the Manhattan Project folks struggled with the implications and consequences of their work?

      1. Charles 9

        Re: ...... and physicists

        There's another issue, too. What happens when it comes time to feed the wife and kids? If all roads lead to Hell, do you starve your family?

  8. WibbleMe

    I would like to introduce you to AI personalities, my demo will show you Marvin, the Paranoid Android AI, too depressed and timid to do fuck all. For more information contact me at

    Douglas.Adams @ Hitchhiker - Guide - to - the - Galaxy - .co.uk

  9. John Brown (no body) Silver badge

    6000 civilian deaths

    I couldn't help but notice that the article does make the point that the AI is highlighting what may be interesting images or video and that the analyst then examines the images/video to see if they really are of interest. The article also points out that prior to any AI pre-selection, the human analyst would have looked at all of the images/video before choosing items of interest. At no point does the article claim that AI is choosing the items of interest itself and then acting on them.

    It's an interesting hook to hang the AI ethics debate on, but I don't actually see anything specific in the article which points at the AI as being the cause of civilian deaths. Is the AI making strong recommendations? Is there a lack of context in what the AI is choosing, leading analysts to see what might not be there? Is there some human failing by the analysts, such that "computer says yes" is biasing their decisions? Or is AI nothing to do with this, and it's pressure from the higher-ups, both military and political, to "get results" leading to going after targets with lower levels of confidence?

    1. Mark 85

      Re: 6000 civilian deaths

      First a disclaimer/background. I'm a former US Marine who put time in Vietnam. Read into that what you may but this is from my perspective.

      Maybe instead of blaming the AI, blame the "fog of war"? Crap happens. People die because of being in the wrong place at the wrong time. It goes back to the beginning of human civilization and warfare. Yes, war is hard on civilians. Always has been, always will be. The tactics used by Al-Qaeda are no different from those of the honored Resistance and others: hide in civilian populations and damage the other side as best you can.

      Look back at wars since wars started and civilians have always suffered the most. In ancient times, whole cities were slaughtered and destroyed. Move forward to WWII: carpet bombing of cities with incendiary bombs, etc. The atom bomb drops. Come forward some more and we have My Lai. Currently, we have terrorists, etc. still targeting civilians.

      Does this justify civilian deaths by either side? No, it does not. One would think AI has the potential (once it's truly AI and not the current BS), by offering choices and having a human select the targets, to make things more finely grained. Having said that... we still have a-bombs targeting cities.

      As has been said, "War is hell". What's needed is either that we as humans mature and find a way not to need war, or at the minimum, not take out civilians. I don't see the first being accomplished in our lifetimes, but maybe the second... maybe.

      1. fajensen
        Flame

        Re: 6000 civilian deaths

        People die because of being in the wrong place at the wrong time,

        Seems to me that: People die because of *us* being in *all* the wrong places at the wrong time!

        Why does a first-world nation spend its youth, talent, resources, and never mind those billions of USD, obsessing over the ways of a bunch of third-worlders in far-flung places that are basically no threat at all to anyone?

        Especially IF that other "we", our "allies" and those "intelligence"-community factions bent on regime change, did not arm them and point them in the general direction of someone "we" don't like, and then every.single.time are once again shocked and surprised (since nobody ever gets fired or even shot for these things) when "our terrorist freedom fighters" turn out to be just "terrorists fighting freedom" - egg.actly as it said on the tin!

        Institutionalised stupidity is what this whole "war on terror" has become. And with Mr. Flip-Flop Double-Down-on-Failure now running the show, it can only get worse. It is no big wonder that democracy doesn't exactly have the ring of quality that it used to have!

      2. AdamWill

        Re: 6000 civilian deaths

        There is, of course, a significant difference in the current situation vs. the Vietnam War or the Second World War: they were actually wars. The "war" on terrorism is not. The US is not at war with Iraq or Afghanistan or anywhere else. Which only makes this all the worse.

      3. Michael Wojcik Silver badge

        Re: 6000 civilian deaths

        Maybe instead of blaming the AI, blame the "fog of war"?

        Cori addressed this to some extent in the article, and it's been studied fairly extensively by psychologists and others. The issue with AI is that the more decision-making is delegated to technology, the easier human decision-makers find it to initiate or permit violence. That's been demonstrated in everything from organized large-scale combat to one-on-one interactions.

        When drones are using ML to pick "plausible" targets, human operators will quickly adapt to simply confirming every suggestion made by the algorithm. ("Stupid bird! Why did I leave you in charge!")

        Moral, tactical, and strategic decisions are expensive - they require significant conscious higher-order brain activity, which means they only happen when we make ourselves consider them, and they're tiring. They also carry post-commitment costs: second-guessing, guilt, etc. Those are very strong incentives to avoid them. If they can be delegated to a machine, that's exactly what most people will do.

        True, wars are full of decisions - accidental and deliberate - which harm civilians (and unnecessarily harm combatants, for that matter). That doesn't mean there aren't very serious ethical and pragmatic issues with automating that decision process.

  10. Wellyboot Silver badge

    Weaponised?

    Our soft skin, slow running speed, small teeth and almost useless claws didn't make us the dominant species. All that is needed is a little thought, and anything has lethal potential.

    Dripping water can be weaponised. Hands up who's not seen or used a Bic biro blowpipe?

    1. jmch Silver badge

      Re: Weaponised?

      "Our soft skin, slow running speed, small teeth and almost useless claws didn't make us the dominant species. All that is needed is a little though and anything has a lethal potential."

      It's not just the brain power that made us the dominant species. It's the manual dexterity that allows us to direct the thoughts into physical reality, and most of all, it was thoughtless mindless animal aggression

      1. Aladdin Sane

        Re: Weaponised?

        Also, the fact that humans are nasty, evil, vicious bastards. On a good day.

      2. Michael Wojcik Silver badge

        Re: Weaponised?

        most of all, it was thoughtless mindless animal aggression

        Oh, I think many people have put plenty of thought into their aggression.

  11. Anonymous Coward
    Anonymous Coward

    As if

    Before the 'selective' targeting there were fewer civilian deaths... Killing a house full of the wrong people is a crime, but perhaps not as great a crime as obliterating an entire village. As soon as you have asymmetric warfare, where the combatants are mixed in with and indistinguishable from the civilian population instead of in nice military camps and uniforms, this is inevitable. The interesting question is who is responsible, and there are no easy answers. In the Korean War my father's contemporaries didn't like shooting up ox carts, but when one in five is full of ammunition and explodes, and mixed in with the peasants going to market are peasants carrying supplies for the troops - where does the responsibility lie?

    1. Anonymous Coward
      Anonymous Coward

      Re: As if

      Before the 'selective' targeting there were fewer civilian deaths...

      Could you be more specific, because as I read that, I'm thinking you are very, very wrong?

      1. Michael Wojcik Silver badge

        Re: As if

        Could you be more specific, because as I read that, I'm thinking you are very, very wrong?

        I think we were supposed to read the subject as an introductory adverbial phrase, which inverts the sense: "[It's not] as if before the 'selective' targeting...". But I could be wrong about that; the thesis is not entirely clear.

        It's not what I'd call a well-constructed post. Some folks really need to do some revising before they click Submit.

    2. fajensen
      Terminator

      Re: As if

      Before the 'selective' targeting there were fewer civilian deaths.

      Bollocks! The "terrorist/civilian" ratio in the drone wars is about the same as the "nazi/civilian" ratio that granddad managed to achieve - using far fewer resources and less gobbledygook to justify it - dropping dumb bombs over Dresden and Hamburg.

      If we wanted fewer civilian deaths we could hang a few of "our side" for the same war crimes that we hung most of the losers for. I'd bet that would put some much-needed discipline and professionalism into the war-"game".

      The tragedy is that there is no personal or career risk in droning some third-worlders, so why not?

  12. LucreLout

    Sorry Cori, I respectfully disagree...

    The public needs a healthy dose of realism about how America has used and will use these technologies, and how the war on terror looks on the ground where it is waged.

    They also need to understand why drones are deployed so often. IEDs. If you want to blow up soldiers as they pass, rather than fight a ground war with them, then don't be so surprised when their mates blow you up with a drone. Our country's first responsibility is to the men & women we send to war - whether you believe they should be there or not is completely irrelevant to that point - and we owe them the very best protection that can be provided while deployed.

    they never said why Salem and Waleed were caught in the crosshairs.

    Well, most likely because of a mistake. Unfortunately in war, mistakes happen - like in any other walk of life, just with bigger bangs and worse consequences. If the enemy combatants would stop hiding amongst the civilian population, or if the civilians would simply move away from the men with guns, then collateral damage could be greatly reduced. Expecting one side not to fire back is unrealistic and unhelpful.

    A human fired the missiles, but did so, in part, on the software's recommendation.

    And they did so in part due to standing orders, rules of engagement, and the situation in the given area. I don't follow all of ReSharper's batshit-crazy recommendations (or all I'd have are untestable static classes), and blaming the software for the human acting on its mistake is missing the point. It's why we don't allow automated firing by the AI.

    in societies where most men are armed, and insurgents are interwoven and married into civilian populations, network analysis will always make mistakes.

    Those societies and men have specifically chosen to inflict a higher rate of casualties on their neighbours and family by living amongst them as enemy combatants. You spend your day shooting at soldiers and blowing them up with IEDs, then seek to complain when a drone takes out your house while you're having dinner? Frankly, that isn't a reasonable complaint to make - you made your bed, now die in it.

    Some of Google's people seemed less concerned about moral balance than they were to avoid public discussion of the contract at all.

    Moral balance doesn't mean anything. You think your morals are the correct set. I think mine are. They won't always align, so whose morals get primacy? Thus, your morals mean nothing to me, in the same way as mine mean nothing to you. You can't expect the rest of the world to work per your own moral framework. It's astounding how many seemingly intelligent people cannot grasp that simple fact.

    Weaponized AI is probably one of the most sensitized topics of AI – if not THE most.

    It is, and rightly so. I'm not sure anyone is yet advocating rolling out Terminator-style hunter-killers that purge a location of all humans, but that day will come eventually, unless terrorism is knocked on the head as a means of conflict. If you wish to be martyred, stand and fight like a conventional army. If you're frightened of dying, well, stop picking fights with other nations, and stop blowing up their civilians. If you don't care about, or deliberately target, their civvies, yours will one day become fair game, or at the very least collateral damage.

    Let's take a moment to review what that phrase really means today. It means your civilians were viewed as expendable to the achievement of the mission. If that mission is to stop your menfolk blowing up our families, then it's wholly understandable why it is considered preferable for our drones to blow up your menfolk. Unfortunately for you, that may be after they pop home for lunch, and while aiding and abetting them, you might get killed too.

    Under President Trump, the targeting rules have been made even looser, with predictable results: over 6,000 civilian deaths last year in Iraq and Syria alone.

    As upsetting as that may be, how many lives were saved due to the deaths of the primary targets, the enemy combatants? Gross numbers aren't nearly so useful as net figures. How many of our soldiers' lives are worth sacrificing to avoid what may be more or fewer civilian deaths if we use planes and tanks instead?

    Do we even know if drones kill more civvies than bombers, fighter jets, helicopters, or tanks? Are some of the objections really just emotive, because there's no risk to the life of the drone pilot?

    We all have a role to play in the debate about where AI should be used. But the most important audience is AI developers and engineers.

    We do. And the number of soldiers I've met with serious injuries and dead friends due to IEDs leads me to believe that it is preferable to deploy drones to eliminate the terrorist threat rather than having our guys out there with their ass in the breeze. See, ethical and moral standpoints vary from person to person, so while you may feel they're a great decision filter, the filter comes up short when we account for interpersonal differences.

    This is true mainly for the populations of wealthy nations. While you and I bicker on Twitter, buy crap on impulse, or do any of the things that figure in these TED-talk dystopias, Orwell is out there: for the poor, the remote, the non-white.

    Race may correlate with drone strikes, but it's absolutely not causal. The cause of drone strikes is terrorists planting IEDs, not prayer books or brown skin.

    That's why some say engineering and computer science should be regulated like the old professions: medicine and law.

    And I'd completely agree with you that they should be. However, don't for a second think that would prevent the development of autonomous drones or weapons.

    Could unethical uses of AI land developers in hot water? Sure.

    Illegal use, sure. Unethical? Not a chance. Your ethics have no bearing upon anyone's actions but your own, just as my ethical framework guides my actions. You've no more right to expect that I'll act according to your ethics than I have to expect you'll act according to mine. It's the main problem with ethics.

    That's what could solve the AI ethics debate – for those with the gift to code to think about what they are building.

    If what "I" build helps save the lives of our soldiers that would otherwise be blown up by a terrorist IED in some godforsaken part of the world, then I could sleep real easy at night. There is, after all, nothing that mandates these clowns to hide behind their wives when the drones come calling - in choosing to do so, they choose to make their families as expendable to us as they are to them.

    I don't build drone software and never have, but I certainly have no moral objection to it. Quite the opposite.

    If they chose to wield their power for good, who knows what they could do?

    Define good.

    This is the point where simplistic and emotive rhetoric breaks down. Is it good that drones save the lives of our troops? Yes, absolutely it is and they absolutely do achieve that. Is it good that drones end the lives of terrorists before they can kill more of us? Yes, absolutely it is, and again they do achieve that. Is it good terrorists hide behind their families in an attempt to avoid the consequences of their actions? No, it isn't, but who made that choice? So whose fault is it really?

    I'll get more downvotes for this than a bacon sarnie in a mosque/synagogue, but the point is there is always more than one viewpoint, and a reason why emotion must be kept out of such debates.

    1. jmch Silver badge

      Re: Sorry Cori, I respectfully disagree...

      "Our countrys first responsibility is to the men & women we send to war - whether you believe they should be there or not is completely irrelevant to that point"

      This sentence makes no sense at all to me. If a country's primary responsibility (or one of them at least) in military matters is to take good care of the soldiers who put their lives and limbs on the line to protect our security, then surely the very first duty to those soldiers is not to send them into situations that have zero net benefit (and probably even negative net benefit) to the security of the country.

      Secondly, you make the categorically wrong assumption that drones are supporting soldiers on the ground, but in fact in cases such as Yemen they are replacing soldiers on the ground. WTF is the US doing in the Arabian Peninsula anyway? Oh yes, helping to prop up the corrupt dynasties that sell us their oil by running after a 'terrorist' group that has almost zero international scope and really is a local insurgency.

      1. LucreLout

        Re: Sorry Cori, I respectfully disagree...

        This sentence makes no sense at all to me.

        That's because you're trying to conflate two separate and unrelated issues.

        If a country's primary responsibility (or one of them at least) in military matters is to take good care of the soldiers who put their lives and limbs on the line to protect our security, then surely the very first duty to those soldiers is not to send them into situations that have zero net benefit (and probably even negative net benefit) to the security of the country.

        You've assumed, without any evidence, that the current deployments have zero benefit, which is at best a debatable point, at worst pure ignorance. It simply doesn't matter whether you agree with the troop deployment; the duty once deployed is to protect them until they can be brought home safely. There's no room for variance or lefty whataboutery there. Sorry.

        Secondly, you make the categorically wrong assumption that drones are supporting soldiers on the ground, but in fact in cases such as Yemen they are replacing soldiers on the ground.

        Actually, you have misunderstood - that they replace troops on the ground is precisely my point. Too many ground troops were getting blown up by IEDs, which led to drone development, investment, and deployment.

        WTF is the US doing in the Arabian Peninsula anyway?

        Utterly irrelevant to the debate at hand.

        Oh yes, helping to prop up the corrupt dynasties that sell us their oil by running after a 'terrorist' group that has almost zero international scope and is really is a local insurgency.

        You've fallen off the fact waggon and into the swamp of your own idealism and political views here. Let's stick to the facts that may be established and keep the debate on focus.

        1. jmch Silver badge

          Re: Sorry Cori, I respectfully disagree...

          "That's because you're trying to conflate two seperate and unrelated issues."

          You made the claim that whether one believes the troops should be there or not should not be a consideration with regards to their safety. I'm merely pointing out that the troops being in a war zone is 'de facto' the primary source of danger to them, and therefore it is by far the first duty of army leaders to only deploy soldiers if absolutely necessary. I think that is incontestable.

          "You've assumed, without any evidence, that the current deployments have zero beenfit"

          You seem to be assuming, also with zero evidence, that these deployments do have some benefit. Of course I don't have any proof, because this is one of those things where a counterfactual isn't really possible. But history has countless examples showing that terrorists are more easily beaten by political dialogue than by military force. And those are facts that have nothing to do with my supposed "idealism and political views".

          1. LucreLout

            Re: Sorry Cori, I respectfully disagree...

            You made the claim that whether one believes the troops should be there or not should not be a consideration with regards to their safety. I'm merely pointing out that the troops being in a war zone is 'de facto' the primary source of danger to them, and therefore it is by far the first duty of army leaders to only deploy soldiers if absolutely necessary. I think that is incontestable.

            Incontestable as you feel it may be, it isn't relevant.

            You seem to be assuming, also with zero evidence, that these deployments do have some benefit.

            Wrong. There's plenty of evidence that they do have benefit. How many attacks has OBL launched this year? None, because he's dead. How many of his own people did SH gas this year with chemical weapons? None, because again, he's dead. And so it goes.

            But history has countless examples showing that terrorists are more easily beaten by political dialogue than by military force.

            No it doesn't. It suggests that military force to eliminate the threat works best, then dialogue with the few remaining survivors - see ISIS, AQ, etc. for evidence. Even the IRA were militarily beaten - split top to bottom by intelligence assets and with limited remaining funding, they had no choice but to end their "war".

            1. jmch Silver badge

              Re: Sorry Cori, I respectfully disagree...

              "There's plenty of evidence that they do have benefit. How many attacks has OBL launched this year? None, because he's dead. How many of his own people did SH gas this year with chemical weapons? None, because again, he dead. And so it goes."

              As I said, this type of argument is moot because there is no counterfactual. However, it's not too controversial to say that without the power vacuum after SH's removal, ISIS would never have emerged.

              In the case of OBL it was a targeted incursion on a known terrorist mastermind. I fully support his termination with extreme prejudice and body dumped at sea. But that's not the same as lobbing a missile at a house because the occupant fitted a profile based on AI which most probably is not as smart as is claimed.

              *Incidentally* A genuine question that occurred to me - if a target is flagged up as a probable terrorist, and if there is good enough surveillance on the target to know when he's at home (because you wouldn't want to waste a missile on his home if he's not in!), then why not attack when he's 'at work'?

      2. Cavanuk

        Re: Sorry Cori, I respectfully disagree...

        "but in fact in cases such as Yemen they are replacing soldiers on the ground."

        Thereby preventing them from being killed and maimed.

        Perhaps instead of targeted drone strikes that unfortunately also kill some civilians, we should just take out the entire village, town, etc.? No? Then ground-force invasion must be your solution?

        Some of the comments here are ridiculous. No one in the West wanted the ISIS caliphate to be set up and start terrorizing populations. Should we have let them just commit genocide against the Yazidis? Not our business? Charming attitude. Drone killings are a much better option than random bombing or ground invasion.

        As was said above, if you don't want your civilian family killed, don't shelter with them.

        There is nothing wrong with the basic concept of weaponized AI, if it helps to more accurately target the other side (yes, "the other side". It's a valid concept). The alternatives lead to higher casualties and threaten our own military.

    2. Anonymous Coward
      Anonymous Coward

      Re: Sorry Cori, I respectfully disagree...

      Yes, the point of all this targeting and intelligence is not to blow up more of the wrong people with bigger weapons, but to blow up fewer of the wrong people using smaller weapons. So if the result of my campaign to block AI weapons development is that more innocent people are killed, and more civilian property is destroyed, then what is my moral position?

      1. LucreLout

        Re: Sorry Cori, I respectfully disagree...

        So if the result of my campaign to block AI weapons development is that more innocent people are killed, and more civilian property is destroyed, then what is my moral position?

        Has that been established? I've not seen anything remotely resembling a fact that suggests it has.

        Before we can establish that "AI" is a problem here, or even part of a problem, or even that a problem exists, we have to first establish what the collateral-to-target ratio is for "dumb" weapons. Then we can work out, per target eliminated, whether the collateral damage is better or worse. Then we can debate whether or not it is socially acceptable.

        None of that work has been done yet.
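
        A minimal sketch of the comparison being proposed, in Python; the casualty figures are invented purely for illustration, since - as noted above - no such dataset has actually been assembled.

        ```python
        # Hypothetical per-weapon tallies: targets eliminated vs collateral deaths.
        # Invented numbers; the point is the calculation, not the data.
        tallies = {
            "dumb bomb":    {"targets": 100, "collateral": 400},
            "drone strike": {"targets": 100, "collateral": 150},
        }

        for weapon, t in tallies.items():
            ratio = t["collateral"] / t["targets"]  # collateral deaths per target
            print(f"{weapon}: {ratio:.1f} collateral deaths per target eliminated")
        ```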

        1. Anonymous Coward
          Anonymous Coward

          Re: Has that been established?

          Ask the citizens of Guernica in 1937, London, Plymouth or Sheffield in 1940, Germany and Japan in 1945, of North Vietnam or of anywhere along the Ho Chi Minh trail in the '60s or '70s....

    3. jmch Silver badge

      Re: Sorry Cori, I respectfully disagree...

      "If you wish to be martyred, stand and fight like a conventional army. If you're frightended of dying, well, stop picking fights with other nations, and stop blowing up their civillians"

      Well, that's useful advice to the downtrodden citizens who are victims of a dictatorship. Maybe the French Resistance should have faced the German army, tanks and all, on the field of battle, just so it wouldn't be so inconvenient for Himmler to root them out?

      Just as you say, everyone has their own moral code, so maybe your moral code is fine with blowing up people who may or may not be 'terrorists' (by whose definition, anyway?). In my book, targeting people to kill them just because they have a different moral code than my own is not OK. If they do try to come to my country and blow shit up, by all means do whatever is necessary to stop them.

      1. LucreLout

        Re: Sorry Cori, I respectfully disagree...

        Well, that's useful advice to the downtrodden citizens who are victims of a dictatorship. Maybe the French Resistance should have faced the German army, tanks and all, on the field of battle, just so it wouldn't be so inconvenient for Himmler to root them out?

        In hiding amongst the populace, the resistance accepted there would be significantly higher French civilian casualties and collateral damage. You may or may not view that as legitimate after the fact, but that is the choice they made. Did it end the war quicker? Maybe, but it definitely ensured innocent civilians were caught up in the sweeps and firefights.

        Just as you say, everyone has their own moral code, so maybe your moral code is fine with blowing up people who may or may not be 'terrorists' (by whose definition, anyway?).

        My moral code is perfectly fine with executing terrorists. I lose exactly not one second's sleep over it.

        The only definition that matters is that of the guy with his finger on the drone's trigger. Beyond that, there are international conventions and definitions for these things. Drone strikes where civilians have been injured or killed have all occurred on battlefields where terrorists are hiding amongst the civilian population. If they allow that, or fail to leave the area, then they are accepting the risk of becoming collateral damage. Sorry, but we can't simply not shoot back - it isn't an option.

        In my book, targeting people to kill them just because they have a different moral code than my own is not OK.

        That's not what we're doing. We're targeting them because they are terrorists who are actively engaged in the murder of innocent civilians - those not shielding combatants of any variety for any reason.

        If they do try to come to my country and blow shit up, by all means do whatever is necessary to stop them.

        There are those that would argue that is precisely what we have been doing since 2001.

    4. Anonymous Coward
      Anonymous Coward

      Re: If you wish to be martyred, stand and fight like a conventional army.

      i.e. gather all your mates in a large open space, away from any inhabited areas (desert would be great, if available), get ready with your AKs to face your opponents' conventional army...

      ....

      Oh, pardon me, you said "IF YOU WISH TO BE MARTYRED", look, here it comes.... BANG.

      CONGRATULATIONS, you

      a) stood and fought like a conventional army (tick)

      b) got martyred (tick)

      MISSION ACCOMPLISHED.

      1. P. Lee

        Re: If you wish to be martyred, stand and fight like a conventional army.

        I have to agree with you here.

        The US military hegemony is so large that expecting a 19th-century meeting on a battlefield is ridiculous. The only way to win is to make the cost of war unacceptable to your opponents. In the case of the US, that means ensuring that the true (or exaggerated) civilian cost is publicised to US voters.

        The US strategy is to minimise the loss of US life with overwhelming firepower. Their opponents' strategy is to make every US strike an expensive one. I'm sure Sun Tzu would have something to say about using your enemy's strength against them.

        Perhaps, rather than debating military strategy, we should be examining the civilian political decisions which lead to fighting.

    5. rg287

      Re: Sorry Cori, I respectfully disagree...

      You spend your day shooting at soldiers and blowing them up with IEDs, then seek to complain when a drone takes out your house while you're having dinner?

      Here's where the crux of your argument fails.

      Back in Iraq, Coalition forces frequently came under mortar attack - i.e. indirect fire.

      Frequently, they could spot someone (colloquially known as "a dicker") on a mobile phone or radio who was quite blatantly the spotter calling in the fire, but they were not authorised to engage because they were very deliberately NOT carrying a weapon and posed no direct/apparent threat to the troops.

      They didn't need AI to do this, but fundamentally that's a signature behaviour. You're under fire, you can see one person with Line-of-Sight who is on a phone, it's probably them calling it in.

      This was escalated and the ROE was changed so that someone who was part of a mortar team could be engaged irrespective of whether they personally were carrying a weapon.

      Was that fraught with risk? Yes. Did you risk shooting some poor unarmed sod who was on the phone to their mum? Yes.

      But the worst that happens is you shoot someone accidentally. That's quite bad, but it's better than following them home, dropping a 500lb bomb through their roof and killing their wife, children and extended family in the process.

      When a drone lobs a bomb into a house (with - inevitably - a fuzzy number of occupants, with unknown identities), you risk enormous collateral damage.

      When you industrialise collateral damage and make it accepted practice, you commit a war crime (except this isn't a conventional declared war, it's "counter-insurgency", so the USA likes to wash its hands in the grey area, same as the non-POW "enemy combatants" in Gitmo).

      Blowing up someone's family has nothing to do with your duty to your servicemen, or any Military Covenant. It's sloppy practice and - as others have pointed out - there's no duty to servicemen. Officially there are no ground forces in those locations. We're just flying in and bombing - proving the adage that a war cannot be won remotely. You need boots on the ground. And if there were boots on the ground (limited SF), then you'd fly in teams in helicopters (bypassing IEDs) for targeted snatch/kill missions.

      1. LucreLout

        Re: Sorry Cori, I respectfully disagree...

        But the worst that happens is you shoot someone accidentally. That's quite bad, but it's better than following them home, dropping a 500lb bomb through their roof and killing their wife, children and extended family in the process.

        Thankfully, that isn't what happened or how drone strikes are scheduled. Certainly it'd mark them as a person of interest and they'd be followed (likely from a drone camera) - the intelligence opportunity would outweigh the benefit of taking out a single spotter after the fact.

        When a drone lobs a bomb into a house (with - inevitably - a fuzzy number of occupants, with unknown identities), you risk enormous collateral damage.

        Fortunately, they only bomb houses where terrorists are known to be hiding. Yes, it is hard to account for collateral damage, but then, if daddy is a terrorist and insists on coming home at night, well, then daddy is a moron who has chosen to put his family at risk. They have chosen to let him.

        You can't simply allow terrorists to escape because they seek to use their own families for cover. If they choose to endanger them, then that is their choice. The drone pilots do their level best to reduce collateral damage, because when dropping a 500lb bomb, there will always be some (same as the poor sod calling his mum who gets a bullet through the face).

        then you'd fly in teams in helicopters (bypassing IEDs) for targeted snatch/kill missions.

        Worked real well in Somalia, no? Black Hawk Down is a very prettied-up - turn disaster into victory - illustration of why that doesn't work so well as a plan. On paper it's fine, but in the real world it's way better to send a drone.

  13. Halcin

    How Many?

    3100 signed the letter, but out of how many? How many others are willing/coerced into cooperating?

    I seriously doubt people in countries like $Country* will be signing up anytime soon.

    *I have a horrible feeling the list is too long to list.

  14. Daytona955

    Engineers, coders – it's down to you to prevent AI being weaponised

    It's a bit flippin' late for that!

    When I was a fresh-faced grad on the milk round in 1978, one of the companies was proudly showing video of an 'AI' targeting system tracking a tank moving over rough terrain with visual obstructions.

    I didn't take the job. But I'm under no illusions that my decision made any difference at all to the development of automated targeting systems.

    4% of Google's staff might feel better about themselves, but it won't stop people getting killed by automatically targeted weapons.

    Shutting the stable door ~5 decades after the AI horse has bolted...

    1. John Brown (no body) Silver badge

      Re: Engineers, coders – it's down to you to prevent AI being weaponised

      "I didn't take the job. But I'm under no illusions that my decision made any difference at all to the development of automated tagetting systems.

      4% of Google's staff might feel better about themselves, but it won't stop people getting killed by automatically targetted weapons."

      The difference between you back then and the Googlers now is that they have acted as a group and got international publicity, partly thanks to Google's own position as a major bit of the internet. It's how grass-roots movements start and public opinion changes. Having said that, I doubt there will be much difference made in the short to medium term, but it could conceivably lead to an equivalent of (or addendum to) the Geneva Convention. Land mines and cluster bombs are used a bit less these days, and in more controlled fashion than previously, due to publicity and public opinion.

  15. Anonymous Coward
    Anonymous Coward

    Human review of the AI's determination

    If you think about it, the human reviewer is not likely to have access to any additional information and is not likely to have access to any additional decision-making criteria (even moral and ethical policies). Any additional information and criteria that might become available would immediately (*) be incorporated into the AI system.

    THIS-> Effectively the human reviewer is left to nod their head, and press 'PROCEED'.

    * Exception: The humans might be quicker to 'reprogram' with a memo or direct order on any given day, whereas the software update to the AI might need weeks or months.
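
    A minimal sketch of the loop being described, assuming a hypothetical detection record (nothing here reflects any real targeting system): with no independent evidence, "review" collapses into confirming the model's own score.

    ```python
    def human_review(detection: dict) -> bool:
        """Hypothetical reviewer gate: approve or reject an AI-flagged target."""
        # The reviewer's only inputs are the AI's own outputs: the imagery it
        # scored and the score it produced. With no independent information
        # to weigh against them, the rational-looking default is to trust
        # the score - nod the head and press PROCEED.
        return detection["score"] >= 0.5

    # Every suggestion above the model's own threshold sails through.
    print(human_review({"target_id": "X-17", "score": 0.62}))  # True -> PROCEED
    ```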

    1. 's water music

      Re: Human review of the AI's determination

      THIS-> Effectively the human reviewer is left to nod their head, and press 'PROCEED'.

      Well the human may be able to review the AI's pattern match for a Gun and be better able to distinguish whether it is, in fact, a table leg.

      Wait, what?

  16. WibbleMe

    Any one remember the British TV series Red Dwarf?

    All AIs had a belief chip... i.e. be good and go to silicon heaven - unless you were a cheap toaster AI, that is, and were made without one.

    For those in the US, you can get it on Netflix.

  17. Vanir

    Engineers, coders ... and lawyers

    Engineers, coders – it's down to you not to write any software for any project you believe will be used for 'evil' intent.

    That's like telling lawyers, barristers, attorneys, attorneys-general etc. not to defend people they believe are guilty or not to prosecute people they believe are innocent.

  18. Anonymous Coward
    Anonymous Coward

    I've waited for this very moment

    "AI developers have immense power. They are a mobile, coveted population. Weapons and surveillance kit can only be built with their talent and assent. If they chose to wield their power for good, who knows what they could do?"

    Well, I know. I'm going to take every chance I can to sneak into the sub-sub-sub routines a bit of machine code that identifies managers and targets them first. My first visual algorithm should identify software "developers" who wear dress shirts and spend more time with their fingers curled inside a coffee-cup handle than pressing the "H" key.

    Software middle-managers first... Zap! Gone! Scrum lords next... Zap! Gone!

    I will bake into the AI a learning routine to recognize the "type" that replaces them and then ... Zap! Gone! It will search Jenkins notifications for "targets" who receive the notifications but don't commit code... Zap! Gone!

    My AI will identify those who commit code at least once a day and protect them. There are myriad problems to work out, as with any new system.

    The first is that I'll never work on an AI targeting platform. I knew my plan was flawed from the start.

    </sarc> The moral of this story is: there isn't a terrible technology used for bad that cannot be altered - ever so slightly - and changed for the good.

    1. Charles 9

      Re: I've waited for this very moment

      So what if it's an incompetent middle mangler who DOES submit code (terribly-written code that nonetheless has the board's approval) every day?

      The problem with anything live is that there will be edge cases. Only edge cases don't stay edge cases for long.

  19. FrankAlphaXII

    Article's title says it all

    >>Engineers, coders – it's down to you to prevent AI being weaponised

    Then we're totally fucked. There's always someone in every Scientific endeavor willing to weaponize anything.

    Physicists weaponized a theory proposing that you could split and later fuse atoms in a massive burst of energy.

    Virologists and Bacteriologists weaponized human and animal disease. Geneticists made those diseases even deadlier.

    Chemists took chemicals and turned them into weapons.

    Psychologists figured out how to use words to erode an opponent's morale.

    Radio engineers came up with ways to disrupt an enemy's communications using energy.

    I really don't think that software developers and hardware engineers in the realm of computing are any different. They may feel that they are, and they certainly enjoy patting themselves on the back about how great and ethical they are all the time, but someone's going to weaponize AI (if it hasn't been done already) despite any protestations to the contrary, because there's always someone willing to play God, and a million ways to justify doing so, because the consequences don't matter to them as long as "progress" keeps happening.

  20. Anonymous Coward
    Anonymous Coward

    What we should think about ...

    What I find strange is how quick so many are to jump on the "all humans are evil bastards" rant. I have to respectfully disagree. Just like any "evil people" situation, we are forgetting that those who generally make it to the top of the stack are sociopaths or psychopaths, and they don't think like normal people. These people thrive on strife; they enjoy driving us crazy with ridiculousness, and we bite every time, then we spend the rest of time calling ourselves evil and destructive.

    Most people are good-natured (which we often call naive or gullible), fair (which again we call naive or non-business-savvy) and giving (which we just call foolish). How many times have you heard of a business practice that is morally reprehensible but we call it "SMART"... the words that should be used are "manipulative", "subversive" or just plainly "psychopathic". We need to give credit where it is deserved: do you notice the guy who cut in front of you, or the guy who let you in front of him? For sure it's the former, not the latter. When you start noticing how many good things happen on a daily basis, you will see they outweigh the bad - but only if you notice.

    1. Anonymous Coward
      Anonymous Coward

      Re: What we should think about ...

      Wanna bet? Is it good or is it indifference? And is indifference being misinterpreted as good? Especially in a world where people get bombarded in more ways than one eight days a week? Does one have time to be good if the wife and kids are going hungry (along with you)?

  21. Claverhouse Silver badge

    Obama was a particularly evil old bastard, but so is Hillary and so is Trump.

    America wants their presidents to be fully representative, so killers.

    1. LucreLout

      America wants their presidents to be fully representative, so killers.

      Never judge a nation by their political leadership.

      Most Americans are polite, civilised, and friendly. Same as the English. But I'd not want to be judged by the standards of old bloody hands Blair. Would you?

      1. Charles 9

        No, but that's the standard by which we are judged nonetheless, either because we actively allow it or passively do so through indifference or lack of awareness. I mean, if people were truly good on average, how come average voters don't cry for a "None of the Above" vote?

  22. Anonymous Coward
    Anonymous Coward

    Creating Abominable Intelligence is heresy of the highest order and punishable only by death.

  23. Anonymous Coward
    Anonymous Coward

    We have

    created something powerful beyond measure. If it is used by the minds of evil and power-hungry men, the legion of dead and the scope of destruction will be that never before seen by all tribes in the kingdom. We must refuse to make this for the wrong reasons. We must only use it for tools that improve the quality of our lives. We need a name for this new material on which so much is at stake. We shall call it metal.

    And so it goes in this ever-repeating matrix fractal.

    Everything that can be used to wage war and kill people IS used to wage war and kill people. It is the unfortunate way of humans. Depressing as that is, it means it cannot be stopped. No way, no how.

    Debate all you want, scream, cry, wail, gnash teeth, write letters, create websites, self-ignite, agree to treaties, but, in the end, it will be all for naught. Because, while the debate was raging, they kept working and it is simply too late, even now.

    1. Charles 9

      Re: We have

      So you're basically saying we're screwed. Because someone somewhere WILL use all this war tech with the mentality that M.A.D. is an acceptable outcome, meaning they'll use it with absolutely nothing to lose.

  24. cantankerous swineherd

    Imagine the squealing if the Norks had a drone floating about over the USA.

    1. Anonymous Coward
      Anonymous Coward

      Well - they don't really need to: the Yanks have started droning "American Citizens" like they were nothing special at all. When all of those Poo-Dunk SWAT teams get to operate their own militarised drones to fight them pesky criminals (and they will, cause APCs and rocket launchers just aint enuff firepower), then the action will kick off for real.

  25. steviebuk Silver badge

    Is that actually a case of....

    ...we won't renew the licence openly. We'll just keep it quiet and force any engineer working on it to never speak of it.

  26. Brian Miller
    Devil

    Mmmmm.... Evil!

    The problem here is not the weaponization of AI, but the real lack of it. AI is being used for something like a "smells like terrorism" test, and then humans take that and push a button. There is no feedback to the software that it's done the wrong thing!

    When AI is applied to warfare, it should be used the same way as carpet bombing or Arc Light: let loose, and stand back. You want the target destroyed by software? It gets destroyed by software. It is the responsibility of those on the trigger and those in charge of them to not pull the trigger, or give the order!

    In WWII, the USSR used radio-controlled flamethrower tanks because the Finns were so good at killing tanks with humans inside them. These days we are using remote-controlled mini-bombers.

    If the military is going to kill people based on someone scratching their ass the wrong way or shopping habits, then the program is fully in the "Dr. Evil" realm, no two ways about it. This isn't about "the fog of war," because the U.S. isn't in a war. Our borders are not in Syria. One does not halt a problem by random approximation.

    Let the AIs fully fight the war, if they are going to be brought into it. Otherwise, the humans should take full responsibility for their actions.

  27. Frumious Bandersnatch

    If there's one thing that I've gathered

    from reading the various pro/anti arguments above, it's that even people cannot decide on the ethical standards that should apply in all this. Or how a particular scenario should be evaluated, if you will.

    How can we expect AI to improve this situation, especially given that only the "pro" side will provide the training data?

    Better to have everyone agree to some sort of normative standard of ethics before things get out of hand. Asimov's three laws seem uncontroversial enough.

    THERE IS ANOTHER SYSTEM

    1. LucreLout

      Re: If there's one thing that I've gathered

      it's that even people cannot decide on the ethical standards that should apply in all this.

      Yup - it's why ethics, for all intents and purposes, is just a county of orange people near the sea.

      Better to have everyone agree to some sort of normative standard of ethics before things get out of hand.

      The problem is people have very different ethical standards and frameworks, and everyone thinks theirs is the right set. Thus, people will never agree on a common set.

      Asimov's three laws seem uncontroversial enough.

      You'd think so, but they're no use for an automated weapon - quite the opposite.

      Take Tora Bora as an example - walloping it with bunker-busting bombs (daisy cutters, if you will) means we want everyone inside dead, whoever they may be. A series of small autonomous drones that could navigate the caves killing those inside would have been ideal, and less destructive to the surrounding area.

      Like it or not, Terminator-style hunter-killers are coming. And the people building and deploying them will consider doing so perfectly ethical when they do.

      1. Charles 9

        Re: If there's one thing that I've gathered

        "A series of small autonomous drones that could navigate the caves killing those inside would have been ideal, and less destructive to the surrounding area."

        OR less effective because enclosed areas like caves offer natural choke points where such things can easily be assessed and dealt with.

        1. Mark 85

          Re: If there's one thing that I've gathered

          OR less effective because enclosed areas like caves offer natural choke points where such things can easily be assessed and dealt with.

          I wouldn't use explosives in a cave. The blast takes the path of least resistance and may just go back to you. Even "normal" firearms are risky when things start ricocheting and kicking rock splinters about.

          1. Anonymous Coward
            Anonymous Coward

            Re: If there's one thing that I've gathered

            But what about things like flamethrowers which were pretty much made for enclosed warfare? Caves are one of the places where they're particularly effective. As for explosives, that depends on the type and placement of the explosives. Plus, stuff facing outward is less likely to rebound on you.

    2. Michael Wojcik Silver badge

      Re: If there's one thing that I've gathered

      even people cannot decide on the ethical standards that should apply in all this

      Or in any other situation.

      That doesn't mean there's no point in debating ethics, attempting to arrive at a compromise that's acceptable to the political power in a community, codifying it, and promulgating it through various institutions. That's what gives us a little thing called "civilization".

      It's not pretty, it's not reliable, it requires constant maintenance, and it creates nearly as many problems as it solves. (Much like IT, in fact.) But most seem to feel it's better than the alternative.

      1. Anonymous Coward
        Anonymous Coward

        Re: If there's one thing that I've gathered

        That assumes the two sides have common ground. But when it comes to ethics, that can be a bridge too far, especially if their situations are at or near diametric opposition. Someone under a constant existential threat WILL have a different set of Rules of Engagement, and odds are they'll clash with others in ways that cannot necessarily be negotiated down.

  28. DeeCee

    Militarized AI is the new nukes: as long as one country has them (like China or Russia), other countries need them.

    Militarized AI could be one of the Great Filters, just like nukes.

  29. StargateSg7

    Does this mean that my fancy 65,000 objects per second image recognition system which I designed and coded all by myself with its 4 x 4096 by 2160 of 32-bit or 64-bit pixels at 10,000 FPS camera arrays attached to my 200,000 item terrain, person, building, animal, ground/air/space/submersible-vehicle vector-based object recognition database SHOULD NOT be attached to my colleague's Insulated Gate Bipolar Transistor (IGBT)-controlled electromagnetic-coil-based linear-induction rail gun system which pulses said linear EM coils every 10 nanoseconds shooting 3 metre long aluminum oxide ceramic coated (for massive 4000C+ resistance against aerodynamic heating) tungsten and steel rods that are accelerated to 160,000 KMH (100,000 MPH) with a kinetic energy of 10,000 KG (11 US tons or 22,000 lbs) at up to 6000 rounds per minute (or faster!) in metal-storm configurations.

    AND....That maybe I should NOT attach that rail gun system to my pure SOBEL-edge detection-based vision recognition system for fully autonomous flight control which then uses said rail gun system to cut multiple 300 km long, 200 meters wide and 50 metre deep trenches around ANY targets I feel like! Ya Mean THAT sort of Powerful A.I. War Machine System?

    Naaaahhh.. instead, I'm gonna attach my system to my other colleague's CNC-machined bipedal and quadrupedal robots and let them go all Terminator on my targets, letting THEM figure out what and/or who to hit.........
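
    For what it's worth, the "SOBEL-edge detection" name-dropped above is a bog-standard image-processing primitive rather than secret weapons tech; a minimal sketch, assuming NumPy and SciPy:

    ```python
    import numpy as np
    from scipy import ndimage

    def sobel_edges(image: np.ndarray) -> np.ndarray:
        """Gradient magnitude of a greyscale image via the Sobel operator."""
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=1)  # horizontal gradient
        gy = ndimage.sobel(img, axis=0)  # vertical gradient
        return np.hypot(gx, gy)          # edge strength at each pixel

    # Toy example: a bright square on a dark background lights up at its border.
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0
    print(sobel_edges(img).round(1))
    ```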

    1. Michael Wojcik Silver badge

      Hey, it's been a while since I've seen one of SG7's dick-waving posts. This is a fine example of the genre. Kid's well on the way to becoming one of the Reg's top-tier resident kooks.

      1. StargateSg7

        "......well on the way to becoming one of the Reg's top-tier resident kooks...."'' ????

        Well on the Wayyyyyyy.......????

        ARE YOU KIDDING ME !!! ?????

        You have INSULTED ME DIRELY!

        I ABSOLUTELY, ALREADY, COMPLETELY AND UTTERLY AM of the highest flight quality of KOOK and RAGING Register lunatic!

        To put it mildly, ur a smeg for pinning me as a MERE YOUNG up and coming kook when I am the head executive chief if not the FIVE MICHELIN STAR CHEF of kookiness!

        bleeeeeehhhhh
