Engineers, coders – it's down to you to prevent AI being weaponised

Debate has raged for months over various internet giants' forays into providing next-generation technology for war. For example, in March, dissenters at Google went to the press about the web goliath's contract with the US military's Project Maven, which aims to fit drones with object-detecting AI among other things. This US …


  1. Halcin

    How Many?

    3100 signed the letter, but out of how many? How many others are willing/coerced into cooperating?

    I seriously doubt people in countries like $Country* will be signing up anytime soon.

    *I have a horrible feeling the list is too long to list.

  2. Daytona955

    Engineers, coders – it's down to you to prevent AI being weaponised

    It's a bit flippin' late for that!

    When I was a fresh-faced grad on the milk round in 1978, one of the companies was proudly showing video of an 'AI' targeting system tracking a tank moving over rough terrain with visual obstructions.

    I didn't take the job. But I'm under no illusions that my decision made any difference at all to the development of automated targeting systems.

    4% of Google's staff might feel better about themselves, but it won't stop people getting killed by automatically targeted weapons.

    Shutting the stable door ~5 decades after the AI horse has bolted...

    1. John Brown (no body) Silver badge

      Re: Engineers, coders – it's down to you to prevent AI being weaponised

      "I didn't take the job. But I'm under no illusions that my decision made any difference at all to the development of automated tagetting systems.

      4% of Google's staff might feel better about themselves, but it won't stop people getting killed by automatically targetted weapons."

      The difference between you back then and the Googlers now is that they have acted as a group and attracted international publicity, partly thanks to Google's own position as a major part of the internet. It's how grass-roots movements start and public opinion changes. Having said that, I doubt there will be much difference made in the short to medium term, but it could conceivably lead to an equivalent to (or an addendum to) the Geneva Convention. Land mines and cluster bombs are used a bit less these days, and in a more controlled fashion than previously, due to publicity and public opinion.

  3. Anonymous Coward

    Human review of the AI's determination

    If you think about it, the human reviewer is not likely to have access to any additional information and is not likely to have access to any additional decision-making criteria (even moral and ethical policies). Any additional information and criteria that might become available would immediately (*) be incorporated into the AI system.

    THIS-> Effectively the human reviewer is left to nod their head, and press 'PROCEED'.

    * Exception: The humans might be quicker to 'reprogram' with a memo or direct order on any given day, whereas the software update to the AI might need weeks or months.

    1. 's water music Silver badge

      Re: Human review of the AI's determination

      THIS-> Effectively the human reviewer is left to nod their head, and press 'PROCEED'.

      Well, the human may be able to review the AI's pattern match for a gun and be better able to distinguish whether it is, in fact, a table leg.

      Wait, what?
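The rubber-stamp scenario in comment 3 can be sketched in a few lines (a toy model under stated assumptions: hypothetical function names, a made-up score threshold, no relation to any real system). If the reviewer sees only the same inputs and applies the same decision criteria as the AI, the review step cannot change the outcome:

```python
# Toy sketch of "human review" with no information beyond the AI's inputs.
# All names and thresholds here are hypothetical.

THRESHOLD = 0.8  # made-up decision criterion, shared by AI and reviewer

def ai_decision(features):
    """The AI's determination: flag when its score clears the threshold."""
    return features["score"] >= THRESHOLD

def human_review(features, ai_verdict):
    """A reviewer given only the AI's inputs and the same policy.
    With nothing extra to go on, the rational move reduces to
    re-applying the AI's own rule: nod and press PROCEED."""
    return features["score"] >= THRESHOLD  # identical rule, identical outcome

for case in [{"score": 0.95}, {"score": 0.40}, {"score": 0.81}]:
    verdict = ai_decision(case)
    assert human_review(case, verdict) == verdict  # review never overturns
```

The reviewer only adds value if they bring information or criteria the model lacks, which is exactly what the commenter argues won't happen in practice.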

  4. WibbleMe

    Any one remember the British TV series Red Dwarf?

    All AIs had a belief chip... i.e. be good and go to Silicon Heaven. Unless you were a cheap toaster AI, that is, made without one.

    For those in the US, you can get it on Netflix.

  5. Vanir

    Engineers, coders ... and lawyers

    Engineers, coders – it's down to you not to write any software for any project you believe will be used with 'evil' intent.

    That's like telling lawyers, barristers, attorneys, attorneys-general etc. not to defend people they believe are guilty, or not to prosecute people they believe are innocent.

  6. Anonymous Coward

    I've waited for this very moment

    "AI developers have immense power. They are a mobile, coveted population. Weapons and surveillance kit can only be built with their talent and assent. If they chose to wield their power for good, who knows what they could do?"

    Well, I know. I'm going to take every chance I can to sneak into the sub-sub-sub routines a bit of machine code that identifies managers and targets them first. My first visual algorithm should identify software "developers" who wear dress shirts and spend more time with their fingers curled inside a coffee-cup handle than pressing the "H" key.

    Software middle-managers first... Zap! Gone! Scrum lords next... Zap! Gone!

    I will bake into the AI a learning routine to recognize the "type" that replaces them and then ... Zap! Gone! It will search Jenkins notifications for "targets" who receive the notifications but don't commit code... Zap! Gone!

    My AI will identify those who commit code at least once a day and protect them. There are myriad problems to work out, as with any new system.

    The first is that I'll never work on an AI targeting platform. I knew my plan was flawed from the start.

    </sarc>The moral of this story is: there isn't a terrible technology used for bad that cannot be altered -- ever so slightly -- and changed for the good.

    1. Charles 9 Silver badge

      Re: I've waited for this very moment

      So what if it's an incompetent middle mangler who DOES submit code (terribly-written code that nonetheless has the board's approval) every day?

      The problem with anything live is that there will be edge cases. Only edge cases don't stay edge cases for long.

  7. FrankAlphaXII Silver badge

    Article's title says it all

    >>Engineers, coders – it's down to you to prevent AI being weaponised

    Then we're totally fucked. There's always someone in every scientific endeavor willing to weaponize anything.

    Physicists weaponized a theory proposing that you could split atoms, and later fuse them, in a massive burst of energy.

    Virologists and Bacteriologists weaponized human and animal disease. Geneticists made those diseases even deadlier.

    Chemists took chemicals and turned them into weapons.

    Psychologists figured out how to use words to erode an opponent's morale.

    Radio engineers came up with ways to disrupt an enemy's communications using energy.

    I really don't think that software developers and hardware engineers in the realm of computing are any different. They may feel they are, and they certainly enjoy patting themselves on the back about how great and ethical they are, but someone's going to weaponize AI (if it hasn't been done already) despite any protestations to the contrary. There's always someone willing to play God, and a million ways to justify doing so, because the consequences don't matter to them as long as "progress" keeps happening.

  8. Anonymous Coward

    What we should think about ...

    What I find strange is how quick so many are to jump to the "all humans are evil bastards" rant. I have to respectfully disagree. As with any "evil people" situation, we are forgetting that those who generally make it to the top of the stack are sociopaths or psychopaths, and they don't think like normal people. These people thrive on strife; they enjoy driving us crazy with ridiculousness, and we bite every time, then spend the rest of our time calling ourselves evil and destructive.

    Most people are good-natured (which we often call naive or gullible), fair (which again we call naive or non-business-savvy) and giving (which we just call foolish). How many times have you heard of a business practice that is morally reprehensible but that we call "SMART"? The word that should be used is "manipulative", "subversive" or just plainly "psychopathic". We need to give credit where it is deserved: do you notice the guy who cut in front of you, or the guy who let you in front of him? For sure it's the former, not the latter. When you start noticing how many good things happen on a daily basis, you will see it outweighs the bad, but only if you notice it.

    1. Anonymous Coward

      Re: What we should think about ...

      Wanna bet? Is it good or is it indifference? And is indifference being misinterpreted as good? Especially in a world where people get bombarded in more ways than one eight days a week? Does one have time to be good if the wife and kids are going hungry (along with you)?

  9. Claverhouse Bronze badge

    Obama was a particularly evil old bastard, but so is Hillary and so is Trump.

    America wants their presidents to be fully representative, so killers.

    1. LucreLout Silver badge

      America wants their presidents to be fully representative, so killers.

      Never judge a nation by their political leadership.

      Most Americans are polite, civilised, and friendly. Same as the English. But I'd not want to be judged by the standards of old bloody hands Blair. Would you?

      1. Charles 9 Silver badge

        No, but that's the standard by which we are judged nonetheless, either because we actively allow it or passively do so through indifference or lack of awareness. I mean, if people were truly good on average, how come average voters don't cry for a "None of the Above" vote?

  10. Anonymous Coward

    Creating Abominable Intelligence is heresy of the highest order and punishable only by death.

  11. Anonymous Coward

    We have

    created something powerful beyond measure. If it is used by the minds of evil and power-hungry men, the legion of dead and the scope of destruction will be that never before seen by all tribes in the kingdom. We must refuse to make this for the wrong reasons. We must only use it for tools that improve the quality of our lives. We need a name for this new material on which so much is at stake. We shall call it metal.

    And so it goes in this ever-repeating matrix fractal.

    Everything that can be used to wage war and kill people IS used to wage war and kill people. It is the unfortunate way of humans. Depressing as that is, it means it cannot be stopped. No way, no how.

    Debate all you want, scream, cry, wail, gnash teeth, write letters, create websites, self-ignite, agree to treaties, but, in the end, it will be all for naught. Because, while the debate was raging, they kept working and it is simply too late, even now.

    1. Charles 9 Silver badge

      Re: We have

      So you're basically saying we're screwed. Because someone somewhere WILL use all this war tech with the mentality that M.A.D. is an acceptable outcome, meaning they'll use it with absolutely nothing to lose.

  12. cantankerous swineherd Silver badge

    Imagine the squealing if the Norks had a drone floating about over the USA.

    1. Anonymous Coward

      Well - they don't really need to: the Yanks have started droning "American citizens" like they were nothing special at all. When all of those Poo-Dunk SWAT teams get to operate their own militarised drones to fight them pesky criminals (and they will, cause APCs and rocket launchers just ain't enuff firepower), then the action will kick off for real.

  13. steviebuk Silver badge

    Is that actually a case of....

    ...we won't renew the licence openly. We'll just keep it quiet and force any engineer working on it to never speak of it.

  14. Brian Miller

    Mmmmm.... Evil!

    The problem here is not the weaponization of AI, but the real lack of it. AI is being used for something like a "smells like terrorism" test, and then humans take that and push a button. There is no feedback to the software that it's done the wrong thing!

    When AI is applied to warfare, it should be used the same way as carpet bombing or Arc Light: let loose, and stand back. You want the target destroyed by software? It gets destroyed by software. It is the responsibility of those on the trigger, and those in charge of them, not to pull the trigger or give the order!

    In WWII, the USSR used radio-controlled flamethrower tanks because the Finns were so good at killing tanks with humans inside them. These days we are using remote-controlled mini-bombers.

    If the military is going to kill people based on someone scratching their ass the wrong way or shopping habits, then the program is fully in the "Dr. Evil" realm, no two ways about it. This isn't about "the fog of war," because the U.S. isn't in a war. Our borders are not in Syria. One does not halt a problem by random approximation.

    Let the AIs fully fight the war, if they are going to be brought into it. Otherwise, the humans should take full responsibility for their actions.
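Comment 14's complaint that "there is no feedback to the software that it's done the wrong thing" is, mechanically, just a missing update step. A minimal sketch of what closing that loop could look like (hypothetical names and numbers, plain Python, not any real targeting pipeline):

```python
# Toy sketch of closing the feedback loop: each outcome is reported back
# and nudges the decision threshold. Entirely hypothetical.

def make_classifier(threshold):
    state = {"threshold": threshold}

    def decide(score):
        return score >= state["threshold"]

    def feedback(decided, was_correct, step=0.05):
        # Raise the bar after a false positive, lower it after a miss.
        if decided and not was_correct:
            state["threshold"] = min(1.0, state["threshold"] + step)
        elif not decided and not was_correct:
            state["threshold"] = max(0.0, state["threshold"] - step)

    return decide, feedback, state

decide, feedback, state = make_classifier(0.5)
first = decide(0.52)                # flagged at the initial threshold
feedback(first, was_correct=False)  # told it was the wrong call
assert state["threshold"] > 0.5     # the bar went up
assert decide(0.52) is False        # the same input is now rejected
```

Without that feedback call, the model repeats the same mistake indefinitely, which is the commenter's point.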

  15. Frumious Bandersnatch Silver badge

    If there's one thing that I've gathered

    from reading the various pro/anti arguments above, it's that even people cannot decide on the ethical standards that should apply in all this. Or how a particular scenario should be evaluated, if you will.

    How can we expect AI to improve this situation, especially given that only the "pro" side will provide the training data?

    Better to have everyone agree to some sort of normative standard of ethics before things get out of hand. Asimov's three laws seem uncontroversial enough.

    THERE IS ANOTHER SYSTEM

    1. LucreLout Silver badge

      Re: If there's one thing that I've gathered

      it's that even people cannot decide on the ethical standards that should apply in all this.

      Yup - it's why ethics, for all intents and purposes, is just a county of orange people near the sea.

      Better to have everyone agree to some sort of normative standard of ethics before things get out of hand.

      The problem is people have very different ethical standards and frameworks, and everyone thinks theirs is the right set. Thus, people will never agree on a common set.

      Asimov's three laws seem uncontroversial enough.

      You'd think so, but they're no use for an automated weapon - quite the opposite.

      Take Tora Bora as an example - walloping it with bunker-busting bombs (daisy cutters, if you will) means we want everyone inside dead, whoever they may be. A series of small autonomous drones that could navigate the caves killing those inside would have been ideal, and less destructive to the surrounding area.

      Like it or not, Terminator style Hunter Killers are coming. And the people building and deploying them will consider doing so perfectly ethical when they do.

      1. Charles 9 Silver badge

        Re: If there's one thing that I've gathered

        "A series of small autonomous drones that could navigate the caves killing those inside would have been ideal, and less destructive to the surrounding area."

        OR less effective because enclosed areas like caves offer natural choke points where such things can easily be assessed and dealt with.

        1. Mark 85 Silver badge

          Re: If there's one thing that I've gathered

          OR less effective because enclosed areas like caves offer natural choke points where such things can easily be assessed and dealt with.

          I wouldn't use explosives in a cave. The blast takes the path of least resistance and may just go back to you. Even "normal" firearms are risky when things start ricocheting and kicking rock splinters about.

          1. Anonymous Coward

            Re: If there's one thing that I've gathered

            But what about things like flamethrowers which were pretty much made for enclosed warfare? Caves are one of the places where they're particularly effective. As for explosives, that depends on the type and placement of the explosives. Plus, stuff facing outward is less likely to rebound on you.

    2. Michael Wojcik Silver badge

      Re: If there's one thing that I've gathered

      even people cannot decide on the ethical standards that should apply in all this

      Or in any other situation.

      That doesn't mean there's no point in debating ethics, attempting to arrive at a compromise that's acceptable to the political power in a community, codifying it, and promulgating it through various institutions. That's what gives us a little thing called "civilization".

      It's not pretty, it's not reliable, it requires constant maintenance, and it creates nearly as many problems as it solves. (Much like IT, in fact.) But most seem to feel it's better than the alternative.

      1. Anonymous Coward

        Re: If there's one thing that I've gathered

        That assumes the two sides have common ground. But when it comes to ethics, that can be a bridge too far, especially if their situations are at or near diametric opposition. Someone under a constant existential threat WILL have a different set of Rules of Engagement, and odds are they'll clash with others in ways that cannot necessarily be negotiated down.

  16. DeeCee

    Militarized AI is the new nukes: as long as one country has them (like China or Russia), other countries need them.

    Militarized AI could be one of the Great Filters, just like nukes.

  17. StargateSg7 Bronze badge

    Does this mean that my fancy 65,000 objects per second image recognition system which I designed and coded all by myself with its 4 x 4096 by 2160 of 32-bit or 64-bit pixels at 10,000 FPS camera arrays attached to my 200,000 item terrain, person, building, animal, ground/air/space/submersible-vehicle vector-based object recognition database SHOULD NOT be attached to my colleague's Insulated Gate Bipolar Transistor (IGBT)-controlled electromagnetic-coil-based linear-induction rail gun system which pulses said linear EM coils every 10 nanoseconds shooting 3 metre long aluminum oxide ceramic coated (for massive 4000C+ resistance against aerodynamic heating) tungsten and steel rods that are accelerated to 160,000 KMH (100,000 MPH) with a kinetic energy of 10,000 KG (11 US tons or 22,000 lbs) at up to 6000 rounds per minute (or faster!) in metal-storm configurations.

    AND....That maybe I should NOT attach that rail gun system to my pure SOBEL-edge detection-based vision recognition system for fully autonomous flight control which then uses said rail gun system to cut multiple 300 km long, 200 meters wide and 50 metre deep trenches around ANY targets I feel like! Ya Mean THAT sort of Powerful A.I. War Machine System?

    Naaaahhh.. instead, I'm gonna attach my system to my other colleague's CNC-machined bipedal and quadrupedal robots and let them go all Terminator on my targets, letting THEM figure out what and/or who to hit.........
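For anyone curious, the "SOBEL edge detection" name-dropped above is nothing exotic: a pair of fixed 3x3 convolution kernels. A minimal pure-Python sketch on a toy image (illustrative only; nothing to do with the poster's claimed system):

```python
# Sobel kernels for horizontal (KX) and vertical (KY) gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude via a 'valid' 3x3 convolution, pure Python."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for i in range(h - 2):
        for j in range(w - 2):
            gx = sum(img[i + a][j + b] * KX[a][b]
                     for a in range(3) for b in range(3))
            gy = sum(img[i + a][j + b] * KY[a][b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: a vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 1, 1, 1] for _ in range(5)]
edges = sobel_magnitude(img)
assert edges[0] == [0.0, 4.0, 4.0, 0.0]  # strong response only at the step
```

The magnitude peaks along the step edge and is zero on the flat regions, which is all "edge detection" means here.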

    1. Michael Wojcik Silver badge

      Hey, it's been a while since I've seen one of SG7's dick-waving posts. This is a fine example of the genre. Kid's well on the way to becoming one of the Reg's top-tier resident kooks.

      1. StargateSg7 Bronze badge

        "......well on the way to becoming one of the Reg's top-tier resident kooks...."'' ????

        Well on the Wayyyyyyy.......????

        ARE YOU KIDDING ME !!! ?????

        You have INSULTED ME DIRELY!

        I'M ABSOLUTELY ALREADY COMPLETELY AND UTTERLY AM of the highest flight quality of KOOK and RAGING Register lunatic!

        To put it mildly, ur a smeg for pinning me as a MERE YOUNG up and coming kook when I am the head executive chief if not the FIVE MICHELIN STAR CHEF of kookiness!

        bleeeeeehhhhh

