Elon Musk, his arch nemesis DeepMind swear off AI weapons

Hundreds of organisations and thousands of techies, including Elon Musk, Demis Hassabis from Google's DeepMind, and the head of the Chocolate Factory's AI lab Jeff Dean, have promised never to support the development of autonomous weapons. The pledge was organised by the Future of Life Institute, an outreach group focused on …

  1. macjules
    Black Helicopters

    Assume air of innocence ..

    .. whistle, and slowly push autonomous, combination mini-gun and anti-personnel missile launcher plans under carpet.

    1. Voland's right hand Silver badge

      Re: Assume air of innocence ..

      Why, just shove 'em to the guy using the desk on the other side of the aisle who does not officially exist and is working on classified contracts.

  2. agatum

    Thousands of researchers sign pledge to not develop lethal AI

    And for the rest who did not sign: please at least make the AI kill all humans quickly. I don't deal well with prolonged pain.

  3. Michael H.F. Wilkinson Silver badge

    Pugwash 2.0?

    Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button (and not the one that causes a little sign saying "please do not press this button again" to light up).

    1. Mark 85

      Re: Pugwash 2.0?

      Therein is the problem. Just because one "side" doesn't do those weapons doesn't mean someone else won't either. Now if there were something like a treaty that actually worked....

      1. Anonymous Coward
        Black Helicopters

        Re: Pugwash 2.0?

        Therein is the problem. Just because one "side" doesn't do those weapons doesn't mean someone else won't either.

        At least we'll have our best agents infiltrate them and steal their secrets while absolutely not working on the same problem to maintain their cover and blaming it on the other side.

      2. Robert Carnegie Silver badge

        Re: Pugwash 2.0?

        We need the AIs themselves to make the pledge, not just the fleshy masters. Solved...ish.

        1. jelabarre59

          Re: Pugwash 2.0?

          We need the AIs themselves to make the pledge, not just the fleshy masters. Solved...ish.

          As if the AI is going to listen to us stupid meatbags. Of course, we'll all have to take our chances, because A.I. Is a Crapshoot anyway.

          I guess we could hope these systems will confuse the term "A.I." with the Japanese "ai" ("love") and will feel they need to shower love on us, even if we don't want it.

      3. Claverhouse Silver badge

        Re: Pugwash 2.0?

        @Mark 85

        Therein is the problem. Just because one "side" doesn't do those weapons doesn't mean someone else won't either. Now if there were something like a treaty that actually worked....

        Treaties made by one president can be immediately abrogated by an incoming president, I'm told.

        1. Don MacVittie

          Re: Pugwash 2.0?

          You are (mostly) told incorrectly.

          Normal process is for the President to sign a treaty then Congress to ratify it. Historically we have allowed the President to honor the treaty in the time between signing and ratification.

          But as we recently learned, if a President stalls ratification so they can pretend it is a valid treaty, and a new President is then elected, the new President gains the power to say "never mind", because it was never ratified.

          Had the process flowed as designed, the treaty would have been ratified, and ratified treaties cannot be abrogated by a new President.

          The entire process is built around the idea that no one person should have the power to commit the US to treaties or to break them. It was abused, and it backfired.

    2. LucreLout

      Re: Pugwash 2.0?

      Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button

      I'm not sure it matters. I mean, I applaud the intent behind signing, but if you make anything autonomous that moves or recognises people/faces then someone else can really easily strap a gun to it. Autonomous tanks are trivial once you have self driving cars, for example.

      It's a bit like Apple tech - what did they actually invent, rather than just combining other people's ideas/tech into a new package?

      1. Prst. V.Jeltz Silver badge

        Re: Pugwash 2.0?

        They have autonomous armoured monster bulldozer/tanks in the Gaza Strip.

        Well, remote controlled anyway.

    3. Prst. V.Jeltz Silver badge

      Re: Pugwash 2.0?

      Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button (and not the one that causes a little sign saying "please do not press this button again" to light up).

      Hey, that's my button!

      Aside from evil geniuses, hackers, govt spies, and idiots... you've also got the "nice" AI that plays Spotify for us and adds wine to the shopping list - once that becomes self-aware it's only a matter of security barriers, firewalls, passwords, etc. etc. to stop it launching the missiles.

      1. strum

        Re: Pugwash 2.0?

        >once that becomes self-aware it's only a matter of security barriers, firewalls, passwords, etc. etc. to stop it launching the missiles.

        Without an inadequate penis, it won't have any reason to launch anything. Unless we actually program male stupidity into it, we're probably safer with AI than with humans.

        1. Chris G

          Re: Pugwash 2.0?

          @strum.

          My CPU is bigger than yours and it's overclocked!

          1. Tigra 07
            Trollface

            Re: Pugwash 2.0?

            Dayum! You just got Clock Blocked!

    4. Tigra 07

      Re: Pugwash 2.0?

      Government projects usually overrun by years, go billions over cost, and under-deliver.

      If UK GOV starts AI research for future war machines, they won't have anything to show for it this century and the final product will be more Furby than Terminator in capability.

      1. Chris G

        Re: Pugwash 2.0?

        @Tigra 07.

        Yes but UK developed War Furbies will be sneaky and capable of killing you with satire and irony.

        1. Tigra 07
          Pirate

          Re: Chris

          Yeah... But the Russian Furbies will be filled with polonium...

  4. Wellyboot Silver badge

    They'll call it pattern recognition instead

    After all, doesn't it just need to look up things it can shoot at on a list?

  5. jmch Silver badge

    Partly off the subject....

    "lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons"

    But I've never really understood why nuclear / chemical / biological weapons are considered as a separate class of weapons that are more heinous to use than conventional ones. I get that in the case of nuclear it's also a matter of scale, but why should it be a matter of type?

    Is it not OK to destroy Hiroshima with one nuclear bomb but OK to turn Dresden to ash just because many smaller bombs were used? Is it not OK to kill soldiers with sarin gas but A-OK to blow them to bits in a storm of metal shards travelling at high velocity?

    Weapons of war are all awful, and putting so-called WMD in a taboo class of their own just legitimises the use of equally awful 'conventional' weapons.

    1. TonyJ

      Re: Partly off the subject....

      But I've never really understood why nuclear / chemical / biological weapons are considered as a separate class of weapons that are more heinous to use than conventional ones. I get that in the case of nuclear it's also a matter of scale, but why should it be a matter of type?

      Is it not OK to destroy Hiroshima with one nuclear bomb but OK to turn Dresden to ash just because many smaller bombs were used? Is it not OK to kill soldiers with sarin gas but A-OK to blow them to bits in a storm of metal shards travelling at high velocity?

      You need to put these things into context - it would no longer be ok to firebomb a city and I don't think anyone would ever say dropping a nuke would be considered polite or the done thing again, either.

      Indeed that was the whole idea of MAD - you nuke me, I nuke you, we all nuke each other. No one wins.

      It's also one of the reasons most places switched to modelling their nukes in supercomputer simulations rather than actual tests.

      Weapons of war are all awful, and putting so-called WMD in a taboo class of their own just legitimises the use of equally awful 'conventional' weapons.

      Again - it hasn't been OK to use chemical weapons for an awfully long time. Likewise there are all sorts of other rules of engagement: for example, you can't put faeces onto a bayonet and you can't shoot someone off the end of a bayonet.

      The problem, as others have pointed out, is when you get those naughty people that don't abide by these rules and lob weapons around that aren't considered "ok" any more.

      And it comes down to scale and collateral - nukes and chemical and biological weapons are massive in their impact and will hoover up innocent people as well as combatants with no control or regard. I believe, too, that NATO mandates that an attack with a bio/chem weapon on a member constitutes an attack by a WMD and can therefore be responded to with a nuclear strike.

      1. LucreLout

        Re: Partly off the subject....

        You need to put these things into context - it would no longer be ok to firebomb a city

        Looking at the state of most of Syria, I'm having trouble accepting your premise.

        1. TonyJ

          Re: Partly off the subject....

          "...You need to put these things into context - it would no longer be ok to firebomb a city

          Looking at the state of most of Syria, I'm having trouble accepting your premise..."

          So you take one part of one sentence out of context?

          Did you bother to continue to read to this bit: "...The problem, as others have pointed out, is when you get those naughty people that don't abide by these rules and lob weapons around that aren't considered "ok" any more..."?

      2. Voland's right hand Silver badge

        Re: Partly off the subject....

        And it comes down to scale and collateral - nukes and chemical and biological weapons are massive in their impact and will hoover up innocent people as well as combatants with no control or regard.

        You would be surprised how few salvos from a Grad regiment can produce the same effect (as far as the civilian population is concerned). Viewing some footage from the Nagorno-Karabakh conflict may be rather educational under the circumstances (*). Nearly any artillery or bombardment weapon can be (and is) used indiscriminately.

        I believe, too, that NATO mandate an attack with a bio/chem weapon on a member constitutes an attack by a WMD and can therefore be responded to with a nuclear strike.

        That was adequate and appropriate when they were the sole domain of nation states. Times have moved on. Building a chemical weapon (e.g. a fentanyl aerosol bomb) or a biological weapon is now well within the capabilities of the larger mob groupings and corporations. Responding to these by nuking the country it is in is very dubious in terms of adequacy of the response. Anything else aside - mobs and corps can just move to another country, so you have created an enemy for life without eliminating the real cause of the trouble.

        (*) As a side effect of the locations of munitions storage in the south of the USSR, both combatants in that conflict had a nearly indefinite supply of missiles for their Grads and deployed them indiscriminately, creating zones of total destruction far larger than those of Hiroshima and Nagasaki.

    2. DropBear

      Re: Partly off the subject....

      "I've never really understood why"

      Because ideals get lip service only insofar as they don't interfere too much with actual business. It is possible to wage practical war without popping nukes. It is not when you can't do any killing - and we simply can't have that, can we...

      1. Peter2 Silver badge

        Re: Partly off the subject....

        Atomic weapons are too powerful, and leave lots of radioactive dust floating around that can't be neatly confined to the battlefield. Worse, they invite retaliation in kind, which could lead to wiping out the countries involved. Frankly, it's simply not worth the risk of using them; it's easier to agree not to, but to keep a bunch that could be used if other people don't keep to the gentleman's agreement.

        Biological weapons could conceivably wipe out the entire planet's population. Everybody can agree that this is a bit nuts and well worth avoiding.

        It was well established during WW1 that chemical weapons are not a worthwhile war-winning weapon even in the most ideal circumstances (on a static battlefield like a trench), yet they do float off in unexpected directions, kill civilians who were not the intended targets, and can contaminate wide areas that cost megabucks to decontaminate. Again, not worth it.

        Whereas bullets and explosive shells are at least nominally aimed at the person they are intended to hit, and have little long term effect on a wider area.

        There is little morality to politics, only practicality.

  6. Czrly

    Meaningless.

    Didn't we just have a story a few months ago about Google employees complaining that Google's algorithms were aiding the US military in surveillance video analysis and target identification in their war in the Middle East?

    If anyone says that's not a weapon, they'll be technically correct. It still serves to demonstrate how such a pledge is completely meaningless.

    A stronger pledge would be one in which the signatories agree not to build any algorithm or A.I. system that facilitates conflict at arms in general.

    1. fandom

      Re: Meaningless.

      "A.I. system that facilitates conflict at arms in general."

      That would mean all AI systems.

      For example, the same AI that would drive a car would be able to drive a tank; once you have solved that, the added functionality of aiming and shooting would be trivial to add.

      There is no big difference between civilian and military research, science is science.

      1. Chris G

        Re: Meaningless.

        @fandom, the aiming and shooting was cracked a long time back, I think by the British on the Chieftain/Conqueror tanks, along with stabilised aiming; all the gunner had to do was acquire. For acquisition you could use the Met's FR tech - what could go wrong?

      2. JDX Gold badge

        Re: Meaningless.

        A.I. systems used to help triage wounded would also be "facilitating conflict".

    2. MrXavia

      Re: Meaningless.

    Surely target identification is a good use of AI? As long as it is not the only method used to identify a target before attacking, it can help ensure no innocents are harmed.

    3. Charlie Clark Silver badge

      Re: Meaningless.

      It's a figleaf along with the rest of the meaningless code of conduct statements that do-gooders hold up for everyone. The ML genie is out of the bottle.

      DARPA probably has lists of people working at the tech companies that it won't give any work to anyway, and has more than enough companies more than happy to work on whatever crazy schemes it can come up with - and companies like Raytheon, for the right price, happy to build them the weapons.

      I remember a documentary with one of the people who worked on the neutron bomb and he was absolutely convinced that it was right to make a weapon that kills people and leaves buildings untouched.

    4. vtcodger Silver badge

      Re: Meaningless.

      "A stronger pledge would be one in which the signatories agree not to build any algorithm or A.I. system that facilitates conflict at arms in general."

      Sounds good. But I suspect the reality is that many AI algorithms, like much construction equipment, are easily weaponized by folks with only modest skills. Need a tank? Start with a bulldozer. Add armor and a heavy duty gun or two. Need photointerpretation software? Start with whatever archaeologists are using.

  7. MonkeyCee

    What is autonomous?

    I understand that this is pretty much just PR for Musk, but I'm very confused by what people mean by "autonomous weapon system".

    Many weapon systems, both complex and simple, are autonomous once deployed. Any "smart" weapon is going to make its own decisions once fired; a mine or IED, once placed, is going to go boom based on its own trigger.

    Or is it the case that as long as a human is involved somewhere in the decision process, it's no longer automated, so it's fine? So using AI to identify and track targets would be OK, as long as someone pushes the button?

    It's not like we're anywhere near having self-repairing robots that also manage to fuel and arm themselves from those 100% automated factories and 100% automated mines. Otherwise you are still reliant on meatsacks to actually make the "autonomous" systems work.

    Based on what happens in real world conflicts, any opposing force (of meatsacks) will adapt to the AI tactics far faster than the AI can react to theirs.

    1. nijam Silver badge

      Re: What is autonomous?

      > Many weapon systems, both complex and simple, once deployed are autonomous.

      This. An arrow, once released from the archer's bow, is autonomous.

      As you say, the whole thing is meaningless PR. But then, so is AI.

  8. Korev Silver badge
    Terminator

    I thought Musk's arch nemeses were divers suggesting his technology wouldn't solve a problem...

    1. Daniel Garcia 2

      He really made himself look like an arrogant prick with this self-promoting shitshow in Thailand. I respect him for what he is doing with SpaceX, but he needs to stop, breathe, and eat an extra portion of humble pie.

      1. MrXavia

        He really needs anger management classes, or to employ someone to pre-approve his tweets.....

        Building the mini-sub was a good idea, even if it was never used. He liaised with the dive team, took feedback, and adapted the design with them, so it is understandable why he got pissed off with the other guy's comments - he was only trying to help - but the way he reacted was wrong.

      2. fandom

        I think he made himself look like someone going through a nervous breakdown.

        It's no wonder considering all the trouble Tesla is going through, but he should take a breath and go get some professional help.

        1. Prst. V.Jeltz Silver badge

          I think they should have made him go caving and crawl through a twisty 6"-high passage, and then see if he still thinks a mini sub is a good idea.

          I think he'd basically have to redesign it to be a ziplock bag, aka a diving suit - which I think is what they did.

          1. TonyJ

            There were so many things wrong with the idea of the submarine in that cave (notwithstanding that the divers in the video were atrocious - bits like their SPG dangling all over the place, poor trim, crawling along the bottom of the pool, etc. - all of which are bad enough in a recreational diver, but in a technical and/or cave diver are unforgivable).

            Given that the cave wasn't flooded for its entire length, it meant that people would have had to carry it as well as get it up and down near vertical sections.

            It looked like it was too big - by which I mean too long in this case.

            Bear in mind the smallest section of cave was 70cm in diameter. Again, to put that into some context, the divers were forced to use sidemount equipment: when you think of the traditional scuba diver image, the cylinder(s) are on the back. Sidemount is just that - one on each side of the diver. This was necessary so the diver could unhook each cylinder and pass it through the gap before wriggling through themselves.

            How would the sub have coped with that?

            What was he trying to actually achieve with it? To my mind it was always going to make sense to bring them out pretty much the way they did (cave diver hat on). It just seemed to be an ego thing.

            Now, no one likes their hard work looked down on, but to call the rescue diver a paedo the way he did was just incredibly obnoxious, and in this case I hope he gets the hell sued out of him. When someone answers the call to help like these divers did, that kind of behaviour is unconscionable.

            Underwater rescue is hard - ask any diver that has done, e.g., the PADI rescue diver course. Doing this in more technical environments is even harder and underground harder again.

            1. PhilBuk

              Side mounted bottles are normal for cave diving in the UK.

              Phil. (ex CDG).

          2. Claverhouse Silver badge

            @Prst. V.Jeltz

            I think they should have made him go caving and crawl through a twisty 6" height passage , and then see if he still thinks a mini sub is a good idea.

            Give the devil his due, he watched a lot of late '50s and early '60s documentaries from Disney on miniaturization beforehand.

  9. Ken 16 Silver badge
    Terminator

    Excellent!

    That will push up contract rates for the rest of them.

  10. Teiwaz

    Can't take Elron seriously anymore

    Will he be swearing off weaponised stroppy-child behaviour, after having his useless, ill-conceived toy rejected as unfit for purpose by experts?

    After that, and given his attitude, I have severe doubts about everything else he creates.

  11. Anonymous Coward
    Anonymous Coward

    Is a landmine a "lethal autonomous weapon"?

    If so, this "pledge" could be seen as a generalisation of various anti-landmine initiatives which have been rumbling along for decades. I don't know to what extent the anti-landmine initiatives were effective, but I've not noticed a huge number of competent people claiming they were a waste of time, so perhaps this thing isn't a complete waste of time either.

  12. Tigra 07
    Pint

    xkill Terminator

    All well and good but what about the big elephant in the room? Linux

    Linux is what future terminators run on (as evident in the films) and what are they doing about this?

    1. Teiwaz
      Linux

      Re: xkill Terminator

      All well and good but what about the big elephant in the room? Linux

      Linux is what future terminators run on (as evident in the films) and what are they doing about this?

      Firstly :

      If you can't tell a Penguin from an Elephant, there's no hope for you. Penguin research

      Additionally :

      Well, if you are going to have a potential AI killing machine, wouldn't you want it open source and not a closed proprietary system?

      John Connor wouldn't have been able to reprogram the T-800 and send it back to protect Sarah otherwise.

  13. Anonymous Coward
    Anonymous Coward

    AI will line up the Politicians first

    That will give the rest of us half a chance to make for the hills.

    1. Chris G

      Re: AI will line up the Politicians first

      Once the conflict has started, AI will target anyone who is armed. If I were the NRA in the States I would be thinking about the right to bear EMPs.

      1. Wellyboot Silver badge

        Re: AI will line up the Politicians first

        >Once the conflict has started, AI will target anyone who is armed. If I were the NRA in the States I would be thinking about the right to bear EMPs.<

        With EMP-hardened military hardware about for decades, we'll need to come up with a really clever plan to avoid the AI tanks with airborne support while we make single-use weapons in 1,000s of different shapes.

        1. Flakk

          Re: AI will line up the Politicians first

          Thanks to adversarial perturbations, we'll be able to convince the AI that our gun is really a turtle... a fire-spewing machine turtle.

          Will it dream of Mecha Gamera?
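          (The turtle-for-a-rifle gag nods to real adversarial-example research. As a heavily simplified, purely illustrative sketch - a toy logistic "classifier" with made-up weights, no relation to any real vision model - a small FGSM-style nudge to the input is enough to flip the predicted class:)

```python
import numpy as np

# Toy logistic "classifier" with made-up weights: score > 0.5 means "rifle".
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Sigmoid of a linear score."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([2.0, 0.5, 1.0])   # an input the model scores as "rifle" (~0.83)

# FGSM-style perturbation: step each feature against the gradient of the
# score (for a linear model the gradient is just w), bounded by epsilon.
eps = 1.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # ~0.83 -> "rifle"
print(predict(x_adv))  # ~0.03 -> "turtle"
```

          (Real attacks do the same against deep networks, keeping the perturbation small enough that the image still looks like a turtle to a human.)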

  14. spold Silver badge

    Intelligence

    Person: Do you promise not to hurt anyone?

    AI: I can't foresee any situation where I would do that

    Person: Are you sure?

    AI: Definitely. I swear on the holy operations manual, I can't foresee that

    ....Which one is intelligent....

    1. Tigra 07
      Coat

      Re: Spold

      The AI. It didn't promise not to hurt anyone in your example. It dodged the question.

      Saying you could do something isn't the same as actually doing it.

      Did I win?

  15. Anonymous Coward
    Anonymous Coward

    It would help ...

    if they could demonstrate a real functioning AI. At the moment all we have is a load of marketing hype trying to call programmed responses AI, which they are not. All this is, is Musk and friends stroking their egos and virtue-signalling to show how 'good' they are.

    1. vtcodger Silver badge

      Re: It would help ...

      "if they could demonstrate a real functioning AI. At the moment all we have is a load of marketing hype"

      I want to agree with you, but it crosses my mind that Google, for all its faults, seems to do a fantastic job of despamming my Gmail without discarding legitimate messages. Maybe that's not really AI. But whatever it is, it works.

  16. Tikimon
    Facepalm

    Theater of the impotent

    Jeebus, I wish I had a dollar for every time somebody or other "called for" or "pledged to support" strong measures, immediate action, immediate inaction, etc. over the outrage of the day. And you know what? You can count on the fingers of my foot (sarcasm) how many actually made the least difference in the world. These are polite threats, and as everyone knows a threat is only issued by the powerless. Those who can act effectively do so without angry blathering.

    It was once a pretty unspeakably evil job to beat naked people into gas chambers. The SS didn't have any trouble filling those jobs. Tech entities can sign pledges all day and there will be at least one company willing to go full speed ahead. Several already make tools of oppression for sale to any nasty customer with cash. It's not a big step to H-K bots. Call Cellebrite, they might already be on it.

    No real story here folks, move along... move along...

  17. Anonymous Coward
    Anonymous Coward

    Elon Musk swears off lethal AI

    So, the Tesla 'Autopilot' systems will be remotely-disabled this weekend?

    That's too bad. Given another 10 to 15 years of development, they might have perfected it to the point where it could avoid crashing into the blatantly obvious.

  18. Alistair
    Windows

    It is somewhat interesting.

    Reading through this thread and realizing how many folks on here read only one side of any story.

    That said:

    WMD treaties are there to provide a structure for getting around to removing the real nasties from the 'war' equation. And don't get me wrong here: the folks who go out and organize those are doing so mostly from good intentions. The issue is that if one reads through the WMD treaties and the articles of war relevant to these things, the signatories and non-signatories for some of them make for exceptionally ironic news articles at times. (The US, China, and Israel come to mind as having ironic media; at least on several fronts the Russians of late have been brutally forthright.)

    The really, really interesting question here is why war is always meant to be fought by soldiers on the battlefield, in the air, or on the waves. Why do governments pay corporations ridiculous amounts of money for the hardware to equip those soldiers? Why has there been an active war *somewhere* on the planet for the last 70-plus years?

    (playing quietly in the background, Edwin Starr track)

    (why yes, I DO think that if countries end up going to war we should have the leaders that make that decision battle to the death in unarmed, naked, mineral oil coated hand to hand combat)

  19. Deltics
    Coat

    Elon Musk, of Tesla autonomous vehicles fame, claims he will never develop autonomous weapons.

    I'll just leave this here.

    https://edition.cnn.com/2017/03/22/world/vehicles-as-weapons/index.html

  20. MaldwynP
    Thumb Up

    The future

    Our labs invented a new advanced cognitive ethics chip last year. We have managed to get it running in an independent robot powered by a standalone solid-state extended battery pack. As part of its "learning" stage we allowed it to choose its own name, and it went for Pol Pot. Do you think we should turn it off now (while we can)?

  21. Brian Allan 1

    Probably the biggest mistake in history!! China, India and Russia (just to name a few) will walk away with this segment of AI and leave everyone else sadly wanting...

    1. Anonymous Coward
      Anonymous Coward

      China, India and Russia (just to name a few) will walk away with this segment of AI and leave everyone else sadly wanting

      SGR-A1. Korea.
