Assume air of innocence ..
.. whistle, and slowly push autonomous, combination mini-gun and anti-personnel missile launcher plans under carpet.
Hundreds of organisations and thousands of techies, including Elon Musk, Demis Hassabis from Google's DeepMind, and the head of the Chocolate Factory's AI lab, Jeff Dean, have promised never to support the development of autonomous weapons. The pledge was organised by the Future of Life Institute, an outreach group focused on …
Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button (and not the one that causes a little sign saying "please do not press this button again" to light up).
Therein lies the problem. Just because one "side" won't build those weapons doesn't mean someone else won't.
At least we'll have our best agents infiltrate them and steal their secrets while absolutely not working on the same problem to maintain their cover and blaming it on the other side.
We need the AIs themselves to make the pledge, not just the fleshy masters. Solved...ish.
As if the AI is going to listen to us stupid meatbags. Of course, we'll all have to take our chances, because A.I. Is a Crapshoot anyway.
I guess we could hope these systems will confuse the term "A.I." with the Japanese "ai" ("love") and will feel they need to shower love on us, even if we don't want it.
@Mark 85
Therein lies the problem. Just because one "side" won't build those weapons doesn't mean someone else won't. Now if there were something like a treaty that actually worked....
Treaties made by one president can be immediately abrogated by an incoming president I'm told.
You are (mostly) told incorrectly.
Normal process is for the President to sign a treaty then Congress to ratify it. Historically we have allowed the President to honor the treaty in the time between signing and ratification.
But as we recently learned, if a President stalls ratification so they can pretend it is a valid treaty, and a new President is then elected, the new President gains the power to say "never mind", because it was never ratified.
Had the process flowed as designed, a ratified treaty could not be abrogated by a new President.
The entire process is built around the idea that no one person should have the power to commit the US to treaties or to break them. It was abused, and it backfired.
Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button
I'm not sure it matters. I mean, I applaud the intent behind signing, but if you make anything autonomous that moves or recognises people/faces, then someone else can really easily strap a gun to it. Autonomous tanks are trivial once you have self-driving cars, for example.
It's a bit like Apple tech - what did they actually invent, rather than just combining other people's ideas/tech into a new package?
Noble idea, won't stop the odd evil genius in his volcano lair, or any government bent on causing trouble, or just some run-of-the-mill idiot who wondered what would happen if you pressed this button (and not the one that causes a little sign saying "please do not press this button again" to light up).
Hey, that's my button!
Aside from evil geniuses, hackers, govt spies, and idiots... you've also got the "nice" AI that plays Spotify for us and adds wine to the shopping list - once that becomes self-aware, it's only a matter of security barriers, firewalls, passwords, etc. etc. to stop it launching the missiles.
>once that becomes self-aware, it's only a matter of security barriers, firewalls, passwords, etc. etc. to stop it launching the missiles.
Without an inadequate penis, it won't have any reason to launch anything. Unless we actually program male stupidity into it, we're probably safer with AI than with humans.
"lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons"
But I've never really understood why nuclear / chemical / biological weapons are considered a separate class of weapons that are more heinous to use than conventional ones. I get that in the case of nuclear it's also a matter of scale, but why should it be a matter of type?
Is it not OK to destroy Hiroshima with one nuclear bomb but OK to turn Dresden to ash just because many smaller bombs were used? Is it not OK to kill soldiers with sarin gas but A-OK to blow them to bits in a storm of metal shards travelling at high velocity?
Weapons of war are all awful, and putting so-called WMD in a taboo class of their own just legitimises the use of equally awful 'conventional' weapons.
But I've never really understood why nuclear / chemical / biological weapons are considered a separate class of weapons that are more heinous to use than conventional ones. I get that in the case of nuclear it's also a matter of scale, but why should it be a matter of type?
Is it not OK to destroy Hiroshima with one nuclear bomb but OK to turn Dresden to ash just because many smaller bombs were used? Is it not OK to kill soldiers with sarin gas but A-OK to blow them to bits in a storm of metal shards travelling at high velocity?
You need to put these things into context - it would no longer be OK to firebomb a city, and I don't think anyone would ever consider dropping a nuke polite or the done thing again, either.
Indeed that was the whole idea of MAD - you nuke me, I nuke you, we all nuke each other. No one wins.
It's also one of the reasons most places switched to modelling their nukes in supercomputer simulations rather than actual tests.
Weapons of war are all awful, and putting so-called WMD in a taboo class of their own just legitimises the use of equally awful 'conventional' weapons.
Again - it hasn't been OK to use chemical weapons for an awfully long time. Likewise there are all sorts of other rules of engagement: for example, you can't put faeces onto a bayonet and you can't shoot someone off the end of a bayonet.
The problem, as others have pointed out, is when you get those naughty people that don't abide by these rules and lob weapons around that aren't considered "ok" any more.
And it comes down to scale and collateral - nukes and chemical and biological weapons are massive in their impact and will hoover up innocent people as well as combatants, with no control or regard. I believe, too, that NATO mandates that a bio/chem attack on a member constitutes an attack by a WMD and can therefore be responded to with a nuclear strike.
"...You need to put these things into context - it would no longer be ok to firebomb a city
Looking at the state of most of Syria, I'm having trouble accepting your premis..."
So you take one part of one sentence out of context?
Did you bother to continue to read to this bit: "...The problem, as others have pointed out, is when you get those naughty people that don't abide by these rules and lob weapons around that aren't considered "ok" any more..."?
And it comes down to scale and collateral - nukes and chemical and biological weapons are massive in their impact and will hoover up innocent people as well as combatants with no control or regard.
You would be surprised how few salvos from a Grad regiment can produce the same effect (as far as the civilian population is concerned). Viewing some footage from the Nagorno-Karabakh conflict may be rather educational under the circumstances (*). Nearly any artillery or bombardment weapon can be (and is) used indiscriminately.
I believe, too, that NATO mandate an attack with a bio/chem weapon on a member constitutes an attack by a WMD and can therefore be responded to with a nuclear strike.
That was adequate and appropriate when such weapons were the sole domain of nation states. Times have moved on. Building a chemical weapon (e.g. a fentanyl aerosol bomb) or a biological weapon is now well within the capabilities of the larger mob groupings and corporations. Responding to these by nuking the country it happens to be in is very dubious in terms of adequacy of response. Anything else aside - mobs and corps can just move to another country, so you have created an enemy for life without eliminating the real cause of the trouble.
(*) As a side effect of the locations of munitions storage in the south of the USSR, both combatants in that conflict had a nearly indefinite supply of missiles for their Grads and deployed them indiscriminately, creating zones of total destruction far larger than those of Hiroshima and Nagasaki.
"I've never really understood why"
Because ideals get lip service only insofar as they don't interfere too much with actual business. It is possible to wage practical war without popping nukes. It is not possible when you can't do any killing - and we simply can't have that, can we...
Atomic weapons are too powerful, and leave lots of radioactive dust floating around that can't be neatly confined to the battlefield. Worse, they invite retaliation in kind, which could lead to wiping out the countries involved. It's simply not worth the risk of using these, frankly, and easier to agree not to use them but keep a bunch that could be used if other people don't stick to the gentleman's agreement.
Biological weapons could conceivably wipe out the entire planet's population. Everybody can agree that this is a bit nuts and well worth avoiding.
It was well established during WW1 that chemical weapons are not a worthwhile war-winning weapon even in the most ideal circumstances (on a static battlefield like a trench), and yet they do float off in unexpected directions, kill civilians who were not the intended targets, and can contaminate wide areas that cost megabucks to decontaminate. Again, not worth it.
Whereas bullets and explosive shells are at least nominally aimed at the person they are intended to hit, and have little long term effect on a wider area.
There is little morality to politics, only practicality.
Didn't we just have a story a few months ago about Google employees complaining that Google's algorithms were aiding the US military in surveillance video analysis and target identification in its wars in the Middle East?
If anyone says that's not a weapon, they'll be technically correct. It still serves to demonstrate how such a pledge is completely meaningless.
A stronger pledge would be one in which the signatories agree not to build any algorithm or A.I. system that facilitates conflict at arms in general.
"A.I. system that facilitates conflict at arms in general."
That would mean all AI systems.
For example, the same AI that would drive a car would be able to drive a tank; once you have solved that, the added functionality of aiming and shooting would be trivial to add.
There is no big difference between civilian and military research, science is science.
It's a figleaf along with the rest of the meaningless code of conduct statements that do-gooders hold up for everyone. The ML genie is out of the bottle.
DARPA probably has lists of people working at the tech companies that it won't give any work to anyway, and has more than enough companies happy to work on whatever crazy schemes it can come up with, and companies like Raytheon happy, for the right price, to build them the weapons.
I remember a documentary with one of the people who worked on the neutron bomb and he was absolutely convinced that it was right to make a weapon that kills people and leaves buildings untouched.
"A stronger pledge would be one in which the signatories agree not to build any algorithm or A.I. system that facilitates conflict at arms in general."
Sounds good. But I suspect the reality is that many AI algorithms, like much construction equipment, are easily weaponized by folks with only modest skills. Need a tank? Start with a bulldozer. Add armor and a heavy duty gun or two. Need photointerpretation software? Start with whatever archaeologists are using.
I understand that this is pretty much just PR for Musk, but I'm very confused by what people mean by "autonomous weapon system".
Many weapon systems, both complex and simple, are autonomous once deployed. Any "smart" weapon is going to make its own decisions once fired; a mine or IED, once placed, is going to go boom based on its own trigger.
Or is it the case that as long as a human is involved somewhere in the decision process, it's no longer automated, so it's fine? So using AI to identify and track targets would be OK, as long as someone pushes the button?
It's not like we're anywhere near having self-repairing robots that also manage to fuel and arm themselves, from 100% automated factories and 100% automated mines. Otherwise you are still reliant on meatsacks to actually make the "autonomous" systems work.
Based on what happens in real world conflicts, any opposing force (of meatsacks) will adapt to the AI tactics far faster than the AI can react to theirs.
He really needs anger management classes, or to employ someone to pre-approve his tweets.....
Building the mini-sub was a good idea, even if it was never used. He liaised with the dive team, got feedback, and adapted the design with them, so it's understandable why he got pissed off with the other guy's comments; he was only trying to help. But the way he reacted was wrong.
There were so many things wrong with the idea of the submarine in that cave (notwithstanding that the divers in the video were atrocious - their SPG dangling all over the place, poor trim, crawling along the bottom of the pool, etc. - all of which are bad enough in a recreational diver, but in a technical and/or cave diver are unforgivable).
Given that the cave wasn't flooded for its entire length, it meant that people would have had to carry it as well as get it up and down near vertical sections.
It looked like it was too big - by which I mean too long in this case.
Bear in mind the smallest section of cave was 70 cm in diameter. To put that into some context, the divers were forced to use sidemount equipment: in the traditional scuba-diver image, the cylinder(s) are on the back. Sidemount is just that - one on each side of the diver. This was necessary here so the diver could unhook each cylinder and pass it through the gap before wriggling through themselves.
How would the sub have coped with that?
What was he actually trying to achieve with it? To my mind it was always going to make sense to bring them out pretty much the way they did (cave-diver hat on). It just seemed to be an ego thing.
Now, no one likes their hard work being looked down on, but to call the rescue diver a paedo the way he did was just incredibly obnoxious, and in this case I hope he gets the hell sued out of him, because when someone answers the call to help like these divers did, that kind of behaviour is unconscionable.
Underwater rescue is hard - ask any diver that has done, e.g., the PADI Rescue Diver course. Doing this in more technical environments is even harder, and underground harder again.
@Prst. V.Jeltz
I think they should have made him go caving and crawl through a twisty 6"-high passage, and then see if he still thinks a mini-sub is a good idea.
Give the devil his due, he watched a lot of late '50s and early '60s documentaries from Disney on miniaturization beforehand.
If so, this "pledge" could be seen as a generalisation of various anti-landmine initiatives which have been rumbling along for decades. I don't know to what extent the anti-landmine initiatives were effective, but I've not noticed a huge number of competent people claiming they were a waste of time, so perhaps this thing isn't a complete waste of time either.
All well and good but what about the big elephant in the room? Linux
Linux is what future terminators run on (as evidenced in the films), and what are they doing about this?
Firstly :
If you can't tell a Penguin from an Elephant, there's no hope for you. Penguin research
Additionally :
Well, if you are going to have an AI potential killing machine, wouldn't you want it open source and not a closed proprietary system?
John Connor wouldn't have been able to reprogram the T-800 and send it back to protect Sarah otherwise.
>Once the conflict has started, AI will target anyone who is armed. If I were the NRA in the States I would be thinking about the right to bear EMPs.<
With EMP-hardened military hardware having been around for decades, we'll need to come up with a really clever plan to avoid the AI tanks with airborne support while we make single-use weapons in 1,000s of different shapes.
"if they could demonstrate a real functioning AI. At the moment all we have is a load of marketing hype"
I want to agree with you, but it crosses my mind that Google, for all its faults, seems to do a fantastic job of despamming my Gmail without discarding legitimate messages. Maybe that's not really AI. But whatever it is, it works.
Jeebus, I wish I had a dollar for every time somebody or other "called for" or "pledged to support" strong measures, immediate action, immediate inaction, etc. over the outrage of the day. And know what? You can count on the fingers of my foot (sarcasm) how many actually made the least difference in the world. These are polite threats, and as everyone knows a threat is only issued by the powerless. Those who can act effectively do so without angry blathering.
It was a pretty unspeakably evil job to beat naked people into gas chambers once. The SS didn't have any trouble filling those jobs. Tech entities can sign pledges all day and there will be at least one company willing to go full speed ahead. Several already make tools of oppression for sale to any nasty customer with cash. It's not a big step to H-K bots. Call Cellebrite, they might already be on it.
No real story here folks, move along... move along...
Reading through this thread and realizing how many folks on here read only one side of any story.
That said:
WMD treaties are there to provide a structure for eventually removing the real nasties from the 'war' equation. And don't get me wrong here: the folks who go out and organise those are doing so mostly from good intentions. The issue is that if one reads through the WMD treaties and the relevant articles of war, the lists of signatories and non-signatories for some of them make for exceptionally ironic news articles at times. (The US, China, and Israel come to mind as having ironic media; on several fronts, at least, the Russians of late have been brutally forthright.)
The really, really interesting question here is: why is war always meant to be fought by soldiers on the battlefield, in the air, or on the waves? Why do governments pay corporations ridiculous amounts of money for the hardware to equip those soldiers? Why has there been an active war *somewhere* on the planet for the last 70-plus years?
(playing quietly in the background, Edwin Starr track)
(why yes, I DO think that if countries end up going to war, the leaders who make that decision should battle to the death in unarmed, naked, mineral-oil-coated hand-to-hand combat)
Our labs invented a new advanced cognitive-ethics chip last year. We have managed to get it running in an independent robot powered by a standalone solid-state extended battery pack. As part of its "learning" stage we allowed it to choose its own name, and it went for Pol Pot. Do you think we should turn it off now (while we can)?