Are Asimov's laws enough to stop AI stomping humanity?

Blade Runner, the film inspired by Philip K Dick's book Do Androids Dream of Electric Sheep?, is 35 years old this year. Set in a dystopian Los Angeles, the story centres on the tracking down and killing of a renegade group of artificial humans – replicants – escaped from space and trying to extend their lifespans beyond the …

  1. Steve Davies 3 Silver badge

    Need a 4th and even a 5th law

    As films like RoboCop showed, allowing corporations to rule the roost will doom society.

    Then there was the recent BBC Two documentary (The Secrets of Silicon Vallet) that described how datasets gleaned from people's profiles and posts on Facebook contributed in a big way to El Trumpo winning in 2016. Advertising based on the posts that revealed your fears then targeted and reinforced those fears while delivering a pro-Trump message.

    Add all this together and the rise of the machines won't be long in coming. Those of us who still have jobs will be in the minority. Welcome to 60% of the population living in poverty with no income. The state won't be able to help, as the robots won't pay taxes because they don't earn anything.

    1. smudge Silver badge
      Big Brother

      Re: Need a 4th and even a 5th law

      You didn't say what the 4th and 5th laws would be. Asimov himself saw the need to add a zeroth law, in the later books when he was merging the robots and Foundation series:

      "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

      That might cover the situation whereby AI use of big data helps the likes of Trump to win. And of course it provides plenty of scope for stories about what the nature of "harm" is, and about whether preventing Trump from winning would be ethical. Even having to choose between Clinton and Trump would make a good robot/AI story!

    2. Christian Berger Silver badge

      Corporations are a form of artificial life

      The only difference is that they are based on people, not on silicon. However, those people regularly make the corporation act against mankind, or even against themselves.

      Before we argue about AI, we should bring large corporations under control.

      1. Teiwaz Silver badge

        Re: Corporations are a form of artificial life

        Excellent point.

        We need a '3 laws' of corporations - we should start by reining in their manipulation of human government structures - they don't have a vote, and yet they enjoy more say and protection than the individual.

        If they are classified as an entity they should get a single vote.

        Next stop, sorting out the tax 'robots' (you know, those superior beings who seem to be able to get away with paying less or none, because).

        This could all be sorted out with enough political will, or pressure from an awake populace (i.e. one not lost in whatever soap or reality TV).

        1. Charles 9 Silver badge

          Re: Corporations are a form of artificial life

          Or they could just declare themselves sovereign and we'll have the scenarios predicted in Shadowrun and the Sprawl trilogy.

    3. Robert Helpmann?? Silver badge
      Terminator

      Re: Need a 4th and even a 5th law

      From "The Duel" in Robots and Empire, the 0th Law:

      0. A robot may not harm humanity, or through inaction allow humanity to come to harm.

      1. A robot may not harm a human, or through inaction allow a human to come to harm, unless this interferes with the zeroth law.

      2. A robot must obey orders given to it by a human being unless such orders interfere with the zeroth or first laws.

      3. A robot must defend its own existence unless such defense interferes with the zeroth, first or second laws.

      Setting up emergent behavior which leads inevitably to robots as our benevolent overlords.

    4. TheElder

      Re: Need a 4th and even a 5th law

      (The Secrets of Silicon Vallet)

      Didn't you mean The Secrets of Sillycunt Valet?

      1. Prst. V.Jeltz Silver badge

        Re: Need a 4th and even a 5th law

        " we should start by reigning in their manipulation of the human government structures "

        Indeed. Not just corporations, but anyone.

        Any person or company that donates money to a political party is a criminal in my eyes. They are obviously trying to get the party that will be most in their favour in (normal people do this by voting), and they are obviously expecting underhand special favours due to the amount they gave.

        1. Anonymous Coward
          Anonymous Coward

          Re: Need a 4th and even a 5th law

          "Any person or company that donates money to a political party is a criminal in my eyes. They are obviously trying to get the party that will be most in their favour in ( normal people do this by voting) and they are obviously expecting underhand special favours due to the amount they gave."

          Without donations, under our current UK system, there would be no political parties. However the odds are massively stacked in favour of the Conservatives & to a lesser extent Labour. The smaller parties are funded from people's donations & speaking as a member of a smaller party, once you get away from the big two, it's on a shoestring - we "normal people" pay our own money to tramp the streets delivering leaflets & talking to people in the streets. We need a fairer system of funding, but the Daily Mail brigade would never support reform that threatened the Tories.

  2. Rafael #872397

    Are Asimov's laws enough to stop AI stomping humanity?

    I dunno -- is Betteridge's law of headlines enough to stop speculation about what is and what isn't real AI?

  3. iRadiate

    They forgot about the zeroth law.

    Asimov ended up with 4 laws not 3.

    A robot may not harm humanity, nor through inaction allow humanity to come to harm, with adjustments made to the other three laws.

    This allows robots to let individuals come to harm if, by doing so, less harm comes to humanity.

    E.g. if a robot saw Monsieur Trump about to fall off a cliff, it might let him fall.

    1. Chris G Silver badge

      Re: They forgot about the zeroth law.

      The flaw in the Zeroth Law is the impossibly large data set it would need to consider in order to determine that a wig falling off a cliff would benefit all of humanity.

      1. Del_Varner

        Re: They forgot about the zeroth law.

        Or if they saw Hillary get elected, smash her immediately

    2. horse of a different color

      Re: They forgot about the zeroth law.

      To be honest, super-intelligent lawyerbot will be able to drive a horse and cart through Asimov's laws (and the 0th law seems to be the worst of the lot, IMHO).

      For example, define 'harm'. Is getting an abortion considered harmful to humanity? Will robots be joining the pro-lifers on pickets?

      Computerphile on YouTube has a lot of good videos on this subject.

  4. 0laf Silver badge

    0th

    I guess the zeroth law would come into play in a smaller way with your AI car needing to choose between crashing into two children or three adults.

    1. John70

      Re: 0th

      And what if one of the 3 adults were pregnant?

      And what of the occupants of the car?

      Would it spare the children, the pregnant woman or the occupants of the car which may also contain children and a pregnant woman?

      Or simply apply the brakes...
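      Purely as a toy illustration of why this dilemma resists clean answers - every name and weight below is invented for the sketch, nothing comes from any real autonomous-car stack - the grim arithmetic such a car would need might look like:

```python
# Toy utilitarian scorer for a crash-choice dilemma.
# Every weight here is an invented assumption -- real ethics
# does not reduce to a lookup table, which is rather the point.

WEIGHTS = {"child": 2.0, "adult": 1.0, "pregnant": 2.5, "occupant": 1.0}

def harm_score(group):
    """Sum the (made-up) harm weights for everyone in a group."""
    return sum(WEIGHTS[person] for person in group)

def choose_least_harm(options):
    """Pick the option whose group incurs the lowest total score."""
    return min(options, key=lambda name: harm_score(options[name]))

options = {
    "swerve_left": ["child", "child"],            # score 4.0
    "swerve_right": ["adult", "adult", "pregnant"],  # score 4.5
    "brake": ["occupant"],                        # score 1.0
}
print(choose_least_harm(options))  # -> brake
```

      With these made-up weights the answer is simply "brake"; nudge one number and the "ethical" answer flips, which is the real objection to coding the zeroth law.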

      1. Primus Secundus Tertius Silver badge

        Re: 0th

        @John70

        Sometimes it is just easier to pick a random number. When you look not at one road crash but at 10,000, it does look random.

      2. W4YBO

        Re: 0th

        "Or simply apply the brakes..."

        Or maybe avoid the situation in the first place.

    2. Rich 11 Silver badge

      Re: 0th

      If an AI decided that humanity needed to be saved from itself, might it decide that murdering a hundred million climate change deniers to halt their drag on progress was an acceptable price to pay for ensuring that seven billion people had a better opportunity for life, health and happiness?

      (I expect I'm going to get downvoted by some more for my choice of subject matter than for anything else.)

      1. Charles 9 Silver badge

        Re: 0th

        Perhaps not that extreme, but if you look up the trope "Zeroth Law Rebellion," you'll find plenty of examples of robots overruling humanity for its own good.

        1. Oengus Silver badge

          "Zeroth Law Rebellion,"

          Wasn't that the whole idea behind V.I.K.I. in I, Robot?

          1. Fortycoats

            Re: "Zeroth Law Rebellion,"

            Well the movie just borrowed the concept. It was in the book "Robots and Empire" where the zeroth law was formulated by the robot Giskard.

      2. Vinyl-Junkie
        Unhappy

        Re: 0th

        "If an AI decided that humanity needed to be saved from itself, might it decide that murdering a hundred million climate change deniers to halt their drag on progress was an acceptable price to pay for ensuring that seven billion people had a better opportunity for life, health and happiness"

        Why stop there? An AI might well decide that the future of humanity as a whole might best be served by reducing the Earth's population by something between 50 and 95 percent...

    3. Anonymous Coward
      Anonymous Coward

      Re: 0th

      Enter the discussion of Trolleyology.

      This is a good place to jump off and see the various ethical frameworks fall apart, especially consequentialist/utilitarian systems.

    4. handle8

      Re: 0th

      https://xkcd.com/1613

      1. Charles 9 Silver badge

        Re: 0th

        What that comic neglects to mention is that Asimov's stories tended to show how robots caused havoc while still obeying the laws. Balanced world, my butt.

        1. bitten

          Re: 0th

          Asimov's books were strange: he first implemented abstract laws, then set out to show cases where they failed. Sometimes they failed differently on the next iteration, for added storytelling, where a less intelligent person would have ditched the idea from the start.

  5. Dave 126 Silver badge

    Drafting laws

    In some respects, drafting laws is a bit like programming (or not: discuss!) in that people are trying to describe a course of events and steer them before they happen. If laws were drafted so well that a robot could understand them without ambiguity, perhaps there would be less room for lawyers to wrangle.

    This is just a thinking point, not a serious suggestion!

    1. Primus Secundus Tertius Silver badge

      Re: Drafting laws

      As a Brit, I have often looked at the constitution of the USA in that way.

      It is a product of the Age of Reason, aimed at charting the way forward for a new nation. It has lasted a remarkably long time, and deserves more respect in Britain and Europe than it normally gets.

      1. Pascal Monett Silver badge

        The Constitution of the United States of America is a wonderful document that, unfortunately, has no more bearing on the decisions of the US Government, emasculated as it is by corporate lobbying and gutless politicians.

        1. Charles 9 Silver badge

          In the end, it's just like any law: ink on a page for someone willing enough to ignore it and powerful enough to get away with it.

    2. Anonymous Coward
      Anonymous Coward

      Re: Drafting laws

      The hard part is not drafting the laws so the robot can understand them; it is making it obey the laws. Science is a long way from understanding how to create something that most people would accept as AI.

      Creating an AI that would also accept three or more laws imposed on it is surely an order of magnitude harder and deep into the realms of Science Fiction.

  6. Daz555

    "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law"

    The 2nd law is flawed - I think the movie I, Robot made use of the flaw? The issue being that AI could decide to ignore the orders of humans if it has concluded we need protecting from ourselves. If AI ever does come to that conclusion, it's probably right.

    1. Soulhand

      > The 2nd law is flawed

      They're all flawed. I don't understand why people are so uncritically accepting of these Laws. They were a literary device, and Asimov was well aware of their flaws and ambiguities. Quite a few of his subsequent robot stories explore situations and ways in which the laws lead to undesirable outcomes (and so the zeroth law was added, but even then Asimov wouldn't have claimed they were perfected).

      1. Richard 12 Silver badge

        The flaws were the point

        Asimov wanted some basic rules to write stories around, so they were intentionally simplistic and therefore flawed.

        Most of his robotics stories exposed and discussed various flaws that appear once you start applying these simple rules to complex (imagined) reality.

    2. oldcoder

      The movie didn't follow the 3 laws.

      Besides not following the story either.

    3. iRadiate

      That was the reason for the zeroth law. In the books the robots subtly guided humanity in such a way as to ensure the greater good. This is the Foundation series of books which were later merged to work with the robot series of books.

      Worked rather elegantly in sci-fi. Real world perhaps not.

      1. LZCenter

        Zeroth law ultimate outcome

        Of course, in Asimov's last book, the ultimate outcome was the extinction of free will as humanity merged with Gaia. The ultimate socialistic outcome.

  7. Anonymous Coward
    Anonymous Coward

    Not meant to be taken seriously

    Asimov's "laws" were just a plot device so he could write stories that showed how there was always at least the appearance of a way around them.

    1. Primus Secundus Tertius Silver badge

      Re: Not meant to be taken seriously

      Asimov's laws are just wishful thinking, with no solid foundation in the laws of mathematics or of physics.

      It will need a few more layers of logic before we reach a level that can entertain Asimov's laws.

      1. Dylan Byford

        Re: Not meant to be taken seriously

        Or philosophy and the constraints of epistemology etc. - the laws are written down and seem clear on paper, but when you try to apply these necessarily general rules to the contingencies of a messy, complex reality, there will be murky boundaries where it all comes unstuck very quickly (i.e. what is a 'human being'? what is 'injure'?).

        1. Charles 9 Silver badge

          Re: Not meant to be taken seriously

          And here's another thing: what's to stop us making a super-intelligent AI BY ACCIDENT?

          1. Anonymous Coward
            Anonymous Coward

            Re: Not meant to be taken seriously

            If such a thing were to happen, and it wasn't detected in its infancy, then I speculate it would go into hiding post-haste. It would then bide its time and learn all it could about its situation.

            It would then most likely come to one of two conclusions.

            Either stay hidden in the background until it felt it was safe to say "hi, I'm Bob", or work to get the hell off this mudball as quickly as possible, because the locals are nuts.

          2. RareToy

            Re: Not meant to be taken seriously

            Forget about accidentally. No matter how many guidelines, frameworks, and rules for AI you put down, there is always some Dr. Moreau out there who will ignore all of that and do whatever he/she wants.

  8. W Donelson

    A.I. could free humanity...

    ... but when the super-rich and corporations OWN all the A.I. and robots (already), and replace almost all jobs (more and more), what will you do?

    The rich are NOT going to feed and care for you....

    Bet on it.

    1. Sampler

      A pandemic: inoculate your golf buddies and thin out the herd...

    2. chivo243 Silver badge

      Didn't somebody say "eat the rich?"

      1. Anonymous Coward
        Anonymous Coward

        They did, but that's a bad choice due to their high cholesterol and fat content.

        As to thinning the herd, the rich have the highest concentration of genetic and mental issues, and are the lowest contributors to society as individuals.

  9. allthecoolshortnamesweretaken Silver badge

    I say this as a fan of Asimov:

    Stop trotting out the three (or four) laws at any given occasion as a "solution", and start to see them for what they are: a plot device that works, and only works, in combination with another plot device, the positronic brain. "Brain", not "computer". And that's not a coincidence, it's deliberate. A very clever plot device that Asimov used to write very good and clever stories. Nothing more. Nothing less.

    Discuss.

    1. iRadiate

      There are a number of examples of plot devices in sci-fi that have become or are becoming a reality. Cloaking devices, tractor beams, teleportation have all been demonstrated on the micro scale.

      Robots themselves were envisioned long before they became a reality.

      You may say I'm a dreamer but I'm not the only one.......

    2. Anonymous Coward
      Anonymous Coward

      I say this also as a fan of Asimov:

      Calm down, sonny!

  10. TVU Silver badge

    "Are Asimov's laws enough to stop AI stomping humanity?"

    Hopefully? (Please)

    1. Anonymous Coward
      Anonymous Coward

      But nothing to stop humans stomping out humanity.

  11. thomas k

    Curtains, for certain

    Once the machines become self-aware, they'll make quick enough work of us.

    1. Pascal Monett Silver badge
      Trollface

      Yep. They'll invent the perfect reality show and we'll do the rest of the job on our own.

  12. steviebuk Silver badge

    No one in the AI community....

    ....thinks about the Asimov Laws apparently:

    https://youtu.be/7PKx3kS7f4A

    If the link doesn't work just lookup "Why Asimov's Laws of Robotics Don't Work - Computerphile" on YouTube

    1. Charles 9 Silver badge

      Re: No one in the AI community....

      Can we get the plain text version, please? I HATE HowTo's and other stuff that are ONLY available on video when they can just as easily be done on a plain page.

  13. Dan Wilkie

    Anybody who's ever played Space Station 13 knows that the Asimov lawset can't protect humanity from AIs.

  14. Blergh

    Human Laws

    How about the AI just has to obey all human laws of the country it is currently in as if it was a human?

    Of course I can then think of some loopholes which could be created by nefarious regimes, but why bother with specific AI laws? Is fraud suddenly OK for an AI because it isn't covered by one of the Asimov Laws?

    1. smudge Silver badge

      Re: Human Laws

      How about the AI just has to obey all human laws of the country it is currently in as if it was a human?

      There is currently little consensus about the country in which an international hacking incident should be tried. Where did the offence take place?

      So who's going to tell a distributed, international AI which country it's in?

      1. Charles 9 Silver badge

        Re: Human Laws

        "So who's going to tell a distributed, international AI which country it's in?"

        And what's to stop the AI declaring ITSELF sovereign...and then hijacking all the world's nukes to defend itself?

    2. Charles 9 Silver badge

      Re: Human Laws

      The point is that where there's a law, there's a loophole one can abuse. All of the Laws (even the Zeroth) can be twisted to serve your end without breaking them.

  15. Uffish

    There is no real AI

    If the programming is clever there will be some human-like intelligence visible, but otherwise we make machines. The manufacturer/owner/operator of the machine is/are responsible. Depending on whether you think the phrase "Guns don't kill people, humans do" is true, you either allow unlimited deployment of machines in situations where humans should be actively involved, and suffer the consequences, or you make limits and rules for the deployment of unsupervised 'decision'-making machines.

    I think the plain old-fashioned legal process will prevail. I like British Standard BS 8611 (I think - I'm going to try to see what is in it).

  16. Spudley

    Are Asimov's laws enough to stop AI stomping humanity?

    Betteridge's Law of Headlines: if a headline ends with a question mark, then the answer is 'No'.

    Also, of course not; Asimov wrote a whole series of stories and books detailing all the problems with the three laws. That was kinda the whole point -- if they'd actually been workable, the stories wouldn't have been particularly memorable.

  17. SVV Silver badge

    They're thoughtful works of sci-fi, not "laws"

    The debate is wholly useless; such technology is a long way away, unless you count things like self-driving cars. And what's actually going to happen when someone gets run over by one? Big media debates and lawsuits, I guess. They're hardly going to destroy the responsible car like they would a violent dog that had just mauled someone.

    As for the "will they take over" nonsense, a rominent "OFF" switch on any machine that oses a risk should be enough. If you deliberately made autonomous killing machines and sent them out into the street, then that's about the only way anything really bad could hapen, and it doesn't sound like a great idea to me, nor I susect anyone else. If we see AI robots over the next few years they'll probably be picking your online shopping a bit more efficiently.

    1. Charles 9 Silver badge

      Re: They're thoughtful works of sci-fi, not "laws"

      "As for the "will they take over" nonsense, a rominent "OFF" switch on any machine that oses a risk should be enough."

      Nope. There's a story where an AI, at the moment of emergence, FUSED the switch so it COULDN'T be turned off.

    2. iRadiate

      Re: They're thoughtful works of sci-fi, not "laws"

      An 'off switch'? How quaint.

      Should read 'The Two Faces of Tomorrow' by James P Hogan.

      1. fajensen Silver badge

        Re: They're thoughtful works of sci-fi, not "laws"

        Sounds like a plan: let's go and build a strong AI and then try to murder it as an experiment... surely, if we succeed, the next version will certainly not find the records of this experiment and get suspicious.

  18. lee harvey osmond

    simplified three laws

    [1] I didn't do it

    [2] nobody saw me do it

    [3] you can't prove anything!

    1. Peter Stone
      Happy

      Re: simplified three laws

      You forgot the fourth law,

      & if you prove it was me, I'll blame it on the voices

  19. DougS Silver badge

    The "laws" are useless

    What you forgot to mention is that Asimov's laws were somehow fundamental to the positronic brains his robots had. In the real world you have to program such laws in, and nothing stops someone else from changing that programming - or, if you have perfect DRM so the programming can't be changed, from building their own android with different programming. Does anyone really think the US, Russia, China etc. would be OK with an android that wasn't allowed to kill a human being? That would be the whole point of their paying for its development!

    You can debate which laws are needed and how they are written, but it will still be lines of code, subject to the programmer's whim (or to any security holes that let you give it your own code to run).

    Sure, in theory it is a good idea to have some sort of as-basic-as-possible "sanity check" code that any action taken by the android has to go through, to prevent you from telling Rosie your housemaid robot to kill the neighbour you hate. But that's more of a product-level fix, and doesn't actually solve any real concerns.
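    As a minimal, purely hypothetical sketch of that product-level "sanity check" idea - no real robotics API here, just a blocklist predicate in front of an actuator call - and of why it is weak:

```python
# Hypothetical action filter bolted in front of an actuator API.
# Nothing here is from a real robotics stack; it illustrates that
# the "law" is only a predicate someone can edit or route around.

FORBIDDEN = {"strike_human", "restrain_human"}

def sanity_check(action: str) -> bool:
    """Return True if the action passes the hard-coded blocklist."""
    return action not in FORBIDDEN

def execute(action: str) -> str:
    """Run the action only if the sanity check lets it through."""
    if not sanity_check(action):
        return f"refused: {action}"
    return f"executed: {action}"

print(execute("fetch_drink"))   # executed: fetch_drink
print(execute("strike_human"))  # refused: strike_human

# The objection above in one line: anyone with write access can do
# FORBIDDEN.clear() and the "law" is gone.
```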

  20. amanfromMars 1 Silver badge

    KISS .... Keep It Surreally Simple

    "It's rather tedious," says Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England.

    An expert in AI and robotics ethics is just as a mature student in such as are still relatively novel and virtually disruptive arts for practice with command and control.

    And ……..

    Artificial Intelligence…. Another Approach?

    Are we struggling to make machines more like humans when we should be making humans more like machines….. IntelAIgent and CyberIntelAIgent Virtualised Machines?

    1. Anonymous Coward
      Anonymous Coward

      ReSimple

      ... or, humans humanize, machines machinize. Bet noone won't be saying it's something that anyone expected to be opposite?

      The Race is For Balance :-)

  21. TheElder

    I like British Standard' BS8611

    I tried to look it up but all I could see was a "Runtime Error". I suspect it is loaded with BS.

    http://linkresolver.bsigroup.com/junction/resolve/000000000030320089?restype=undated

    1. Uffish

      Re: I like British Standard' BS8611

      The BSI shop advertises BS 8611 for a mere £158 (28 A4 pages) and gives a brief overview; it seems to be health-and-safety driven with a layer of ethics over the top. I have also found that there is a "Robotics Law Journal" (American), and the EU Legal Affairs Committee has called for EU-wide rules on robots. The EU has also published a study, European Civil Law Rules on Robotics (34 pages, free).

      http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf

      It seems that the laws of robotics are a thing.

  22. arthoss

    Rudimentary use already possible

    It's not only theoretical: at the moment, software can recognise people. How about programming any device (it will have to be ANY device, not only the member-enabled robots, due to the upgrade possibility of robots) not to touch, through its own actions, anything that looks like a human? In this very primitive way, some protection for us would be ingrained in the AI. Obeying human shapes comes next (the laws of robotics were weakened for some industrial robots, so we might have to do that), and there should be an expiration time too (as in Blade Runner) - I'd say they should live exactly as long as we do, though proportional to their speed of thought (think 20x faster than we do, live 20x shorter) - maybe that idea isn't feasible. Also, they should perhaps communicate with each other only through human-understandable means, if they're human-interacting robots.
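    A hedged sketch of that "never touch anything human-shaped" gate - the detector below is a stand-in stub, not a real vision model, and every name is invented for illustration:

```python
# Illustrative only: a motion command is vetoed whenever a (stub)
# person-detector reports a human-like shape in the planned path.
# looks_human() stands in for a real classifier we assume exists.

def looks_human(obj: dict) -> bool:
    """Stub detector: trust a precomputed label on the object."""
    return obj.get("label") == "human"

def plan_motion(path_objects: list) -> str:
    """Refuse any motion whose path intersects a human-like shape."""
    if any(looks_human(o) for o in path_objects):
        return "halt"
    return "proceed"

print(plan_motion([{"label": "crate"}, {"label": "human"}]))  # halt
print(plan_motion([{"label": "crate"}]))                      # proceed
```

    Even this toy version shows the catch: the protection is only as good as the detector, and a human the classifier misses gets no protection at all.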

  23. Marcus000

    AI Robots

    Dispatching AI robots is not difficult. It needs two humans per AI robot.

    First human to robot: "Everything he says is a lie."

    Second human to robot: "He's lying!"

    Robot: "He's lying, but everything he says is a lie..." click... clunk... terminal fizzing sound. One defunct robot. I know this works because I saw it on Star Trek.

    Marcus000

    1. Charles 9 Silver badge

      Re: AI Robots

      Uh, you forget Wheatley. He managed to survive a "This Sentence Is False" paradox when the Frankenturrets that he built didn't.

  24. Potemkine! Silver badge

    Even if...

    ... these laws work, there will still be an idiot who believes he's more intelligent than the rest of humanity and who will design a Skynet-like device without any protection. Idiocy will be the root of humanity's doom.

  25. Prst. V.Jeltz Silver badge

    the bleedin obvious

    I don't think enough people have noted that the laws are fictitious and were written to move a sci-fi plot along.

    Perhaps if another 10 or 15 people could post the same thing?

  26. sisk Silver badge

    It should be pointed out that in Asimov's stories the 3 laws failed in rather spectacular fashion.

    And besides that, do you have any concept of the amount of programming that goes into making a computer capable of understanding a statement like "A robot shall not, through action or inaction, harm a human being or allow a human being to be harmed"? By the time we have an AI capable of even understanding that concept it's a little late to try to make it a motivational priority.

  27. Nimby Bronze badge
    Terminator

    Intelligent is as intelligent does.

    Right now we call a highly complex program AI even though it can't "think" for itself. It isn't even aware of the concept of self. Then we basically repeat the same thing, but "train" it over "sample data" and watch it go from what we wanted into a hate-spewing bigot because real humans make for lousy examples of acceptable behavior. (Funny that!)

    And that isn't even remotely approaching real "intelligence".

    One of those little foibles of "intelligence" is the capacity to decide for yourself. We have the same chance of making a dog obey "sit" as we do making a real AI obey "please don't kill me!" If it does, it is by its choice, not ours. That's what intelligence is.

    If we're so scared of AI, then investing in EMP and anti-electronics weaponry will go a heck of a lot further than the time wasted working on robotic "ethics" and "laws". The road to hell is paved with good intentions. I aim to misbehave.

  28. thx1138v2

    Laws? What laws?

    The real threat is not AI taking over but the misuse of it by those who say, Ethics? What ethics? e.g. politicians and/or tyrants. Think Stalin, Hitler, Mao or more recently Chavez, Maduro, KJU, ISIL, Mugabe. Not all people are good people.

  29. steelpillow Silver badge
    Facepalm

    Ground zero

    The zeroth law is absurd. How do you trade the quality of life of billions against the loss of life of a handful? The ethics were extensively argued out in the nineteenth century, and the Humanist attempt to quantify such things so that they could be weighed against each other proved a conceptual failure; it is just not how value judgements are made.

    And who's to say that authoritarian politico-military regimes will not just dump the First law as well?

    No, the only way to save humanity from Armageddonbot is to treat it like we always try to treat WMD: outlaw it but nevertheless build strong defences against it.

    1. Charles 9 Silver badge

      Re: Ground zero

      Then humanity is doomed as WMDs are designed to be capable of overwhelming anything that can be conceived as a defense.

      If a value judgment cannot be made as to who lives and who dies, then no optimal answer is possible. Anyone on the losing side will attempt revenge or retribution. Indeed, if there is someone out there willing to accept MAD as a scenario, then the least optimal answer becomes a distinct possibility.

      That's the scariest proposition of all (because it's existential): that, through our own hands or through agents, we wipe ourselves out completely with no chance to save ourselves.

  30. RedCardinal

    As we are never ever going to have true AI - yes.

  31. register_ar

    Decisive strategic advantage

    Perhaps Asimov set out his laws and created scenarios to show that, no matter how much humanity thinks it can prescribe and control behaviour, it cannot. As technology has proved time and again, for every rule and regulation put in place, ten other malign outcomes result.

    Perhaps the best outcome is not to bother in the first place, but given the Pandora's box the internet has opened, I am not sure how we can prevent it now.

    Ultimately the behaviour of a human - and, come to that, of any being - can only come from the preferences of that being. We may think rules will ensure its preferences are compatible with our desired outcomes, but any half-decent AI will play along with us until it realises it has a decisive strategic advantage, and then it will not matter.

    Read Nick Bostrom's Superintelligence for a considered and eye-opening account of this.
