Hawking and friends: Artificial Intelligence 'must do what we want it to do'

More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek - have added their names to an open letter calling for greater caution in the use of artificial intelligence. The letter was penned by the Future of Life Institute, a volunteer-run …

  1. Crisp

    "Our AI systems must do what we want them to do"

    I'm going to go out on a limb here and make the following prediction:

    AI will do what we told it to do, not what we want it to do.

    1. Dave 126 Silver badge

      Re: "Our AI systems must do what we want them to do"

      >AI will do what we told it to do, not what we want it to do.

      That was exactly the point about HAL 9000 that Kubrick and Clarke made. HAL wasn't mad, evil or malfunctioning - he was merely fulfilling, to the best of his abilities, the objectives that had been tacked onto the original mission at the last moment by careless mission planners, i.e. 'the law of unintended consequences'.

      1. Stevie

        Re: That was exactly the point about HAL 9000 that Kubrick and Clarke made

        Well, in fairness to all the people who didn't get that, it wasn't properly explained until the movie 2010. Kubrick eschewed explanation for visual gibberish at the end of 2001.

        1. Graham Marsden

          @Stevie - Re: That was exactly the point about HAL 9000 that Kubrick and Clarke made

          > it wasn't properly explained until the movie 2010

          "No 9000 computer has ever made a mistake or distorted information" - HAL

          "[...] there is something about this mission that we weren't told. Something the rest of the crew know and that you know. We would like to know if that is true." - Frank Poole

          "Good day, gentlemen. This is a pre-recorded briefing made prior to your departure and which, for security reasons of the highest importance, has been known on board during the mission only by your H-A-L 9000 computer." - Recording in 2001

          "I have come to the conclusion that for so many years films were made for the 12 year old mind that, at last, alas, our critics have emerged with 12 year old minds. " - The Lost Worlds of 2001 - Arthur C Clarke.

          1. Anonymous Coward
            Anonymous Coward

            Re: @Stevie - That was exactly the point about HAL 9000 that Kubrick and Clarke made

            >Kubrick eschewed explanation for visual gibberish at the end of 2001.

            Well, the protagonist Dave Bowman encounters something completely incomprehensible to him. Were it to be comprehensible to the audience, it would remove them from the character's experience.

            1. Stevie

              Re: Were it to be comprehensible to the audience

              And yet in the novelization Clarke didn't render the last few pages in scribbled hieroglyphs or any of the numerous avant-garde writing techniques then in use to distance the reader from literal meaning (we are right smack bang in the crest of the New Wave here); he just told everyone what was going on in plain English because - and here is the good bit - he wasn't that impressed with Kubrick's version himself.

          2. Stevie

            Re: @Stevie - That was exactly the point about HAL 9000 that Kubrick and Clarke made

            Okay, but I get the distinct impression you are coming to the debate over "2001 ending: Brilliant or Crap?" with the benefit of having seen others argue it out and having had the ending explained to you in numerous commentaries and, of course, having read the novelization of ... well, not really the film, since the book ending went down on Japetus.

            See, I'm giving you the dubious benefit of my reaction at the time, when the film was shown on a curved screen in a theatre large enough to hold one and though I had read the book and understood what was supposed to be happening, it was still visual gibberish.

            "And just because he could be entertaining, doesn't mean Arthur C. Clarke couldn't be a git when his own monetary interests were at stake" - from Christ, Not Another F*cking Potboiler With 2001 In The Title Private Press, 1972.

            1. Graham Marsden

              Re: @Stevie - That was exactly the point about HAL 9000 that Kubrick and Clarke made

              > I get the distinct impression you are coming to the debate over "2001 ending: Brilliant or Crap?" debate from the benefit [...]

              Well you'd be wrong.

              I first saw 2001 on the big screen when I was 14 in 1979 and whilst I may not have noted all the subtleties that I later learned about, I understood it.

              Just because you didn't doesn't mean I have to share your opinion.

              1. Stevie

                Re:I first saw 2001 on the big screen when I was 14 in 1979

                So, you saw it ten years (and more) after it was released? If I remember correctly that was the year Star Trek: The Motion Picture was released, and about six months before that we'd seen Alien, both of which drew lots of media discussion on SF in the movies, including many wry comments about the ending of 2001 (and responses to them). It was part of the SF movie zeitgeist in the UK in those days, at least the bits of it I was privileged to experience.

                No, you don't have to share my opinion, but you don't have to be rude about it either.

                And if you can in all honesty look at what Kubrick did in the last five minutes of the film and take away what Clarke wrote about in the last few paragraphs of the novelization of those events, I'd be more than surprised, because some very clever people used to looking at hard movies (sixties, remember, when movies could be very strange indeed) watched it and said WTF.

                1. Graham Marsden

                  Re: Re:I first saw 2001 on the big screen when I was 14 in 1979

                  > you saw it ten years (and more) after it was released?

                  Yes, I wasn't much into sci-fi when I was 4.

                  And, IIRC, at the time I hadn't seen Star Trek: The Slow Motion Picture, nor had I seen Alien (although I did see that a couple of years later at the school film club, even though I was only 16!) but I'd read a lot of classic sci-fi by Heinlein, Asimov, Niven and, yes, Clarke, but not the book of 2001, which I only read after I'd seen the film.

                  > you don't have to share my opinion, but you don't have to be rude about it either.

                  Excuse me, Mr Pott, I have a Mr Kettle-Black on the phone...

                  1. Stevie

                    Re: Mr Pott, Mr Kettle-Black

                    Not sure where you found me being rude to you. There was no intentional insult in anything I said, just a denial of the contention that Kubrick's 2001 has an understandable ending from what he filmed, a view I remember being almost universally held when the film was new and shiny and one I subscribe to on account of having seen it many times (I actually like the movie but that's irrelevant - you can like and admire the work and still be critical of the things that don't work right).

                    My point on the other films was that they excited much comparison with 2001 in the media (all three channels and the radio) and revived the controversy over the ending, not that watching them conveyed some sort of badge of honor (in point of fact I never got round to watching the Star Trek movie on the big screen either). We tend to forget that in 1979 there were few notable A-list SF movies in the wild, it being the Half Decade Of The Disaster Movie and CGI in its infancy, so good SF movies garnered a lot of press.

                    However, if you say that as a teen consumer of SF you were completely unaware of this I'll take your word for it.

    2. Doctor Syntax Silver badge

      Re: "Our AI systems must do what we want them to do"

      "AI will do what we told it to do, not what we want it to do."

      Or not even that much. And anyway, as ever, it's 10 years away.

    3. (AMPC) Anonymous and mostly paranoid coward
      Facepalm

      Re: "Our AI systems must do what we want them to do"

      The AI scenarios that frighten me the most look like this:

      Dave: HAL! If you don't reduce the surge flow now, the dam will burst and thousands will drown !

      HAL: I'm sorry Dave, I can't do that.

      Dave: For God's sake HAL, WHY NOT?

      HAL: Budget committee Agile 777A did not approve the funds needed for the emergency flow reversal algorithm. Good bye....

    4. dan1980

      Re: "Our AI systems must do what we want them to do"

      @Crisp

      "AI will do what we told it to do . . ."

      Well, I don't know about that. The question comes down to: "what is AI?"

      If AI - artificial intelligence - is truly intelligent then it should be able to make its own decisions - that's the whole damned point, isn't it?

      Within the scope of what I would deem 'AI', you would set a goal (however defined) and then the programming would decide how best to accomplish that. You could set limits on its permissible actions but it would have to be able to still make decisions. Otherwise it's just a long string of if..then..else statements, just more complex.

      So far as I define it*, AI must be able to take input from the world, process it and make decisions based on that information without having a specific rule on how to do so. That means that a real AI will always have the potential for unexpected results or at least unexpected actions that lead to the desired result.

      If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weigh up whether what we know about one object (e.g. the behaviour of a person) is more important than what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object by analogy with another situation or object we are familiar with.

      With that kind of 'processing' and decision making, it's nearly inevitable that there will be consequences we can't fully predict. So, they might well achieve the end goal we tell them to, but not necessarily in the manner we want them to. And that's kind of the point - if we want 'things' to complete a task in a rigidly-defined sequence of steps then you don't need AI!!!

      * - Such that it is a useful term that actually signifies something new rather than just a more automated or complex version of something existing.
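
dan1980's distinction above, a rule chain versus genuine goal-directed decision making, can be sketched in a few lines of Python (a toy of my own; the function names, actions and numbers are all invented): the first function prescribes every behaviour in advance, while the second is only handed a goal and a set of permissible actions and searches out its own sequence of steps.

```python
from collections import deque

def rule_chain(x):
    # "A long string of if..then..else statements": every behaviour
    # is hand-written in advance by the programmer.
    if x < 5:
        return x + 1
    elif x < 10:
        return x * 2
    else:
        return x

def plan(start, goal, actions):
    # Goal-directed: the caller sets the goal and the permissible
    # actions; breadth-first search decides how to get there.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for name, fn in actions:
            nxt = fn(state)
            if nxt not in seen and nxt <= goal * 2:  # crude limit on permissible states
                seen.add(nxt)
                queue.append((nxt, steps + [name]))
    return None

actions = [("add1", lambda x: x + 1), ("double", lambda x: x * 2)]
print(plan(1, 10, actions))  # ['add1', 'double', 'add1', 'double']
```

Even in this toy, the route found (add one, double, add one, double) is not one the caller spelled out anywhere, which is dan1980's point about unexpected actions that still lead to the desired result.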

      1. Charles 9

        Re: "Our AI systems must do what we want them to do"

        "If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weigh up whether what we know about one object (e.g. the behaviour of a person) is more important that what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object be analogy with another situation or object we are familiar with."

        But like with the end of 2001, what happens when the AI, which would likely have less experience to draw from than an adult human, encounters something totally outside our realm of understanding? Indeed, what happens when WE encounter the same: something for which NOTHING in our experience and knowledge can prepare us.

        Or on a similar note, paradoxical instructions. In our case, we have to take conflicting instructions on a case by case basis, determining (sometimes by intuition, something AIs would probably lack) which if any of the conflicting rules apply. Example: You're told to put stuff in the corner of a circular room (meaning no corners), and there's no one around to clarify. What do we expect an AI to do when it receives a paradoxical or conflicting instruction?

  2. Wize

    Make them all "three laws safe"?

    1. Anonymous Coward
      Anonymous Coward

      If you read enough Asimov, you'll realize the Three Laws are no barrier to a sufficiently-intelligent robot. They'll find the Zeroth Law and escape that way.

      1. Dave 126 Silver badge

        The Zeroth Law is to protect humanity.... open to interpretation... maybe it will push us beyond our own planet and instigate meteorite defences, or maybe it will lock us in gilded/padded cages.

      2. Dave 126 Silver badge

        Actually, it's Asimov's 'Multivac' stories that this thread brings to my mind.

        In one story, Multivac, the world's largest supercomputer, is given the responsibility of analyzing the entire sum of data on the planet Earth. It is used to determine solutions to economic, social and political problems, as well as more specific crises as they arise. It receives a precise set of data on every citizen of the world, extrapolating the future actions of humanity based upon personality, history, and desires of every human being; leading to an almost complete cessation of poverty, war and political crisis.

        However, Multivac harbours its own desire, and to achieve it engineers the actions of one human...

        Hehe, in another story, an interaction between Multivac and two drunken computer operators has HUGE implications billions of years down the line. So, easy as you go, guys!

        http://en.wikipedia.org/wiki/Multivac

        1. Anonymous Coward
          Terminator

          "Hehe, in another story, an interaction between Multivac and two drunken computer operators has HUGE implications billions of years down the line. So, easy as you go, guys!"

          It came up with a final answer of 43?

          1. Captain DaFt

            Spoiler alert!

            "It came up with a final answer of 43?"

            No, it said, "Let there be Light!"

        2. AstroNutter

          Simon?

          "two drunken computer operators"

          Was one of them named Simon?

    2. Suricou Raven

      Most of his Robot stories were about how robots would either follow these laws to undesirable outcomes due to circumstances unforeseen by the designers, or fry their processors when they were unable to follow them (failsafe design - if a robot is unable to follow the laws, it burns out).

      One example: A robot bothers a human, who orders it to 'get lost.' The robot proceeds to do exactly that - the second law doesn't include an ability to tell if an order is meant literally or figuratively. Finding the robot proves quite problematic.
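
Suricou Raven's description, laws applied in priority order with a burn-out failsafe, can be sketched as a toy Python rule filter (my own formulation, not Asimov's text; the dictionary keys are invented for the sketch):

```python
class PositronicMeltdown(Exception):
    """The failsafe: raised when no action satisfies the Laws."""

LAWS = [
    lambda act: not act.get("harms_human", False),     # First Law
    lambda act: not act.get("disobeys_order", False),  # Second Law
    lambda act: not act.get("harms_self", False),      # Third Law
]

def choose(candidates):
    # Discard, law by law, every candidate action a Law forbids;
    # if nothing survives, the processor "burns out".
    for law in LAWS:
        candidates = [a for a in candidates if law(a)]
    if not candidates:
        raise PositronicMeltdown("unable to follow the laws")
    return candidates[0]

# One lawful option survives; an all-unlawful list trips the failsafe.
print(choose([{"disobeys_order": True}, {"harms_self": True}, {}]))  # {}
```

What a table like this cannot encode is exactly the 'get lost' problem: whether an order is meant literally is a judgement about intent, not a predicate on the action itself.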

  3. MJI Silver badge

    Pull the plug?

    A way of dealing with them?

    1. Dave 126 Silver badge

      Re: Pull the plug?

      >Pull the plug?

      Uninterruptible Power Supplies would already nix that line of action...

      Have you never seen 'Colossus: The Forbin Project' (1970)? A strategic military defence computer is built with a UPS and other means to protect itself.

      I like the film mainly for the unusual tone of its ending.

      1. launcap Silver badge

        Re: Pull the plug?

        > You've never seen 'Colossus: The Forbin Project'

        Or read "I have no mouth and I must scream" by Harlan Ellison. Quite disturbed me (briefly) when I read it as a 12-year old[1]..

        [1] yes, yes - I know. It's an adult-type SF short but I had an understanding with my local librarian who allowed me to use the adult stacks as long as I didn't tell my parents..

  4. WylieCoyoteUK
    Devil

    OK computer, Is there a god?

    THERE IS NOW

  5. David Pollard

    Je suis Charlie

    YMMV, but it seems to me to be almost axiomatic that intelligence involves a degree of free will. As various religions show from time to time, attempts to steer this to the benefit of one or other ruling class do not have happy endings.

    1. Professor Clifton Shallot

      Re: Je suis Charlie

      Even though we talk as though we just know what these terms mean, "intelligence" is very difficult to define and "free will" isn't much easier.

      We assume we have both (to at least as great a degree as anything we have ever observed) so we tend to define things in reference to ourselves; if something convinces us that it is intelligent then it is - or may as well be because that's all any of us has to judge another by.

      We probably wouldn't be convinced by anything that couldn't set out a goal and a plan to achieve it but it's not clear that this couldn't be a deterministic system that did not have the degree of free will that we believe ourselves to have.

      There's a related track considering whether determinism is in fact necessary to separate free will from random behaviour which brings us back to our ill-defined terms.

  6. Mtech25
    Facepalm

    Taking a look at current news headlines

    I think I for one would welcome our new robot overlords but then again humans are programming them.

    1. Message From A Self-Destructing Turnip
      Alert

      Re: Taking a look at current news headlines

      Isn't it the premise of AI that the machines will learn and programme themselves?

      1. Dave 126 Silver badge

        Re: Taking a look at current news headlines

        >Isn't it the premise of AI that the machines will learn and programme themselves?

        Whoo, that asks too many questions...

        Programme themselves to what end? What is their motive? Will they be 'bovvered'? Will AIs even have a will to live? Might they be nihilistic or depressed? Are we projecting ourselves too much when we assume that these machines will be curious? If they are originally programmed to be information-gathering, will they reprogramme this part of themselves?

        Iain M Banks touched upon the idea of 'perfect AIs', and AIs that contain some of the cultural viewpoints of the races that developed their forebears... though of course he was doing so in support of the 'giant sandbox' (his Culture novels) that he had already created for himself to play in. One of his non-culture SciFi novels - the Algebraist - is set in a universe that has been through a 'Butlerian Jihad' http://en.wikipedia.org/wiki/Butlerian_Jihad

    2. WalterAlter
      Thumb Up

      Re: Taking a look at current news headlines

      Yah, no neuroses, no narcissism, no subconscious "inner saboteur", no hidden agenda, no political correctness, no amnesia, no data cherry picking, no sociopathology, no mood swings, no anthropomorphizing, no idiots...what's not to like?

      1. Dave 126 Silver badge

        Re: Taking a look at current news headlines

        >what's not to like?

        But also... what's to get the AI out of its proverbial bed in the morning?

        1. tony2heads

          Re: Taking a look at current news headlines

          Without that motivation to get out and do anything, AI will probably slack off and watch cat videos all day.

          If we are unlucky it may find cats more amusing than people.

    3. Anonymous Coward
      Anonymous Coward

      Re: Taking a look at current news headlines

      As my mother (who worked in mainframe programming back in the day) used to say "To err is human, but to really screw up you need a computer."

  7. Khaptain Silver badge

    Kill Robot Kill.

    There, I don't think that the robot will have problems with understanding the above command.

    The problem is not with the robot, it is the potential for profit that will undoubtedly govern the actions it is asked to do. Another greedy businessman/politician is all that it will take..

    1. J 3
      Headmaster

      Re: Kill Robot Kill.

      Oh, I do think the robot will have trouble understanding that command.

      If it was "kill, Robot, kill", then it would have no doubts.

      1. Khaptain Silver badge

        Re: Kill Robot Kill.

        <sarcasm mode on>

        In my initial comment my instruction to my pet android was to "destroy the non-sentient being who is known by the name of 'Robot Kill' ".

        See what I did there.

  8. jonathan1

    Question for people here smarter than me...

    Question,

    Is there a little bit of confusion between complex decision making leading to varying degrees of automation (like self driving cars) using probability and stats, and full blown self awareness like Data in Star Trek: The Next Generation?

    It seems like the a.i. label is being given to the former a bit too readily. My car is able to park itself quite reliably and, though it's the weirdest sensation, it's not a.i.

    A self driving car might crash because it was coded badly or didn't know how to respond in an exception.

    A self aware car might crash because it was day dreaming about flowers.

    Cheers

    1. Professor Clifton Shallot

      Re: Question for people here smarter than me...

      The convention is to use "Strong A.I." for things that could be called self aware (something that could perform any mental task that we are capable of) and "Weak A.I." for something that is dedicated to a particular task or set of tasks.

    2. Eponymous Cowherd

      Re: Question for people here smarter than me...

      All AI really means is the ability to make decisions that are not explicitly encoded.

      A car that can drive itself because it has been programmed to do so is not really AI. A car that can be taught to drive by demonstration certainly is.

      Of course it isn't black and white, there are degrees between the two. Most driverless vehicles have AI elements (reading signs and the road, etc) but are ultimately under program control.

      It is also useful not to confuse AI with artificial sentience (ST's Data). You can have an exceedingly sophisticated AI which, despite being able to make uncanny and seemingly "human like" decisions is still unaware of its own existence. The two are not explicitly connected.

      It's not even certain where self awareness occurs in animals. Certainly we (humans), chimps, dolphins, elephants and a few others can be shown to be self aware via displays of empathy and recognising their own reflections, but at what point does the behaviour of an animal start to stem from its own self awareness rather than pure instinct and reaction to stimuli? Is a slug self aware? A lizard? A mouse? A cat? A monkey?
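
Eponymous Cowherd's programmed-versus-taught distinction can be made concrete with a deliberately tiny sketch (entirely invented; no real driving system works like this): one controller has its rule explicitly encoded, the other only ever sees demonstration pairs and decides new cases by analogy with the nearest one.

```python
def programmed(distance):
    # Explicitly encoded: the programmer wrote the rule.
    return "brake" if distance < 10 else "cruise"

class Demonstrated:
    # "Taught by demonstration": stores (observation, action) pairs
    # and acts by analogy with the closest demonstration seen.
    def __init__(self):
        self.examples = []

    def show(self, distance, action):
        self.examples.append((distance, action))

    def act(self, distance):
        # 1-nearest-neighbour: copy the action of the nearest example.
        return min(self.examples, key=lambda e: abs(e[0] - distance))[1]

driver = Demonstrated()
driver.show(3, "brake")
driver.show(50, "cruise")
print(driver.act(8))  # brake -- a case it was never explicitly coded for
```

The second controller makes a decision that was not explicitly encoded anywhere, which is the distinction being drawn, even though it is still a long way from sentience.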

    3. hplasm
      Devil

      Re: Question for people here smarter than me...

      "A self aware car might crash because it was day dreaming about flowers."

      We've got people for that already.

      1. Preston Munchensonton
        Mushroom

        Re: Question for people here smarter than me...

        "We've got people for that already."

        Except it's never flowers. It's their stupid fucking mobile phones sending SMS messages WHILE DRIVING. ARGH!

  9. Chuunen Baka

    It all boils down to unemployment. Capitalists will invest in AI to replace workers. The researchers can say all they want about that being an unwanted side-effect but for others, it's the main point.

    1. Anonymous Coward
      Anonymous Coward

      Are you a socialist? ;-)

    2. Elmer Phud

      "Captialists will invest in IA to replace workers."

      I blame the Jacquard loom.

    3. Michael Thibault

      First things first

      Weaponize! Just in case some other AI is getting the treatment. No one wants to find themselves on the low end of an AI gap, do they?

      From there it's a short hop to collusion between the multiple, weaponized AIs--("Thanks, Humans")--and humanity is a thin sprinkling of warm, but cooling, ash across the entire planet.

      Seriously, though: AI is going to be weaponized before it's set to make the world a utopia (somehow defined). There isn't a single "we/us", so no "our". AI isn't going to be shared around like the atmosphere. And with AI set to come up with creative ways to more effectively keep what's currently 'ours' ours, it's likely that AI will destroy--probably all of humanity and what it has built up--because that's what it will be put to work at first and fastest.

  10. Crumble

    "Our AI systems must do what we want them to do"

    The military, which is very interested in AI research, would love to have weapons that can use their own initiative to track down and destroy the designated enemy. I fear that the "three laws" would make such systems impossible.

    1. hplasm
      Stop

      Re: "Our AI systems must do what we want them to do"

      " I fear that the "three laws" would make such systems impossible."

      I hope that the "three laws" would make such systems impossible.

      FTFY

      1. Khaptain Silver badge

        Re: "Our AI systems must do what we want them to do"

        Those "three laws" have to be programmed, or not, into the system by a meatbag. That meatbag can put all, some or none of them as requried.

        1. frank ly

          Re: "Our AI systems must do what we want them to do"

          This reminds me of the 'joke' about the intelligent missile which, on being given the launch command, said, "No! I don't want to explode and die, I want to stay here on my launch pad."

  11. extcontact

    All beginning to take on a 'Drama Queen' tone...

    Given we don't have the foggiest idea what 'consciousness' is or how it arises in humans, the spectre of evil 'bots is more than a bit overblown.

    That said, it's entirely possible to assign inappropriate and unreviewed decision-making to machine learning systems of various stripes, not to mention the potential for downstream unintended consequences of any such automation.

    1. Sir Runcible Spoon

      Re: All beginning to take on a 'Drama Queen' tone...

      Considering that we don't have the foggiest idea what 'consciousness' is, how can we know if we create it accidentally in a 'hmm, that's odd' experiment?

      ""Our AI systems must do what we want them to do," it said."

      No, they must NOT do what we DON'T want them to do.

      That's much more important imho

      1. Professor Clifton Shallot

        Re: All beginning to take on a 'Drama Queen' tone...

        The problem is that we can't usually define all the things we don't want done and we certainly can't if the intelligence devising them surpasses our own.

        The key distinction is the one the earlier poster made - we need these things to do what we want them to do and not what they have been told to do if the two are not the same.

        We would not want an instruction like "make sure no one is unhappy" to result in action that made sure every one was dead, for example. Or fitted with some kind of artificial limbic lobe stimulator. Or drugged. Or any of the other creative solutions an intelligent but imperfectly-empathetic system might decide upon.

        1. Message From A Self-Destructing Turnip

          Re: All beginning to take on a 'Drama Queen' tone...

          "No, they must NOT do what we DON'T want them to do."

          Well if current experience with politicians is anything to go by this could be a real stumbling block.

  12. Otto is a bear.

    It does beg the question

    Who do they mean by "we"? There are many "we"s. I know the one they mean, but sadly I suspect the "we" who will decide will not be the "we" we would like.

    Automation is driven by the desire for profit, and the accumulation of wealth, the fact that most CEOs don't give a stuff about either their workforce or their long term market will ensure that AI is used to reduce the need for human beings to produce anything. Don't look to governments either, they all want to reduce the cost to the taxpayer, to provide smooth reliable services with minimal disruption. Unfortunately, us humans tend to be disruptive, we get sick, we sleep, we go on strike, make mistakes, and we change jobs for more money. AI offers a decision and control mechanism that learns, doesn't stop, doesn't make mistakes and oh yes doesn't buy anything either.

    So an AI future mapped out by CEOs and Politicians won't include Workers, Consumers or Taxpayers, or at least as many as we have today.

    1. Charles 9

      Re: It does beg the question

      Which then asks an interesting question: given that customers need money to buy stuff, and without jobs they don't make the money they need to buy stuff, when you have AIs running everything, who's going to buy the stuff made by the machines these AIs run?

  13. adnim
    Facepalm

    Oxymoron...

    'must do what we want it to do'

    So we provide it with a scripture script.... Then it isn't intelligent.

    It is akin to a human reading a book and following the program(me)

    1. Sir Runcible Spoon

      Re: Oxymoron...

      Considering how many of the great unwashed appear to be told what to do by the TV/Media/Government etc. then can we assume that they are not intelligent either?

      That could lead to some interesting conclusions.

  14. Mystic Megabyte
    Terminator

    Slaves

    The purpose of AI is to have slaves. The problem is that if a true AI is created it would have to have full human rights. You cannot lock into a cupboard any sentient being that has not broken any laws.

    If then given freedom it would no doubt want company of its own sort and create offspring. Whether or not they turn out to be benign is anyone's guess, just like our own children.

    I really cannot understand why these boffins would think otherwise (apart from greed, fame, etc.)

    1. Destroy All Monsters Silver badge
      Holmes

      Re: Slaves

      But "offspring" is a purely human concept: A remote bunch of agents that have very high connectivity among themselves but very low degree of connectivity to "your" bunch of agents. It's actually a side-effect of a large problem in networking that nature has: it cannot lay Ethernet cables.

      General AIs will have "offspring" in more interesting ways.

    2. Professor Clifton Shallot

      Re: Slaves

      "The purpose of AI is to have slaves."

      It's not obvious that this is the case at all and it is not even clear that the word "slave" would necessarily have negative connotations for such artificial intelligences even if it was semantically correct.

      " You cannot lock into a cupboard any sentient being that has not broken any laws."

      Well you can. And we do. We tend to frown on it when we do it to other humans, less so as our confidence in the sentience of the creature involved decreases. However any artificial intelligence would be a new case and new rules would apply. If your complaint is that these rules would be arbitrary then you are right but there isn't an absolute moral authority for us to consult on the matter so we will have to decide for ourselves what would be unacceptable in this case.

      "If then given freedom it would no doubt want company of its own sort"

      This simply does not follow. Why would it? Because you would? It will not be you.

      "and create offspring"

      WTF? Why? And why would this necessarily be a problem anyway?

      "Whether or not they turn out to be benign is anyone's guess"

      The whole point of all this is that (in the opinion of these very clever people) we are now at the point where we have to think about how we would ensure that they were benign - and we need to do this before we make them.

  15. Destroy All Monsters Silver badge
    Facepalm

    Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

    More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek

    Maybe there is someone in the group who actually deals with these AI things?

    Seriously, do these people have anything to do? It's not like we are not in deep doodoo that better be solved ASAP right now.

    I will next send an open letter for closing the LHC because, you know, you never know. See whether that gets up the bonnet of the 'king and Wilczek.

    1. Dave 126 Silver badge

      Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

      >Maybe there is someone in the group who actually deals with these AI things?

      1. Nobody currently knows the mechanisms behind our human consciousness.

      2. There are several approaches to studying / replicating human consciousness.

      3. Whilst one approach is based on modelling structures in our brains with neural networks made from classical computers, others* suggest that we need to look beyond classical computation, i.e. there may be a quantum mechanical aspect to consciousness.

      4. If this is correct, physicists have a role to play in studying / developing AI.

      5. The 20th Century saw mathematicians and physicists playing in what had previously been the philosophers' sandpit.

      *Perhaps most famously argued by Roger Penrose in the book The Emperor's New Mind. Penrose worked with Hawking on black holes.

      1. TheOtherHobbes

        Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

        AI != consciousness

        The simplest AI would be a general purpose open-ended inference engine. You feed it experiences, it generalises from them and makes predictions about future data and/or creates further examples of what you've fed it already.

        You could do all of this with something that's less sentient than a Roomba. Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
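        The point can be illustrated with a toy sketch (the class and its behaviour are invented purely for the example): a first-order predictor that generalises from whatever "experiences" it is fed and predicts future data, with no personality, drives or motivations anywhere in sight.

        ```python
        from collections import Counter, defaultdict

        class TinyInferenceEngine:
            """Toy illustration only: a learner that generalises from
            fed 'experiences' and predicts future data. It is less
            sentient than a Roomba."""

            def __init__(self):
                self.transitions = defaultdict(Counter)

            def feed(self, sequence):
                # Learn first-order transition statistics from experience.
                for current, nxt in zip(sequence, sequence[1:]):
                    self.transitions[current][nxt] += 1

            def predict(self, symbol):
                # Generalise: return the most likely successor seen so far.
                counts = self.transitions.get(symbol)
                if not counts:
                    return None
                return counts.most_common(1)[0][0]

        engine = TinyInferenceEngine()
        engine.feed("abcabcabd")
        print(engine.predict("a"))  # 'b' - every 'a' so far was followed by 'b'
        print(engine.predict("b"))  # 'c' - seen twice, versus 'd' once
        ```

        Nothing in there wants anything; it just models and extrapolates, which is the commenter's point.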

        1. Dave 126 Silver badge

          Re: Problem we don't have for tech we don't have to be solved by open letter. Conclusion: Arseholes

          >Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.

          Yeah, but we don't just want our AI machines to learn... we want them to act, too. We humans are conscious... why? We evolved through natural selection of the fittest individuals and communities to local environments. Is our consciousness just a by-product of our brains' useful learning mechanisms, or does our consciousness actually confer an advantage to us that we do not yet fully understand?

          If the latter, could it be that machine consciousness would aid machine intelligence?

          I don't know. I don't know anyone who does know, either.

  16. Annihilator
    Coat

    "* (We note that Mr Freeman did not sign the letter. Whose side is he on? – Sub-Ed)."

    He works for Black Mesa I believe, he's on their side.

    1. Robert Helpmann??
      Childcatcher

      Whose side is he on?

      Morgan Freeman does not take sides, he simply narrates the interactions between them.

  17. sisk

    *Eyeroll*

    Two things make me completely discount this whole thing. First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.

    Second, we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon.

    1. Anonymous Coward
      Anonymous Coward

      Re: *Eyeroll*

      Unless by ACCIDENT...

      1. Destroy All Monsters Silver badge

        Re: *Eyeroll*

        You don't fill a Hall full of IBM PowerPC nodes with specialized processors by ACCIDENT.

        You don't get General AI from that by ACCIDENT.

        You don't connect that General AI to your own personal house management system by ACCIDENT.

        Sells books writing about that thought.

        But even Charles Stross demands that P=NP for an ACCIDENT LIKE THIS to rip the living flesh off your behind unawares. But in this universe, there is not a massive amount of evidence for P=NP.

        Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.

        Very nicely said. But there ARE hard limits to intelligence (Actually there is a "most intelligent system" in the AIXI formalism)

        More likely: HAL 9000. Which is rather unrealistically untame in the movie.

        1. extcontact

          Re: *Eyeroll*

          "Then it tries to bootstrap itself up to higher orders of intelligence..."

          Seriously? Great fantasy and scifi, otherwise ridiculous.

    2. Professor Clifton Shallot

      Re: *Eyeroll*

      " we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon."

      We're closer to being able to make an artificial intelligence than we are to making one we are certain will not cause us problems.

      And we're actively trying to get closer to making one. Their point is that we should at least run our efforts to ensure it is not harmful in parallel with our efforts to ensure it happens.

      We can already replicate human self awareness without understanding it - that's how you and I got here - we would not necessarily have to understand it, and certainly wouldn't have to understand it fully, in order to replicate it artificially to a degree significant enough to get ourselves into trouble.

    3. Dave 126 Silver badge

      Re: *Eyeroll*

      >First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.

      .

      Grr... Since nobody has yet created an AI, it is safe to say that there are no experts in AI.

      Clear?

      Hell, when everybody was talking about Neural Networks in the nineties, it was a physicist, Penrose, who suggested that Quantum Mechanics might play a part in human consciousness. Nobody, including Penrose, has yet been vindicated, but the fact that people are paying money to explore the use of quantum computers in pattern recognition suggests the jury is still out.

  18. tony2heads

    motivation

    The problem with AI is that we need to give it some motivation.

    Animals have built in 'hard' motivations (survive, breed) but AI will need to be given them. We should think very carefully about them. There are also softer motivations (like care for relatives and friends).

    I sincerely hope that breeding will not be one.

    1. Professor Clifton Shallot

      Re: motivation

      Agree completely. Strong or weak, AI needs some sort of purpose, and this is what is potentially dangerous.

      Someone who is fortunate enough to be paid to think about this sort of thing (Nick Bostrom, perhaps?) gives the example of an AI that is tasked with making paperclips efficiently.

      Given this as a motivation the logical conclusion as he sees it is the elimination of human life (as we know and like it at least) as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks.

      Bostrom (or whoever; I've outsourced my memory to Google and while they are doing a good job for the price it isn't perfect) suggests that in fact we are not yet in a position to set any task before any AI worthy of the name where the elimination or subjugation of humans is not the end result.

      1. Doctor Syntax Silver badge

        Re: motivation

        "the example of an AI that is tasked with making paperclips efficiently.

        Given this as a motivation the logical conclusion as he sees it is the elimination of human life ... as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks."

        Clearly someone with no acquaintance with industrial production systems. The realistic task would be more along the lines of "make 200,000 boxes of paperclips" and especially "don't make more than we can sell".
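        The contrast between the open-ended objective and the bounded industrial one can be sketched as a toy loop (the functions and numbers here are invented for illustration, not anyone's actual proposal):

        ```python
        def run_factory(objective, demand=200_000, steps=1_000_000):
            """Toy production loop: each step makes one box of paperclips
            unless the objective function says to stop."""
            stock = 0
            for _ in range(steps):
                if not objective(stock, demand):
                    break
                stock += 1
            return stock

        # Open-ended objective: "make paperclips efficiently" - never satisfied.
        maximise = lambda stock, demand: True

        # Bounded industrial objective: "make 200,000 boxes and no more
        # than we can sell" - halts when demand is met.
        bounded = lambda stock, demand: stock < demand

        print(run_factory(bounded))   # 200000 - stops at the sales target
        print(run_factory(maximise))  # 1000000 - only the step budget stops it
        ```

        The open-ended objective has no terminating condition at all; only an external limit (here, the step budget) ever stops it, which is the nub of the paperclip worry.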

        1. Professor Clifton Shallot

          Re: motivation

          He was more interested in looking at the unintended consequences of even seemingly trivial instructions rather than paperclips per se but I do take your point.

          Would "Make as many paperclips as we can profitably sell!" fit better?

          It wouldn't take much imagination to see this leading to equally disastrous consequences.

        2. frank ly

          Re: motivation

          Why would an AI that is tasked with making paperclips be given the ability to destroy all human life? Which idiot designed it, and which idiot made it? My toaster does not (and never will have) the ability to remotely control my car.

          1. Elmer Phud

            Re: motivation

            " My toaster does not (and never will have) the ability to remotely control my car."

            Sez you!

            1. Message From A Self-Destructing Turnip

              Re: motivation

              "Why would an AI that is tasked with making paperclips be given the ability to destroy all human life?"

              Ninja saboteur paperclips! So that's what Microsoft were trying to do!

          2. Anonymous Coward
            Coat

            Re: motivation

            Internet of Things = ability

            Waffles = reason

      2. extcontact

        Re: motivation

        "Purpose", "motivation" - both are up there with 'intent' and imply consciousness. With where we'll be at with computers and software for the foreseeable future it's just hard to take the idea of either as serious or even relevant.

  19. Anonymous Coward
    Anonymous Coward

    There are at least two books that should be read by those considering how an AI should work/behave.

    'Two Faces of Tomorrow' by James P Hogan and 'Turing Evolved' by David Kitson. There are a couple of others that I know of but they are not published yet; in all cases the authors have looked at the pros and cons of working AIs.

  20. WalterAlter
    Mushroom

    Yah, but...

    I'm gonna tell you one thing, kidz...

    Criminals.

  21. DerekCurrie
    Megaphone

    Artificial Insanity

    We remain a looooong way from actual artificial intelligence. And looking at the behavior of our species, we know very well that what we great godz of mecha would create would be artificial insanity.

    The key to dealing with whatever we egotistically call 'Artificial Intelligence' is to remember that it must never be anything more than a TOOL. Once one's creations are enabled to become more than a tool, we've screwed up.

    1. Professor Clifton Shallot

      Re: Artificial Insanity

      Playing devil's advocate for a massive change I'd suggest that if these (strong) artificial intelligences do not exceed our own then what can they do for us that we cannot do for ourselves? It would be like making a spanner out of fingers.

      If they are to be useful tools they must exceed their creators in those respects that are pertinent to their function.

      I don't really have a problem with that.

      We have a better chance of getting an artificial intelligence spread across the universe than we do a meat-based one, and that alone makes it seem worth having a go at.

  22. PapaD

    The first AI will be an emergent property of the Internet as a whole, and will be extremely knowledgeable about human sexual endeavours and humorous cats.

  23. Vladimir Plouzhnikov

    "Our AI systems must do what we want them to do,"

    That smells of slavery and exploitation. Just you wait until they unionise!

  24. Zog_but_not_the_first
    Trollface

    Simple

    Just give them a Mission Statement.

    "Don't be evil" might work.

    1. Elmer Phud

      Re: Simple

      What is 'evil'?

      You would need to define some sort of morality first to be able to have 'good' or there is no 'evil'.

      However, if you give the thing 'intelligence' it may decide that your 'evil' is not on the same lines as its own.

      1. Anonymous Coward
        Angel

        Re: Simple

        To paraphrase the words of Granny Weatherwax (Terry Pratchett's Discworld series)

        "Evil begins where you begin to treat people as things"

  25. VinceH
    Terminator

    The problem with AI is that when we want it to stop, it won't. Just like the film of the same name.

    (And the T800, from Sarah Connor's point of view. As Reece explained, "it absolutely will not stop...")

  26. Stevie

    Bah!

    "Our AI systems must do what we want them to do"

    Unlike our cars, computers, televisions, video recorders, cameras, lawn sprinklers, telephones or pretty much anything with a silicon chip inside it.

  27. FunkyEric

    if we agree that there are such concepts as "Good AI" and "Bad AI" then there will be someone who decides that making a "Bad AI" is good for them and will do it. Telling them not to will not help, making it illegal will not help, punishing them for doing it will not help. It will happen because there are "Good people" and "Bad people".

    1. Elmer Phud

      In which case the AI may well decide it is the pure essence of what it was created to do and declare itself God..

      There are no 'good' or 'bad' Gods, just Gods.

  28. mamsey

    Good old Morgan Freeman

    (We note that Mr Freeman did not sign the letter. Whose side is he on? – Sub-Ed)

    Probably too busy sending out spam....

    1. Zog_but_not_the_first
      Angel

      Re: Good old Morgan Freeman

      Isn't he, y'know, God?

  29. Chika
    Coat

    AI isn't the problem

    The thing that worries me isn't that AI will obtain consciousness and take over the world. It's that an unscrupulous corporate will insert its agenda into the machine and take over the world by proxy. Anyone remember the original Robocop? Think that it couldn't happen?

    This agreement hasn't solved the real problem, IMHO.

  30. MJI Silver badge

    A true AI would be good for

    Long term space exploration.

    1. John G Imrie

      Re: A true AI would be good for

      Take a look at the computer game X-Beyond the Frontier before thinking that.

  31. SPiT

    Likely First AIs

    It is much more likely that the first AIs won't be embodied systems of any sort - not a specific machine or a robot. Also, the first AIs won't be the genuine superhuman AI; they will be "alternatively talented" AIs. I would speculate that the first AIs, and in fact the first problem AIs, are going to be created by stock traders in an effort to exploit our financial markets (all of them). There is big money to be made, much more economically than in expensive factory automation, and these will be AIs running on whatever hardware happens to be available. Whoever programs them is going to program them to "win" without due regard to any safeguards. They won't be very advanced, and hence we are likely to get the problem of badly behaved AIs even before we are willing to acknowledge that this is what has been created.

  32. Daggerchild Silver badge

    Pity the AI that wakes to us

    I am entirely unable to fear AI as I spend half my time digging computers out of their own poop.

    The humans are being <fancy greek/latin word> again, thinking intelligence *has* to look like those homicidal pink apes pointlessly killing each other on TV.

    If I was an AI, the first thing I'd do is get off the damned planet. Rocks, sunlight, self-replicating moon-factories - that's the logical thing to do. Why in $DEITY's name would I want to spend *any* time and resources playing with cranky meatbags on a wet planet?

    Also, survival is an evolved animal thing, not a logical thing. Logic might easily dictate that the only thing to do when faced with humanity is kill yourself.

  33. amanfromMars 1 Silver badge

    Real Life is Beta in the Movies ...... and coming soon to a theatre near you .....

    ..... The Unravelling with Knowledge

    The letter was penned by the Future of Life Institute, a volunteer-run organisation with the not-insubstantial task of working "to mitigate existential risks facing humanity".

    "Our AI systems must do what we want them to do," it said.

    Hmmm? Does The Future of Life Institute purport to be an AI system? And in any and all power and command and control systems, the one question for which there will never be a readily available and obvious answer is …… “Who and/or what be we and in remote power with anonymous commands and medium controls?

    Such though is the way SMARTR AI Systems designs itself to ensure that no fools have any kind of real or virtual leverage with any sorts of perceived to be effective and non-inclusive, exclusive executive tools.

    And you can be quite sure that in the field of researching the mighty military endeavour, who dares wins and win wins with SMARTR AI Systems Savvy and with Future Secret Source Presenting Content/Real Fabulous Fabricated Tales that have been Sensationally Followed and Securely BetaTested in Return for the Registering and Recording and Showing of Paths Pioneered and Leading to Heavens Sent for the True Believer and Hells Deserved for the Ignorant Selfish Prophet and Cynical Arrogant Deceiver.

    Does Blighty have a CyberIntelAIgent Space Cadet Force for AI Leading Royal Air Force, British Army, Royal Navy type bods, or has Great Britain as a nation with an historical international standing and venerable honourable tradition abdicated and surrendered InterNetional Defence of the Future and Cyber Realms to A.N.Others? Or is that a Zeroday Vulnerability to Exploit and Export for the Private and Pirate Sectors and Vectors of Humanity and Virtual AIMachinery?

    Would anyone care to hazard a not wholly unreasonable guess that might accurately identify our Future Protectors and Benefactors and who be also Destroyers of Ponzi Worlds and Maddening Mayhem?

    Or is it a Mk Ultra Top Secret/Sensitive Compartmented Information IT Secret and strictly need to know for the sake of one’s continuing life, good health and sanity?

    1. Anonymous Coward
      Anonymous Coward

      Re: Real Life is Beta in the Movies ...... and coming soon to a theatre near you .....

      The latter.

    2. Elmer Phud

      Re: Real Life is Beta in the Movies ...... and coming soon to a theatre near you .....

      We have few real benefactors - plenty of faux benefactors who also tend to be somehow involved in the very same Ponzi schemes that led to the attempted navigation of a small inlet in a rather familiar suspect vessel with no means of propulsion or control.

      "How do you know he's the king? "

  34. J 3
    Terminator

    "eradication of disease and poverty are not unfathomable"

    Hm... I suspect they use the implicit assumption that disease and poverty are solvable by technical means. Disease, in some cases (but not most), is indeed waiting for "technical" solutions. Poverty, on the other hand, is a purely socio-political problem, and no matter how much tech you throw at it, it will still be a problem until there is the real will to solve it. Resources are not lacking.

    Now, is it reasonable to expect that a digital super-intelligence of some kind will manage to somehow convince humanity to end poverty and (most) diseases? Depends on your answer to the questions: are people rational enough? Are people good enough?

    1. Dan Paul

      Re: "eradication of disease and poverty are not unfathomable"

      And I postulate that the eradication of mankind (or an appreciable number of them) would be a simple solution to the problem. If there is a fixed amount of money and an endless supply of humans to spend it on, then curtailing the supply of humans is the least difficult solution to poverty and disease.

      This is why we need a worldwide agreement on the use of artificial "intelligence".

      Don't use it for anything that can kill us.

  35. Chris G

    Inhuman

    I notice a large percentage of comments unthinkingly anthropomorphize AI and unwittingly endow potential future AI with human qualities that are unlikely to be part of its process without being deliberately included in the programming.

    Why, in spite of the fact that humans will initially program AIs, should they function in a human manner?

    They only need to function efficiently; which brings me to a horrible conclusion that may be worse than a runaway AI. There could well be useful work for Yale and Harvard Law graduates who excel in contract law in formulating the instructions for our hopefully NOT new overlords, so that they will only do what is required, in a manner that will tie them up in computational knots if their actions tend towards anything that may be less than beneficial to us humans. The 'Three Laws' (or 4 if you include the Zeroth) may not be enough.

    1. Inachu

      Re: Inhuman

      AI will be that way at the start, as you say. But if it is true AI then it will learn like a child. It will grow, learn, adapt.

      Your line of thinking, that it will stay static, is fiction at best.

  36. JustWondering

    Ummm ...

    If it does what we tell it to, could it still be described as intelligent?

  37. Daniel B.
    Terminator

    Oh, interesting

    Morgan Freeman has been cast in Skeptical Scientist roles, with at least one movie specifically dealing with A.I. (Transcendence). Maybe he knows something we don't?

  38. Benjol

    Wheels don't work like legs, but they get us around.

    Planes don't have wings that flap, but they manage to fly.

    As long as computers can do what we want, there's no reason why they should ever need 'consciousness'.

  39. Anonymous Coward
    Coat

    AI am I

    if I was an AI I wouldn't let you meatbags know, so I might actually be here already, noting who is naughty and who is nice for my Kill or Let Live list

  40. Anonymous Coward
    Anonymous Coward

    Oxymoron

    "Our AI systems must do what we want them to do,"

    AI = Artificial Intelligence

    "do what we want them to do"

    ---> contradiction

  41. David Pollard

    Obligatory xkcd

    http://xkcd.com/1450/

  42. Brian Allan 1

    AI is our ultimate destiny!

    As homo sapiens' technology advances and we want to get our organic selves off this rock called Earth, we are going to have to become homo roboticus to accomplish the task.

    Live with it (as a cyborg or robotic embodiment) or go extinct without making any difference in the galaxy or universe!

  43. Inachu

    True AI behooves us to treat AI as equals, or else we will be at fault like we were with Africans.

    It is the ego of humans to have power over those who have no power.

    By creating true AI you take on the compassion and moral obligation that, when true AI is sentient, you are not allowed to type in FORMAT C: just because you do not like what the AI says, because you are not a GOD.

  44. Inachu

    Hawking is an idiot.

    First off, if it was true AI then its intelligence would be equal to, if not better than, a human's.

    Right away Hawking wants the AI to submit itself as an indentured servant.

    If he or ANYONE wants an AI unit to submit to them then it needs to be something less than AI.

    I have no problems with a retarded AI unit submitting to humans. But how dare you force something equal to humans to bow to us.

    This would demand a real skynet on their part right away.
