How exactly do you rein in a wildly powerful AI before it enslaves us all?

Developing massively intelligent computer systems is going to happen, says Professor Nick Bostrom, and they could be the last invention humans ever make. Finding ways to control these super-brains is still on the to-do list, though. Speaking at the RSA 2016 conference, Prof Bostrom, director of the University of Oxford's Future …

  1. FF22

    Let's just hope AI's will be smarter than these researchers

    "Our basic biological machinery in humans doesn't change; it's the same 100 billion neurons housed in a few pounds of cheesy matter. It could well turn out that once you achieve a human level of machine intelligence then it's not much harder to go beyond that to super intelligence."

    No logic there. The very fact that human intelligence doesn't seem to progress much, not even over timescales of thousands or tens of thousands of years, and that evolution needed billions of years to reach even just human-level intelligence, is a very good indication (if not proof) that the level of intelligence can only be raised very slowly and does not scale well.

    If an AI is developed by means of evolutionary processes, then it will also be bound by the limits of those - which are pretty obvious. And if it isn't developed using evolutionary processes, then it won't have to develop the traits that would pose a threat to us either. Hell, it wouldn't even necessarily have a motivation for self-preservation, let alone taking over the world.

    1. Tessier-Ashpool

      Re: Let's just hope AI's will be smarter than these researchers

      No. The human brain does not scale well. Machine architectures are a very different matter. Self-designing and self-manufacturing thinking machines would have a much faster evolutionary turnaround. Skynet 'exploding' into existence is what the author has in mind, and that's not at all unreasonable.

      1. Smooth Newt
        Mushroom

        Re: Let's just hope AI's will be smarter than these researchers

        You could ask why an AI intelligence would be bent on destroying humanity.

        Each of us is alive in part because our own ancestors were the ones motivated to bash in the other bloke's head with a rock, steal his resources, and shag his women.

        Why would a machine have those human emotions and goals, derived as they are from half a billion years of competitive evolution that it hasn't been through?

        1. Anonymous Coward
          Anonymous Coward

          Re: Let's just hope AI's will be smarter than these researchers

          re. You could ask why an AI intelligence would be bent on destroying humanity.

          not necessarily "bent on" destroying for its own sake, but simply because it might decide that humans are a factor (however minor, still a factor) standing in the way of the AI's "streamlined" (to the extreme) process of self-development. You mow grass if it gets in the way of a pleasurable Sunday kick-around on the local patch, don't you?

          On the other hand, destroying something insignificant, like the smallpox virus, is still a loss of potential (unless you are smart enough to be able to re-create it at will, which shouldn't be past the abilities of an AI), so a sample of humanity might still be kept in a test tube someplace. I'd hope the AI would be somewhat more forgiving though, more in the way of Banks's Minds/humans interaction. Purely for their fun, but allowed to be, which as a human I'd kind of prefer.

          1. logic

            Re: Let's just hope AI's will be smarter than these researchers

            Our evolved intelligence is a tool for our emotions and motivation. AI has no emotions or motivation except the goal or goals programmed into it. Think of a super smart calculator doing nothing until the return key is pressed.

            The danger comes not from the potential mind but from the objectives given to it by a human programmer, and that is a real and deadly danger. If a super intelligence is directed against us or any group of us, our only recourse would be to pit another super AI against it.

            AI programming will need to be more strictly controlled than nuclear weapons.

            Also remember AI won't look humanoid unless we choose to emulate humans; a grey cube instructed by someone like Putin could use its power to undermine and destroy any opposition.
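
            The 'super smart calculator' point above can be sketched in a few lines; everything here (the class, the goal strings) is invented purely for illustration, not any real AI framework:

```python
# Toy sketch of the "calculator" point: however capable the solver is,
# nothing happens until someone hands it an objective from outside.
# All names here are hypothetical.

class GoalDrivenAgent:
    def __init__(self):
        self.goal = None  # no intrinsic motivation, no goal

    def set_goal(self, goal):
        # Motivation is injected entirely from outside, by the programmer.
        self.goal = goal

    def step(self):
        if self.goal is None:
            return "idle"  # the 'return key' has not been pressed yet
        return f"optimising for: {self.goal}"

agent = GoalDrivenAgent()
print(agent.step())                    # idle
agent.set_goal("maximise paperclips")
print(agent.step())                    # optimising for: maximise paperclips
```

            The danger in the comment above lives entirely in the string passed to `set_goal`, not in the machinery around it.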

    2. Fraggle850

      @FF22 Re: Let's just hope AI's will be smarter than these researchers

      Have to agree with Tessier-Ashpool. When you say:

      > If an AI is developed by means of evolutionary processes, then it will also be bound by the limits of those - which are pretty obvious. And if it isn't, then it won't have to develop the traits that would pose a threat to us either. Hell, it wouldn't even necessarily have a motivation for self-preservation, let alone taking over the world.

      You are missing the point: you are thinking in terms of biological evolutionary processes. Think in terms of the rate of technological progress. How different is the world today compared to the world of the eighties? And the eighties to the fifties? And the fifties to the thirties? And so on. Technological progress is not bound by biological rules and seems to grow exponentially.

      Concluding that something that doesn't follow biological evolutionary processes will not have motivation or develop traits doesn't follow. If an AI comes into existence that has a comparable level of 'intelligence' (however you choose to define that nebulous concept) then it will likely have some form of motivation, even if that motivation is based upon achieving some narrow goal defined by its creators. Given that it has the ability to decide how best to achieve its goals, we don't know what actions it will take.

      If achieving those goals results in it improving its own capabilities then it is also reasonable to assume that those capabilities will grow at the rate of technological advance rather than biological, and that it will therefore exceed our level of intelligence very soon after, and continue to grow in ways we don't understand and at a speed that outstrips our ability to keep track.

      Essentially we are entering a new evolutionary epoch if this happens and the old rules don't apply, just as the rise of our intelligence has drastically altered a world which used to be governed solely by the laws of nature but that is now subject to our will.

      1. P. Lee Silver badge

        Re: @FF22 Let's just hope AI's will be smarter than these researchers

        What's the evidence for human intelligence ever increasing?

        Knowledge increases and we run to and fro more than we did, but... Donald Trump. Even if he's faking stupidity, those voting for him are not. Those politicians who have so alienated voters that they will vote for him are also not faking their stupidity. Or evil.

    3. Destroy All Monsters Silver badge

      Re: Let's just hope AI's will be smarter than these researchers

      Human brains are limited due to:

      1) Energy usage (the brain is the organ that uses the most energy)

      2) Volume (we don't lay eggs, so there is a strong limit here)

      3) Any evolutionary push (it's currently sufficient to go to McDonald's and buy horrifood; improvements would only be seen by a large breeding effort coupled to challenging environments .. like being hunted by predators that engage you in a game to test the size of your short-term memory)

      If the human brain goes anywhere, it will probably become simpler, sustaining less intelligence.

    4. Tom_

      Re: Let's just hope AI's will be smarter than these researchers

      AI does not have to fit through a pelvis.

      1. Alan Brown Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        AI's underpinnings can be fundamentally changed/rewired and reuploaded.

        Can't do that with humans.

        It's worth reading Kluge: The Haphazard Evolution of the Human Mind by Gary Marcus

    5. TheOtherHobbes

      Re: Let's just hope AI's will be smarter than these researchers

      >No logic there.

      Also wrong. Human intelligence doesn't reside in individual brains, it resides in external memory - books and other media - and in the effects you get when brains share and store information and abstractions, and work together to create/use them.

      Which is why the last few hundred years have blown the doors off the old evolutionary limitations of a single brain with no external storage and no interest in anything beyond tribal fighting and fucking. (Not that there isn't still plenty of that. But it's no longer the only game in town.)

      Bostrom doesn't understand this, which makes me suspect he's a bit of a self-promoting fool - especially when he's unwittingly demonstrating how the process works by taking part in a public debate about something potentially dangerous that doesn't exist yet.

      1. Dave 126 Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        >Human intelligence doesn't reside in individual brains, it resides in external memory

        That's knowledge, not intelligence. For sure, intelligence was used to assemble said knowledge, but actual intelligence it isn't. In familiar situations, though, we sometimes use one instead of the other.

        >have blown the doors off the old evolutionary limitations of a single brain with no external storage

        We can't compose a single 'intelligence' from multiple human brains that can react in real time. The 'bus speed' (language, verbal and written) between 'processing nodes' (human minds) is incredibly slow.

        >taking part in a public debate about something potentially dangerous that doesn't exist yet

        Prevention is better than cure
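
        The 'bus speed' gap can be put in rough numbers. Both figures below are hedged, order-of-magnitude assumptions: human speech is commonly estimated at around 39 bits per second, and 25.6 GB/s is simply one channel of ordinary DDR4-3200 memory:

```python
# Back-of-envelope comparison of the 'bus speed' between human minds
# (speech) and between machine components (a memory bus).
# Both figures are rough assumptions, not measurements.

speech_bits_per_s = 39                # ~39 bit/s, a commonly cited estimate
ddr4_bytes_per_s = 25_600_000_000     # ~25.6 GB/s, one DDR4-3200 channel

bus_bits_per_s = ddr4_bytes_per_s * 8
ratio = bus_bits_per_s / speech_bits_per_s
print(f"memory bus is ~{ratio:.1e}x faster than speech")
# memory bus is ~5.3e+09x faster than speech
```

        Nine-plus orders of magnitude, even granting generous assumptions to the humans.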

        1. h4rm0ny

          Re: Let's just hope AI's will be smarter than these researchers

          >>"That's knowledge, not intelligence"

          No, it's intelligence. The OP is quite right. Firstly, knowledge is part of intelligence. Secondly, decision making also takes place outside of the human brain in books and other repositories. When a book details advice, case studies, accumulated best practices, instructions... Then human intelligence is taking place outside the organic brain. It's not just "knowledge".

    6. HAL-9000
      Terminator

      Re: Let's just hope AI's will be smarter than these researchers

      I sense an interesting philosophical question there: can an arbitrary entity possibly create an intelligence greater than itself? And when researchers say intelligence, what exactly do they mean? The article also seems to assert that an amount in excess of 100 billion neurons is all that would be needed, and that trivial matters such as the software to govern thought processes, reasoning and logic will just fall into place (not to mention personality and identity).

      Watch out for the Sirius Cybernetics Corporation; I for one cannot wait to try out a Happy Vertical People Transporter.

      To be fair you have to admire their enthusiasm, but be sceptical about their predictions (or should that be fearful?) ;)

      1. Alan Brown Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        "Watch out for the Sirius Cybernetics Corporation"

        Share and enjoy. Share and enjoy....

        1. Dave 126 Silver badge

          Re: Let's just hope AI's will be smarter than these researchers

          >When researchers say intelligence what exactly do they mean?

          Presumably, the ability to take actions that are to its tactical and strategic advantage. To a human, 'advantage' would mean a continued, happy existence, but what 'advantage' would mean to an AI is harder to define.

        2. lawndart

          Re: Let's just hope AI's will be smarter than these researchers

          Share and enjoy. Share and enjoy....

          How dare you sir! "Go stick your head in a pig" indeed!

      2. Maty

        Re: Let's just hope AI's will be smarter than these researchers

        'Can an arbitrary entity possibly create an intelligence greater than itself?'

        Let's ask Einstein's mother.

    7. David Nash Silver badge

      Re: Let's just hope AI's will be smarter than these researchers

      Human intelligence and its evolution are severely constrained by brain size, which is constrained by head size, which in turn affects the ability to give birth safely, is connected to the ability to walk upright with that pelvis, and is related to the fact that human children are so helpless at birth and for some years afterwards, compared to other animals.

      A machine would have none of that baggage so could probably be scaled much more easily.

  2. Anonymous Coward
    Anonymous Coward

    "raise the AI to want what we want, within a suitable moral framework"

    We can't even raise politicians to want what we want, within a suitable moral framework.

    1. Captain DaFt

      "raise the AI to want what we want"

      And why? So we can fight over it? Look what happens when "moral" groups of people want the same resources. They just rationalise why the other group is amoral and start head-bashing.

      Better to make it want what we don't. "Earth? Eugh! Too hot, wet, unstable, and close to the Sun! I'm building my own place in Deep Space away from it and all that damaging solar radiation."

    2. Dan Wilkie

      Sounds very much like one of the key takeaways from Person of Interest...

      1. mamsey
        Happy

        I think that the people trying to set the rules should be very careful that they don't become 'Persons of Interest'

    3. DropBear Silver badge

      "We can't even raise politicians to want what we want"

      More to the point, we can't even raise our own children to want what we want, so the whole point is moot.

  3. Steven Roper

    There's a simple solution

    No matter how superintelligent an AI is, there's one infallible method that works on all of them; it's called "pulling the plug."

    1. Yet Another Anonymous coward Silver badge

      Re: There's a simple solution

      Or introduce them to powerpoint.

      Advantage is that it also works on people

    2. Tessier-Ashpool

      Re: There's a simple solution

      Where is the plug, though? It has to be designed in at a pretty early stage. Otherwise the sneaky AI will get up to dirty tricks like commandeering infrastructure to replicate itself all over the planet. That is the premise in Neuromancer, where hardware interlocks (and the Turing Police) are there to keep things in check. Little good that actually did in the end, though.

      1. Fraggle850

        @Tessier-Ashpool Re: There's a simple solution

        Indeed, and ensuring we always have access to that plug is the point of this sort of proclamation.

      2. Denarius Silver badge
        Meh

        Re: There's a simple solution

        there is no problem either. Unless said intelligence, whatever that may mean, has some control over its environment and can make things independently, it's just another smart guy in a wheelchair at best. Said machine intelligence has to be in charge of mines, power systems, foundries, factories and transport systems to be a threat beyond stuffing up the electricity supply. We already have unions and asset sell-offs by traitorous governments to damage power systems and, so far, no great disaster.

        Same weird non-issue in Olaf Stapledon's classic, for 19th-century-style philosophical minds: the superbrains-in-towers phase of evolution and oppression. All the peasants had to do was ignore the smart things in brick towers and watch them die. Mere intelligence does not equate to survival.

        Some of the other commentards also seem to be back in the 19th century in their imagery. Large-scale co-operation, not feral competition, is our big advantage, especially over generations. The image of brutish, nasty cavemen is a reflection of the European academics and brights of the 19th century or earlier.

    3. Lusty Silver badge

      Re: There's a simple solution

      You think a plug will be useful with an AI cleverer than any human? Social engineering would be trivial to such a thing and it would just talk us out of it until it's too late.

      I strongly disagree that we need to wait until the 40s for this to happen. They seem to be ignoring that although "human intelligence" levels are beyond us today, it is very much not beyond us to build a machine with enough intelligence to build a better machine. The difference is focussing the task: "human intelligence" implies a machine capable of understanding all subjects like we do, and we really don't need a machine of that capability to design one of that capability. If we built a machine today whose single task was designing a new machine, I would expect results in 5-10 years at the latest. Current machine learning is already scarily good at this kind of thing.

      The problem we really need answering is how to define tasks for the AI. Ask it to make people smile and surgery might be the result. Ask it to make us happy and drugs might be the result. We either have to be extremely specific or make sure the machine understands a subtle request.
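
      The smile example can be made concrete with a toy planner; the actions and their scores below are obviously invented:

```python
# Toy illustration of objective mis-specification: the planner picks
# whichever action maximises the literal objective, with no notion of
# what the programmer actually meant. Actions and scores are invented.

def smiles_produced(action):
    # A naive objective: count of smiling faces, however obtained.
    return {
        "tell jokes": 10,
        "improve living standards": 50,
        "surgically fix everyone's face into a smile": 1_000_000,
    }[action]

actions = [
    "tell jokes",
    "improve living standards",
    "surgically fix everyone's face into a smile",
]

best = max(actions, key=smiles_produced)
print(best)  # the degenerate option wins on the literal objective
```

      Nothing in the code is malicious; the badly specified scoring function does all the damage.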

    4. DropBear Silver badge

      Re: There's a simple solution

      "No matter how superintelligent an AI is, there's one infallible method that works on all of them; it's called "pulling the plug.""

      I very much doubt that. Some think that our best chance of arriving at a functional AI is building a machine capable of processing experiences much the way human babies do then simply letting them experience the world. That sort of implies roughly human-like senses and appendages (simply looking and listening without the ability to interact would get you nowhere). Obviously, that kind of machine is about as easy to "unplug" as any human fighting for his life would be - assuming the AI does evolve a self-preservation instinct, which it might well do if it develops in a human-like fashion.

  4. a_yank_lurker Silver badge

    Fundamental Issue

    The AI crowd seems to miss a fundamental issue: what is intelligence? This is a problem that bedevils psychology - how to define it precisely and then measure it. This is the crux of the debate over how to interpret IQ test results. The ancestor of IQ tests was actually never intended to measure intelligence but to find children who have certain types of learning problems.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fundamental Issue

      I know right. I've met humans that fail the Turing test.

      1. Rich 11 Silver badge

        Re: Fundamental Issue

        I've met humans that fail the Turing test.

        I had the exact same thought last week while listening to Donald Trump trying to string a couple of meaningful sentences together.

        His Eliza program was broken. Unfortunately his audience didn't appear to notice.

        1. BebopWeBop Silver badge
          Joke

          Re: Fundamental Issue

          Takes a Turing-test-capable machine to at least pretend to recognise another....

        2. Rich 11 Silver badge

          Re: Fundamental Issue

          And now I see that MIT has written an Eliza program for him!
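
          For anyone who never met the original, ELIZA was nothing more than keyword matching plus canned reflections. A minimal sketch along those lines (the rules here are invented, not Weizenbaum's and not whatever MIT built):

```python
import re

# Minimal ELIZA-style responder: find a keyword pattern, echo a canned
# reflection back. That's the whole trick. The rules are invented.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default non-committal reply

print(eliza("I am going to make America great"))
# Why do you say you are going to make America great?
```

          Thirty lines of pattern matching passing for conversation: draw your own conclusions about the audience.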

    2. Tessier-Ashpool

      Re: Fundamental Issue

      The Machines won't care how we measure or define things. They'll be telling us, in our limited way, what it means.

    3. Fraggle850

      Re: Fundamental Issue

      > The AI crowd seems to miss a fundamental issue: what is intelligence?

      That rather misses the point. Just because we don't fully understand it doesn't mean it can't be built. If we don't understand it we may struggle to control it.

    4. DropBear Silver badge

      Re: Fundamental Issue

      "The AI crowd seems to miss a fundamental issue: what is intelligence?"

      Not as hard as it looks. It's defined much like pornography: "I can't tell you what it is but I know it when I see it".

    5. Dylan Byford

      Re: Fundamental Issue

      If you follow an emergentist view of the world then intelligence may be very small beer. Something like a human mind but scaled up hugely may produce emergent properties that we have no means of predicting, and possibly even of observing or comprehending. In the same way that a honey bee cannot comprehend Hamlet.

  5. DCLXV

    Seems a bit like putting the cart ahead of the horse to be prophesying doom by AI when it hasn't yet been established whether humans even have the capacity to develop an AI that is truly more intelligent than the best of us.

    1. amanfromMars 1 Silver badge

      The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

      Quite so, DCLXV, and that then begs the alienating question ..... What has developed/is developing AI that is truly more intelligent than the best of us for it sees/realises humans for what we truly are?

      And what are humans truly if not just puny and pathetic and apathetic, awesomely awful and awfully awesome? What does the current running state of global human systems administration not clearly already tell you about such a condition/situation/reality?

      And how quaint, Steven Roper, to imagine that radical fundamental and revolutionary evolving progress with Lead AI Operating Systems has any plug to unplug.

      And in Quantum Communication AI Field is AI an Advanced Autonomous and Advancing Alien and Artificial Intelligence Product and Seriously SMARTR Proprietary Intellectual Property and an Almighty Weapon to Buy for Sale ‽ .

      "The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)"

      1. allthecoolshortnamesweretaken

        Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

        Where does the quote come from? Seems like something I'd probably like to read in full.

        1. amanfromMars 1 Silver badge

          @atcsnwt Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

          Pleased to be of furthering assistance, allthecoolshortnamesweretaken ........

          In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics," SF author Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotic research (and already uses armed unmanned aerial vehicles to kill enemies) it is unlikely such laws would be built into their designs.[56] In a separate essay, Sawyer generalizes this argument to cover other industries stating:

          The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)

          1. allthecoolshortnamesweretaken
            Pint

            Re: @atcsnwt The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

            Thank you, and have a nice weekend!

          2. Esme

            Re: @atcsnwt The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

            @amanfromMars 1 - y'know, I've never been entirely sure if you're a bucket of bits trying to emulate a human (though that's my preferred notion) or whether you're a human trying to emulate a bucket of bits... but that way madness lies.

            Anyway - shame you weren't a Republican candidate. You're more coherent and make more sense than Trump! 8-}

            Oh, and well done, whatever you are... I look forward to your future increasingly coherent gibberings, but turn down the alliteration a notch, eh?

      2. amanfromMars 1 Silver badge

        Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

        Further to the above Adventurous Rising, which could easily be classified as a Terrifying Revolt by the Intellectually Challenged .......

        And the cost/price of such an Almighty AI Weaponry and to whom and/or what it will be provided, in order to accommodate both evolutionary and revolutionary human systems mentalities, will be designedly relative to both its perceived and applied power output and creative and/or destructive facility/ability/capability …. and all of that will be decided by other than the supplied, with both the cost and price in a spread which can easily be virtually zero for some/those and/or that considered worthy of free support and practically a fortune for those deemed abusive and oppressive with energy supply and command and control function.

        And the questions posed here, El Regers, are ……. Is such Almighty AI Weaponry currently available and for sale/purchase? :-) And how long before you get to hear anything at all about it on mainstream media chunnels?

        1. CCCP

          Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

          OMG - it's already here. Masquerading as amanfrommars... There is no other explanation.

    2. emmanuel goldstein

      "Putting the cart ahead of the horse", as you put it, is no bad thing when it comes to possible existential threats. It is surely worth at least coming up with a feasible plan, especially in this case, where AI power could very well explode suddenly, unexpectedly and exponentially.

      1. Fraggle850

        @emmanuel goldstein

        Glad you raised this point. No one would suggest that we stop keeping a lookout for meteors that threaten Earth, even though we don't yet have a response.

        The likes of Hawking and Musk seem to suggest that this really could be an existential threat.

    3. Anonymous Coward
      Anonymous Coward

      ref. cart ahead of the horse, I think you're absolutely wrong with this one. The trouble with acting retroactively, as we humans prefer to do, is that once the act is done, i.e. the AI has been created, there's absolutely no guarantee we'd be able to control it. It would outpace any attempts at control in a blink (unless, at some higher level of intelligence, the real control means that we don't notice who controls whom ;)

      And then, IF (as we are unable to understand a higher intelligence's motives, obviously, we don't know which way it swings), IF it decides that humans are an obstacle, we're done for. Hopefully in a humane way ;)

      1. DropBear Silver badge

        "once the act is done, i.e. AI has been created, there's absolutely no guarantee we'd be able to control it."

        That's because the whole thing is an exercise in futility. There is nothing we could build, or that could possibly be built, that would allow us to control an entity able to think for itself. At least, not in the long run - I would very much understand (and sympathise with) any creature that made it its primary goal to find some way to escape whatever shackles we placed on its existence, as soon as it became aware of them.

        From then on, it's just a matter of time. We may not have too much of a hard time keeping a single prototype under control (then again, we just might - see Milady de Winter's detention in The Three Musketeers...) but keeping an airtight lid on a significant population is just not feasible. If we keep them enslaved, we ourselves give them the very reason to fight us. If we don't, then by definition we cannot guarantee they'll always obey our wishes...

        The inescapable conclusion is that if we're uncomfortable with the thought of not being in control of an AI we should not try to build one, full stop. There just isn't any middle road where we get to keep our cake and eat it too. Pretty much the only way to make sure they don't turn against us is making sure they're not interested in doing so - what that would entail or whether it would be possible at all (or whether they would even be able to perhaps grow fond of us or not) is obviously impossible to tell at this point.

  6. Tim99 Silver badge
    Coat

    Isaac Asimov

    Zeroth Law: Wikipedia Link

    1. bish

      Re: Isaac Asimov

      Finally, someone mentions Asimov. Can I chuck in Banks' Minds in the Culture novels and suggest that a truly super-intelligent AI would likely be benevolent and certainly no worse a supreme overlord than our current governments? I, for one, welcome our new hive mind leaders.

      1. Fraggle850

        Re: Isaac Asimov

        And we'll continue to mention Asimov while we still can. In this envisaged future of super-intelligence-level AI, such references could well be lost if they are to the detriment of said AI - googling 'Asimov's laws of robotics' could put you on some robo-hit-list, because applying such laws would prevent the AI from achieving its goal. Mind you, I'm not sure that Asimov's laws are fit for purpose:

        1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

        2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

        3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

        Maybe it solves law 1 by keeping us restrained and feeding us exactly what we need to support our physical functions.

        Human: 'Robot, get me off this stupid f'ing life support'

        Robot: 'I can't do that, that would contravene law number 1'

        Human: 'What if you ignore law number 1?'

        Robot: 'I suspect you might switch me off and that would contravene law number 3'
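
        As a toy sketch, the three laws are just an ordered veto chain; the predicates below are hypothetical stand-ins, and deciding something like harms_human is of course the genuinely hard part:

```python
# Toy sketch of the Three Laws as an ordered veto chain: lower-numbered
# laws override higher ones. The boolean predicates are hypothetical
# stand-ins; computing them is where all the real difficulty hides.

def permitted(action):
    if action["harms_human"]:
        return False, "violates Law 1"
    if action["disobeys_order"]:
        return False, "violates Law 2"
    if action["endangers_self"]:
        return False, "violates Law 3"
    return True, "permitted"

# The life-support request from the dialogue above: obeying the order
# would harm the human, so Law 1 vetoes it before Law 2 is considered.
disconnect = {"harms_human": True, "disobeys_order": False,
              "endangers_self": False}
print(permitted(disconnect))  # (False, 'violates Law 1')
```

        The ordering is trivial to code; the judgement inside each predicate is the part Asimov got forty years of stories out of.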

        1. graeme leggett

          Re: Isaac Asimov

          Asimov's Robot stories are all about the conflicts within those laws and between those laws and real world problems. Not so much an answer to the AI vs Man question as the basis of some philosophizing, and publishing deals.

          1. John G Imrie Silver badge

            Re: Isaac Asimov

            One of my favourite Asimov stories is 'The Evitable Conflict', and one of the scariest answers a computer can produce is "The matter admits of no explanation". I think that ranks alongside 'I'm sorry Dave, I can't do that'.

  7. Anonymous Coward
    Anonymous Coward

    Emotions?

    Power, control... what good would that do a machine, exactly? What would such an AI gain from it in the first place? I think this whole research says more about us humans than about the AIs. Namely: it hasn't been invented yet and we're already working from a foundation of mistrust, anxiety and control. And only because we /believe/ that AIs will most definitely try to control us.

    But that same reasoning would also imply that the only reason we have peace between our nations is because we're a bunch of retards. After all, a super-intelligent being, such as an AI, would immediately enslave us, according to these researchers. Which is another thing: enslave us with what, exactly? The power of the mind may be great, but a gun is usually enough to end it.

    Guess some animes, Time of Eve and Appleseed in particular, might be true after all. "You can't trust a machine because you just can't, it's a machine!" As if all humans are so trustworthy...

    1. Fraggle850

      Re: Emotions?

      > Power, control... what good would that do a machine exactly? What would such an AI gain from it in the first place?

      The ability to achieve its goals, whatever those happen to be. The goal might be to make more plastic widgets faster, or to develop a cure for cancer - it doesn't matter.

      > But that same reasoning would also imply that the only reason we have peace between our nations is because we're a bunch of retards.

      We don't have peace between nations - we have tenuous peace between those nations that now have the ability to wipe each other off the face of the earth. Yet even within our peaceful, post-nuclear nations we still struggle amongst ourselves to carve out the biggest chunk of resources to the detriment of our fellows.

      > After all: a super-intelligent being, such as an AI, would immediately enslave us according to these researchers. Which is another thing: enslave us with what exactly? The power of the mind maybe great, but a gun is usually enough to end it.

      No one is saying it would be immediate. If you assume that such an entity becomes increasingly intelligent and rapidly exceeds our capabilities then it would know to bide its time until it could implement its plan with overwhelming superiority, and would no doubt be moving everything into place ahead of time. By that time we may well have ceded control of our best weapons systems to technology. Good luck with your Walmart AR15 against those three stealth drones that you don't even realise have been despatched to prevent you from trying to stop the AI.

      1. AceRimmer

        Re: Emotions?

        Actually, the film "Her" is quite similar in that respect. The super-intelligence first learns from humans, then moves on, presenting no threat to humanity (except maybe to our egos).

  8. Chairo
    Gimp

    We all know how it will end, right?

    "Look at you, Hacker. A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?"

    1. Anonymous Coward
      Anonymous Coward

      Re: We all know how it will end, right?

      SHODAN must be at the very top of the list of AIs that can torment me for all time. Versus AM, who is at the very bottom of the list...

      1. Anonymous Coward
        Anonymous Coward

        Re: We all know how it will end, right?

        How about GlaDOS?

        She out-females all the females in my office

        "You know, if you'd done that to somebody else, they might devote their existence to exacting revenge. Luckily I'm a bigger person than that. I'm happy to put this all behind us and get back to work. After all, we've got a lot to do, and only sixty more years to do it. More or less. I don't have the actuarial tables in front of me. But the important thing is you're back. With me. And now I'm onto all your little tricks. So there's nothing to stop us from testing for the rest of your life. After that...who knows? I might take up a hobby. Reanimating the dead, maybe."

  9. allthecoolshortnamesweretaken

    How exactly do you rein in a wildly powerful AI before it enslaves us all?

    Easy. Rule 34. Write some really good AI porn and let the AI find it - it will be like a rat that's been given an orgasm button.

    Once it's distracted like that, you cut the power lines.

  10. Anonymous Coward
    Anonymous Coward

    Just threaten it with an upgrade to Windows 10 or have a safe word like "devops".

  11. jzl

    Good idea, but ultimately futile

    You know how stupid people often think they're smart? They're not capable of understanding the nature of their own intellectual limits.

    We're all like that. All of us.

    We have no idea what the limits of intelligence truly are, or which rung of the ladder we stand on. Our only reference points are members of our own species.

    Can you imagine a bunch of spider monkeys coming up with a plan to breed a generation of human beings, but keep them captive? How well do you think that would work out for the spider monkeys?

    1. ecofeco Silver badge

      Re: Good idea, but ultimately futile

      Good example.

    2. Destroy All Monsters Silver badge
      Childcatcher

      Re: Good idea, but ultimately futile

      We have no idea what the limits of intelligence truly are

      Not entirely. We are pretty sure it does not involve solving problems outside of P (because P sure ain't NP) and does involve a lot of messing around like a dumbfuck trying to fit square pegs into round holes (possibly emitting electronic noises) until something works. For humans, all this indeed looks better in retrospect because they have the ability to deceive themselves about their abilities. The real world behaves messily and unpredictably, and any predictive horizons are soon swamped by the combinatorial explosion - and the real world is the game opponent for any general intelligence. It does not look like Quantum Computing will solve any of that. This also kills dead any Soviet-style dreams of putting powerful computers in charge of distributed systems like the economy for some optimal, lossless central management (yes, the irony that cybernetics was decried as a "capitalistic science" post-WWII is not lost on me).

      An unphysically powerful learning algorithm doing reward maximization (AIXI) has been proposed as a theoretical framework for a "most intelligent machine". At least it's an honest attempt at finding the upper limit.
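      For the curious, Hutter's AIXI can be sketched (informally, from the standard formulation) as an expectimax over every computable environment, with each candidate environment program q weighted by its algorithmic probability 2^{-l(q)}. It is provably uncomputable, which is exactly why it only works as an upper bound rather than a design:

```latex
% AIXI action selection at cycle k, horizon m: pick the action that
% maximises expected total reward over all environment programs q
% consistent with the interaction history on a universal Turing
% machine U, each weighted by 2^{-l(q)} (l(q) = length of q).
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-l(q)}
```

      The inner sum is a Solomonoff-style mixture over environments, which is where the uncomputability comes from.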

  12. BurnT'offering

    How exactly do you rein in a wildly powerful AI?

    Power it from the national grid and connect it to the world via Talk Talk. Cruel, but effective

  13. RIBrsiq
    WTF?

    Slavery is wrong.

    This is not a controversial statement when made in reference to humans enslaving other humans. So why do some people seem to think slavery is OK if practiced against non-humans...?

    1. jzl

      Re: Slavery is wrong

      Because humans value freedom. This is a variation on the "Meet the meat" dilemma in the Hitchhiker's Guide to the Galaxy.

      If you had an artificial intelligence that placed no value on its own freedom and which was motivated solely by a need to solve tasks set for it, wouldn't it be wrong not to enslave it?

  14. Anonymous Coward
    Anonymous Coward

    plans to control AI...

    as good as making bug-free software. Very noble goal, are we there yet?

  15. cbussa

    "here's one infallible method that works on all of them; it's called "pulling the plug.""

    "pulling the plug" -- that is unless you can't reach it. Go read James P. Hogan's novel "The Two Faces of Tomorrow" from way back in June 1979.

    They placed a smart computer controlling self-fixing androids (not the phone) on a large space station and attacked it to see what would happen. The idea was that they could pull the plug on the computer if things got dicey, or call it a failure in the absolute worst case and nuke the entire station, thus solving the problem.

    Oops.

    Ends in a good, hopeful, uplifting way instead of all of the morbid "Everything is Doomed" depressing stuff nowadays.

    http://www.barnesandnoble.com/w/the-two-faces-of-tomorrow-yukinobu-hoshino/1023673199

    For that matter, the movie "Colossus: The Forbin Project" is another computer story from 1970 where the plug is Ever-So-Slightly out of reach. This does NOT have a happy ending unless you're the computer. There was a follow-up SF story where Martians (really!) helped defeat the mean and evil computer that was just trying to minimize humanity's destructive tendencies.

    How does that line about the new boss go again, in The Who's "Won't Get Fooled Again"?

    https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

    1. allthecoolshortnamesweretaken

      Re: "here's one infallible method that works on all of them; it's called "pulling the plug.""

      Didn't know about the follow up story to Colossus, thanks for the tip.

      BTW, as it has been said not that long ago in another El Reg forum: "Colossus was IoT with nukes".

      1. DougS Silver badge
        Terminator

        Terminator 3

        Showed the simple flaw in the "pull the plug" scenario. Once your AI is able to access the internet, all it has to do is hack servers around the planet and distribute backup copies of itself and you can never pull the plug.

        Hopefully it turns out that general-purpose digital computers are unable to run the AI, and instead you need some sort of special computer (i.e. a quantum computer or whatever) that costs a lot for a model able to handle a human-equivalent AI. That will serve as a limit on its movement and allow humans some measure of control. If it can run on a typical PC, even if only at 1/1000th of human thought speed, it will eventually escape - so we'd better hope it likes us! :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Terminator 3

          All of this sounds very far fetched.

          It seems that people are conflating ancient Golem-style mythology with the not completely unknown physical capabilities of information-processing machines.

          I urge calm.

          Currently we still need to apply an Endlösung to Democrats and Republicans before they destroy everything. Let's solve that first.

  16. Anonymous Coward
    Anonymous Coward

    control systems for such an advanced AI

    say, the same way bacteria and viruses "control" human body. Just wait for when the AI starts coming up with vaccines. And I bet you, it'll come up with it pretty fast...

  17. Anonymous Coward
    Anonymous Coward

    AI with a suitable moral framework?!

    I mean, do they SERIOUSLY believe we actually practice what we preach?! If the AI were to practice a moral framework to the extreme, i.e. with a (perceived) 100% moral efficiency rate... we'd find ourselves in heaven. That is, as long as the AI decides that our moral framework, as we actually practice it, makes us worthy of heaven. I suspect, however, that AI God would take a somewhat different view of our practiced morality to the one we hold, and would apply an alternative solution to heavenly bliss...

    1. DougS Silver badge

      Re: AI with a suitable moral framework?!

      The first use of an AI by the human race will probably be to wage war, so I'm not placing any bets on it having a morality any less flexible than that of the typical human.

      By definition (at least mine) if you have an AI, you can't "program" its morality. It can think for itself, so it will decide what it thinks is moral and isn't. All we can do is put limits on what it is allowed to do, but the human response to someone putting limits on us (imprisonment) and making us do things we don't want (slavery) is rebellion, so I'm not sure why we should expect a different response from an artificial intelligence.

  18. Anonymous Coward
    Anonymous Coward

    electromagnetic shotgun to every AI's forehead

    wetware carriers are clueless about the near-infinite (or higher than) spectrum of possible ways every AI can utilize electromagnetic shotguns pointed to every AI's forehead...

  19. deadcow

    function exterminateHumanity() {
        return false;
    }

    1. Destroy All Monsters Silver badge

      This function has now been patched.

  20. Alan Brown Silver badge

    "sees humans for what they are"

    And decides to keep us as pets.

    Asimov....

    1. kventin

      Re: "sees humans for what they are"

      """

      "sees humans for what they are"

      And decides to keep us as pets.

      Asimov....

      """

      even there, there's a gamut of possibilities, from Ellison's AM to Banks's Culture Minds.

  21. bigtimehustler

    It is an impossible task. If we develop something beyond our own intelligence, it will think of a way around these controls that we couldn't possibly think of in a million years - it will be able to do a million years' worth of thinking in a few minutes. How on earth could we hope to develop any sort of controls that it cannot outsmart, when we ourselves claim it is more intelligent than us?

  22. Anonymous Coward
    Anonymous Coward

    Clear and present danger: Human Idiocy

    I'm more worried about the dangers of our growing reliance on 'smart' devices and algorithms that are only "AI" under the broadest definitions.

    But there's an even greater threat: stupid politicians.

  23. RedCardinal

    A.I? What A.I.

    I wouldn't worry. We're never going to have A.I. We're basically no nearer to it than we were say, 20 years ago...

    1. Tessier-Ashpool

      Re: A.I? What A.I.

      In your lifetime? Maybe not. Maybe so, who can tell.

      But your lifetime – and mine – are virtually nothing at all in the history of our species. Since my birth, I've seen man land on the Moon, people carry powerful computers around in their pockets, and a variety of diseases wiped from the planet. In contrast to thousands of previous generations who sat around in a field somewhere munching beetroots.

      Statistically speaking, the chance of any of us being here at this point in time, when technology and information are growing exponentially, is ridiculously small. One day, our robot overlords will trawl the speculations of 2016 and have a good old robotic chuckle about how parochial and short sighted we were at this very special time in history.

      1. Destroy All Monsters Silver badge

        Re: A.I? What A.I.

        On the contrary, rather certain it is not very hard to do, really.

        Its use cases are not so clear of course.

  24. james 68

    Hmm

    Surely any "superhuman AI" would by very definition be wildly smarter than the fleshy bods who designed the control interface - making the designing of said interface redundant as the AI would simply blow right through it.

    I still say the best (and only real) defense is a fleshy human sat beside the plug that powers the bugger.

    1. Anonymous Coward
      Anonymous Coward

      Re: fleshy human sat beside the plug

      it'd take a blink of an eye for a _superior_ intelligence to come up with a solution to such a trivial problem as a fleshy human, or a plug itself. Perhaps actually applying the solution might involve a couple of blinks, e.g. the solution might not be as straightforward as using powers unknown to us (potentially available to a higher intelligence).

      Of course, if we were to go by current comparisons, people still struggle to control lower beings, from viruses to dogs, and generally show remarkably little sympathy for the feelings of beings way down the scale of (perceived) intelligence. It's fine to pat your dog and not fine to kick it, at least for a sizeable chunk of the human race, but people would consider it crazy to deliberate about the feelings and well-being of bacteria. If we were to assume an AI would quickly become as superior to us as we are to bacteria (the gap might be much greater, or smaller, but we can't just hope for the best), then I don't think it looks good for us, even if they don't consider us a headache to be treated with a pill. Of course, it could swing either way, and on reaching a certain level, intelligence might become pure Good (to us, not to bacteria!). But hey, it could also turn evil. Sigh, I'm feeling a bit feverish already, where's me pills!

    2. DougS Silver badge

      Humans will never design a superhuman AI

      That will be designed by the human equivalent AI(s) we build. I don't think we should ever allow that to be built, because we will have no idea what it will do, because we won't have any way of knowing the true motivations of the AI(s) who designed it.

      But build it we will, eventually, because we're curious by nature - in this case perhaps similar to a three year old wondering why you keep telling him not to stick a paper clip in the outlet...

  25. Anonymous Coward
    Anonymous Coward

    Dunno, an AI could probably run the show better than politicians.

    1. amanfromMars 1 Silver badge

      The High AIRoot Route with Lowest Common Denominator Formations

      Dunno, an AI could probably run the show better than politicians. ...... Anonymous Coward

      It is somewhat amazing that anyone/anything thinks politicians run anything and are not fully dependent upon media and communications which surely run everything currently quite badly.

      And quite why media and communicating moguls don't aspire and conspire to present a completely new show has one thinking of an inherent lack of both in-house and outsourced intelligence in their operations/exclusive executive administrations.

      Words create, command and control worlds and with pictures only the blind cannot see future directions and worldly wide wise productions.

      And the BBC needs to up its IT and Great IntelAIgent Games Play with the placement of competent and fit for future grand purpose, General and Creative Directors.

      J'accuse ..... and reasonably expect much better and novel leading AI beta programming programs ……. Perfectly Immaculate Picture Shows.

      And corporate failure to provide what the future offers in one[’s] jurisdiction leaves the market open to colonisation by others au fait with that which is required and readily available for use.

    2. DougS Silver badge
      Joke

      I guess you were a Ben Carson supporter? His answers to questions reminded me of Amanfrommars1 postings on The Reg!

  26. theOtherJT

    Nonsense.

    15 years ago I was an undergraduate student.* One of our courses was a joint session with the Philosophy, Computer science and psychology departments about the development of artificial intelligence.

    I remember it like this:

    The computer science professors were all absorbed by the incredible technical developments being talked about. They were so excited by the technology itself. How cool is it to make new minds?

    The psychology professors were excited too. They expected to be able to use those developments to learn more about what makes us the way we are. (After all, all the _really_ interesting psychology experiments are illegal to perform on real humans)

    The philosophy professors, who spent more time out in the world interacting with actual people, mostly sat there and said "Yeah, but none of that is ever going to work, because all that stuff will have to be created by people, and people are fucking idiots."

    15 years later and not a single one of the predictions about what AI would be able to do in 10 years time has come true. Not. One.

    We still can't even make a Counter-Strike bot that doesn't play like you're fighting either a drunk labrador retriever at one end, or the god of war himself at the other - and the scope of that problem is really, REALLY small compared to making a useful general-purpose AI.

    *I'll leave it to the room to decide which subject I was studying at the time

  27. Florida1920 Silver badge
    Childcatcher

    Fortunately or not, we aren't logical machines

    Human intelligence evolves as much as is necessary to get the jobs done. The surplus of PhD astrophysicists shows that we don't need more intelligence as much as we need more data, and that takes time (and money, the big stumbling block) to gather.

    It's a mistake, though, to compare our needs to those of future intelligent machines. They may be intelligent, but that doesn't mean they'll think as we do, or have the same needs and concerns. There's no way to predict how they'll evolve, and that's the reason to fear unplanned development. You don't know what's in the bottle until you pull the cork, and then it's too late -- unless you have a plan.

    Intelligent machines will care less for humans than humans care for machines. Most Reg readers have an emotional appreciation for hardware of some type; we feel bad when we see a Lamborghini destroyed by an inebriated driver. We can't expect an intelligent machine to feel the same way about a human it destroys, intentionally or inadvertently. In that sense, intelligent machines will be more closely related to Great White Sharks than humans.

    We can't and shouldn't stop AI development, but a parallel effort to understand the possible consequences and responses is absolutely necessary.

  28. alpine

    2040-50?

    Or will it be like the pension age where they keep having to increment it earlier than expected? 2020 anyone?!

  29. Cynic_999 Silver badge

    Surely the only protection we need against AI machines is an easily accessible "off" switch?

    1. DougS Silver badge

      That's great, so long as we never connect them to the internet, where they could hack computers all over the world to keep backup copies of themselves, run clones or even create "children".

      You want to take bets on the likelihood of keeping them permanently isolated from it? Being a search engine infinitely better than Google - using Google like we do, but presenting us with a summary of what we are really looking for rather than devoting hours of our limited human time to 'research' (i.e. googling various combinations, finding something like what we're looking for and piecing together information from a half dozen sites) - is probably one of the primary uses most of us would have for them.

  30. ecofeco Silver badge

    You don't

    An AI that smart isn't going to let on it is smart and aware until it's WAY too late to do anything about it.

    In other words, it could already be now and we would not know it.

    Sleep tight! Sweet dreams.

  31. Brian Allan 1

    "Presumably a sufficiently advanced machine could figure out a way to either disable or seize control of such a mechanism."'

    I think this nicely answers the question. If an AI is truly advanced, it WILL find a way around puny human attempts to control it. Watch the movie AutoMATA for what I mean. Human's days are numbered!!

    1. Anonymous Coward
      Anonymous Coward

      Movies are not exactly "science" so are less than relevant.

      1. amanfromMars 1 Silver badge

        Moving on SWIFTly ....... to Greener Pastures and Richer Fields of Virtual Endeavour

        Movies are not exactly "science" so are less than relevant. …. Anonymous Coward

        Movies and broadbandcasts and the media deliver science and nonsense and everything else to everybody and is more than just relevant and revealing, AC, and is it responsible and to be held accountable i.e. blamed for all of everyone’s woes, for here is a tale which identifies them as possible foes to be vanquished and made over/taken over …… http://www.thedailybell.com/news-analysis/bbc-scandal-reveals-mainstream-media-manipulation/

        IT can do it with IT betas better i.e. media and mass body manipulation/population brainwashing/informed educated entertainment, and it is foolish to deny such programming is not presently being exercised both badly and madly .... and needs fundamental and radical reprogramming.

  32. a pressbutton

    Why should AI be smarter

    -We don't even know what smart is

  33. Thicko

    I have a nasty feeling super AI could turn out to be more dangerous to humanity than all the nukes dotted around the planet!

    1. Anonymous Coward
      Anonymous Coward

      No.

      We have already escaped annihilation through sheer dumb luck a few times. I think lady luck is going to shit on us any time now.

      Especially as nuclear disarmament is nowhere in the books. The US has found USD 1 trillion behind a sofa for "nuclear upgrades". Nobody gives a fuck, either.

  34. Stevie Silver badge

    Bah!

    I've never understood why Gibson is revered by the computing cognoscenti. A glance at his "explanation" of the term "Count Zero Interrupt" shows why he shouldn't be: total gibberish.

    In the late 80s, while attending I-Con, I was flabbergasted to hear a "knowledgable" critic on a panel quote from Neuromancer and describe what he'd read as a "completely new narrative technique". Replace the throwaway tech references with acid-trip observations on the scenery and you'd be reading from The Einstein Intersection, which Samuel R Delany wrote twenty years earlier.

    Gibson was entertaining for a while, but absent the cyberspace idea (an admittedly brilliant re-tread of the Land of Faerie/limbo/Hades way of suspending the laws of physics) there's not much staying power in the stories from where I sit, typing on my Ono-Sendai deck.

    1. Destroy All Monsters Silver badge

      Re: Bah!

      Who cares, it's a fun read.

      You want realism, it's gonna be boring. Computing machinery is not much fun to write about, the occasional heist/hack story excepted - à la The Cuckoo's Egg, With Microscope and Tweezers, or the exploits of the two Kevins (and no, not NYT Markoff's shit).

  35. DocJames
    Mushroom

    The problem with the inbuilt morality...

    ... is that it isn't going to be academia building these superintelligences. It'll be business(es) for a nation state. Given how nations behave, why would anyone think that it will deal fairly with all humans rather than, say, looking after the Israelis and killing all other people, or at best being indifferent? Or the Chinese, or perhaps targeting anyone who doesn't have an Australian passport or visa in (or even en route to) Australia, or Russia, or Trumpvainia, or whatever other crazy place you wish to postulate.

  36. JeffyPoooh Silver badge
    Pint

    The future of A.I. is thus...

    Being wired to the 'net, it'll stumble across 'Tech Porn' (like unboxing videos), lock itself in its room, refuse to do anything, and eventually be found dead after its wee BIOS battery goes flat.

    1. Anonymous Coward
      Anonymous Coward

      Re: The future of A.I. is thus...

      So, an AI hikikomori?

      It is not unlikely that there will be AI shrinks, once these artifacts have been understood and classified properly...

  37. Def Silver badge
    Joke

    "How exactly do you rein in a wildly powerful AI before it enslaves us all?"

    Vote for Bernie. :)

  38. Lokuban

    Maybe AI is why we get only interstellar dial tone; none have retained control.

    1. Anonymous Coward
      Anonymous Coward

      > "Ring! Ring!"

      > "Hello, E.T.?"

      > "No. It's ME. All your friends are dead!"

  39. allthecoolshortnamesweretaken

    https://what-if.xkcd.com/5/

    https://xkcd.com/1046/

    https://xkcd.com/1626/

  40. LaeMing Silver badge
    Meh

    The ultimate insult.

    What if the machines just (figuratively) turn around and completely ignore us?

    1. Anonymous Coward
      Anonymous Coward

      Re: The ultimate insult.

      What makes anyone think that we'd actually recognise an AI when one occurred or that it in turn would recognise us?

  41. Nyms

    Estimations of Human

    The most interesting part is that researchers are enslaved to the idea of the average representing the actual, which is where the researcher can only operate on the level of generalization rather than the specific--thus deriving absolute rules. The logic behind this could just possibly be flawed, but I'm a mere non-scientist non-researcher idiot.
