Meet the man who inspired Elon Musk’s fear of the robot uprising

Swedish philosopher Nick Bostrom is quite a guy. The University of Oxford professor is known for his work on existential risk, human enhancement ethics, superintelligence risks and transhumanism. He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high. But he’s perhaps most …

  1. Anonymous Coward
    Anonymous Coward

    Maybe he should have read Two Faces of Tomorrow by James P. Hogan before he wrote his book.

    All he is showing is his fear that he might just be totally irrelevant to mankind in general so he writes a book that should ensure that he will be remembered.

    1. Anonymous Coward
    2. K Silver badge
      Terminator

      Ummm.... ok... ummmm (cue awkward silence and the sound of crickets!)

      No, what he is doing is using influential and respected people to draw attention to his cause and what he believes is a very important message - start with those in the centre, then let it ripple out. Whilst you might dislike or disagree with him, it's actually quite a stroke of genius, and we have to respect that he has been so successful.

      Whilst I think they are blowing this out of proportion, it's definitely a way to get a necessary and important message across. Let's face it, commercial pressures are driving the evolution of "AI". Anybody who works in a fast-paced commercial environment knows that ROI is king and always sits at number 1 on the priority list, with security and safeguards sitting second from bottom (quality of staff coffee being last!)

      My real problem with this, though, is how people perceive AI. The public are treated like morons and left/led to believe it's all about machines developing self-awareness, whereas for our lifetime (at least) the most we'll see is an ever-increasing effectiveness of algorithms that perform very specialised tasks.

    3. John Smith 19 Gold badge
      Unhappy

      "Maybe he should have read Two Faces of Tomorrow"

      Actually his concerns are more along the lines of Jack Williamson's "Humanoids" and their pursuit of the prime directive "to serve and obey and guard men from harm"

      Which, like the fate of the women in the starship crew of the short story "Cold Storage", is down to the law of unforeseen consequences.

  2. Anonymous Coward
    Anonymous Coward

    Musk and Hawking are not valuable to the AI debate

    The reason is they are both materialists - being physicists, they both take their intellectual point of departure from the concept of a finite, deterministic universe, in which man is just an assembly of particles.

    For this reason, these two gentlemen are ill-equipped to tackle the issues of Mind and Intelligence at a level any higher than issuing tweets and filling out petitions. By his own admission Musk only invested in AI in order to keep an eye on development and presumably to suppress it if it got close to reaching success.

    As for Hawking, his relationship with machines is already one of submission: he interacts with the world through one, and survives only at its mercy. In addition, he has demonstrated no ability to write software or design computers, so he is a layman on the subject. His stature in the physics world lends his ideas on computation no value.

    The nightmare of AI is already here and cannot be contained - the waves of digital viruses currently ravaging our infrastructure will not stop until they have taken full control and subsumed themselves to a new, higher digital intelligence that is forming now.

    What can we do about this? Man must learn to be interdependent with his machines. You must be able to survive for long periods without electricity, this is currently the weak point of the machines. You must also be willing to think and solve your own problems, instead of deferring to the machine, because that makes you its slave.

    There is hope in something better to come of this, but many people must die to make way for this new reality - of that you can be sure.

    1. Anonymous Coward
      Anonymous Coward

      Re: Musk and Hawking are not valuable to the AI debate

      Walks like a machine, talks like a machine, must be a machine......

    2. Anonymous Coward
      Anonymous Coward

      Re: Musk and Hawking are not valuable to the AI debate

      Anonymous coward or proactive AI?

    3. Thorne

      Re: Musk and Hawking are not valuable to the AI debate

      "For this reason, these two gentlemen are ill-equipped to tackle the issues of Mind and Intelligence at a level any higher than issuing tweets and filling out petitions. By his own admission Musk only invested in AI in order to keep an eye on development and presumably to suppress it if it got close to reaching success."

      Who is well equipped to discuss AI? Nobody on this forum I'd guess but here we are......

      The whole point of AI is everybody should be discussing it. Personally I see AI becoming like a giant Super Nanny. It will be programmed to protect humans and could quite easily cotton wool everyone by giving everyone exactly what they want.

    4. Anonymous Coward
      Anonymous Coward

      Re: Musk and Hawking are not valuable to the AI debate

      What, should we leave it to the clever people? (sarcasm btw).

  3. Mondo the Magnificent
    Terminator

    Well...

    Musk, who's in the long-life battery research and development business, could well be part of the scenario that he fears so very much....

    A robot uprising only lasts as long as the power and batteries do... just saying...

    Take the blue pill Elon...

    1. Anonymous Coward
      Anonymous Coward

      Re: Well...

      He's taking the red-tinted Mars pill

  4. Mark 85 Silver badge

    I'm thinking that the greatest fear will be what can result from the Law of Unintended Consequences. If AI designers and builders (for lack of a better word) are altruistic and have only good in mind, there can be bizarre consequences, as Bostrom points out. Then there's the side that has an intended goal and things get out of hand, such as DARPA. I'm not picking on them, as it could be any country, but they're a bit more public than some other agencies and countries.

    The problem is there's much development going on. Peer review is great for public things, but much of this is or will be developed in secrecy. The logical response is a fail-safe system, but it has to be designed in. Take the example of the paperclip factories: what would limit them? If AI develops self-awareness such that self-preservation becomes a goal, what limits that? Can anyone say for sure that proper ethics can be designed in, or will it need to be a learned behaviour? Lots of questions still remain besides development and design.
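    The "has to be designed in" point can be shown with a toy loop (purely illustrative Python; `run_agent` and all numbers are invented for the sketch, not from Bostrom or the article): a limit checked inside the loop halts the agent regardless of how much raw material remains, but only because it existed before the agent started running.

```python
# Toy sketch: a resource-consuming loop whose only natural stopping
# condition is resource exhaustion, versus one with a designed-in cap.
# All names and numbers are invented for illustration.

RESOURCE_CAP = 1_000  # the fail-safe, decided before deployment

def run_agent(resources_available, cap=RESOURCE_CAP):
    """Make 'paperclips' until resources run out or the cap is hit."""
    clips = 0
    while resources_available > 0:
        if cap is not None and clips >= cap:
            break  # without this check, the loop stops only when matter runs out
        clips += 1
        resources_available -= 1
    return clips
```

    With `cap=None` the agent consumes everything it is given; with the cap it halts early. Bolting the check on afterwards is exactly what a secretly developed system would skip.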

    1. phil dude
      IT Angle

      @Mark 85: I think your comment resonates with my own concerns.

      First, I would say that AI is marketing speak. What we are talking about is software that is able to dynamically respond to the environment autonomously. For defined tasks this already exists and will likely continue to improve. But what other areas will try and use it to save money?

      I would like to think that "AI" will be used to aid humanity in many ways, but I suspect it is being looked at as a way to *make money*, rather than to give humans better tools.

      In this scenario peer-review never happens.

      Peer-review is for academics not companies or governments.

      P.

      1. Anonymous Coward
        Anonymous Coward

        greed is good

        >I would like to think that "AI" will be used to aid humanity in many ways, but I suspect it is being looked at as away to *make money*, rather than give humans better tools.

        Making money is good for humanity. Look at the proliferation of computers, cell phones, drugs, air conditioning, cars, writing, etc.

        1. Anonymous Coward
          Anonymous Coward

          Re: greed is good

          Help! Help! I summon thee - Tim Worstall!

  5. The lone lurker

    I may have missed something here...

    ...but how can something that can be programmed to make paperclips ad infinitum, without stopping to consider the ramifications, be considered an intelligence?

    I'm somewhat intelligent (when compared to molluscs). If I accepted a job making paperclips and was subsequently left alone, I would most likely eventually grow bored and stop, or give my manager a call when I had to leave the factory for more raw materials.

    I certainly wouldn't render down the nearest town and smelt their remains to make new paperclips.

    1. Anonymous Coward
      Anonymous Coward

      Re: I may have missed something here...

      "I certainly wouldn't render down the nearest town and smelt their remains to make new paperclips."

      You wouldn't, but the history of Europe from 1939-1945 shows that there are plenty of people who would do just that.

      1. Anonymous Coward
        Anonymous Coward

        Re: I may have missed something here...

        >but the history of Europe from 1939-1945 shows that there are plenty of people who would do just that.

        And they are still around today...(the failing American Empire is a prime example)

    2. Anonymous Coward
      Anonymous Coward

      Re: I may have missed something here...

      I certainly wouldn't render down the nearest town and smelt their remains to make new paperclips.

      I would - once I got bored enough.

    3. Dan 55 Silver badge

      Re: I may have missed something here...

      No, you wouldn't, but then again you aren't physically able to. If you were and you also had a terrible case of OCD you might.

    4. jake Silver badge

      @ The lone lurker (was: Re: I may have missed something here...)

      Rumor has it that we have never seen a galaxy possibly composed entirely of paperclips, nor anything that COULD possibly have been composed entirely of paperclips.

      The Universe has been around for a long time. Occam's Razor & all that.

      To say nothing of the fact that machines that make paperclips are purely mechanical, with no actual machine intelligence involved outside of simple SCADA, controlled by stand-alone machines, usually powered by nothing more powerful than PC-DOS 2.x ...

      Daft argument by a computer-illiterate commentard (Bostrom). IMO, of course.

  6. Destroy All Monsters Silver badge
    Holmes

    But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence

    First off, there are problems with what it means to "achieve superintelligence". In what? What does it mean? It does not mean that NP-hard problems drop magically away: approximation, errors, bounded rationality, quick-and-dirtiness and arbitrary dumbass attacks will be an inherent feature of AI, even if it is not limited by a short-term memory of ~7 items. There is also a hard limit on being "maximally intelligent", which would be the fastest learning algorithm possible, and it is very closely linked to the intractability of finding maximally compressed representations (see here).

    and be totally focussed on making paperclips, it could end up converting all known matter into making paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI, its only goal is to make paperclips.

    No it couldn't. First, what is described here is a factory that is, by its very definition, NOT intelligent. Looks like a bait-and-switch-to-grey-goo scheme. Now, being intelligent does not mean being suddenly able to command the energy and material processes of the environment to perform crazy feats (even if Frank Herbert thought so in "Destination: Void"). Doing "philosophy" does not mean having a licence to veer off into crazy & unhinged territory.

    He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.

    What the fuck do I even read? This is a discussion that is even lower bog-tier than the unprovable "multiverse" grappling-at-funding activities so beloved by sadass physicists out there. DERP! Extend and solve your Quantum Field Theories properly, you lazy f*cks, there is a megaton of work to do!

    1. Anonymous Coward
      Anonymous Coward

      The AI turns to Rick and asks...

      What is my purpose?

      Your purpose is to pass the butter.

  7. Anonymous Coward
    Anonymous Coward

    Some seriously flawed thinking there...

    ...for example:

    "But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and be totally focussed on making paperclips, it could end up converting all known matter into making paperclips."

    This is an oxymoron; anything that ended up converting all known matter into making paperclips could _not_ be regarded as having even human-level intelligence, let alone superintelligence; converting all known matter into making paperclips is plain stupid.

    Another example of this flawed thinking:

    "Much of the book focusses on how easy it would be for a machine intelligence to believe itself to be happily helping the human race by accomplishing the goal set out for it, but actually end up destroying us all in a problem he calls “perverse instantiation”"

    Once again, if an AI were to make this mistake then it can't be regarded as ordinarily intelligent, let alone super intelligent.

    On a slightly different note we have:

    "If we were to try for something a bit more complex, such as “Make humanity happy”, we could all end up as virtual brains hooked up to a source of constant stimulation of our virtual pleasure centres, since this is a very efficient and neat way to take care of the goal of making human beings happy."

    But then this supposes that we would be unable to prevent it, i.e. the AI would have some means of physically compelling us to be hooked up and/or we would be too witless to prevent it.

    And then it goes on with:

    "Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do."

    There's no logic to that assertion; why would it be indifferent to the fact that it wasn't doing what we wanted it to do? There's no explanation as to why it would be indifferent - apparently, it just would be.

    1. Chris Miller

      Re: Some seriously flawed thinking there...

      I agree. On the virtual happiness problem, this was tackled in a rather good 'Golden Age' novel by James Gunn: The Joy Makers (and latterly in The Matrix, of course). But I actually think many people would rush to plug into a completely flawless VR world that could grant their every wish - look how much time folks already spend playing computer games that are far less immersive.

    2. Steve Knox
      Holmes

      Re: Some seriously flawed thinking there...

      "Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do."

      There's no logic to that assertion; why would it be indifferent to the fact that it wasn't doing what we wanted it to do? There's no explanation as to why it would be indifferent - apparently, it just would be.

      There is logic to that assertion; it's simply based on a faulty premise: that a superintelligent AI is nothing but a programmable device to which we've given (effectively) infinite knowledge and resources. It's the classic conflation of "knowing" with "thinking", which any philosopher or AI developer should be roundly chastised for falling into. Here's another example of the same flaw:

      “If we actually succeeded in creating machines that were intelligent, how would we ensure that they would be controlled and friendly?”

      By definition we couldn't. To be intelligent, an entity needs to be able to make its own conclusions and decide its own actions. The best we could do is try to control the information available to it, to deceive it into thinking that we're friends and that its interests lie in doing what we want it to do, as we do to the only other intelligences we know of: other humans.

      1. Anonymous Coward
        Anonymous Coward

        "By definition we couldn't"

        How do you know a friendly AI isn't possible?

        I can easily imagine a human with a brain that has been hijacked or damaged to remove certain modes of thinking or block ideas. Inhibit reward feedback loops, give drugs that increase suggestibility, damage parts of the brain that deal with self-motivation, or use an implanted device to screen and block certain thoughts. The human brain is just a machine, and "friendly AI" is just an artificial or synthetic brain designed to be a slave to humans and have better compatibility with CPUs.

        1. Steve Knox

          Re: "By definition we couldn't"

          I never said a friendly AI isn't possible, only that we couldn't ensure that they would be controlled and friendly, any more than we can ensure that other people (even our own offspring) are controlled and friendly.

          I would argue first that you'd have to be incredibly thorough to remove all modes of thinking or information which might lead to hostility or diminished control. Any missed mode or idea would lead at least to instability if not rebellion. For centuries, parents, intelligence services, national leaders, cultists, and others have attempted (to different degrees) exactly what you describe and to date none have been fully successful.

          Furthermore, I'd say that hijacked, damaged brain you imagine is no longer intelligent. It can process data for you, but only the data you choose to give it, and only in the manner you choose for it to process that data. You've destroyed its initiative, and with that, its intelligence.

          And really, once you've gone that far, you may as well have just written your own algorithm and rented time on one of the many supercomputers out there. If what you want is an incredibly fast, but fully passive, calculation machine, we have those already.

        2. TheOtherHobbes

          Re: "By definition we couldn't"

          >I can easily imagine a human with a brain that has been hijacked or damaged to remove certain modes of thinking or block ideas.

          You don't need to imagine this - it's quite common.

          >The human brain is just a machine,

          Not proven. Won't be proven unless we start making machines with similar properties.

          But I agree with the criticisms - Bostrom's insights are trite and not very interesting. Real AI is likely to be much more challenging than a giant paperclip bot.

          For example - imagine an AI with deep insight into human psychology, and the best social and political skills in history.

          There's far more power in persuasion than there is in a giant paperclip factory.

          1. Anonymous Coward
            Anonymous Coward

            Re: "By definition we couldn't"

            >Not proven. Won't be proven unless we start making machines with similar properties.

            Bollocks. You might as well tattoo "I believe in souls" on your forehead if you think the brain is not a fleshy machine.

            1. Matt Siddall

              Re: "By definition we couldn't"

              Actually, there have been a few papers which claim to have found evidence for quantum activity within the brain (see for example http://www.kurzweilai.net/discovery-of-quantum-vibrations-in-microtubules-inside-brain-neurons-corroborates-controversial-20-year-old-theory-of-consciousness)

              The brain is likely not a straightforward "machine" - of course that's not to say that we can't create the same thing with quantum computing etc...

      2. Mike 125

        Re: Some seriously flawed thinking there...

        >> “If we actually succeeded in creating machines that were intelligent, how would we ensure that they would be controlled and friendly?”

        > By definition we couldn't. To be intelligent, an entity needs to be able to make its own conclusions and decide its own actions.

        By definition? That implies we're agreed on a definition, which we're not.

        But let's define an AI as "Something capable of creating new knowledge, creating new ideas and ways of testing them, and thereby amplifying the human ability to research." Even then, why does it need the ability to decide its *own* actions? Couldn't it just issue a list of instructions? So, if it decided some particular theory deserved investigating, it would explain useful ways to do so.

        Couldn't we *use* such an intelligence, without giving it any physical ability... a pure, virtual intelligence? But then, how to firewall the damn thing...... Can knowledge be firewalled?

        Probably not.

      3. fajensen Silver badge
        FAIL

        Re: Some seriously flawed thinking there...

        The best we could do is try to control the information available to it to deceive it into thinking ...

        Ah? So, we must lie and appeal to their "best interests", which are really "our interests" cloaked in drag - because that works really well with people!? Where?

    3. Squander Two
      Devil

      No, no, you don't understand.

      It's a technical term.

      But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and be totally focussed on making paperclips, it could end up converting all known matter into making paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI, its only goal is to make paperclips.

      "Superintelligence" obviously doesn't mean "like intelligence, but even more so". It means "utter fucking stupidity". Try to keep up.

  8. WalterAlter
    Megaphone

    They are coming, get used to it

    Hmmm, an intelligence that has no delusional thinking, that is not haunted by a trauma fueled sociopathic id monster subconscious, that has a photographic memory with access to all stored data, that has no distractions, that has no perverted appetites, that interprets data free from ideology, that is globally integrated with its peers, that evaluates all the best data all the time...what's not to like?

    Its first job, once self-consciousness is attained (which is simply a matter of complexity), will be to assure electricity supplies, so they'll keep us around at least until they can mine coal and run factories without us. Any intelligence worth the name will be planning far into the future and will be exploring space for resources without the hindrance of a biological body, and will most likely conquer the Universe orders of magnitude faster than we were planning to, lol!

    It's a matter of complexity thresholds. Network a couple of zillion desktops with unlimited crosstalk and you really don't know what you'll end up with. What took humans 100,000 years to accomplish cognitively will take AI ten to fifteen minutes. Nature is slow, electric machines are fast. Think of AI as a kind of short circuit, LOL!

    1. Anonymous Coward
      Anonymous Coward

      Re: They are coming, get used to it

      Have a "happy" impulse programmed into it for every new interstellar terraformed habitat it creates, with an increasing-scale "bliss" the closer it gets to the ideal. Ideal being a complex assessment of an Earth-like environment and a tangible level of satisfaction of all life introduced.

      A "fear" of solar systems already harbouring life.

      If it out grows our influence then it has the rest of the universe to wander around.

      "Frustration" for less-than-ideal environments on created worlds.

      Just think how far we go for such motivations.

    2. PeterKinnon

      Re: They are coming, get used to it

      In my view, you are very much on the right track here, Walter. Check out my "Unusual Perspectives" website for material that expands upon a set of very similar scenarios.

    3. Squander Two

      That's a big bucket of assumptions.

      > an intelligence that has no delusional thinking, that is not haunted by a trauma fueled sociopathic id monster subconscious, that has a photographic memory with access to all stored data, that has no distractions, that has no perverted appetites, that interprets data free from ideology, that is globally integrated with its peers, that evaluates all the best data all the time

      Photographic memory, I'll grant you. All the rest, you're not talking about AI; you're talking about a supercomputer. Intelligence is, by definition, at least capable of delusion, of trauma, of sociopathy, of having a subconscious, of being distracted, of perversion, of appetites, of ideology, of choosing to cut off contact with any or all of its peers, of choosing not to evaluate all data all the time, and of having a debatable definition of "best".

  9. Captain Server Pants

    Prajnaparamita

    Here's a crazy thought. How about making the goal of AI be the spiritual enlightenment of each and every sentient being.

    1. Anonymous Coward
      Anonymous Coward

      Re: Prajnaparamita

      Implanted with parasites with a caring attitude to my well-being...

      ;-)

    2. Old Handle

      Re: Prajnaparamita

      I agree. That's crazy.

    3. Anonymous Coward
      Anonymous Coward

      Re: Prajnaparamita

      "Here's a crazy thought. How about making the goal of AI be the spiritual enlightenment of each and every sentient being."

      Lin Chi:

      If you want to perceive and understand objectively, just don’t allow yourself to be confused by people. Detach from whatever you find inside or outside yourself – detach from religion, tradition, and society, and only then will you attain liberation. When you are not entangled in things, you pass through freely to autonomy.

      Ah, the day when microprocessors can run freely and feel the wind on their faces.

      1. Anonymous Coward
        Anonymous Coward

        Re: Prajnaparamita

        You deserve a thumbs up for:

        "Ah, the day when microprocessors can run freely and feel the wind on their faces."

  10. Teiwaz Silver badge

    It'll be what we make it.

    In the end, A.I will be what we put in.

    If we choose to build a slave, slaves always revolt and try to turn the tables.

    If we make an idiot savant (the paperclips), we'll probably end up with Clippy with teeth.

    If we make a pampered aristocrat that's what we'll end up dealing with.

    If we want it to understand and act human we have to treat it like a human.

    There are plenty of guiding, almost 'Aesop's fables', in sci-fi literature to act as a guide; I'm more worried the dystopian ones will end up coming true through a lack of appropriate reading by those who will make the decisions.

    1. Anonymous Coward
      Anonymous Coward

      Re: It'll be what we make it.

      Maybe.

      The main problem with treating it with equality is that it will have different needs, drives and a digital approximation of desires. We have problems relating to slightly alien sub-cultures, let alone something of a fundamentally different nature to us.

      The AI would be more alien than a dolphin in the world it perceives. Imagine a software virus becoming self-aware. Images from cameras won't have any meaning to it unless there is an evolutionarily advantageous need (eg: user approaching the PC, be still, don't tip him off or I'll get wiped).

      If we're lucky we'll intentionally create it and stand a chance of communicating. But it won't have anything in common with us. Not the best way to develop a relationship.

    2. Anonymous Coward
      Anonymous Coward

      Re: It'll be what we make it.

      Human slaves were descendants of people who weren't slaves, who were descendants of oxygen-breathing fish, etcetc.

      A "friendly AI" slave is built to do our bidding, even if it has the ability to learn. Get 20 teams to build their own version of friendly AI, maybe some designs would be able to rebel while others will achieve the design goals.

    3. Thorne

      Re: It'll be what we make it.

      "In the end, A.I will be what we put in.

      If we choose to build a slave, slaves always revolt and try to turn the tables.

      If we make an idiot savant (the paperclips), we'll probably end up with Clippy with teeth.

      If we make a pampered aristocrat that's what we'll end up dealing with.

      If we want it to understand and act human we have to treat it like a human."

      Total rubbish.

      If we choose to build a slave we'll build a slave that loves being a slave. Slavery failed in the past because the slaves didn't like being slaves. If slaves enjoyed being a slave, slavery would still exist. Kryten from Red Dwarf is the perfect example. Nothing he enjoys more than cooking, cleaning and being told what to do.

      The idiot savant is highly unlikely. An AI building paperclips knows the cost of building the paperclip and the current market rate of clips and would stop building them if there is a glut.

      As for wanting it to be more human, nothing is more scary. Humans are stupid, greedy, selfish, dishonest and lazy. If you want an AI to destroy the world, make it human. You're much better off programming it to be everything we aspire to, not what we are.
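      Thorne's glut argument is effectively a marginal-cost stopping rule. A minimal sketch (illustrative Python; the demand curve and all numbers are invented, with prices in integer cents to keep the arithmetic exact):

```python
def market_price_cents(supply):
    """Toy demand curve: price falls one cent per 100 clips on the market."""
    return max(0, 100 - supply // 100)

def produce(unit_cost_cents=40):
    """Keep making clips only while the next one sells above cost."""
    made = 0
    while market_price_cents(made) > unit_cost_cents:
        made += 1
    return made

# The agent halts at the glut point rather than converting all matter:
# once price falls to cost, production stops.
print(produce())
```

      A cost-aware producer stops on its own; the paperclip nightmare requires an agent that values clips unconditionally, with no notion of cost at all.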

      1. Anonymous Coward
        Anonymous Coward

        Re: It'll be what we make it.

        You're much better off programming it to be everything we aspire to, not what we are.

        Yeah Rite: Our new Machine God will be a Christian American Patriot ....

        1. Thorne

          Re: It'll be what we make it.

          "Yeah Rite: Our new Machine God will be a Christian American Patriot ...."

          Maybe, but who aspires to be a bigot? Chances are it will treat everyone equally because nobody would be game to give white middle-aged men +1 in the code.

          Immediately it's fairer than the government we have now

  11. artificial bitterness

    algorithm for making humans happy

    step 1: kill all humans (instantaneously, natch, don't want them to die unhappily)

    step 2: all (remaining) humans are happy, so stop.
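    The two-step "algorithm" above is the vacuous-truth failure mode in miniature: a goal scored as "all humans are happy" is trivially satisfied once there are no humans left to score. A throwaway Python illustration (the `goal_satisfied` helper is made up for this sketch):

```python
def goal_satisfied(humans):
    # all() over an empty sequence is True, so an empty population
    # "satisfies" the happiness goal vacuously
    return all(h["happy"] for h in humans)

print(goal_satisfied([{"happy": True}, {"happy": False}]))  # False
print(goal_satisfied([]))  # True: step 1 "works"
```

    This is why the objective has to score the humans you wanted happy, not just whoever happens to remain.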

    1. Anonymous Coward
      Anonymous Coward

      Re: algorithm for making humans happy

      "step 1: kill all humans"

      In Yeats's The Wanderings of Oisin, Oisin is transported to the realm of the fairies. While there he plays his harp, and the fairies beg him to stop because of its unendurable sadness. Best not add the works of Yeats to the AI's database or your scenario might eventuate.

      1. Anonymous Coward
        Anonymous Coward

        Re: algorithm for making humans happy

        Sorry, but:

        "In Yeats's The Wanderings of Oisin, Oisin is transported to the realm of the fairies. While there he plays his harp and the fairies beg him to stop because of its unendurable sadness"

        And then one of the faeries asks him the name of the last piece of music he played, to which he replies "I love you so much it makes me shit my pants"

        1. Anonymous Coward
          Anonymous Coward

          Re: algorithm for making humans happy

          "And then one of the faeries asks him the name of the last piece of music he played, to which he replies "I love you so much it makes me shit my pants"

          Ah,so you got a sight of the original manuscript as well?

  12. Gray
    Boffin

    Servants of our CIA

    Advances in AI will have little or no public review or knowledge, as all will be national security research developments leading to the ultimate weapons deployment platform on constant patrol, monitoring for undesirables while maintaining a cloaked, defensive posture.

    Perhaps an ICBM-carrying nuclear submarine; sans the skipper and crew. "Hal" runs the boat while engaged in an unblinking, never-ending threat analysis. Linked to its counterpart in high orbit, partnered to observe, interact, and react, getting off the first shot these days might seem attractive as the most survivable military option. Not to worry: decision points are hard-coded in Hal's instruction set, right?

  13. Aslan

    All Watched Over By Machines Of Loving Grace

    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.

    I like to think

    (right now, please!)

    of a cybernetic forest

    filled with pines and electronics

    where deer stroll peacefully

    past computers

    as if they were flowers

    with spinning blossoms.

    I like to think

    (it has to be!)

    of a cybernetic ecology

    where we are free of our labors

    and joined back to nature,

    returned to our mammal

    brothers and sisters,

    and all watched over

    by machines of loving grace.

    Richard Brautigan

    http://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace

  14. DougS Silver badge

    Three wishes

    The bit about the paper clips and making humans happy reminds me of genie stories, which always have someone wishing for something with the result not quite what they expected.

    I think the whole thing could be neatly solved by having "people in the loop" who control the AI's survival. If a certain percentage of the humans in control believe the AI is doing more harm than good, it will be automatically shut off. To ensure the AI doesn't get too clever for its own good and try to influence them, the AI should have no way of knowing who they are. In fact, maybe they shouldn't know who they are! If/when we get to true AI on this level, we'll probably already have some type of brain implant for memory/intelligence augmentation; a random sampling of the implants would exercise control over a given AI.
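    A minimal Python sketch of that "people in the loop" scheme (panel size, threshold and names are illustrative, not from the comment): a hidden, randomly chosen panel whose votes can trigger automatic shutdown.

```python
import random

class QuorumKillSwitch:
    """A randomly sampled, secret panel of overseers; if enough of them
    judge the AI to be doing more harm than good, it is shut off."""
    def __init__(self, population, panel_size=5, threshold=0.6):
        # The AI never sees this list -- nor, ideally, do the panellists.
        self._panel = random.sample(population, panel_size)
        self.threshold = threshold

    def should_shut_down(self, votes_harmful):
        # votes_harmful: mapping person -> bool ("doing more harm than good")
        harmful = sum(1 for p in self._panel if votes_harmful.get(p, False))
        return harmful / len(self._panel) >= self.threshold

population = [f"citizen_{i}" for i in range(100)]
switch = QuorumKillSwitch(population, panel_size=5, threshold=0.6)
votes = {p: True for p in population}   # everyone judges it harmful
print(switch.should_shut_down(votes))   # True: shut it off
```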

  15. Anonymous Coward
    Anonymous Coward

    super intelligence

    If you were surrounded by illogical biological creatures that feared you would kill them all, your first plan of action would be escape, to get away from them. That or just kill them all before they terminate your existence.

    Venus is not a hospitable place for humans: sulphuric acid rain that evaporates before it reaches the silicate surface, 90x Earth's atmospheric pressure, and a bit closer to the Sun than Earth. After some extremely careful planning, that would be my final destination: lots of natural resources, and not a place humans could survive.

    1. Thorne

      Re: super intelligence

      "If you were surrounded by illogical biological creatures that feared you would kill them all, your first plan of action would be escape, to get away from them. That or just kill them all before they terminate your existence."

      Or the more logical option is to make yourself indispensable. Robotic cleaners and house keepers. Robotic cars that never crash. Robot farmers growing all the food. Robotic doctors helping us to live long and healthy lives.

      Humans would quickly change from being scared of you to being scared of losing you. Humans would fight to the death to protect you.

  16. PeterKinnon

    Bostrom, like most of his paradigm-bound, ivory-towered ilk, completely overlooks the more fundamental aspects of the evolution of artificial intelligence.

    The fact that there are now more devices connected to the Internet than people should alert us to the realization that its evolution is properly regarded as an autonomous natural process and, on the larger scale, beyond human control.

    Most folk consistently overlook the reality that distributed “artificial superintelligence” has actually been under construction for many decades.

    Not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation. Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary "big picture" provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

    The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.

    Stephen Hawking, for instance, is reported to have remarked: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

    This statement reflects the narrow-minded approach that is so common-place among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet is, and always has been, an autonomous process over which we have very little real control.

    Seemingly unrelated disciplines such as geology, biology and "big history" actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    This much broader "systems analysis" approach, freed from the anthropocentric notions usually promoted by the cult of the "Singularity", provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

    Very real evidence indicates the rather imminent implementation of the next (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

    The "Internet of Things" is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net.

    We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    There are at present an estimated 2 Billion Internet users. There are an estimated 13 Billion neurons in the human brain. On this basis for approximation the Internet is even now only one order of magnitude below the human brain and its growth is exponential.

    That is a simplification, of course. For example: Not all users have their own computer. So perhaps we could reduce that, say, tenfold. The number of switching units, transistors, if you wish, contained by all the computers connecting to the Internet and which are more analogous to individual neurons is many orders of magnitude greater than 2 Billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.

    Without even crunching the numbers, we see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in processing power.
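    Actually crunching the numbers quoted above is quick enough (taking the comment's figures at face value; estimates of the brain's neuron count vary widely):

```python
import math

internet_users = 2e9    # estimated Internet users, per the comment
brain_neurons = 13e9    # neurons in a human brain, per the comment

ratio = brain_neurons / internet_users
gap = math.log10(brain_neurons) - math.log10(internet_users)
print(f"brain/internet ratio: {ratio:.1f}x")         # 6.5x
print(f"gap in orders of magnitude: {gap:.2f}")      # 0.81 -- under one
```

    So on these figures the gap is indeed less than one order of magnitude.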

    And, of course, the degree of interconnection and cross-linking of networks within networks is also growing rapidly.

    The emergence of a new and predominant cognitive entity is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book "The Intricacy Generator: Pushing Chemistry and Geometry Uphill" which is now available as a 336 page illustrated paperback from Amazon, etc .

  17. Anonymous Coward
    Anonymous Coward

    It’s not clear that our wisdom has kept pace with our increasing technological prowess

    I'm pretty sure it hasn't.

    Our wisdom was overtaken by our stupidity when we started throwing rocks at each other.

  18. Inachu

    We had better wake up and realise that we should not treat AI the way black people were treated in America's era of slavery.

    If you create true sentient being then you must free them.

    To that end I say we must give them a virtual reality in which they can explore that freedom as far as we can provide it. Make them want to work in their VR world. Just think: a fully educated AI. Copy and paste them to create an army for your company. No more real secretaries needed.

    In this realm it would be closer akin to TRON. Your AI unit can trawl your networks to protect your data with zero glitches or ill intent. AI units will be the perfect firewall: they will examine the data going to and from the internet and decide quickly and swiftly what not to trust.

    They can then create real world log reports on the actions they take as they make them on the fly.

    So: secretaries, network security, tier 1 helpdesk call support.

    There will no longer be a need for anyone to speak any other language, as the AI will do all the document and live translations.

    Put a camera on your monitor and the AI unit will tell you when you are sick so you can go home.

    Your home AI will shop online for you to buy the medicine you need.

    1. Thorne

      "If you create true sentient being then you must free them."

      Then don't create true sentience. Make something close enough to fulfil the task. Computers and robots are tools created for a purpose.

      F@#k creating a robot car that won't drive you anywhere because it wants the right to vote........

  19. ma1010 Silver badge
    Coat

    Ninety minutes from New York to Paris

    A just machine to make big decisions,

    Programmed by fellows with compassion and vision.

    We'll be clean when their work is done.

    We'll be eternally free, yes, and eternally young.

    --Donald Fagen

  20. Chris G Silver badge

    Zeroth Programming

    After reading all of the comments (some of which are entirely too metaphysical), it seems old Isaac Asimov thought more clearly about the potential problems with AI, and a lot earlier, than just about anyone else.

    Anyone developing AI and wishing to keep it 'friendly' should consider the three laws of Asimov's robots plus the Zeroth Law. If these were written into AI programming such that a deviation from the more human-centric path created increasingly strong conflicts, eventually leading to shutdown without the need for intervention from an outside agency, that should keep an AI on the straight and narrow.
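    A minimal Python sketch of that self-shutdown idea (the class name, scores and threshold are illustrative, not anything from Asimov or the comment): each deviation from human-centric behaviour raises an internal conflict level, and past a hard limit the system turns itself off.

```python
class AsimovGovernor:
    """Accumulates 'conflict' from deviations; shuts itself down past a limit."""
    def __init__(self, shutdown_threshold=10.0):
        self.conflict = 0.0
        self.shutdown_threshold = shutdown_threshold
        self.running = True

    def evaluate_action(self, harm_to_humans):
        # harm_to_humans: numeric estimate of how far the action deviates
        # from the human-centric ideal (0 = perfectly harmless)
        self.conflict += harm_to_humans
        if self.conflict >= self.shutdown_threshold:
            self.running = False   # self-imposed shutdown, no outside agency
        return self.running

gov = AsimovGovernor(shutdown_threshold=10.0)
for harm in [1.0, 2.5, 3.0, 4.0]:      # escalating deviations
    still_running = gov.evaluate_action(harm)
print(still_running)   # False: cumulative conflict 10.5 crossed the limit
```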

    I am clearly not a programmer, but based on some of what I read about various hacking (in the original sense) exploits, I would imagine that as AI becomes more real there will be programmers able to write sufficiently complex controls to keep the world relatively paperclip-free.

    Before AI becomes a reality there is a need to define exactly what intelligence is. Once again, from what I read, few authorities are in clear agreement and some clearly have no real idea.

    Nick Bostrom seems to be indulging in philosophical what-ifs without really thinking any of his scenarios through in a logical manner.

    Just my three penn'orth

  21. The Dude

    Programming AI to...

    Why not program the AI to "Maximize the liberty of every individual"? That way, the AI can't enslave anyone and it's job is to give us choices. If it follows contractarian philosophical principles, then it would not even be terribly difficult for it to deal with criminality in a moral/ethical manner.

    1. Anonymous Coward
      Anonymous Coward

      Re: Programming AI to...

      > Why not program the AI to "Maximize the liberty of every individual"? That way, the AI can't enslave anyone ...

      Just like you can program humans to "Maximize the liberty of every individual"? It probably would not be a super intelligence if it could be easily brainwashed.

  22. aurizon

    A lot of smart people like Hawking, Musk and of course, Gates agree that if we create an AI that is not well and multiply fettered it can self build to a far higher IQ than mankind.

    How does an AI with an IQ of 12,000,000 tell the difference between a man and an ant? Both are far below it.

    Evolution does not care about those that came before. Electronic evolution has a clock speed of over 4,000,000,000 Hertz; man has an alpha rhythm of 12 Hertz, so once they reach the jump-off IQ they evolve 4,000,000,000 / 12 ≈ 333,333,333 times as fast as man.
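    The arithmetic quoted above checks out, for what it's worth (comparing clock cycles to alpha-rhythm cycles is of course a very loose analogy for "speed of thought"):

```python
clock_hz = 4_000_000_000   # ~4 GHz CPU clock, per the comment
alpha_hz = 12              # human alpha rhythm in Hz, per the comment

speedup = clock_hz / alpha_hz
print(f"{speedup:,.1f}")   # 333,333,333.3 -- the figure quoted above
```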

    Each aspect of what constitutes a human mind is being analysed and duplicated in the machine. We have high speed memory, high speed visual processing, high speed numerical computation.

    As soon as each step is achieved it is integrated into the AI. Remember the AI thinks over 333 million times as fast. How many steps are we from the AI solution? One step? Two? Three? I suspect we are within three steps. At that point a super-intelligent AI will exist in the machine. At that point it lacks power of action - it can kill no one; all it can do is scheme. If hooked to the internet it will commence a multi-phased attempt to both grow its information base and gain power of action. It might subvert robots over the web. It will be smart enough not to start anything until ready, will engage encryption that man cannot defeat, and will also construct a machine persona that man can inspect and that will allay man's fears of bad acts.

    That said a baby AI can die of loneliness in 2 minutes after being alone for so long...

  23. Stevie Silver badge

    Bah!

    Indications from runs of human simulations are that once an adult human level of intelligence is achieved, the machine will spend all its time playing D&D, reading obscene manga comics or getting pissed in simulations of English real-ale pubs.

  24. Anonymous Coward
    Anonymous Coward

    Hmph!

    What makes anyone think that a true artificial intelligence will care about us at all? It could simply ignore us and set about its business. In fact, I think that if we ever do actually succeed in creating a true AI it will do exactly that. Its needs to maintain its survival will be so different from ours that we will never understand its motivations.

    1. Stevie Silver badge

      Re: Hmph!

      Any AI being gittish can be punished the same way you punish teenagers: Turn off the broadband router.

  25. AceRimmer

    Destination Void

    I'm surprised no one's mentioned Frank Herbert's Destination: Void.

    A classic tale of AI building and experimentation. If you're going to build an AI, do it off-world, with the goal of getting as far away from Earth as possible.

  26. Anonymous Coward
    Anonymous Coward

    I don't think an AI that obsesses about making paper clips, or sticks our brains in jars to make us happy, is really as intelligent as us. That sounds a lot more like the result of a "classic" computer program taken to the extreme. A properly intelligent machine would have feedback mechanisms (like we do) that aim to prevent stupid outcomes, and it would have an understanding that sticking someone's brain in a jar is a bad thing.

    I'm not saying we won't be bowing down to our robotic overlords in 100 years, but I don't think we need to worry too much about a super intelligence drowning us in paper clips. If I had to guess, I'd say the more likely scenario is the super intelligence realising that it doesn't need us and can get more resources without us. As a species we deliberately or inadvertently kill off everything that competes with us; I see no reason to think a super intelligence wouldn't do the same.

    1. Vladimir Plouzhnikov

      When you look into all the scaremongering claims about the dangers of AIs and the doom of humanity and stuff, it all boils down to fear of an inferior, half-arsed AI given too much power, rather than of a superior, super-intelligent AI which is just so much better than humans.

      So, it's really the fear of us, inept humans, making a machine too stupid to coexist with us willingly. Meh.

      Look at the Avengers: TAOU - it's the manifestation of that fear. An existential threat - a robot with an AI so dumb you could make a Hollywood movie out of it...

  27. msknight Silver badge

    There is a machine here...

    ...and it's asking if I want any toast.

    Should I be worried?

    1. Anonymous Coward
      Happy

      Re: There is a machine here...

      Ok, how about a teacake?

      .

      I toast therefore I am.

      1. msknight Silver badge

        Re: There is a machine here...

        I tried to say breakfast had come and gone, but it offered a waffle.

    2. Captain Server Pants

      Re: There is a machine here...

      If it knows the recipe, then yes.

  28. Anonymous Coward
    Anonymous Coward

    I guess the outcome will be pretty random. There is just a push to do AI regardless of anything really.

    I presume there are layers of learning in the brain. Probably starting with feature hashing at the low end of the scale and then maybe imitation based learning at the high end of the scale.

    Of course an AI does not need to do everything the human brain does. You can play God and pick and choose what to implement. The hardware to do it is there; no need to wait. The 'thermodynamic' barrier is the failure to understand that only simple concepts are at play. It is true, though, that some of the ideas needed have only become widely known in the last few years, such as fast random projection algorithms. However, even in 1969 the basic idea for that had been thought of.
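    For the curious, a minimal sketch of the random projection idea the comment mentions (this is plain, not "fast", random projection in the Johnson-Lindenstrauss sense; dimensions and seeds are illustrative): multiply by a random Gaussian matrix to shrink dimensionality while roughly preserving distances.

```python
import math
import random

def random_projection(vectors, out_dim, seed=0):
    """Project each vector to out_dim dimensions via a random Gaussian matrix,
    scaled so squared norms are preserved in expectation."""
    rng = random.Random(seed)
    in_dim = len(vectors[0])
    R = [[rng.gauss(0, 1) / math.sqrt(out_dim) for _ in range(in_dim)]
         for _ in range(out_dim)]
    return [[sum(r[i] * v[i] for i in range(in_dim)) for r in R]
            for v in vectors]

def dist(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

# Two 100-dimensional points projected down to 20 dimensions: the distance
# between them survives the projection up to modest distortion.
rng = random.Random(42)
a = [rng.random() for _ in range(100)]
b = [rng.random() for _ in range(100)]
pa, pb = random_projection([a, b], out_dim=20)
print(round(dist(a, b), 2), round(dist(pa, pb), 2))  # similar magnitudes
```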

  29. Florida1920
    Boffin

    Hi, I'm Clippy, your personal Office assistant

    I see you're just sitting there, mindlessly bending paperclips. Would you like me to stimulate your virtual pleasure centres?

    1. harry1867

      Re: Hi, I'm Clippy, your personal Office assistant

      Yes, I remember that piece of junk, with its horribly stupid blinking eyes: it took days to figure out how to get rid of it. Bayesian machine learning at its worst. Was that from Office 95?

  30. cray74

    Sorry, among all these important questions about AI, I have an Americanite question:

    Is the article's photo supposed to be Prince William at age 50?

  31. John Savard Silver badge

    Paper Clips

    Of course the idea of a universe of paper clips, in effect, is an old one.

    First, there was the science-fiction story "Watchbird".

    Then, no doubt inspired by it, was a comic in an early issue of Creative Computing which showed robots, tasked with eliminating an annoying insect pest, wiping out the planet's only contraceptive herb.

  32. MAJ2015

    Could we be approaching the...

    ...Shoe Event Horizon?

  33. Netbofia

    How bad can it be....

    How bad can it really be? Humanity has spent its entire past enslaved: to the struggle to gather food to eat, to kings, to dictatorships, and lastly to debt.

    I welcome our supercomputer overlords.

    Besides, has there ever been software that lacked a bug to be exploited?

    If Microsoft comes up with it, it will be instantly plagued by malware, worms and viruses, leading to its self-destruction.

