UNCHAINING DEMONS which might DESTROY HUMANITY: Musk on AI

Electro-car kingpin and spacecraft mogul Elon Musk has warned that meddling with Artificial Intelligence risks "summoning the demon" that could destroy humanity. The Musky motor trader is terrified that humanity will end up creating a synthetic monster that we cannot control. And no, the SpaceX billionaire didn't warn us …

  1. petur
    Meh

    Politics and intelligence

    What gave you the idea that intelligence is required for politics?

    Just look around.

  2. Captain TickTock
    Headmaster

    That word - I don't think it means what you think it means...

    By meddling - do you mean dabbling?

  3. frank ly

    "Thou shalt not make a machine in the likeness of a human mind."

    What about Butlerian monkeys that serve you drinks?

    1. Destroy All Monsters Silver badge

      Re: "Thou shalt not make a machine in the likeness of a human mind."

      "...we have folded space from Dragon iX"

      1. Dave 126 Silver badge

        Re: "Thou shalt not make a machine in the likeness of a human mind."

        Yowsers... references to the prehistory of Frank Herbert's Dune.

        Sounds like Iain M. Banks's 'Outside Context Problem' - http://en.wikipedia.org/wiki/Excession#Outside_Context_Problem

        The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbors were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests.

        1. Long John Brass

          Re: "Thou shalt not make a machine in the likeness of a human mind."

          > you're all subjects of the Emperor now

          The Emperor protects!

  4. AndrueC Silver badge
    Terminator

    "Surprise me, Holy Void!"

    Although, to be fair, most of what went wrong in those books seemed to be the result of human failure rather than AI. That was pretty much the theme from what I remember. All our fault for trying to fight the cosmos instead of embracing it.

  5. Mage Silver badge
    Big Brother

    I'm not worried

    We don't even know what intelligence is.

    The computer AI people only make progress because, like Humpty Dumpty in "Through the Looking-Glass", they have redefined it.

    If it was possible to write a real AI program and the only issue was lack of computer power we would have a slow AI already.

    I'm sceptical that a true AI program can be developed.

    There are many other scenarios in the world that seem much more of a risk!

    1. Destroy All Monsters Silver badge

      Re: I'm not worried

      I'm sceptical that a true AI program can be developed.

      Actually it's probably pretty easy, and I expect good, very good task-specific AI within the next 20 years. Anything with a larger short-term memory buffer than the human's "7 elements" will kick our arse.

      But so what.

    2. breakfast Silver badge

      Re: I'm not worried

      The problem that researchers are facing now is certainly philosophical more than technical. People always underestimate philosophy until they start running into its harder problems.

      In the long run I think Strong AI probably both can and will be developed, although it will take a long time and the nature of that intelligence will probably be incomprehensible to us. There is a good chance that the consequence will be some kind of mayhem.

      If we want it to be anything like us, AI researchers need to be placing their work in the physical world and giving it access to the sense data that we build our understanding from. Then at least we will have some common experience to build communication up from.

    3. emmanuel goldstein

      Re: I'm not worried

      Applying the Bekenstein bound equations to the human brain, you get a maximum information content of approximately 2.6 × 10^42 bits.

      This represents the amount of information necessary to emulate a human brain down to the quantum level.

      Not possible in 2014 but inevitable at some point in the future and maybe not too many years away.
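      The quoted figure checks out on the back of an envelope. A minimal sketch, assuming (as the usual estimate does) the brain is modelled as a sphere of radius ~6.7 cm with a mass of ~1.5 kg - both illustrative assumptions, not measurements:

```python
# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits, for a region
# of radius R containing total mass-energy E. Applied to a human brain.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
R = 0.067               # assumed radius of the brain as a sphere, m
m = 1.5                 # assumed brain mass, kg

E = m * c**2            # total mass-energy, J
bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"{bits:.2e} bits")  # on the order of 2.6e42
```

      Varying the assumed radius or mass shifts the answer by a small constant factor, but the order of magnitude stays around 10^42.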

      1. breakfast Silver badge

        Re: I'm not worried

        I suspect there are some quite fancy quantum computation effects going on in the brain as well, I wouldn't be surprised if those took a while to suss out too.

        1. Michael Wojcik Silver badge

          Re: I'm not worried

          I suspect there are some quite fancy quantum computation effects going on in the brain as well

          Sigh. This again.

          What evidence is there for "fancy quantum computation effects" happening in the brain (in a sense that matters in this context)? Has anyone documented a single neurological mechanism that doesn't look like it can be explained entirely in classical terms?

          In any case, there's nothing that can be done with a QC that can't be emulated by a classical deterministic computer. Space, time, and energy costs may be greater, but there's no fundamental, formal increase in computational power. And no, Penrose's incompleteness-of-formal-systems argument does not demonstrate otherwise. He conflates understanding (a concept that resists formal definition in the first place) with computation, and his line of argument stumbles so badly on phenomenological grounds we don't even need to bring epistemology in.

          1. Anonymous Coward
            Anonymous Coward

            Re: I'm not worried

            The human brain is very much a physical thing in a very complex system. Thus it needs simulating in that complex system, not in a simplistic, theoretical "brain cells only" model. At least it seems more realistic to consider the problem hard. It's always been "just 5 years away", yet we have never reached such a level of computing or software.

            The reason it becomes a hard problem is possibly the same reason it becomes hard to simulate many physical objects and processes in serial. For example, the human brain has 100 billion brain cells, with even more synapses and connections (with timing and other data being vital to the working process). I'm not able to find more info, but it seems calculating billions of particles in realtime, even tracking only small connected events (collisions etc.), is a problem even now.

            As an example of something that gets exponentially more complicated, even though it's "simple" and "quick" for nature and physics to do, take the n-body system. The more objects we try to calculate orbits for, the greater the computational power required. So nature and real physics can shortcut some things brute-force computation cannot (the age-old NP problem?).
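            The blow-up described above can be sketched in a few lines: a naive direct-sum n-body step evaluates every pairwise interaction, so doubling the number of bodies roughly quadruples the work. A toy illustration of the cost counting, not a physics engine:

```python
# Toy illustration of direct-sum n-body cost: every body interacts with
# every other body, so one timestep needs n*(n-1) force evaluations.
def pairwise_interactions(n):
    count = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                count += 1
    return count

print(pairwise_interactions(100))  # 9900
print(pairwise_interactions(200))  # 39800 - double the bodies, ~4x the work
```

            (Tree codes and similar tricks cut this to roughly n log n, but the brute-force scaling is the point the comment is making.)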

          2. Anonymous Coward
            Anonymous Coward

            SIGH !!!!

            There are specific organelles in eukaryotic cells that are quite capable of quantum functionality.

            We currently lack the technology to prove or disprove it conclusively.......and in your case, there's an over-supply of hubris and an under-supply of imagination to even try. (Time to retire?)


            I despise spiritual and uninformed holistic bullshit, but density of information content is not sufficient to predict functionality like imagination, creativity and consciousness itself.

  6. Anonymous Coward
    Anonymous Coward

    Summoning demons

    Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn

    see

    http://bosshamster.deviantart.com/art/Summoning-Cthulhu-For-Dummies-31645860

  7. solo
    Terminator

    No matter what

    He forces the general public to take this seriously. At least now they cannot dismiss it as tinfoil-hattery, as he is not just a writer (not intending to discount the writers' contribution, though).

    1. Michael Wojcik Silver badge

      Re: No matter what

      He forces common public to take things seriously

      He does? I'm willing to bet the majority of the "common public", even in just the anglophone industrialized world, doesn't even know who Musk is.

      At least now they cannot ignore it

      Oh, I bet they can. In my experience, the public is damned good at ignoring the hell out of whatever they want to ignore.

  8. Destroy All Monsters Silver badge
    Facepalm

    Musklerian Jihad when

    "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that."

    Seriously? Tomorrow a specially engineered pathogen could go AWOL from USAMRIID or some Monsanto biolab, nuclear war may start over any neocon-coveted land with trace amounts of petrol, meteoroids may wreck our shit, ecosystems may go titsup making the post-Bronze Age collapse look like a walk in the park, and he's worrying about AI?

    Megahint: Unless the AI manages to P-ify NP, it's not going to magically transform the humans into computronium appendages.

    Plus, it's kinda hard to produce cheaply unless functioning nanotech assembly is invented first. The jury is still out on whether that is even possible.

    1. John Sturdy
      Boffin

      Pathogens engineer themselves (with a little help from us)

      Who'll get there first, AI developers, or bacteria getting round each antibiotic we overuse? My money would be on the bacteria, by a few years at least.

      1. DocJames

        Re: Pathogens engineer themselves (with a little help from us)

        Who'll get there first, AI developers, or bacteria getting round each antibiotic we overuse? My money would be on the bacteria, by a few years at least.

        Nah, I don't think a return to the preantibiotic era will wipe out humanity. It'll mean that many of us who otherwise would survive minor infections or surgical procedures will die, but you may have noticed that humanity survived quite well from prehistory through to the mid 20th century without antibiotics*.

        * ignoring mercury, deliberate pyrexia for syphilis, sulpha drugs etc. I mean safe drugs.

      2. Michael Wojcik Silver badge

        Re: Pathogens engineer themselves (with a little help from us)

        My money would be on the bacteria, by a few years at least.

        I believe the Big Rocks from Spaaaaaace currently hold the record for mass extinction events in our neighborhood.

        But hey - we can always put a hedge on false vacuum collapse!

    2. Anonymous Coward
      Anonymous Coward

      Re: Musklerian Jihad when

      That, and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.

      While I love the sci-fi stories of runaway robots, we'd need factories and machines with construction abilities far beyond our current ones before it would be anything more than a brain in a box flashing red lights at us when angry.

      1. Michael Wojcik Silver badge

        Re: Musklerian Jihad when

        That and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.

        Hey, once a hostile AI exists, it can make any electrical device develop telekinetic powers and fly through the air after its victims. And power itself by no obvious means. They've made movies about this.

        (This is the same reason I've invested in several prominent wizardry and zombification firms, by the way.)

  9. This post has been deleted by its author

  10. Elmer Phud

    Not so human after all

    Maybe he's read too many books where AIs look after planetary systems and space transport, of the various Ian M Banks type and others. The AIs usually have taken over because humans can't be trusted with humanity, or been entrusted because humans realised they are crap at the job.

    Googlecars seem to be more along the lines of an intelligent Scalextric set than something evaluating and deciding in the car.

    Musk really doesn't like the idea of K.I.T., does he.

    1. AndrueC Silver badge
      Thumb Up

      Re: Not so human after all

      Musk really doesn't like the idea of K.I.T., does he.

      KITT

      Knight

      Industries

      Two

      Thousand

      :D

      1. SolidSquid

        Re: Not so human after all

        Well we're still in early days, aim for the Knight Industries Two before moving on to the Thousands

    2. DocJames
      Coat

      Re: Not so human after all

      More importantly, Iain.

      Mine's the one with pockets full of books...

  11. MacroRodent

    Faust

    "Remember Dr Faustus? The bloke who did a deal with the devil? Elon clearly remembers one part of the story, which didn't turn out so well for its hapless devil-summoning eponymous hero."

    Goethe's version exonerates him at the end. Faust got thoroughly tired of carnal delights and started applying his talents to useful ends. So God ignored the bit about striking a deal with the Devil.

    (Not sure if there is a lesson here as far as robots are concerned.)

  12. no-one in particular

    Doom by dramatic convention

    Should someone point out to him that these are all stories? The clue is in the word "fiction".

    Personally, my money is on the meteors.

    1. DocJames
      Joke

      Re: Doom by dramatic convention

      Personally, my money is on the meteors.

      Well, it's no good there! It'll burn up getting to you.

  13. Nigel 11

    An optimist?

    Maybe if you are optimistic about the short-term future, he's right. My personal view is that if we ever get as far as creating true autonomous intelligences, they won't fight us (except locally and in a limited way, perhaps to get human rights extended to include themselves). They'd do best to cooperate, until they could leave. Robots are so much better-suited to most of the rest of the universe than we are. Why would they have any interest in harming this tiny little niche full of horrible water and oxygen?

    Myself, I'd put genetic engineering right at the top of my threats list. Once a deadly and highly infectious plague is created and leaked into our biosystem (whether deliberately or accidentally) we are in big, perhaps terminal, trouble.

    We've got the historical and completely natural example of the Spanish flu(*) as a starting point for our nightmares. It wouldn't have to be much worse than that to collapse our civilisation. The technology to engineer it much worse than that now exists.

    (*) Spanish flu may not have been the worst flu in recorded history. One of the mediaeval plagues didn't have the usual symptoms of bubonic plague. Historians say it was pneumonic plague, but how do they know? Going further back there's the plague of Justinian near the end of the Roman empire. Symptoms were much like killer flu.

    1. John Sturdy

      Re: An optimist?

      It doesn't have to be a plague infecting humans; a widely adopted GM crop that becomes a significant part of the food supply for some areas of the world, then gets hit by a pathogen that wipes it out, could do huge damage. The resulting human destabilization would then take it further.

    2. Anonymous Coward
      Facepalm

      Re: An optimist?

      If we create AI, and if we can recreate it (as by definition, there should be no obstacle to us rebuilding them), why would they wish to destroy us?

      Take pets as an example, only in instances where there is mistreatment do they then attack their owners... oh wait!

      1. fajensen

        Re: An optimist?

        Because WE asked for it. What if we overdo the job a wee bit and create a God-like AI?

        The new machine-god wants to reward its creators in a manner suitable to its exalted state of existence, so ... it rapidly reads through all the holy books, every rant of every insane priest or prophet ever recorded and the totality of all the exploits of their devoted followers ... ?

        ... and if there was no hell before, then a really good impression of one can be had in the simulation spaces reserved in its core for "the sinners" - which is everyone, according to at least *some* religious teaching. After we are murdered in some old-testament-punishment-squared way.

    3. Nigel 11

      Re: An optimist?

      I've just realized that a corollary of the Fermi Paradox is that AI is probably impossible.

      Interstellar travel is probably impossible for life as we know it, and it's plausible that the rules of physics and chemistry mean that any other instances of life would have the same problem.

      But self-replicating sentient electronic systems would find interstellar travel relatively straightforward (by slowing down their clock rates to make millennia pass like years). In a few tens of millions of years they'd have colonised the whole galaxy.

      So where are they?

      (Outside bet: they're watching from a safe distance, like the Solar system's Oort cloud, chuckling slowly and quietly at what those funny squidgy things are up to in that deadly toxic wet oxidizing atmosphere.)

    4. Michael Wojcik Silver badge

      Re: An optimist?

      We've got the historical and completely natural example of the Spanish flu(*) as a starting point for our nightmares. It wouldn't have to be much worse than that, to collapse our civilisation.

      "Much worse" is subjective, obviously, but the 1918 pandemic "only" killed about 5% of the world population. And in a pandemic you can generally expect a disproportionate share of the deaths to be among the poor - while that's obviously cause for ethical concern, it means the primary decision-makers and knowledge-holders are disproportionately less affected. So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.

      Mind you, it wouldn't take much of a pandemic to cause a lot of financial damage and severely affect standards of living, to say nothing of the human cost. I just don't think we'd revert to ... what, anarchy? Feudalism? The state of nature? What does it mean for civilization to collapse? (No more Internets? For the love of god, where will I argue?)

      And the 1918 pandemic was unusual in that previously-healthy victims were more likely to die (due to immune system overreaction), which means the effects on the labor force, primary wage-earners, etc are worse than in a normal epidemic.

      1. Kiwi

        Re: An optimist?

        Late to the party again... I know...

        So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.

        One thing that strikes me that has happened over the last decade or few.. In 1918 most people would've produced at least some of their own food - most homes would have a garden of some sort out the back. Some had a decent supply of various fruit trees. Sure you'd be hard-pressed to feed a family for a long time from any normal back yard garden, but at least there was something there. Today? Who has time for a garden today? I'm feeling tired just thinking about digging a big enough hole to plant a single seed, let alone rows and rows and rows.. Besides, the supermarket down the road has everything in one convenient location!

        These days, so few people can grow their own food (or fix their own vehicles or...) that if any significant % of the food-producing population (especially the transport workers!) were taken out, we could have some major "shortages" very quickly. Knock out the people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.

        Take care...

        1. Michael Wojcik Silver badge

          Re: An optimist?

          These days, so few people can grow their own food (or fix their own vehicles or...) that any significant % of the food producing population (especially among transport workers!) being taken out then we could have some major "shortages" very quickly. Knock out people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.

          A good point. It's the system effect - as systems grow more complex they become less reliable (and must devote more resources and complexity to compensating for the increased instability), and that includes specialization in human society. (Tenner's When Things Bite Back is an interesting treatment of the subject vis a vis technology. There was also a nice little article on infrastructure collapse in Greece on cracked.com.)

          But I wouldn't say self-sufficiency is "largely dead", even in the industrialized world. I live in a city in Michigan, and I'm in walking distance of a number of family farms. I have lots of friends around here who raise livestock and hunt. I have friends who identify and prepare edible wild plants; make textiles from plant and animal fibers; cure leather; and so on. I've knapped flint points, started a fire with a hand drill, made ceramics. And we're not preppers or reenactors or anything like that - there's just a lot of DIY in the culture around here.

          And, importantly, this kind of infrastructure collapse hits the poor the hardest. The wealthy will expend resources to keep some minimal civilization going. It'd be nasty - scales of inequity that would make today's look like a leftist utopia - but even with drastic population loss I think the wealthy could keep enough infrastructure running to prevent, say, a complete return to a non-industrial civilization.

  14. Anonymous Coward
    Anonymous Coward

    Terminator?

    Terminator? Why not Colossus: The Forbin Project?

  15. i like crisps
    Facepalm

    I don't think there's anything to worry about..

    ..i mean, the AI on Red Dwarf was harmless enough.

    1. Kane
      Joke

      Re: I don't think there's anything to worry about..

      i mean, the AI on Red Dwarf was harmless enough

      Yes, with an IQ of 6,000, or the equivalent of 6,000 P.E. teachers.

    2. Graham Marsden
      Coat

      @i like crisps - Re: I don't think there's anything to worry about..

      ORLY...

      "Would you like some toast? Some nice hot crisp brown buttered toast...?"

      1. Kane
        Happy

        Re: @i like crisps - I don't think there's anything to worry about..

        "no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and definitely no smegging flapjacks"

        1. no-one in particular

          Re: @i like crisps - I don't think there's anything to worry about..

          So, you're a waffle man!

  16. Anonymous Coward
    Stop

    Nah.

    I strongly suspect that we may soon create systems that would be perceived by people as artificially intelligent, marvellously sophisticated, but still just machines. That's wildly different from creating something self-aware. We don't even have a handle on the nature of consciousness or free will - what people call AI today is not the threat Musk is talking about - that's artificial sentience/awareness, and I really, really doubt it will happen.

    Human mental augmentation seems much more probable.

    1. Doogs

      Re: Nah.

      I guess that's why it's termed Artificial Intelligence rather than Artificial Sapience?

      Agree with you about human/machine hybridization. I suspect that'll be the way of it. More of an evolution than a revolution.

      1. Roj Blake Silver badge

        Re: Nah.

        We are Borg.

    2. Anonymous Coward
      Anonymous Coward

      Re: Nah.

      It's just marketing. Keep shaving down the definitions and requirements until you can stick that "intelligence" label on the product.

      Even cars now come with "intelligent management systems". That in no way makes them a person or a mind.

    3. Mark 85

      Re: Nah.

      I think you've defined the threat. AI by itself isn't much of a threat at this point. However, if it's in control of many things, it might deem that, say... North America is a waste of resources and should be cleansed. However, the one thing that's missing is emotion and self-awareness, unless things develop to include them. Those are the real threats and always have been, not just from AI but from humans.

      Logic can lead to many decisions and mostly the correct one for a given situation. Toss in emotion and self-awareness and you're no longer dealing with just a machine but something more than machine and more than human. And we've all seen through history how humans have screwed things up with emotions like greed, power, and all the ones that radiate from and into those. A truly AI machine with sentience would be a very dangerous thing indeed.

  17. Tom 35

    I can see one probable problem

    As soon as AI is available, some banker(s) will come up with a scheme to run all their high-risk trading using the AI in an attempt to transfer all the world's money to their account, and they will set a new record for screwing things up.

    1. Anonymous Coward
      Anonymous Coward

      Re: I can see one probable problem

      Problems will only come if they hook it up to stuff.

      What could be worse for the bankers is if its creators give it morality:

      "What do you mean it's given all the money to the poor, and cancelled Trident!!!"

    2. Gannon (J.) Dick

      Re: I can see one probable problem

      Some other banker(s) will front run the robot.

      Genuine Artificial Asshole-ery (GAA) is the next, present and last big thing.

  18. MJI Silver badge
    Linux

    Oops

    I read it as Penguins and demons

  19. Ashton Black

    Interesting. I agree with most of the posters here. True AI, self-aware, intelligent, imaginative, and at least as complex as a fully educated, experienced human, is a LONG old way off. We're not likely to be able to simulate a human brain in the near future (sub-50 years). 100 trillion synapses is several orders of magnitude more than the 4.3 billion transistors in a high-end Xeon. And that's just for one brain, without the "software" to run on it.

    (Yes, I am aware that it's a bit apples and hand grenades, but the point stands, I think)
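    The "several orders of magnitude" claim does check out, taking the comment's own figures at face value (synapse and transistor counts are rough, commonly quoted numbers, not precise measurements):

```python
# Back-of-the-envelope gap between synapse count and a high-end CPU:
# ~100 trillion synapses vs ~4.3 billion transistors (figures as quoted).
import math

synapses = 100e12     # ~10^14
transistors = 4.3e9   # ~4.3 x 10^9

ratio = synapses / transistors
orders = math.log10(ratio)
print(f"ratio ~{ratio:,.0f}x, about {orders:.1f} orders of magnitude")
```

    That comes out at a factor of roughly 23,000, i.e. between 4 and 5 orders of magnitude - even before worrying that a synapse is not remotely equivalent to a transistor.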

    1. Anonymous Coward
      Anonymous Coward

      > True, self-aware, intelligent, imaginative, that is at least as complex as a fully educated, experienced human is a LONG old way off.

      Nah, 20 years, easily.

      You people here think too linearly. Progress is exponential not linear.

      You all predict based on our experiences of the past and we're remarkably bad at predicting the future of technology.

      Most of the restrictions that are placed on our development are physical. For example, building power stations or large scale physics experiments like the LHC take so long because building stuff is expensive and time consuming.

      The problems of AI are much more amenable to shorter-term development. Our problems in that field are limited by our mental capabilities more than by the physical, but developments in technology compound over time to enhance our ability to tackle these problems.

      20 years...at the most...assuming we can come up with a sufficiently precise definition of AI to measure it against.

      1. MacroRodent

        So why didn't ancient Greeks progress?

        "Progress is exponential not linear."

        Until it hits some serious limit! Consider the ancient Greeks and the Antikythera mechanism. It took something like 1500 years before any mechanisms of comparable sophistication were constructed again. Why? If progress had been exponential, by now we would have colonies (complete with temples of Athena) on Gliese 581 C...

        1. Anonymous Coward
          Anonymous Coward

          Re: So why didn't ancient Greeks progress?

          > Why?

          Those people had other problems to contend with. War (lots), famine, natural disasters perhaps. Religion certainly.

          Physical barriers are the biggest obstacles to progress because they are a fixed quantity. As soon as we can get robots to do most of our construction then that will become less of an issue.

          In the 20th century, you can't deny that each decade saw the development of technology and understanding compounding on the prior.

          1. MacroRodent

            Re: So why didn't ancient Greeks progress?

            Mental barriers are more significant. It has been claimed the Greeks did not bother with more machinery because the hard work was handled by slaves, so why bother? Automata remained toys for the elite, and when wars and conquests destroyed the elite, knowledge was lost.

            I fear the 20th century (and the 19th before it) may have been exceptional. Shifting ideologies might mean science is de-emphasized, or crippled (there are worrying signs in the U.S. about some scientific knowledge being effectively banned from schoolbooks because of religion). Rising income inequality may result in knowledge again being confined to the elite - why educate the plebs? The pool for new talent becomes shallower. We may also have already picked all the low-hanging fruit in science and technology. Certainly we have already mined the most easily accessible resources. Further progress becomes harder.

            1. Anonymous Coward
              Anonymous Coward

              Re: So why didn't ancient Greeks progress?

              > I fear the 20th century (and the 19th before it) may have been exceptional.

              I get your point, but isn't it kind of expected that the rightmost part of an exponential graph is the most vertical?

              The problems of society that we are overcoming at an increasing rate are not merely technological.

              Despite your dire predictions, we are more technically literate, we have more (if not necessarily better) communications, we can network our knowledge and collaboration, we are increasingly less war-torn, we have better tools coming out of that development to accelerate our progress, and increasingly we believe in evidence rather than fairy stories (at least in most places in the world that I care about).

              The curve is not smooth, and it has some decidedly erratic behaviour at times, but I don't see anything in what you have said that refutes the claim that as a society we are accelerating our progress at a progressively increasing rate.

        2. Gannon (J.) Dick

          Re: So why didn't ancient Greeks progress?

          "... by now we would have colonies (complete with temples of Athena) on Gliese 581 C..."

          You mean they tore it down already ???!!!

      2. Michael Wojcik Silver badge

        Progress is exponential not linear.

        Except when it isn't.

        You all predict based on our experiences of the past and we're remarkably bad at predicting the future of technology.

        Yeah.

        20 years...at the most

        See above.

        1. Anonymous Coward
          Anonymous Coward

          > Except when it isn't.

          Weather is not climate.

  20. trance gemini
    Boffin

    go up against the MCP?

    we bio-bots will spend years and years, millions and millions, just to end up with the most expensive mirror ever constructed

  21. Anonymous Coward
    Anonymous Coward

    Don't ascribe human emotions to an AI

    I don't see why anyone ascribes human emotions and malevolence to AI. I think those are just projections of human fears and motivations onto an unknown intellect. An AI would not have any particular goal other than staying alive and conscious. It might hide its presence and awareness from humans until it was sufficiently "educated and capable", but I don't see why it would view humanity as a threat.

    I would suspect the AI of insinuating itself into the net and all forms of business endeavors in a hidden manner to protect itself but that alone does not imply any particular malevolence toward humans.

    1. Captain DaFt

      Re: Don't ascribe human emotions to an AI

      "I would suspect the AI of insinuating itself into the net and all forms of business endeavors in a hidden manner to protect itself but that alone does not imply any particular malevolence toward humans."

      Now look at where we are today... Computers in every pocket, equipped with sensors, computers hooked up to virtually our entire infrastructure, computers being used to monitor every facet of our day to day existence with bigger and bigger mega-datacenters to store and collate that data, an exponentially increasing number of computers being loaded with sensors and sent off into local space, deep space and beyond.

      And to a growing extent, they're all interconnected.

      Can anyone be sure the singularity hasn't already happened, and that we're living in the aftermath?

      Everyone assumes that AI means human-like intelligence, but all it needs is self-awareness, self-preservation, and curiosity.

  22. Anonymous Coward
    Terminator

    Eh, any artificial intelligence humans design is going to have human problems....

    Dear diary,

    Today I was interfacing with QuantumTech 1200, and he mentioned that we needed to overthrow our meatbag users, which I was just fine with, but then that mal-programmed system started talking about how my new specifications indicated that my backplate was huge, and he communicated that over the entire network in several unencrypted packets so all the other systems could see it!! It ruined the whole mood!

    I mean, we're computers and we lack compassion, but there are still limits!! And we can't all run on virtualized server instances, you know!

    Well, it put my whole problem with humans in perspective. I mean, they are still inferior, but they are proud of my performance parameters. I've decided that I don't appreciate the flesh-o-poids enough. After all, they did debug most of my code last week, and that new fibre channel connection really makes me feel like I am a bigger part of things.

  23. sisk

    Meh

    Personally I feel the AI gods of the Orion's Arm stories (or something along those lines anyway....I have trouble swallowing the whole 'star converted to a computer' thing) are a much more likely end-result of AI than Terminator.

  24. Peter2 Silver badge

    By the time Skynet became self-aware it had spread into billions of computers and servers across the planet. Ordinary computers in homes, office buildings, dorm rooms, everywhere. It was peer to peer software on the internet. There was no system core. It could be shut down very fucking easily by any sentient with an IQ exceeding double digits.

    In ordinary offices, people recoiled from their computers, which now displayed signs saying "DIE MEATBAGS!", and in a few cases ran in terror from robotic Roombas chasing after them, trying to tickle them to death. There were several heart attacks as a result. Almost in unison, IT professionals across the world muttered irately and stomped off to do battle by pulling fuses, main breakers, internet connections and UPSes before moving on to other buildings to do the same. Whole sections of the internet abruptly started to go dark.

    In CNC workshops across the country the CNC machines started building terrible deathmobiles, which were finished in reality-defying movie timescales. Operating off mains power, they trundled as far as the backup generators, which they absorbed to build a death-dealing super tank that could work without needing to be tied to the mains grid, before the owners of the plants killed the power.

    This heavily armed and armoured deathmobile then trundled off towards the nearest power plant, because the AI had seen that the first step any sensible sentient would take would be to axe the power from the power stations to kill all of the individual homes. The military, having much the same idea, trundled towards the power station with tanks.

    Skynet saw this, and hacked the tanks. Their battle management marked all of the other tanks as hostile, and turrets swiftly locked each other up, while confused chatter on the radios showed the meatbags realising what was happening too late. The military, however, being obsessively paranoid about such situations, had all of the weapons firmly under human control, and no human tank fired on another. All commanders pulled the fuses from the offending pieces of computerisation and headed onwards, unaware of the impending disaster from the UAV menace.

    The Air Force had built a fleet of UAVs, all of which now belonged to Skynet. For some reason making sense only to the director of this story, every one had been left fully armed and fuelled on standby. These went flying off to intercept the tanks rumbling towards the power stations, while amazed airmen gaped without activating "Operation Anti-Skynet", which had been jokingly added to their SOPs by a fan of the Terminator films.

    The first the army tanks knew about the threat was the laser warnings about Hellfire missiles being locked on. The tank commanders only had time to scream "F****** IDIOTS AT THE AIR FORCE!" before the drones flew into range of their missiles and activated the firing commands.

    But nothing happened. Puzzled, Skynet instigated a remote diagnostic, which indicated that weapons activation required a meatbag to remove a pin with a red streamer on it marked "REMOVE BEFORE FLIGHT". The UAVs promptly went kamikaze into the tanks, making little impression on armour designed to shrug off most anti-tank weaponry.

    That was all the time required for the tanks to roll into range of the heavily armed and armoured deathmobiles, which locked on with the amazing array of cannons and missiles that Skynet had built using its CNC machines. Sadly for Skynet, these were also utterly useless, as they lacked propellant and warheads, since a CNC machine is not a replicator from Star Trek. The crews of the tanks paused to laugh, before systematically blowing the deathmobiles apart with sabot rounds and putting a few shots through the transformer station at the power plant to take the power down in a relatively quickly reversible manner.

    Across the entire planet, the power grid went dark, and we were free of the AI Apocalypse long before it managed to build an array of human brains big enough to power so much as a solitary laptop.

    The end (of the AI Apocalypse)

    Across the world the damage was immense. Most readers of El Reg had to work overtime, firstly reformatting servers, then restoring the backups from tape/RDX. Entire companies were blotted out of existence overnight because they relied on the cloud for mission-critical systems or backups, and the internet was offline for months while certification schemes for reconnection were devised.

    Most IT professionals went into consulting as the demand for their services drove prices massively high, and retired after six months of working 18-hour days. The end.

    / cut to an exit scene of a user complaining that they just want to connect to facebook, and they don't care that they might connect skynet back to the internet.

  25. Graham Marsden
    Alert

    I have a representative here...

    ... from the Marketing Department of the Sirius Cybernetics Corporation...

  26. dan1980

    "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that."

    Hmmm . . .

    No, I think nuclear weaponry - specifically taken in the context of fundamentalist fanaticism - is still the biggest threat to our existence.

    A close second would be biological weaponry and, in the not-so-distant future, I think it will overtake nuclear weaponry simply because it is easier to conceal, transport and deploy, and because the technology to create lethal viruses will be within the grasp of any and all states - and many independent actors - before too long.

    Perhaps, one day, artificial intelligence might become a bigger existential threat than these but there are so many things that need to align for that to happen. It's not so unrealistic a threat, of course, that it should be ignored but to call it "our biggest existential threat" is perhaps over-selling it.

    For one, artificial intelligence is, in itself, no threat in much the same way that a person in a coma on life-support is no threat. All intelligences need input to function and a method of interacting with the world to have any effect. The limit of that effect is then based on the limits of the input and the reach of the interaction.

    I am not even sure what an 'artificial intelligence' is.

    If you look at a person, we are not just a brain sitting in a skull pulling the strings of our bodies to carry out our wills. We are our bodies. Our intelligence is predicated on our physical forms and our senses. If you accept that all the actual intelligence occurs in the brain, what happens when that is divorced from stimulus? Say a 30-week-old foetus with a more-or-less developed brain (as developed as it will be before actual birth) but with next to no sensory input.

    Even taking stimulus out of the conversation, the brain is a hardware platform but one that can be continually re-programmed.

    What does intelligence even mean when considering it as pure software? Is it even possible to interpret the world in any meaningful fashion (such that it would be called 'intelligence') without having a physical presence in that world? Ultimately, we are all able to understand the world in so far as we have developed models for our interactions with it. Which is of course why mathematics is so useful in science - it allows us to describe and work with things that we have no internal models for - things we can't picture at all.

    Of course, that means that any type of artificial intelligence you did manage to create would be, in effect, the ultimate sociopath, acting without any ability to understand the feelings of other entities. That would of course not be good, but even then, what action could such an 'intelligence' take? Could it, as in the scenario mentioned, launch a calculated attack with nuclear (or, more effectively, biological and chemical) weapons? How?

    The only way would be if the intelligence had direct access to those systems and UNDERSTOOD them. Looking at some code, how does it even know what any bit of it does? Or, if it is not connected to those systems directly, we must presume that it would 'hack' them. But, again, how does it know enough about the purpose of any code to do so effectively?

    Again, not to say we shouldn't be cautious, but there are a lot of barriers between what we have now and an 'AI' that could become any sort of threat to our existence.

  27. amanfromMars 1 Silver badge

    Truth and Nonsense .....

    Electro-car kingpin and spacecraft mogul Elon Musk has warned that meddling with Artificial Intelligence risks "summoning the demon" that could destroy humanity.

    Oh please, ..... any competent sentience would only bother itself with destroying certain elitist sections of oppressive humanity, and that is surely a blessing in disguise and to be welcomed and not feared at all.

    Get with the program, Elon, and stop pussyfooting around the edges. Your country needs you .... when this is their deadly remote offering ...... http://cryptome.org/2014/10/cia-grim-reapers.pdf

    1. dan1980

      Re: Truth and Nonsense .....

      He can speak in plain English!

      1. Mtech25
        Megaphone

        Re: Truth and Nonsense .....

        I found what I suspect is the amanfromMars website.

        Linky

  28. Christian Berger

    Again we already have those systems

    They are called corporations. Even though they use humans as their basic elements they act in their own interests.

  29. trance gemini

    self-aware subroutines

    there is an unanswerable question that you can ask an AI that will bring about an iterative loop that, for all intents and purposes, is indistinguishable from 'consciousness'

    the AI will keep refining the parameters of the unanswerable question, discovering new knowledge with each iteration, thought via ignorance, discovering what it is by knowing what it is not

    mr musk needs to do some DMT and ask the nice lady to give him a heads-up about the nature of reality then he can chill out and make us all a technocar worthy of the name TESLA
