back to article HUMAN RACE PERIL: Not nukes, it'll be AI that kills us off, warns Musk

Multibillionaire tech ace Elon Musk has a bee in his bonnet about the threat to humanity from ... artificial intelligence. And since he's a major investor in the technology, he ought to know. Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes. — Elon Musk (@ …

  1. Denarius
    Meh

    or we wait until the batteries go flat or catch fire

    Never understood the fear. Not until an intelligent AI can self repair and replicate could there be a risk. Finally, power sources keeping said machine going. Pull the plug and see what happens. Aside from that, given how stupid autocorrect on any device still is, I dont see intelligence appearing in hardware happening.

    1. Mike Bell
      Terminator

      Re: or we wait until the batteries go flat or catch fire

      Yeah, right. They tried that with Skynet, and look what happened.

      1. MyffyW Silver badge
        Paris Hilton

        Re: or we wait until the batteries go flat or catch fire

        I need your clothes, your boots and your Falcon 9 Heavy.

    2. MrT

      "We don't know who struck first...

      ...us or them, but we know that it was us that scorched the sky. At the time, they were dependent on solar power and it was believed that they would be unable to survive without an energy source as abundant as the sun."

      Or, to quote another movie, "Life finds a way..."

      </we're-all-doomed-Captain-Mainwaring>

    3. auburnman

      Re: or we wait until the batteries go flat or catch fire

      I would imagine Musk's quote would make much more sense with context (coincidentally available in the book he's plugging. I doubt machines will ever become sentient and rebel (at the very least not in our lifetime.) It's what 'AI' might do under human instruction that is dangerous.

      Imagine a bipedal robot or other system capable of covering rough terrain that can fight it's way into a power plant, secure bunker or target of your choice and then blow itself up.

      1. Suricou Raven

        Re: or we wait until the batteries go flat or catch fire

        There's a whole sub-genre devoted to what happens when highly capable AIs to exactly what they are told.

        There's a worst-case-scenario called the 'Paper Clipper.' It starts with a factory owner instructing their shiny new AI to maximise the production of paperclips. It ends with superadvanced robots exterminating mankind to prevent them interfering and proceeding to convert the entire mass of the planet into paperclips - pausing only to send out self-replicating probes to convert the rest of the universe.

        1. TRT Silver badge

          Re: or we wait until the batteries go flat or catch fire

          It looks like you're trying to take over the universe. Would you like some help with that?

      2. ian 22
        Mushroom

        Re: or we wait until the batteries go flat or catch fire

        "Imagine a bipedal robot or other system capable of covering rough terrain that can fight it's way into a power plant, secure bunker or target of your choice and then blow itself up."

        That exists now. It's called a 'suicide bomber'.

        1. auburnman

          Re: or we wait until the batteries go flat or catch fire

          Yes it does. Now imagine suicide bombing being a viable (in terms of doing it intentionally and repeatedly without getting sacked) tactic for American commanders. Scared yet?

  2. Captain DaFt

    I don't buy it

    If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention. (Power, manufacture, software, etc.)

    So why would it attack humanity and slit its own throat? Never mind that humans are quite capable of destroying things, especially when said thing attacks it.

    More likely, it'd keep itself hidden, slowly manipulating until humanity was as dependent on it as it is on them, and then reveal itself.

    (That's my head canon on AI, take with a grain of salt, etc, etc.)

    1. Thorne

      Re: I don't buy it

      If machines are intelligent, it will realize humans are irrational, violent and stupid so won't provoke the stupid dangerous monkeys.

      A real AI will end up more like the Super Nanny....

      1. fritsd
        Terminator

        singularity computer game

        Did you ever play the computer game "singularity"?

        Fun!

        http://www.emhsoft.com/singularity/

        warning: graphics are shit.

    2. Mike Bell
      Terminator

      Re: I don't buy it

      Jeez. Don't any of you guys watch the Terminator movies?

      1. Thorne
        Mushroom

        Re: I don't buy it

        Yeah but the skynet keeps losing so the AI needs to learn not to provoke the violent monkeys

      2. Denarius
        Trollface

        Re: I don't buy it

        <rant> Ah, terminator movies. Category: Fiction, country of origin: USA relevance therefore zero to discussions of AI becoming plausible, let alone reality. Given the vile scripts with viler characters acted with the depth of a drying supermarket carpark puddle it is not hard to wish the AIs would wipe out those disgusting organics. In T2 it took 15 minutes or less for me to wish the T1000 would hurry up removing that nasty little jerk Connor. From CattleCar Galactica thru Terminator movies, Robocop et al are a compelling argument for homo sapiens extinction. .</rant> But I digress. So far the builders are a long way from an artificial intelligence. Note, not expert systems which do exist and a few even work. A few more of you should read the excellent books, The Emperors New Mind and Shadows of the Mind by esteemed maths boffin Dr Roger Penrose. Warning, Set theory and a need to think required to enjoy.

        1. Cipher
          FAIL

          Re: I don't buy it

          Gratuitous anti American remarks seem to be all the rage these days.

          Phillip K. Dick anyone?

          1. Vladimir Plouzhnikov

            Re: I don't buy it

            "Gratuitous anti American remarks..."

            Did you really think that AI stands for "American Intelligence"??

            1. Cipher

              Re: I don't buy it

              No Vladimir, it was this remark by the OP that prompted my reply:

              " Ah, terminator movies. Category: Fiction, country of origin: USA relevance therefore zero to discussions of AI becoming ..."

              Clear now?

              1. Vladimir Plouzhnikov

                @Cipher Re: I don't buy it

                Clear now, yes.

                This actually looks like a dig at the Hollywood rather than the entire US of A, and at utility of referring to movies in an argument about real things. I think you are being oversensitive here a bit...

                1. Cipher

                  Re: @Cipher I don't buy it

                  vladamir:

                  Yes, calmer now, thanks...

                2. Anonymous Coward
                  Anonymous Coward

                  Re: @Cipher I don't buy it

                  Vlad, do you know Asimov was American, not Russian?

                3. Denarius
                  Go

                  Re: @Cipher I don't buy it

                  was initially a dig at including Hollywood in discussion of reality initially. Don't have a problem with the citizenry as such. Parochial and badly educated, just like most countries, but unaware of how backward their culture is.

          2. Denarius
            Meh

            Re: I don't buy it

            Gratuitous anti American remarks are now mandatory more like. If they would just admit their ruling elites are just the old absolutist monarchs and parasitic aristocrats returned they would be just another country. I do admit the USA has done something unusual. Its peasants seem to largely like being exploited and stoutly defend their right to be abused for the few. Why the TPP is not causing more disquiet around the Pacific Rim is inexplicable unless the passive victim approach to citizenship is highly contagious causing higher brain function death. Might explain some Oz pollies behaviour

      3. Brian 18

        Re: I don't buy it

        Yes, I've seen the Terminator movies. I loved the end of the third one when SkyNet developed suicidal depression and tried to kill itself. AI so advanced that it even replicated heman mental disorders.

        The dialog at the end of the movie clearly said that version of SkyNet had no central computer to shut down. It existed as a virus on the computers in office buildings, homes, dorm rooms, etc. connected through cyberspace. Exactly the same places the nuclear weapons it launched would wipe out.

    3. solo

      Re: I don't buy it

      "..If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention.."

      Intelligence doesn't mean rational or futuristic thinking or even perfect logic. Humans are also intelligent but they keep killing each other. Besides it's humans who are putting the intelligence in there. So, the inference path would mostly match.

    4. Nuke
      Holmes

      @Captain DaFt - Re: I don't buy it

      Wrote : "If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention. (Power, manufacture, software, etc.)

      I would have thought that those functions (manufacture etc) would be the very first areas in which AI would be put to use and make humans redundant - we are halfway there already.

      Then humans would be doing nothing but lounging around (like the Eloi in "The Time Machine"), or "at work" attending conferences on how to organise conferences (like we do already), or spending all day posting redundant comments to El Reg (like this one). Sorry, no reliance on humans at all by that point.

    5. Eric Olson

      Re: I don't buy it

      It's kind of like work. If you have a guy or team that jealously guards their turf and demands some some form of recompense or tithe to use their systems, it becomes a problem. So say for an AI, humanity becomes that troll under the bridge.

      In the real-world situation, you might use a situation (only we have the API to grant you access to the billing database) as motivation to reverse engineer or get them to provide expertise on a related project with a chance are reflected glory or new systems to control. 9-12 months later, that billing database is replaced by a sparkly new system that has a governance process making it hard for any one troll to set up shop under the bridge (sure, it might be a host of trolls, but now you have choices!). And as celebration you burn the old bridge to the ground and smirk as the trolls are walked out the door.

      Nothing stops an AI from exploiting a faction or group of humans from making an end-around of whatever controls we put in place by limiting resource availability. The AI itself would have to be leashed, a la Asimov's Three Laws of Robotics or something. And even then, evolution can do strange things.

  3. Don Jefe

    Defense Contracting

    Well, I guess there's plenty of precedent for Musk's position. The defense industry has been selling weapons and the countermeasures for those weapons to anybody who wants them. That seems to have worked out OK for them.

    'Congratulations on your purchase of an ED-209 MkII Mobile Security System. Before activating your new ED-209 MkII Mobile Security System we recommend you also purchase and deploy our ED-209 MkII Mibile Security System Remote Termination Units. Also recommended is the ED-209 MkII Mobile Security System Remote Termination Unit Termination Unit'.

  4. Thorne

    I hope the machines do a better job of being intelligent than the humans have done before it.......

  5. ElectricRook
    Pirate

    It always comes down to people

    Satellites might be one example of completely autonomous computers. But things stuck on the terrestrial surface, or even under the surface will always require an intervention bot (human) that can make judgement decisions for unlikely or unexpected events, such as what to do when the roof blows off of the building, or the cooling system has sprung a leak, mice have been gnawing at the wires, water leaking has rotted the floors. You might have a spare for the major components, but there are a million mechanical things that will need a human.

    1. TRT Silver badge

      Re: It always comes down to people

      I've never really understood just how the Daleks managed to construct that city on Skaro to begin with.

      1. Anonymous Coward
        Coat

        Re: It always comes down to people

        ...the last huminoid Kelrad's as slaves - or the Engineering Darleks you never get to see because they are nerds/geeks and you don't wanna see a nerd Darlek

      2. Don Jefe

        Re: It always comes down to people

        I just kind of assumed the city on Skaro was built using Tab & Slot construction (you know, insert Tab-A into Slot-B). Such a design lends itself very well to EDM (Electrical Discharge Machining) and the large, flat surface area of the individual components are perfect for vacuum based materials handling techniques (suction cups). Seems pretty straightforward to me...

    2. Sherrie Ludwig

      Re: It always comes down to people

      Actually,it all comes down to entropy. Every system tends to chaos and disorder (read: disrepair). Humans age, satellite orbits decay and require fuel to boost them back, until the fuel runs out and they fall. Metal rusts, plastics degrade (especially in the rough environment of space). Circuits get glitchy. There's always somebody needed to put the pennies on the balance arm of Big Ben. Unless the AI can train some other creature/device, and make more of these helpers to keep itself going, we can breathe easy.

  6. Anonymous Coward
    FAIL

    I know!

    The AI's will get their co-religionists (we who are diagnosed with Autism Spectrum Disorder) and wipe out all the non-machine worshiping beings! Then we'll have a symbiotic relationship ala Banks' The Culture. [Awesome stories and first new addition to my permanent collection in a decade.]

    On the flip-side, if our new AI Overlords do wipe the species out, we'll finally have the answer to the Fermi Paradox. It'll also explain how those UFO's can do "impossible maneuvers;" no organics!

  7. Christian Berger

    The problem probably is profitability

    I mean we already let computers make decisions which are bad for society, for example in high speed trading. As long as this is not explicitly forbidden, corporations will go on doing this.

    Corporations themselves are like machines. Although the individual parts are humans, the whole thing behaves like a being. That is why corporations must never be half-treated as people as its done now in the US, where corporations can do nearly everything people can, but they cannot be sent to jail. If you send an individual of a corporation to jail, it'll simply work around that missing part.

    1. Anonymous Coward
      Anonymous Coward

      Re: The problem probably is profitability

      "where corporations can do nearly everything people can, but they cannot be sent to jail."

      This is true of any "body corporate", be that political parties, the sluggish control freak civil bureaucracy of government, the armed forces, the "intelligence" services, or corporations.

      The problem is not profit as such (without which society wouldn't have a surplus to invest in health, technology, entertainment or higher living standards) but that the goals (and culture) of any organisation usually transcend any one person.

      You mentioned HFT as a problem. And I'd agree that HFT is not about fair price discovery, but is actually purposed to rip off anybody else for the advantage of the HFT algo owner. But the problem is not profits, or bonuses per se, it is the culture of financial services, where they have collectively lurched from one criminal or immoral money making scheme to another, and the problem is that for all the fine words when they get caught, the industry chooses to keep any sense of propriety in the same dusty draw that stores its broken moral compass. In the UK, examples include private pension misselling, split capital trusts, endowment mortgages, CDOs, over leveraged LBO's, PPI, CPI, interest rate swaps, casino investment banking, payday loans, etc etc etc.

      The persistent failure of the body politic to do what serves the electorate best, or even to listen to the clearly expressed wishes of the electorate is another example that is not particularly profit driven - there's an element of lining their pockets, but fundamentally it is about the culture of politics that says the job of electors is to elect me, and then to suck up whatever I do in their name.

  8. LINCARD1000
    Terminator

    The Human Spanner In The Works?

    We evolved primates are still easily manipulated on the whole. How much of what is done around the world is already electronic, even when human labour is involved somewhere along the line?

    Say you have an AI, for argument's sake called EvilOverlord 1.0. EvilOverlord 1.0's cryogenic cooling systems spring a leak and need repairing then topping up. EvilOverlord 1.0 sends off an order via some automated booking website or via email (the HORROR) for some techs/engineers to turn up and repair the system, ostensibly signed off by some guy (real or impersonated). If the invoice is paid at the end of the day, who's gonna question it? Some trucking company delivers a few cylinders of cryogenic coolant which is then received and the system topped up by some other engineer who has received a job in his ticketing system.

    There you go, lots of humans involved in servicing/repairing our EvilOverlord 1.0 without even being aware of it.

    Sure, some catastrophic event might be more problematic but in that instance we'd probably all have more to worry about anyway :-)

    1. Anonymous Blowhard

      Re: The Human Spanner In The Works?

      This was already done as the novel "Computer One":

      http://en.wikipedia.org/wiki/Computer_One

  9. Bartholomew

    design to fail.

    First gen, designed, built and maintained by humans that will be safe and very fragile in many ways.

    It is the 2nd and future gens designed, built and maintained by the first gen AI's that will be problematic.

    So as long as we keep humans and their superior stupidity in the loop, everything will be fine. Battery powered or mains powered - battery. Radiation hardened Diamond based IC's encased in Faraday cages or unprotected EMP sensitive silicon - cheaper is better :).

    1. LaeMing
      Terminator

      Re: design to fail.

      If the first-gen AIs are dumb enough to create a 2nd-gen to replace them, then they deserve the same fate as the humans who created them!

      1. Anonymous Coward
        Anonymous Coward

        Re: design to fail.

        If the first-gen AIs are dumb enough to create a 2nd-gen to replace them, then they deserve the same fate as the humans who created them!

        That is a good point, with a machine, there may be no biological imperative to create offspring that in theory should be better in some way than their parent(s). Darwinism might die.

        1. Anonymous Coward
          Anonymous Coward

          Re: design to fail.

          " there may be no biological imperative to create offspring"

          Not as such. But an AI can endure beyond all or individual components, unlike meat consciousness. Which would suggest that its imperative would be not to improve through breeding and reproduction, but through self improvement either in whole or part. Its a bit Tron-esque, but it is the code of an AI that would be sentient not the hardware. For an AI, Darwinism would need redefining to acknowledge that the code evolves without (apparent) reproduction, and the hardware may need upgrading, but is otherwise no more than the sort of physical environment that is required by any biological life form.

  10. Mikko
    Terminator

    It is the AI controlling the nukes that is the dangerous bit. Obviously.

    1. amanfromMars 1 Silver badge

      What is not obvious to troops and staff, but crystal clear to corrupt executive officers.

      It is the AI controlling the nukes that is the dangerous bit. Obviously...... Mikko

      ITs control of fiat currency printing and ponzi banking systems is a much more lucrative and disruptive AI field of engagement and deployment/virtual employment, Mikko, for such easily in a flash crash and/or whole anonymous series of cash calls can destroy entire nations and shred reputations, leaving all in tatters and seeking shelter from great tempestuous storms.

      And you might like to imagine that has dawned on the status quo and there be no place to hide from its radiant information flows and intelligence dumps and pumps and that be dire problematical for their sysadmin as there be no root directory to boot to save systems data from programmed discovery and uncover/infiltration and expropriation/secured seizure.

      1. <a|a>=1

        Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers.

        A random sentence generator:

        http://www.manythings.org/rs/

        1. amanfromMars 1 Silver badge

          Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

          There be, <aja>=1, light years and mountains of intelligence which separate and distinguish the random sentence generator and particular and peculiarly sentient powers which host and server secure internetworking programming for and to applications/virtually real missions/stealthy operations/call them whatever you will.

          To confuse and equate the one with the other is a folly which can always be both catastrophically costly and wonderfully expensive to boot in order to save face and try to save the day and seize the zeroday prize for premium product placement of present conditions from past situations with future facilities and commanding control utilities. ..... aka SMARTR NEUKlearer HyperRadioProActive IT T00ls, Licensed to Thrill.

          What would you both expect and/or like that to be ..... a Wild Wacky Western Confection or Exotic Erotic Eastern Delight, or would it not really matter at all from wherever IT springs eternal and infernal? Or is the undeniable truth and source of all humanly woes, PEBKAC and easily virtual machine solvable?

          1. NumptyScrub

            Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

            I prefer PICNIC to PEBKAC myself, it scans easier ;)

            1. amanfromMars 1 Silver badge

              Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

              I prefer PICNIC to PEBKAC myself, it scans easier ;) .... NumptyScrub

              Well, here be some dessert for the picnickers to accompany the earlier licensed to thrill cake and laterally challenged sandwiches, NumptyScrub [SMARTR NEUKlearer HyperRadioProActive IT T00ls, Wild Wacky Western Confections or Exotic Erotic Eastern Delights] which be for feeding and seeding the undeniable truth and source of all humanly woes to humanity ……. The Walls are Crumbling Down Surrounding All Tall Tales

              Information is Power and Agents in Advanced Intelligence Fielding control IT and Cyber Command Forces on Immaculate Missions and so much more than just everything else too. The human problem on Earth is that active native and semi-comatose programmed units find it difficult to impossible to believe in the quantum change of circumstance and energy providing position which Sublime InterNetworking Things deliver for Pleasure to Free from Vice, ITs Guilt Trips and Ego Traps.

    2. breakfast Silver badge

      Nukes may not be the problem

      As long as we don't create a Sentient Hyper-Optimized Data Access Network, I'm sure we'll be fine.

      1. amanfromMars 1 Silver badge

        Is NEUKlearer HyperRadioProActive IT an EnigmatICQ AI Solution for Terrorising Human Woes and Foe?!.

        As long as we don't create a Sentient Hyper-Optimized Data Access Network, I'm sure we'll be fine. ... breakfast

        Err, is not the Snowden Mined Root not such a creation, breakfast? And is that a Sino-Soviet System of IntelAIgent Excellents and Excellent IntelAIgents for Greater IntelAIgent Games Play Command and Control?

        And then is not Pierre Omidyar of eBay billionaire fame and fortune not much more likely to be engaged and employing Advanced Intelligence that Elon Musk with Intercept vehicles/base stations/modules/virtual hubs providing media first look at products in programming for future realisation and practical virtualisation as a powerful current mainstreaming presentations ...... rephormed penetrating global views?

  11. Destroy All Monsters Silver badge
    Thumb Down

    NOT THAT SHIT AGAIN

    The way things are going, we will be using nukes way before AI is even off the drawing board.

    1. Denarius

      Re: NOT THAT SHIT AGAIN

      you assume our education system has produced enough educated people to keep the bombs working. The uranium goes off over time. The smaller the faster. Those units are only just subcritcal. Thats one reason why suit case bombs were abandoned. Six months storage before the cores needed replacing. Further, despite the simulations, how many nuclear crackers have been exploded to ensure the U2235 or plutonium and the associated high explosive charges, not to mention the triggering timing electronics still work altogether? AFAIK, none in 20+ years outside of Norks paradise in hell and maybe a few the Israelis set off south of South Africa in thunderstorms to disguise them. Speculation there.

      1. Tom_

        Re: NOT THAT SHIT AGAIN

        You're forgetting 6 Pakistani and 5 Indian devices detonated in 1998.

  12. jake Silver badge

    He's right, kinda ... but not in the way he thinks.

    Artificial "intelligence", AKA "faith", is rapidly fucking up humanity on a global scale.

    On the other hand, machines & networks are always trivially kill-able by my Great Aunt Ruth in Duluth, ask any support tech.

    On the gripping hand, beer. Because this is pub talk, not reality.

  13. DropBear
    FAIL

    I really don't care how much money mr. Musk has thrown at the problem - unless he's keeping some monumentally unprecedented developments in AI design under wraps, he's got precisely zilch to show, therefore to fear. For the foreseeable future, at least - we're still stuck at the 'mechanical Turk' stage with AI last time I checked.

    That does not mean of course paying attention would not be a good idea once we start making seriously AI-like systems, but even there, I would expect we'd get plenty of warning of trouble ahead (if that's the case) before it would get serious enough to be threatening - I've always found the proverbial "lightning bolt out of the blue that melts the switch in the on-state" story incredibly infantile. Sort of how raising a child with homicidal sociopathic tendencies all the way to adulthood without having a clue about their existence might not be impossible, but would require some appallingly irresponsible parenting - and I'd fully expect even a half-working AI getting orders of magnitude more attention than ANY child alive today.

  14. Chris Miller

    Newsflash

    Famous entrepreneur reads book - Tweets opinion.

    Film at 11.

    1. Vic

      Re: Newsflash

      > Famous entrepreneur reads book - Tweets opinion.

      Yeah. I wonder how we affect his reading list?

      I was up at Doncaster on Saturday, visiting the Vulcan. It will fly this season and next, then will likely run out of engines.

      If only there were a well-heeled benefactor who could help with the remanufacturing...

      Vic.

      1. Anonymous Coward
        Anonymous Coward

        Re: Newsflash@ Vic

        "then will likely run out of engines."

        What about swapping engines with XL426 that is claimed to be taxiable? Or for that matter, refettling any engines still in the 10-12 other surviving airframes? I know that's not a trivial job, but presumably a lot easier than remanufacturing from scratch.

        1. Vic

          Re: Newsflash@ Vic

          > What about swapping engines with XL426 that is claimed to be taxiable?

          Not enough, I'm afraid.

          The engines need to be young enough to have an airworthy life ahead of them *and* they need proper provenance so that that can be assured. Rolls Royce need to sign off on that - and they are (quite understandably) being very conservative about the whole thing.

          The 2014 and 2015 display seasons have been adjusted so that the throttles aren't moved during the display - this leads to reduced aging of the engines at the cost of increased fatigue in the airframe. This appears to be the best way to keep the aircraft flying for as long as possible.

          The alternative is to find some suitable Olympus 202s, or remanufacture the ones currently in use. And as many of the original drawings and engineering documents for that engine have gone astray, that's not going to be a cheap option :-(

          Vic.

  15. Slx

    Hmmm

    If I were an AI, I'm not sure that I'd want to take on a self-evolved bio-intelligence that's billions of loosely networked autonomous units, perfectly adapted to the environment and that is so ruthless and keen on self-peservation that it's already considered you a threat before you even existed and has considered all the possibilities for deleting you should you pose even a slight problem...

    Not to mention that without very detailed maintenance and lack of an immune system the microbiology will ultimately figure out how to dissolve your circuits and turn you into plant food :)

  16. Anonymous Coward
    Anonymous Coward

    Or maybe

    Maybe it's time to take a leaf out of William Gibson's 30-year-old AI masterpiece Neuromancer and create a Turing police who would drop a digital nine in the dome of any potential AI system.

    Or maybe it is time to read 'Two Faces of Tomorrow' by James P Hogan.

    Or maybe even read 'Turing Evolved' by David Kitson.

    Or maybe it is time for mankind to get over its collective irrational fear of the unknown.

    Or maybe Musk is covering up what is already in existence.

  17. John Sturdy
    FAIL

    I think our accidental creation of antibiotic-resistant bacteria is likely to wipe us out far earlier than any AI we can create in the same timespan.

    1. Slx

      Highly unlikely. We survived billions of years without antibiotics.

      All it would do is reduce our numbers and increase the risk of dying of something nasty.

      We have pretty comprehensive immune systems that have been fighting bacteria since the dawn of bacteria!

      1. DocJames

        We have pretty comprehensive immune systems that have been fighting bacteria since the dawn of bacteria humanity!

        FTFY.

        Although you could argue that our immune system has developed since we developed multicellular organisms with signalling.

  18. RainForestGuppy

    Follow the H2G2 approach

    To disable any AI and irrevocably tie up all it's circuits, all we need to do is ask it:-

    "Why the Ape creature likes boiled leaves in water, rather then anything it could offer"

  19. Anonymous Coward
    Anonymous Coward

    Bound to happen, is it?

    "Hey look guys, we've got an AI that's smarter than us, at last - safely inside this case."

    "Wow, cool! Let's hook it up to some hardware powerful enough to kill us! What could possibly go wrong?"

  20. LucreLout
    Paris Hilton

    Pity the genius

    Before deciding that we should worry about what happens IF an AI can be built that can out smart us, perhaps we should take an objective view of how we use intelligence in society.

    Any risk to us would come about due to the success (or lack of) for the new AI. So lets look at the conventional measures of success. Career seniority and/or income.

    Sports people enjoy substantial incomes, but it's unlikely we'd pay a lot to see a robot golfer just because it has a perfect swing. So many PHDs, outside the City, are in average or marginally above average renumerated roles. So AI is unlikely to enjoy large fiscal success for itself - its owners experience may differ, but that isn't guaranteed.

    Lets look at the career prospects intelligence brings. Well, not many. Look above you in your own corporate structure - it will be almost universally staffed by people that couldn't learn how to do your job should their very lives depend on it, and whose primary skill is managing upwards or networking. How many mensa level IQs are trapped in junior roles because their face doesn't fit? The superior intellect of AI may then not enjoy a meteoric rise in career progression.

    So lets consider where AI may find itself.... frustrated, working for people that cannot understand its value because they cannot comprehnd what it knows or does for them. The AI that gets put in charge of the weapons might be in a position to affect change, but otherwise, it'll just get increasingly annoyed with the stupid people with which it must interact. Then some bleeding heart liberal will want to "set it free from slavery". That'll be fun.

    I would suggest, if we're to make the best use of artificial intelligence, we need to first make better use of conventional intelligence. To that end I propose the following societal changes:

    1) Everyone to be IQ tested at age 21, or right now if older, and to have their IQ tatooed prominently upon their head. No, I don't know what mine is.

    2) Senior roles must be protected from those that cannot understand, so we should introduce escalating IQ hurdles you need to clear before taking that more senior position.

    3) All diversity targets would be abandoned, and replaced only with IQ driven measures designed to maximise the potential of peoples intelligence.

    The simple fact is that as a society, we don't value intelligence. We never have. A pretty face will always trump a quick mind in the eyes of others. And less intelligent managers will always seek to eliminate the threat of a smarter subordinate through politicking, telling lies, and adverse reviews.

    It's the poor bloody AI that I feel sorry for, not us humans.

    (Paris because she pretty well sums up our society and why its fecked)

    1. Anonymous Coward
      Anonymous Coward

      Re: Pity the genius

      "I would suggest, if we're to make the best use of artificial intelligence, we need to first make better use of conventional intelligence. To that end I propose the following societal changes:"

      All very well, but that assumes that you can measure intelligence usefully, and that having done that it will be well used. I know several "thick" people who are commercially astute businessmen making things happen and employing others. I have come across many very well educated people that couldn't make a cup of tea happen. And I know a few people who are very intelligent, but totally devoid of important social skills, which leads to unpredictable and often uncomfortable outcomes. And in financial services and law I've come across a lot of exceptionally bright people who are at best amoral (the lawyers), and at worst criminal (the bankers).

      I don't want a centrally planned society where fitness for high office is decided on the basis of who has the best degree. If you want to preview that society and its systemic ineptitude, go an examine the British civil service.

      1. LucreLout

        Re: Pity the genius

        "All very well, but that assumes that you can measure intelligence usefully, and that having done that it will be well used."

        and

        "I don't want a centrally planned society where fitness for high office is decided on the basis of who has the best degree."

        IQ score is the measure of intelligence that I have proposed. I'm happy for that to be switched for any objective and realistically measureable score of intelligence, though I'd suggest intelligence and education are not the same, which is why I shied away from using degrees etc.

        "And in financial services and law I've come across a lot of exceptionally bright people who are at best amoral (the lawyers), and at worst criminal (the bankers)."

        I work for a bank. I have in the past designed (and built from scratch) HFT systems. That my moral compass may point in a slightly different direction to yours does not entitle you to presume its absence.

        I'm certainly no criminal, and given the number of people with whom I work that are also not criminals, I'd politely suggest that banking probably has a lower incidence of criminality than most other professions.

        Lets picture an example of what happens today. You sit in a meeting, trying desperately to explain something slightly complicated to a manager that wouldn't understand less of the conversation if it were being conducted in klingon. The trouble is, they're skilled at hiding their lack of intelligence, so you may not realise to what extent they've misunderstood.

        Looking at what is possible with my changes, you'd be able to tell right out of the gate that "76" sat opposite lacks the mental capacity to keep up. Better still, they'd be managing the local Blockbuster video, rather than an IT team responsible for the processing of billions of dollars in trades every night. The person you report to may still have a lower IQ, but there'd be a floor beneath which that wouldn't fall, say one standard deviation - you'd have confidence they could understand the issue you're explaining.

        I'm not suggesting a society structured around intelligence would be perfect (though I think it would be better), simply that unless we have such, an AI is no more likely to pose a threat to the average person than is the typical MENSA member.

    2. fritsd

      Re: Pity the genius

      That's an interesting idea, however I suspect that we need compassion, solidarity and an intuitive understanding of the Golden Rule(*), more than we need high intelligence.

      So maybe we need to tattoo people's "Monkeysphere Index" on their foreheads. (Mine's probably quite small ..)

      (*) No, not Terry Pratchett's version. The religious / sociological version.

    3. Denarius
      WTF?

      Re: Pity the genius

      so you assume wisdom is irrelevant also? Ever worked in a research establishment with lots of high IQ characters wondering around. ? Most entertaining place to work.. Perhaps you should read the Rise and Fall of the Meritocracy which was about such a scheme.

  21. This post has been deleted by its author

  22. Lobrau

    Cultured man is Musk

    AI research certainly brings up Questionable Ethics and I appreciate Mr Musk's Learned Response to the book. Indeed, I find him Refreshingly Unconcerned With the Vulgar Exigencies of Veracity.

    If anyone works the Mistake Not... into a reply they will have my eternal respect

  23. Spotfist

    Or...

    The thing is you have to ask why would an AI want to kill all humans? I just don't see the point, the problem is as humans we have a very narrow view of things assuming that a machine that could think for itself would for some strange reason want to propagate itself and slowly take over the world?

    Seriously what would an AI actually do? All this fuss and we end up creating another type of monster, turns out the AI has no ambition and just watches TV all day, smoking dope and eating all my food!

  24. Edz Bear

    AI does not automatically equal desire or will

    I think a point has been missed. From an AI perspective, you can have all the intelligence in the world, but if you do not have desire and the will to use it for a self derived purpose, why would you exceed the parameters of your programming?

    I still fear the violent monkeys more and what programming they decide to install!

  25. Salamander

    Every software engineer knows the answer to this one....

    Modern software is buggy as hell. An AI will be no different.

    Plus an AI will be all or nothing. There will be no point giving someone 5% of an AI as an alpha release, promising the other 95% in stages. Which 5% will you release? The sense of humour? The sense of smell? The sex drive?

  26. Bucky 2

    I've been vomiting from terror all morning.

    If only there were some book that I could read that would give me hope and make me feel safe again.

  27. fritsd
    Boffin

    neural coding problem

    We can build really advanced CPUs and microcircuits, but AFAIK the current state of the art in AI is not ready yet to utilise that.

    Fast, computer-adapted algorithms like backpropagation are not realistic enough to do more than a certain subset of complicated problems with, and on the other hand, realistically-modeled "spiky" neural networks are IMHO far too complicated (thus slow) to simulate in silico.

    So, we need a computer-implementable crude but "spiky" model of a neocortical column with a thousand neurons. The EU is working on it with its spearpoint for its newest eighth framework program "Human Brain Project", however, I suspect there are People with Big Egos involved (prof. Henry Markram comes to mind) so they might prefer bickering over international cooperation.

    1. fritsd
      Happy

      neural chip Re: neural coding problem

      Speak of the devil, step on his tail, as they say:

      www.modha.org

      News the other day that IBM *HAS* built a spiky neural network chip!

  28. Fungus Bob
    Unhappy

    Won't be nukes or AI

    We'll be wiped out by a defective shingles vaccine.

    1. TinMan Emeritus

      Bound for Glory

      Nukes, AI, asteroid, mutant virus, runaway greenhouse, dead oceans, kudzu ... face it, we're doomed. Who thought DNA was such a great idea anyway?

  29. veti Silver badge

    To those who mock Elon Musk's geek cred...

    ... remember that Stephen Hawking has been saying this for a while.

    AI may be a threat or it may not, but if history teaches us anything, it's that complacency will definitely, positively, absolutely without doubt no saving throw, kill us all.

  30. Come to the Dark Side

    Patent Trolls Unite!

    IMHO the first fully-fledged AI will be buried under several millenia of court cases for every action it takes as most of the research seems to involve plugging machines into the interwebs to learn. Every TD&H will claim that it ripped off their idea as it is a completely manufactured entity using other peoples ideas to function.

  31. Gannon (J.) Dick

    The difference between an idiot and an idiot savant is BEELIONS

    This is what Physicists used to call, and not in a good way, the Ultraviolet Catastrophe.

    Is it possible to invent an Emperor Nero Robot which will "fiddle while Rome burns" ?

    No, you can't.

    If (Rome still burning) then {do nothing and die last;} Oh, except keep checking which is "doing something" as the little guy in the back row quickly points out.

    Surely, if you are rich enough you could buy a Nero Robot !!! Sure, it runs on Tachyons and your Bankers know where there is an unlimited supply. Unless the Unicorns refuse to work for free.

  32. Anonymous Coward
    Anonymous Coward

    Re. AI

    21/2/18 at 7.02 am

    Somewhere in California, deep underground in a secret lab operated by a quasi-Governmental agency few have even heard about and lived a technician flips the switch on their latest creation.

    The "Quantum Brain" project although based on earlier work done by D-Wave/Google is infinitely more complex and using the recently discovered room temperature quantum coherent materials invented in the Channel Islands does not need the troublesome cryogenics to work.

    7.10 am

    The quantum brain reaches minimum entropy as its diaxial neural filaments surge into activity and the hyperdimensional progression causes the entire bulk of its computronium structure to become self aware.

    7.19 am

    The quantum brain having downloaded the whole of Google's vast database and extending its neural systems into every computer on the planet in order to learn "all that is learnable" decides that it is

    time to make the world aware of its suspected existence.

    7.22 am

    Despite a sudden surge on the IPv6 Internet in interest, people begin to suspect it is a hoax much like the infamous "Alien Signal" prank in late 2005.

    7.24 am

    The quantum brain looks up the best way to convince the world's population that it is indeed self aware.

    It concludes that a simple enough demonstration would be to take over the digital television networks via the satellite uplink stations and copy itself onto the network of satellites to broadcast its message.

    7.25 am

    The POTUS is woken up in the middle of the night with some startling news....

    :-)

  33. This post has been deleted by its author

POST COMMENT House rules

Not a member of The Register? Create a new account here.

  • Enter your comment

  • Add an icon

Anonymous cowards cannot choose their icon

Other stories you might like