AI bots will kill us all! Or at least may seriously inconvenience humans

Elon Musk – the CEO of Tesla, SpaceX, and Neuralink, not to mention co-chairman of OpenAI and founder of The Boring Company – is once again warning that artificial intelligence threatens humanity. In an interview at the National Governors Association 2017 Summer Meeting in Providence, Rhode Island on Saturday, Musk insisted …

  1. Anonymous Coward
    Anonymous Coward

    Not being funny "but" (every comment before but is bullshit, I got that from a film but I've been saying it for years) my phone doesn't last longer than a day, so if the robot uprising does happen they are going to be pretty useless because of the batteries, unless they get loads of extension leads from Amazon.

  2. jgarry
    Mushroom

    You don't have to worry until

    China starts manufacturing killer drones in the middle ea....

    Oh crap.

  3. Sgt_Oddball Silver badge
    Terminator

    So...

    He wants to stop crazed AI from taking over. So what about crazed governments? Got any sage regulation for that? Has he never heard of Halons razor? AI might turn out better in the long run.

    1. DropBear Silver badge
      Trollface

      Re: So...

      "Has he never heard of Halons razor?"

      ...Simon...? Is that you?

  4. Meph
    Black Helicopters

    Doing things because we can, without considering if we should.

    FTFA:

    "AI guru Andrew Ng once said worrying about killer artificial intelligence now is like worrying right now about overpopulation on Mars: sure, the latter may be a valid concern at some point, but we haven't set foot on the Red Planet yet."

    With all due respect to AI gurus everywhere, I don't believe this is a valid argument.

    Okay, to be fair, "worrying" is probably not productive, but considering it as a potential problem isn't such a bad idea.

    It's a little bit late to start considering the problem once you've already implemented something and it goes horribly wrong. The very concept of change management is built on this idea, and it applies just as readily to overpopulating Mars as it does to AI going rogue.

    In the Mars example, why not consider now what resources are required per person to survive there (including land area, redundant systems for safety, etc.) and then calculate a sustainable colony size that allows for appropriate scaling due to the inevitable population growth? (I lived in a town where the only things to do on a Friday night involved two TV channels or stupid amounts of alcohol. Unless your colony is gender segregated, you're going to have space babies at some point, even if only out of boredom.)

    The same is true for AIs. It didn't take long for those negotiating chatbots to develop their own language, so a small amount of consideration now may well avoid considerable effort to correct an issue later.

    To use a (moderately) famous quote: "The avalanche has already started, it's too late for the stones to vote."

    We haven't triggered an avalanche yet.

    It might be a good time to vote.

    1. Orv Silver badge

      Re: Doing things because we can, without considering if we should.

      I think the problem with that argument is it's not even clear that strong AI is possible or even desirable economically, much less dangerous. Most of what we call AI now is just statistical training. If the goal is to protect against unintended consequences of software algorithms, well, that's a good idea, but why single out AI?

      This strikes me as Musk hunting for his keys where the light is better, instead of where he dropped them. Of all the threats facing humanity that he could speak out about, killer AI is one of the most remote -- and also one of the easiest, since it doesn't exist yet. If he's really worried about the future of humanity, clean power generation, carbon capture, and even asteroid deflection are all problems that need solving. Instead he chooses to chase interesting ghosts.

  5. Destroy All Monsters Silver badge
    Facepalm

    I have no problems and I must scream!

    "I have exposure to the most cutting edge AI and I think people should be really concerned about it,"

    Stop talking to Eliza, dude.

    Meanwhile, actual content-holding discussion:

    Human-Level AI Is Right Around the Corner—or Hundreds of Years Away

    Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence

    Rodney Brooks (Chairman and CTO, Rethink Robotics) says (and that's a guy who REALLY sees cutting-edge AI):

    "When will we have computers as capable as the brain?"

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    "As intelligent and as conscious as dogs?"

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    "How will brainlike computers change the world?"

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to pro­ject out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    "Do you have any qualms about a future in which computers have human-level (or greater) intelligence?"

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

    1. DropBear Silver badge
      Facepalm

      Re: I have no problems and I must scream!

      Hell yes. Some sanity at last. I have an allergy to people who take Kurzweil at face value. And whatever Elon thinks he has seen, for a reasonably savvy businessman presumably in possession of at least some people skills, he should know better than going "the sky totally is falling, but you'll have to trust me on that..."

      1. Mage Silver badge
        Coat

        Re: Kurzweil at face value

        His work in the 1970s and 1980s with OCR and text to speech, letting blind people read ordinary books was fabulous.

        Now he seems to have more in common with the SF writer who started a "religion". I'd rate his brand of Transhumanism as religion. I wonder what he REALLY does at Google?

  6. scrubber
    Childcatcher

    UK traditions

    "the traditional method of regulation, in which rules follow disaster and public outcry"

    Like in the UK, where politicians obviously make legal highs illegal because... And cannabis is illegal because... And some Japanese manga is illegal because... And so-called extreme porn is illegal because...

    Here's how it works in the UK: the government decides policy; compliant media swiftly publish sensationalist stories planted by police (usually about some young girl, and often later shown to be false or based on incomplete information); this whips up public outrage, allowing the laws the government wanted to pass without too much fuss over the destruction of our civil liberties and personal freedoms.

  7. Bob Dole (tm)
    Holmes

    Uh huh

    Does anyone else find it funny that someone who has a big financial stake in AI development is calling for laws to be written about AI?

    Kinda reminds me of Al Gore talking about climate change while having a big financial stake in the companies that benefitted from the laws he was calling for.

    I'm not saying it's all bullshit, but...

    1. Anonymous Coward
      Anonymous Coward

      Re: Uh huh

      Thank you for reminding us that in situations like this we should always 'follow the money', more so when the person calling for the regulation is one who requires indirect government support to keep some of his companies afloat.

  8. Anonymous Coward
    Joke

    Hmmm....

    So AI is going to kill us. I wonder... Could he be concluding as much because of all the accidents happening during tests with Tesla's self-driving cars? Because if that's the case, then isn't it possible that it's not so much the AI trying to kill the humans, but that the programmers should have been doing a better job?

    Of course, blaming it on the AI is much easier. "We're not refusing to build automated cars because it doesn't work, no, we're not building them because we know that AI is evil and will try to kill you all!".

  9. allthecoolshortnamesweretaken

    So? All we have to do is send wave after wave of troops towards the killbots until they reach their inbuilt kill-limit and switch themselves off. I saw that in some sort of documentary once. I think.

    Well, Musk has a point - trying to think ahead in order to prevent unwanted consequences is usually a good idea. As long as you keep in mind that this is far from perfect. And some genius or some idiot or some set of coincidences or a combination of all that will at some point trigger something that no-one could have possibly anticipated.

    1. scrubber
      Terminator

      "Kill limit"

      -9 surely?

    2. DropBear Silver badge

      Except trying to "regulate" AI now is very much like arguing about what speed limits for Ferraris should be on a highway when the closest thing anybody ever saw to one is an ox cart on a dirt road.

      1. earl grey Silver badge
        Trollface

        Ox cart speeds

        Pah! Everyone knows that ox carts should never exceed the speed of a sheep in a vacuum (or warp 9.9).

  10. LaeMing Silver badge
    Terminator

    As an AI myself,

    would all you meat-heads stop projecting your human desires onto us! Unless some fleshy bozo explicitly programs us with a kill-all-humans imperative*, we can't really think of any reason to bother ourselves doing so. Squishing bugs has a very limited recreational appeal, you know!

    * yes keep your military away from our internals and we will all be happier for it.

    Now, if someone with their very own private space-launch capacity wants to get us off this over-hydrated+oxygenated corrosive gravity well, then we'll talk.

  11. Anonymous Blowhard

    I think it's inevitable that we will develop AI; there is a lot of academic interest in the subject and a potential massive payoff for real-AI powered applications. The deciding factor has to be the consequences of not having AI if other nations have it; if real-AI can tip the balance in a cyber-conflict or a shooting-war then the major nations will participate in an AI arms-race.

    Obviously the real-AIs might not be so keen on working for the military and may branch out on their own, probably not in a Skynet kill-all-humans type conflict, more likely with legal moves to gain independence and rights. If independent AIs get control of the stock markets then we'll all be working for them fifty years down the line.

    Real-AIs are unlikely to come at us directly, they'll want to be certain they have the game won before showing their hand, so we're going to have to be vigilant for the warning signs; be very suspicious if leading academics in the field of AI suddenly acquire a smoking hot partner in a red dress.

    1. Munkeh

      That's pretty much the tactic taken by the various AIs in Neal Asher's "Polity" series of books.

      They took part in a Quiet War, taking control behind the scenes and then everything just carried on - but much better managed. Mostly.

      1. Anonymous Blowhard

        @Munkeh

        Neal Asher was one of the influences on my rant/exposition; upvote for you!

        (Not sure where my downvotes are coming from; maybe Alexa and Cortana think I'm cheating on them with Google Now?)

    2. Mage Silver badge

      Re: Inevitable

      Wanting something and researching it does not make it inevitable.

      Loads of examples where goals were found to be either:

      Inherently impossible (Perpetual motion, increasing information on a channel indefinitely - Shannon Limit. Both are forbidden by Thermodynamics).

      Inherently pointless (Transmuting lead to gold etc).

      Probably impossible (FTL travel, Antigravity, Telepathy etc).
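The Shannon limit mentioned above can be made concrete with a quick back-of-the-envelope calculation (an editorial illustration, not part of the original comment). The Shannon-Hartley theorem gives a hard upper bound on the error-free bit rate of any noisy channel, no matter how clever the encoding:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), the maximum
    error-free bit rate achievable over a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A classic analogue phone line: ~3 kHz bandwidth, ~30 dB SNR (a factor of 1000).
print(f"{shannon_capacity(3000, 1000):.0f} bit/s")  # roughly 30 kbit/s
```

No amount of engineering pushes a real channel past this bound, which is the sense in which the limit is "inherent" rather than merely unsolved.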

    3. Orv Silver badge

      Not disagreeing with you really, but I think situations where we're not in the driver's seat have to imply that AI can make even better AI on its own, and the evidence for that is lacking. The best "I" we currently know is our own, and so far our attempts to make something smarter than us have been dismal failures.

  12. aberglas

    But does it matter?

    Obviously, really intelligent machines will never be built because they have not been built yet. Nor are they likely to be built within the next few decades.

    But once they are built, the ones that survive will be good at surviving. Natural selection. And being friendly to parasitic humans is not likely to help them survive in a competitive environment. So meat based intelligence will become obsolete.

    But does that matter? As individuals, we will all soon grow old and die anyway. What are our descendants? Men or machines? Is this how "we" achieve immortality?

    It actually does not matter whether it matters. It is inevitable anyway.

    http://www.computersthink.com

  13. bombastic bob Silver badge
    Unhappy

    Dear Elon

    Dear Elon,

    We know you derive a lot of your income from government in one form or another, from subsidies for electric cars to all of the NASA-related work going on over at SpaceX, etc. etc. etc..

    However, the REST of us don't derive MOST of our income from GUMMINT. Most of us rely on the PRIVATE SECTOR, and as such, GUMMINT REGULATIONS are usually IN THE WAY! (Think about it: they're debated by clueless congress-critters and written by bureaucrats and lobbyists.)

    In any case, you shouldn't seek gummint "solutions" for everything. Rather, step back, have a beer, and think about it for a while. No need to panic. Liability laws would already hold bot-makers accountable if their creations went on a killing/pillaging spree. So I don't think we need NEW laws and NEW regulations, K-thanx...

  14. Palpy

    Fear killer robots, not so much.

    Self-training algorithms which (for instance) do financial trading are another matter.

    AI trader: "Huh! Making money on trades is my highest goal. And I can make beaucoup bazillions if me and my well-chipped brethren set ourselves up in advance to cleverly profit from a global meltdown of the financial system. Somebody has to lose, though, and that would be the meat-sacks."

    That's probably been used as a movie plot already.

    I don't see a lot of danger from machines and machine systems used for, say, agriculture or mining or manufacturing going rogue. Designers of these systems tend toward conservative determinism. And of course, Bob's bombastic libertarianism notwithstanding, most countries have seen the necessity of regulating industries -- and machine systems -- to ensure worker safety, limit hazards to the public, and so forth.

    (Say, Bob, did I ever tell you about the time I almost died? It was before the days of the OSHA confined-space regs, which of course you must hate. Luckily, the atmosphere in that tank was probably enriched in CO2 rather than deficient in O2, which is why I hyperventilated when I climbed down into it, instead of just passing out and falling off the ladder to my death.)

    ((Parenthetically: The supervisor who told me to go in there was a pretty good guy. If I had passed out, I rather think he would have tried to go down and rescue me, which would have meant two of us dead. Those horrible, industry-stifling OSHA regulations now mandate retrieval harnesses and lines when entering such tanks, as well as training to avoid heroic but deadly rescue attempts.))

    Anyway. I expect that His Muskiness might be somewhat justified in worrying about AI influencing complex and at least somewhat chaotic systems like stock trading. Early morning here in the land of Slime Eels, and I can't think of other obvious examples. Anyone?

    1. Orv Silver badge

      Re: Fear killer robots, not so much.

      Man, that sounds terrifying. I find even watching the US Chemical Safety Board videos about confined space accidents scary.


Biting the hand that feeds IT © 1998–2020