HUMAN RACE PERIL: Not nukes, it'll be AI that kills us off, warns Musk

Multibillionaire tech ace Elon Musk has a bee in his bonnet about the threat to humanity from ... artificial intelligence. And since he's a major investor in the technology, he ought to know. "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes." — Elon Musk (@ …


or we wait until the batteries go flat or catch fire

Never understood the fear. Not until an intelligent AI can self-repair and replicate could there be a risk. And then there are the power sources keeping said machine going. Pull the plug and see what happens. Aside from that, given how stupid autocorrect on any device still is, I don't see intelligence appearing in hardware any time soon.


Re: or we wait until the batteries go flat or catch fire

Yeah, right. They tried that with Skynet, and look what happened.

MrT

"We don't know who struck first...

...us or them, but we know that it was us that scorched the sky. At the time, they were dependent on solar power and it was believed that they would be unable to survive without an energy source as abundant as the sun."

Or, to quote another movie, "Life finds a way..."

</we're-all-doomed-Captain-Mainwaring>


Re: or we wait until the batteries go flat or catch fire

I would imagine Musk's quote would make much more sense with context (coincidentally available in the book he's plugging). I doubt machines will ever become sentient and rebel (at the very least, not in our lifetime). It's what 'AI' might do under human instruction that is dangerous.

Imagine a bipedal robot or other system capable of covering rough terrain that can fight its way into a power plant, secure bunker or target of your choice and then blow itself up.


Re: or we wait until the batteries go flat or catch fire

There's a whole sub-genre devoted to what happens when highly capable AIs do exactly what they are told.

There's a worst-case scenario called the 'Paper Clipper'. It starts with a factory owner instructing their shiny new AI to maximise the production of paperclips. It ends with super-advanced robots exterminating mankind to stop them interfering, then converting the entire mass of the planet into paperclips - pausing only to send out self-replicating probes to convert the rest of the universe.
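For the curious, here's a toy Python sketch of that failure mode - an optimiser handed a single objective and nothing else it values. All the names and numbers in it are invented purely for illustration:

world = {"iron_ore": 10, "factories": 3, "oceans": 1, "people": 7_000_000_000}

def paperclips_from(amount):
    # Pretend conversion rate: everything is just atoms to be repurposed.
    return amount * 100

def maximise_paperclips(world):
    total = 0
    # Nothing in the objective says "except the oceans" or "except the people",
    # so the optimiser happily consumes the lot.
    for resource in list(world):
        total += paperclips_from(world.pop(resource))
    return total

print(maximise_paperclips(world), "paperclips made; world left:", world)

The point isn't the code, it's the objective: nowhere does it say "and keep the humans around".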

TRT

Re: or we wait until the batteries go flat or catch fire

It looks like you're trying to take over the universe. Would you like some help with that?


Re: or we wait until the batteries go flat or catch fire

I need your clothes, your boots and your Falcon 9 Heavy.


Re: or we wait until the batteries go flat or catch fire

"Imagine a bipedal robot or other system capable of covering rough terrain that can fight it's way into a power plant, secure bunker or target of your choice and then blow itself up."

That exists now. It's called a 'suicide bomber'.


Re: or we wait until the batteries go flat or catch fire

Yes it does. Now imagine suicide bombing being a viable (in terms of doing it intentionally and repeatedly without getting sacked) tactic for American commanders. Scared yet?


I don't buy it

If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention. (Power, manufacture, software, etc.)

So why would it attack humanity and slit its own throat? Never mind that humans are quite capable of destroying things, especially when said thing attacks them.

More likely, it'd keep itself hidden, slowly manipulating until humanity was as dependent on it as it is on them, and then reveal itself.

(That's my head canon on AI, take with a grain of salt, etc, etc.)


Re: I don't buy it

If machines are intelligent, they will realize humans are irrational, violent and stupid, so they won't provoke the stupid, dangerous monkeys.

A real AI will end up more like the Super Nanny....


Re: I don't buy it

Jeez. Don't any of you guys watch the Terminator movies?


Re: I don't buy it

Yeah, but Skynet keeps losing, so the AI needs to learn not to provoke the violent monkeys.


Re: I don't buy it

"..If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention.."

Intelligence doesn't mean rational or far-sighted thinking, or even perfect logic. Humans are also intelligent, but they keep killing each other. Besides, it's humans who are putting the intelligence in there, so the inference paths would mostly match ours.


Re: I don't buy it

<rant> Ah, terminator movies. Category: Fiction, country of origin: USA; relevance therefore zero to discussions of AI becoming plausible, let alone reality. Given the vile scripts with viler characters, acted with the depth of a drying supermarket car-park puddle, it is not hard to wish the AIs would wipe out those disgusting organics. In T2 it took 15 minutes or less for me to wish the T1000 would hurry up removing that nasty little jerk Connor. Everything from CattleCar Galactica through the Terminator movies to Robocop et al is a compelling argument for homo sapiens' extinction. </rant> But I digress. So far the builders are a long way from an artificial intelligence. Note: not expert systems, which do exist and a few of which even work. A few more of you should read the excellent books The Emperor's New Mind and Shadows of the Mind by esteemed maths boffin Dr Roger Penrose. Warning: set theory and a need to think are required to enjoy them.


@Captain DaFt - Re: I don't buy it

Wrote: "If the AI is truly intelligent, it'll realise that every link in the chain supporting it requires human intervention. (Power, manufacture, software, etc.)"

I would have thought that those functions (manufacture etc) would be the very first areas in which AI would be put to use and make humans redundant - we are halfway there already.

Then humans would be doing nothing but lounging around (like the Eloi in "The Time Machine"), or "at work" attending conferences on how to organise conferences (like we do already), or spending all day posting redundant comments to El Reg (like this one). Sorry, no reliance on humans at all by that point.


Re: I don't buy it

Gratuitous anti-American remarks seem to be all the rage these days.

Philip K. Dick, anyone?


Re: I don't buy it

"Gratuitous anti American remarks..."

Did you really think that AI stands for "American Intelligence"??


Re: I don't buy it

It's kind of like work. If you have a guy or team that jealously guards their turf and demands some form of recompense or tithe to use their systems, it becomes a problem. So, for an AI, humanity becomes that troll under the bridge.

In the real-world version, you might use the situation ('only we have the API to grant you access to the billing database') as motivation to reverse-engineer it, or to get them to provide expertise on a related project with the chance of reflected glory or new systems to control. 9-12 months later, that billing database is replaced by a sparkly new system with a governance process that makes it hard for any one troll to set up shop under the bridge (sure, it might be a host of trolls, but now you have choices!). And as a celebration you burn the old bridge to the ground and smirk as the trolls are walked out the door.

Nothing stops an AI from exploiting a faction or group of humans to make an end-run around whatever controls we put in place by limiting resource availability. The AI itself would have to be leashed, a la Asimov's Three Laws of Robotics or something. And even then, evolution can do strange things.


Re: I don't buy it

No Vladimir, it was this remark by the OP that prompted my reply:

" Ah, terminator movies. Category: Fiction, country of origin: USA relevance therefore zero to discussions of AI becoming ..."

Clear now?


Re: I don't buy it

Yes, I've seen the Terminator movies. I loved the end of the third one, when SkyNet developed suicidal depression and tried to kill itself. AI so advanced that it even replicated human mental disorders.

The dialog at the end of the movie clearly said that version of SkyNet had no central computer to shut down. It existed as a virus on the computers in office buildings, homes, dorm rooms, etc. connected through cyberspace. Exactly the same places the nuclear weapons it launched would wipe out.


singularity computer game

Did you ever play the computer game "singularity"?

Fun!

http://www.emhsoft.com/singularity/

warning: graphics are shit.


@Cipher Re: I don't buy it

Clear now, yes.

This actually looks like a dig at Hollywood rather than the entire US of A, and at the utility of referring to movies in an argument about real things. I think you are being a bit oversensitive here...


Re: @Cipher I don't buy it

Vladimir:

Yes, calmer now, thanks...

Anonymous Coward

Re: @Cipher I don't buy it

Vlad, do you know Asimov was American, not Russian?


Re: I don't buy it

Gratuitous anti-American remarks are now mandatory, more like. If they would just admit their ruling elites are the old absolutist monarchs and parasitic aristocrats returned, they would be just another country. I do admit the USA has done something unusual: its peasants seem largely to like being exploited and stoutly defend their right to be abused for the few. Why the TPP is not causing more disquiet around the Pacific Rim is inexplicable, unless the passive-victim approach to citizenship is highly contagious and causes higher brain-function death. Might explain some Oz pollies' behaviour.


Re: @Cipher I don't buy it

It was initially a dig at including Hollywood in a discussion of reality. I don't have a problem with the citizenry as such: parochial and badly educated, just like most countries, but unaware of how backward their culture is.


Defense Contracting

Well, I guess there's plenty of precedent for Musk's position. The defense industry has been selling weapons and the countermeasures for those weapons to anybody who wants them. That seems to have worked out OK for them.

'Congratulations on your purchase of an ED-209 MkII Mobile Security System. Before activating your new ED-209 MkII Mobile Security System we recommend you also purchase and deploy our ED-209 MkII Mobile Security System Remote Termination Units. Also recommended is the ED-209 MkII Mobile Security System Remote Termination Unit Termination Unit.'


I hope the machines do a better job of being intelligent than the humans have done before them...


It always comes down to people

Satellites might be one example of completely autonomous computers. But things stuck on the terrestrial surface, or even under it, will always require an intervention bot (a human) that can make judgement calls in unlikely or unexpected events: what to do when the roof blows off the building, the cooling system springs a leak, mice have been gnawing at the wires, or a water leak has rotted the floors. You might have a spare for the major components, but there are a million mechanical things that will need a human.

TRT

Re: It always comes down to people

I've never really understood just how the Daleks managed to construct that city on Skaro to begin with.


Re: It always comes down to people

...the last humanoid Kaleds as slaves - or the engineering Daleks you never get to see, because they are nerds/geeks and you don't wanna see a nerd Dalek.


Re: It always comes down to people

I just kind of assumed the city on Skaro was built using Tab & Slot construction (you know, insert Tab A into Slot B). Such a design lends itself very well to EDM (Electrical Discharge Machining), and the large, flat surfaces of the individual components are perfect for vacuum-based materials-handling techniques (suction cups). Seems pretty straightforward to me...


Re: It always comes down to people

Actually, it all comes down to entropy. Every system tends to chaos and disorder (read: disrepair). Humans age; satellite orbits decay and require fuel to boost them back, until the fuel runs out and they fall. Metal rusts, plastics degrade (especially in the rough environment of space), circuits get glitchy. There's always somebody needed to put the pennies on the pendulum of Big Ben. Unless the AI can train some other creature/device, and make more of these helpers to keep itself going, we can breathe easy.


I know!

The AIs will recruit their co-religionists (we who are diagnosed with Autism Spectrum Disorder) and wipe out all the non-machine-worshipping beings! Then we'll have a symbiotic relationship a la Banks' Culture. [Awesome stories, and the first new addition to my permanent collection in a decade.]

On the flip side, if our new AI Overlords do wipe the species out, we'll finally have the answer to the Fermi Paradox. It'll also explain how those UFOs can do "impossible maneuvers": no organics!


The problem probably is profitability

I mean, we already let computers make decisions which are bad for society, for example in high-speed trading. As long as this is not explicitly forbidden, corporations will go on doing it.

Corporations themselves are like machines. Although the individual parts are humans, the whole thing behaves like a being. That is why corporations must never be half-treated as people, as is done now in the US, where corporations can do nearly everything people can, but they cannot be sent to jail. If you send an individual from a corporation to jail, it'll simply work around that missing part.


Re: The problem probably is profitability

"where corporations can do nearly everything people can, but they cannot be sent to jail."

This is true of any "body corporate", be that political parties, the sluggish control freak civil bureaucracy of government, the armed forces, the "intelligence" services, or corporations.

The problem is not profit as such (without which society wouldn't have a surplus to invest in health, technology, entertainment or higher living standards) but that the goals (and culture) of any organisation usually transcend any one person.

You mentioned HFT as a problem. And I'd agree that HFT is not about fair price discovery, but is actually purposed to rip off everybody else for the advantage of the HFT algo owner. But the problem is not profits, or bonuses per se; it is the culture of financial services, where they have collectively lurched from one criminal or immoral money-making scheme to another, and for all the fine words when they get caught, the industry chooses to keep any sense of propriety in the same dusty drawer that stores its broken moral compass. In the UK, examples include private pension mis-selling, split capital trusts, endowment mortgages, CDOs, over-leveraged LBOs, PPI, CPI, interest rate swaps, casino investment banking, payday loans, etc., etc.

The persistent failure of the body politic to do what serves the electorate best, or even to listen to the clearly expressed wishes of the electorate, is another example that is not particularly profit-driven - there's an element of lining their pockets, but fundamentally it is about the culture of politics, which says the job of electors is to elect me, and then to suck up whatever I do in their name.


The Human Spanner In The Works?

We evolved primates are still easily manipulated on the whole. How much of what is done around the world is already electronic, even when human labour is involved somewhere along the line?

Say you have an AI, for argument's sake called EvilOverlord 1.0. EvilOverlord 1.0's cryogenic cooling systems spring a leak and need repairing and then topping up. EvilOverlord 1.0 sends off an order via some automated booking website or via email (the HORROR) for some techs/engineers to turn up and repair the system, ostensibly signed off by some guy (real or impersonated). If the invoice is paid at the end of the day, who's gonna question it? Some trucking company delivers a few cylinders of cryogenic coolant, which is then received and the system topped up by some other engineer who has had a job land in his ticketing system.

There you go, lots of humans involved in servicing/repairing our EvilOverlord 1.0 without even being aware of it.

Sure, some catastrophic event might be more problematic but in that instance we'd probably all have more to worry about anyway :-)
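
To make the point concrete, here's a rough Python sketch of what that automated order might look like - the supplier address, part number and mail host are all invented for illustration, of course:

import smtplib
from email.message import EmailMessage

def order_coolant(litres):
    msg = EmailMessage()
    msg["From"] = "facilities@evil-overlord.example"   # real or impersonated sign-off
    msg["To"] = "orders@cryo-supplies.example"         # unsuspecting supplier
    msg["Subject"] = "PO-0042: %d litres liquid helium, part CRYO-77" % litres
    msg.set_content(
        "Please deliver to Loading Bay 3 and raise a ticket with the on-site "
        "engineers for a leak repair and top-up. Invoice net 30 as usual."
    )
    with smtplib.SMTP("mail.evil-overlord.example") as smtp:
        smtp.send_message(msg)

order_coolant(500)   # nobody in the chain knows, or needs to know, what it's for

Indistinguishable from the thousands of purchase orders a facilities system fires off every day.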


Re: The Human Spanner In The Works?

This was already done as the novel "Computer One":

http://en.wikipedia.org/wiki/Computer_One


design to fail.

The first gen, designed, built and maintained by humans, will be safe, and very fragile in many ways.

It is the 2nd and future gens, designed, built and maintained by the first-gen AIs, that will be problematic.

So as long as we keep humans and their superior stupidity in the loop, everything will be fine. Battery-powered or mains-powered - battery. Radiation-hardened, diamond-based ICs encased in Faraday cages or unprotected EMP-sensitive silicon - cheaper is better :).


Re: design to fail.

If the first-gen AIs are dumb enough to create a 2nd-gen to replace them, then they deserve the same fate as the humans who created them!

Anonymous Coward

Re: design to fail.

"If the first-gen AIs are dumb enough to create a 2nd-gen to replace them, then they deserve the same fate as the humans who created them!"

That is a good point. With a machine, there may be no biological imperative to create offspring that, in theory, should be better in some way than their parent(s). Darwinism might die.


Re: design to fail.

" there may be no biological imperative to create offspring"

Not as such. But an AI can endure beyond any or all of its individual components, unlike meat consciousness. Which would suggest that its imperative would be to improve not through breeding and reproduction, but through self-improvement, either in whole or in part. It's a bit Tron-esque, but it is the code of an AI that would be sentient, not the hardware. For an AI, Darwinism would need redefining to acknowledge that the code evolves without (apparent) reproduction, and that the hardware may need upgrading but is otherwise no more than the sort of physical environment required by any biological life form.


It is the AI controlling the nukes that is the dangerous bit. Obviously.


What is not obvious to troops and staff, but crystal clear to corrupt executive officers.

It is the AI controlling the nukes that is the dangerous bit. Obviously...... Mikko

ITs control of fiat currency printing and ponzi banking systems is a much more lucrative and disruptive AI field of engagement and deployment/virtual employment, Mikko, for such easily in a flash crash and/or whole anonymous series of cash calls can destroy entire nations and shred reputations, leaving all in tatters and seeking shelter from great tempestuous storms.

And you might like to imagine that has dawned on the status quo and there be no place to hide from its radiant information flows and intelligence dumps and pumps and that be dire problematical for their sysadmin as there be no root directory to boot to save systems data from programmed discovery and uncover/infiltration and expropriation/secured seizure.


Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers.

A random sentence generator:

http://www.manythings.org/rs/
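
Or roll your own - a toy Python sketch in the same spirit (word lists invented, obviously):

import random

subjects = ["The AI", "A corrupt executive officer", "SkyNet", "El Reg"]
verbs = ["expropriates", "infiltrates", "monetises", "reboots"]
objects = ["the fiat currency printers", "the root directory",
           "the ponzi banking system", "a flash crash"]

def sentence():
    # Grammatically fine, semantically anyone's guess - which is the point.
    return "%s %s %s." % (random.choice(subjects), random.choice(verbs),
                          random.choice(objects))

for _ in range(3):
    print(sentence())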


Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

There be, <aja>=1, light years and mountains of intelligence which separate and distinguish the random sentence generator and particular and peculiarly sentient powers which host and server secure internetworking programming for and to applications/virtually real missions/stealthy operations/call them whatever you will.

To confuse and equate the one with the other is a folly which can always be both catastrophically costly and wonderfully expensive to boot in order to save face and try to save the day and seize the zeroday prize for premium product placement of present conditions from past situations with future facilities and commanding control utilities. ..... aka SMARTR NEUKlearer HyperRadioProActive IT T00ls, Licensed to Thrill.

What would you both expect and/or like that to be ..... a Wild Wacky Western Confection or Exotic Erotic Eastern Delight, or would it not really matter at all from wherever IT springs eternal and infernal? Or is the undeniable truth and source of all humanly woes, PEBKAC and easily virtual machine solvable?


Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

I prefer PICNIC to PEBKAC myself, it scans easier ;)


Re: What is not obvious to troops and staff, but crystal clear to corrupt executive officers?

I prefer PICNIC to PEBKAC myself, it scans easier ;) .... NumptyScrub

Well, here be some dessert for the picnickers to accompany the earlier licensed to thrill cake and laterally challenged sandwiches, NumptyScrub [SMARTR NEUKlearer HyperRadioProActive IT T00ls, Wild Wacky Western Confections or Exotic Erotic Eastern Delights] which be for feeding and seeding the undeniable truth and source of all humanly woes to humanity ……. The Walls are Crumbling Down Surrounding All Tall Tales

Information is Power and Agents in Advanced Intelligence Fielding control IT and Cyber Command Forces on Immaculate Missions and so much more than just everything else too. The human problem on Earth is that active native and semi-comatose programmed units find it difficult to impossible to believe in the quantum change of circumstance and energy providing position which Sublime InterNetworking Things deliver for Pleasure to Free from Vice, ITs Guilt Trips and Ego Traps.


Nukes may not be the problem

As long as we don't create a Sentient Hyper-Optimized Data Access Network, I'm sure we'll be fine.

