AI guru Ng: Fearing a rise of killer robots is like worrying about overpopulation on Mars

Artificial intelligence boffin Andrew Ng told engineers today that worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it. Ng – chief scientist at Chinese web search giant Baidu and an associate professor at Stanford University – said fretting …

If a human being is combined with a processor/controller, is that a robot? Because they will take over.

Humans will embed themselves with chips (and the like) and take over from regular humans, likely before anyone has sex on Mars at all. What will the military do? Who will stop them?

I don't fear it, but it will happen.

3
5
Anonymous Coward

Although such a human could be awesome ("I know kung fu"), they will be far less deadly than, say, a nuke or a rocket launcher.

0
0

Is that you, Captain Cyborg?

The thing is, you don't even need real AI for killer robots. It would not take much of an upgrade to make current UAVs fly around an area and shoot at anything moving that is between car and truck size.

8
1
Silver badge

"Humans will embed themselves with chips (et al) and take over regular humans, likely before anyone has sex on Mars at all. What will the military do? Who will stop them?"

This is already happening. In Germany alone, hundreds of people have already bought personal computers they use as an extension to their brains. Currently the interface is a typewriter-like keyboard and a TV-like screen. This can be rather efficient if you bother learning it.

It's also exactly why the German constitutional court has derived a right to the "confidentiality and integrity of information technology systems".

And it's also a thought we are all accustomed to from science fiction. Just look at the 1980s sci-fi series "The Tripods", where "Beanpole" has a special device that lets him see more clearly.

3
0
Silver badge

You mean his glasses?

> "And it's also a thought we are all accustomed to from science fiction. Just look at the 1980s sci-fi series "The Tripods", where "Beanpole" has a special device that lets him see more clearly."

How else would one describe glasses when the concept, implementation and technology behind them had been forgotten?

Good TV series, shame it was never finished though. Based on the books by John Christopher.

1
0
TRT
Silver badge

I had sex on Mars once...

I wouldn't recommend it. The chocolate stains were a devil to get out of the duvet cover.

5
0
Silver badge
Trollface

This is already happening. In Germany alone, hundreds of people have already bought personal computers they use as an extension to their brains.

Oh, I'm way past that stage: I effectively consider Wikipedia the swap file of my brain.

4
0

I stuffed a bunch of chips into my body just last night, and while they were tasty, they don't appear to have given me superpowers.

More superpowers, I mean. Maybe the effect just isn't significant at my already awesome level.

1
0
Terminator

You'll be perfectly safe

We've got 30 cops in this building...

We all know how that ended

4
0

Re: You'll be perfectly safe

Actually, I don't, as I spent the evening in this cool bar/disco called "Tech Noir" and couldn't leave all night, as there was some kind of mugging or something...

1
0
Silver badge
Mushroom

Finally some sense

Really, I feel the extreme need to go full ISIS on the people (and frankly with no holds barred, fire-cage dancing, head drilling and all that, and no merit points for being differently abled) who push yet another manufactured problem onto the hoi polloi, mental-cancer-like, that they suddenly feel is AMAZINGLY IMPORTANT AND WORLD-THREATENING but is actually about as important as Kim Kardashian's worsening cellulite, while the ACTUAL problems are blowing up in our faces like flying RBMK cores, but are appropriately passed over until the people in power have made their buck and a safe getaway.

When you understand the burning rage at the disgusting stupidity of "western" event manufacturing ... you will shit bricks and know that you are old.

7
4

Re: Finally some sense

I'm trying to imagine an RBMK core flying in my face and - geez! - that IS scary!! What'll likely kill me first - the neutrons or the tons of glowing graphite and molten uranium?

You should be given an "analogy of the month" award!

1
0
Silver badge
Mushroom

Nothing to worry about...unless you're a brown person and live in a country with a lot of oil under the ground.

I chose the freedom and liberty icon.

4
6
Anonymous Coward

Robots will be alcohol fueled, so they won't bother taking over brown countries.

0
0
Silver badge
Trollface

Well

Alcohol might leak in the heat of battle; how about fuelling them with fat? What could possibly go wrong?

0
0
Anonymous Coward

I have been voted down before, because I pointed out that this hysteria over AI taking over is not just overblown but over-hyped.

I now suggest that it is absurd in the extreme.

I was never convinced that Hawking was a logical thinker outside his specialist field.

Please think independently and sheeplefree before down voting.

3
9

I am neither up- nor down-voting you, but you deserve to be down-voted simply for your use of the overwrought, overdone and overused term 'sheeple'.

12
2
Anonymous Coward

No, he used the more complex and nuanced term "sheeplefree".

4
1
Bronze badge

Hawking is barely logical even in his own field.

0
1

But I have happily down voted you for being an A.C. hairsplitter.

0
0
(Written by Reg staff)

"I have been voted down before, because I pointed out that this hysteria over AI taking over is not just overblown but over-hyped."

Ignore the downvotes; lots of readers know this, particularly those working in robotics and machine learning. The hype didn't actually start with the technology people, for once. Look at "thinkfluencers", "opinion formers", policy wonks, PRs and journalists instead.

And see the comments here; they are very good:

https://forums.theregister.co.uk/forum/1/2017/01/02/ai_was_the_fake_news_of_2016/

0
0
Silver badge

Intelligence is not sentience. Fine. But what is intelligence?

I appreciate the 'learning' that occurs in systems like the voice recognition software discussed, but does that learning amount to actual 'intelligence'? Surely it just results in rules and probability distributions being inferred from data and entered into the system automatically, rather than having to be manually programmed in, no? (A toy sketch of what that looks like is below.)

I would think that intelligence consists of being able to make connections between things - even when they are unfamiliar.
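
To make that concrete, here is a minimal sketch - pure illustration, not how any real recogniser works - in which the 'rules' are a conditional probability distribution fitted to example text rather than written by hand:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Estimate P(next_word | word) from raw text: no hand-written rules,
    just counts normalised into probability distributions."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return {
        w1: {w2: n / sum(followers.values()) for w2, n in followers.items()}
        for w1, followers in counts.items()
    }

model = train_bigram_model("the cat sat on the mat the cat ran")
print(model["the"])  # {'cat': 0.666..., 'mat': 0.333...}
```

Whether those automatically fitted distributions deserve the word 'intelligence' is, of course, exactly the question.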

0
0
Anonymous Coward

Ng's point is correct: don't coddle the Luddite fear- and hype-mongers; kick the can down the road to when strong AI/sentient/sapient artificial life is actually close to becoming a reality, and then we can talk "friendly AI".

It may be difficult to "know it when you see it" with strong AI, because the Turing test is bunk, but strong AI will be more capable than machine learning and will probably be based on mimicking neuroscience in silicon anyway. Also consider hooking up grown neurons to computers.

2
1
Silver badge

@AC

Being the Devil's Advocate for a moment, one could reply that once "strong AI/sentient/sapient artificial life is actually close to becoming a reality" then it may be too late to start the discussion, as there will be a lot of vested interests and, given the kind of money this will cost and the groups investigating it, those interests will likely be rather powerful and influential, with ludicrously well-funded lobbying arms.

These groups will hold up the discussion while progress continues, unabated by the kind of caution that is being advocated - much as, for example, leaded petrol continued to be produced and used while the powerful GM, Standard Oil and DuPont spent vast sums, and their influence, buying everyone from politicians to university deans who might help them discredit research into the negative effects of lead.

My point is not that anything bad will happen if we continue our research into advancing 'machine learning' towards real artificial intelligence (whatever that means) and on towards sentience (again, whatever that means). My point, in my self-appointed role as Devil's Advocate, is that discussing the potential issues now is not necessarily too early.

That discussion, if it is to be had, must be reasonable and proportionate and there are clearly those who are far from either. That doesn't mean, however, that the discussion is not worthwhile, just that the public face of it has veered into sweeping statements of a generally extreme and often alarmist nature. Which is the way that media reporting of technology tends to go*, but that says more about them than the importance of the discussion itself. There are always people at the ready to claim - or extend - their 15 seconds and willing to say whatever makes for 'good' television/headlines in order to do so.

I think the starting point to any reasonable discussion must be an agreed-upon definition of the terms - especially 'intelligence'. Without that, it's impossible to debate the threat that it may or may not pose. Some definitions of 'AI' are almost self-evidently dangerous but others - such as those used by marketing departments - are benign to the point of banality.

For me, intelligence is the ability to reason both from specifics to generalities and vice versa, and not just within one field but across all fields - to be able to see some examples and, from those, identify the important commonalities and create some general principles. Then, from those general principles, not only to figure out further, unknown instances but even to imagine non-existent possibilities and applications; we can imagine things that might exist, that don't exist, or that even could not exist.

But, again, all that discussion is not invalidated simply because some people wish to get their names in the media.

* - That the LHC has - so far - stubbornly failed to destroy the universe, despite the squawks of the kind of people our collective media rejoice in interviewing, is much to my disappointment.

1
0
Silver badge
Devil

"once "strong AI/sentient/sapient artificial life is actually close to becoming a reality" then it may be too late to start the discussion"

Ah, yes - why not use the blunt end instead and just outright ban any research regarding AI, then call it a day. At least that way we can be sure it will only be done in the form of black ops, and that it will emerge fully Skynet-ready when it eventually does.</sarcasm>

See, part of the problem in my opinion is that we're so ludicrously nowhere near the ballpark with AI that we have no idea about the nature of the thing we would be talking about - if you have anything more serious than Asimov's laws in mind, you'll find it's difficult to come to any meaningful conclusions when we have no meaningful definition of the object of the discussion. It's like trying to establish traffic regulations when you haven't even invented the wheel...

1
1
Silver badge

Agreed. Hence:

"I think the starting point to any reasonable discussion must be an agreed-upon definition of the terms - especially 'intelligence'. Without that, it's impossible to debate the threat that it may or may not pose."

0
0

if we continue our research into advancing 'machine learning' towards real artificial intelligence (whatever that means) and on towards sentience (again, whatever that means)

Very little active ML research, and not much AI research, is aimed at "advancing ... towards sentience" (or what you're calling "real artificial intelligence", which I suppose we can gloss as what some people call "Strong AI" or "human-like AI"). At improving ML so it can take on more tasks normally delegated to humans, sure; but making a human-like machine intelligence has largely fallen out of favor in the research community. Where it persists, it seems to mostly be attempts to better understand human cognition by creating ever-more-complex machine analogs.

And "sentience" is probably not a useful term here. Etymologically and traditionally it simply means "feeling" or "capable of perceiving sensation", and as such applies to a vast range of entities, including arguably any cybernetic system - so we're surrounded by sentient machines already. More narrowly and recently it's been used to mean "having a sense of self", which is trickier, because our models of self (philosophical, psychological, and neurological) are conflicting and unsatisfactory, but again that very likely applies to lots of more-complex organisms and arguably to some machines as well, which contain logical state information that represents the functioning of their material incarnations.

In some quarters, for a century or so, "sentient" has been used to mean something like "a human-like sense of self and capacity for cognition", but that's a strained usage at best, and seems to come largely from SF writers trying to sound impressive.

A better term is probably "sapience", which etymologically means "wisdom" and is used as a term of art in philosophy to distinguish thinking beings - Dasein in Heidegger's sense - from all other entities. Even with sapience, though, it's really not clear what we mean when applying it to machines - and it's particularly unclear what attributes of it people like Musk are concerned with. Are they worried about machines that can desire? That can imagine? That can emote?

I think the starting point to any reasonable discussion must be an agreed-upon definition of the terms - especially 'intelligence'.

Good luck with that. European-derived philosophy - which is a relatively homogeneous school of thought, compared to the entire range of human intellectual endeavor - hasn't come to any consensus there. And computer science doesn't show any signs of doing better. Since people are still arguing over the Turing Test (and generally completely failing to understand it in the first place), I wouldn't look to the tech disciplines to agree on the matter either.

1
0
Anonymous Coward

the human brain is a machine

The terminology doesn't matter much. Sentience, sapience, and strong AI all name the same goal in the context of artificial intelligence research. Sapience is thought to be a better term than sentience, but it's also anthropocentric. Just as intelligent computers are going to have an uncertain bucket of emotions, or none at all, the same goes for aliens or any animal that is far removed from the Homo sapiens evolutionary path but somehow made it to "sentience". Even now we may be underestimating the mental capabilities of animals, simply because animal language is hard to recognize and remains an ongoing controversy.

Even with sapience, though, it's really not clear what we mean when applying it to machines - and it's particularly unclear what attributes of it people like Musk are concerned with. Are they worried about machines that can desire? That can imagine? That can emote?

I think it's better to look at it the other way around. The human brain is a machine, with strong influences from the other machines that comprise the human body. It uses electrical and chemical signaling to operate. So what does the human machine have that a digital computer system doesn't? How can it inspire the development of computers that work like human brains? The easiest answer is that it should be massively parallel (see SyNAPSE). Another answer is whole-brain emulation/simulation, which requires less architectural innovation and more understanding of the basic rules of neuroscience and biology. This approach would need more resources but could make your creation just as emotional as any human by default, allow you to feed it virtual cocaine to see what happens, etc.
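
For a toy sense of what such emulation means at the very smallest scale, here is a minimal leaky integrate-and-fire neuron - made-up parameters, vastly simpler than real biology; whole-brain emulation proposals amount to wiring up billions of much richer units like this:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks toward
    rest and integrates input; on crossing the threshold it 'spikes' and resets."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spike_times.append(step)
            v = v_reset
    return spike_times

# Constant drive above threshold yields a regular spike train.
print(simulate_lif([1.5] * 100))
```

Scaling a loop like that up to brain size is as much a hardware problem as a neuroscience one - hence the massively parallel approach.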

0
0
Anonymous Coward

Killer AI vs. Religious Wars...

Personally, I worry more about ISIS. Not so much the group as the philosophy behind the faction, and how easily it sucks in other radicalised groups like Boko Haram. What if this type of movement got its hands on drones, nuclear materials and cutting-edge AI? Could they create autonomous killer drones? Just a thought... Whatever the real danger is, we're unlikely to see it coming, and autonomous fighting machines are as likely as anything else...

0
0
Anonymous Coward

Re: Killer AI vs. Religious Wars...

The president has to worry more about drunk Secret Service agents crashing cars and federal employees landing drones on the White House lawn than about ISIS autonomous killer drones.

Non-state actors will be doing some interesting things in the future, though.

1
0
Anonymous Coward

I'm with Ng on this... But...

Hawking, Gates and Musk are seriously bright guys, so there must be something behind their fears... That said, the current state of AI is a joke compared to what we thought was possible in the 80s. So are Hawking, Gates and Musk really just lobbying for more funds for the military-industrial spending complex, i.e. more dollars for weapons-tech to battle theoretical evil AI?

1
0
Anonymous Coward

Re: I'm with Ng on this... But...

There is a movement within AI research and transhumanism that wants to ensure "friendly AI". They are the source of the fear, decades ahead of its time, and they believe friendly AI research is important because they believe the Kurzweil prediction that strong AI will work, soon, and be very transformative in a short period of time. Therefore, strong AI is an existential threat and friendly AI must be researched to manage it. The solution could be far more complicated than Asimov's hypothetical Three Laws of Robotics. It might involve programming emotion or endocrine-like systems into AIs so that they can relate to us and our morality. It might involve the exact opposite: keeping them emotionless, without goals or purpose, and thus dependent on human direction. Or it could involve enhancing our own brains with computers so we can stand on the intellectual level of advanced AI. Or the strategy might be to create a friendly AI that will fight off the inevitable unfriendly AIs - an arms race. Finally, research might indicate that no "AI" of sufficient "intelligence" can be controlled by some kind of friendly chip/mechanism, which is already the belief of some researchers. Good luck controlling AI (or even synthetic biology) the way you control nukes/WMD.

You can view them as scam artists, but it's probably a legit area of research and worthy of some study. It's not funds for the military-industrial complex, but you might see it as a waste. Now that transhumanist groups like H+ and Singularity University have big clout in Silicon Valley, they have the ear of people like Gates and Musk, who are able to spread the message to mainstream publications (I believe Hawking just likes to ponder humanity's future, hence his warning that "active SETI" is a bad idea). Public reaction to this is mixed. Transhumanist opinion is mixed too, because some see it as fearmongering and Luddism, while others are a bit more radical and like the idea of rule by "artilects". Ng's position is very sensible. We can kick the can down the road, since the road is a long one. Then we can talk about "friendly AI". Now you know what's really going on.

3
0
Silver badge

Re: I'm with Ng on this... But... @ Anonymous Coward

That post certainly deserved the up vote, AC. And as for what's really going on, who and/or what do you think is leading and investing in the West or the East showing the AI Way to the Future with Futures and Derivatives Hedging Risks and Capitalising Opportunities and Safe Harbouring Vulnerabilities with Enlightened and Enlightening Shiny Paths?

CyberIntelAIgent Security and Virtual Protection System of Operations for the glorious exploration and exhaustive engaging exploitation of ....

The virtual nature of reality and the role of media in ITs creation and leading of realities, universally connecting webs for/with canny dependent and co-interindependent spheres of influence supplying private proprietary intellectual property dominance fields with/for XSSXXXX Source Secrets.

A natural spin-off of which can and will be, in the fervent and feverish desire to maintain and retain an absolute type remote global fiat command and control, a virtually unbeatable and practically indefensible armaments industry with novel weapons of NEUKlearer HyperRadioProActive IT for the sublime capture and surreal moulding of mass hearts and minds, nation states and wannabe leading non-state actors with AI Secured International Network of Virtual Business Machines .....doing Great Deeds and Battle with Spooky Spyware/Rabid Ransomware/Venal Vapourware/Alien AIDware ..... Advanced IntelAIgent Danegeldware.

2
0
Anonymous Coward

Re: I'm with Ng on this... But... @ Anonymous Coward

@amanfromMars 1: That's something.

Certainly the cyberdomain is ripe for cashsploitation, and DARPA & Co. are ready to droneliberate all life. The battle to define sensible civilian/corporate cybersecurity has been lost. The hacks of 2014 (and media coverage) killed it.

The true test of military robotics will come when bots allow us to wage unlimited ground war at no cost in American lives (those not on the kill list), but at great cost to the American taxpayer, enriching the complex. Ground drones, exosuits. No strong AI required.

Silicon Valley is a bit removed from the complex, but the money is there. For example, SpaceX will be launching classified payloads for the Air Force soon. The transhumanists and futurists eagerly lap up the latest DARPA neuroscience and the Luke arm.

0
0
Silver badge

Re: I'm with Ng on this... But... @ Anonymous Coward

Spread the news, AC, which is quite definitely, highly disruptive and more revolutionary than simply evolutionary, for it requires a quantum leap in Applied Intelligence to lead and be effective with IT. And such is what protects its disciplines and conspiring practitioners......... Elite Inclusive Executives and EMPowering Entrepreneurships ........... Ring the Daily Bell

And what says the Register on the Future and how IT phish and phorms it in the likeness of the wishes and desires of Global Operating Devices?!.

0
0

Nothing a little Butlerian Jihad can't cure. But, but, but, what about the year 10000 problem?

1
0
Silver badge

Butlerian Jihad & the year 10000

Quality sci-fi references aside, resorting to fuelling war with religion is effective, as humanity's history well shows, but it will result only in an ensuing dark age out of which mankind will have to spend another thousand years climbing.

It might however prove to be necessary. AI could too easily be controlled by the few to oppress the many.

Since Ng actually works on AI projects, it's like asking a tobacco exec his opinion on the dangers of smoking, or an arms merchant his opinion on the dangers of arms proliferation.

0
0

Looking beyond the end of Ng's nose

Will the current project Ng is working on suddenly become intelligent? Of course not.

Will another 60 years of AI research produce a truly intelligent machine? Almost certainly.

After millions of years of evolution, we are within a generation or three of the end of not only humanity but, I suspect, biology.

And software does not have to be superintelligent to become very powerful at controlling people.

http://www.amazon.com/When-Computers-Can-Think-Intelligence/dp/1502384183/ref=sr_1_4

4
2
Anonymous Coward

Re: Looking beyond the end of Ng's nose

We already have unintelligent people who are powerful at controlling people. Hopefully, when the machines rise they will take out their creators first, but every decent sci-fi story I've read seems to diss that theory.

Perhaps best not to be TOO complacent.

0
0

Re: Looking beyond the end of Ng's nose

I don't think the first killer AIs will use guns. I think they will be used by bankers to funnel vast amounts of money into their own pockets.

0
0

Re: Looking beyond the end of Ng's nose

Will another 60 years of AI research produce a truly intelligent machine? Almost certainly.

For exceedingly small values of "almost", perhaps.

1
0
Silver badge

The problem is already here

a) We do have killer robots: they are called drones. And no, they are not like remote-controlled model planes with guns; since they are controlled via satellite, with a minimum latency of about a quarter of a second, they need to aim and shoot by themselves (a rough check of that figure is sketched below). Currently just the target is selected by a human.

b) We already have "artificial intelligence" in the form of large organisations. They act like beings, often even against the interests of the human beings they are made up of. If you look at the example of banks, you can see how devastating they can be.
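
For what it's worth, the quarter-second figure in (a) is easy to sanity-check as a speed-of-light lower bound for a geostationary relay - assuming, for simplicity, the satellite sits directly overhead and there is zero processing delay:

```python
C = 299_792_458            # speed of light, m/s
GEO_ALTITUDE = 35_786_000  # geostationary orbit altitude, m

one_way = 2 * GEO_ALTITUDE / C  # ground station -> satellite -> drone
print(f"one-way command latency: {one_way:.2f} s")    # ~0.24 s
print(f"full control loop:       {2 * one_way:.2f} s")  # ~0.48 s
```

The full see-then-shoot loop is double the one-way figure, which is exactly why the aiming has to happen on board.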

6
1
Anonymous Coward

Arthur C Clarke

Remember Arthur C Clarke's 1st law:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

OK, he is not elderly (at least compared to me).

5
0
Terminator

ED-209

Ol' Ed has become the poster boy for the sci-fi utility killer robot.

I always liked the Mark 13 from HARDWARE, too.

2
0
Silver badge

Re: ED-209

Quite rightly so.

I believe an ED-209 is more likely than anything else due to individual ambition, corporate greed and stupidity. Very artificial, hardly intelligent at all, and armed.

1
0
Silver badge

Re: ED-209

Gotta love the ED-209 - nasty bit of work he is, but he has the same problem as Daleks with stairs!

0
0
Silver badge

He's part of the conspiracy!

If you were actively planning to create a race of malevolent machines which would threaten humanity, then research like this bloke's is part of the environment that must be in place for the capability to develop.

Just as important, during the decades it takes to get the technology to the right level, you need people to reassure the general populace that there's no need to panic, in case they bring development to a halt.

He is trying to kill us all! It's a plot! Etc.

0
0
Anonymous Coward

Re: He's part of the conspiracy!

My thoughts entirely: he's the AI-killer-robot version of what "daywalkers" are to vampires. He should be careful, though; that strategy didn't work out so well for the human Baltar in the original Battlestar Galactica, lol

0
0

Re: He's part of the conspiracy!

Don't mistake conspiracy for incompetence.

Ng just wants his next research grant, and does not want to deal with ethical issues, or people asking questions about AI research in general. Ng will probably be retired before truly intelligent machines are built.

Also, Ng seems to be doing big data statistical research, which is a long, long way from the core AI research which will eventually produce the AI.

0
1
Silver badge
Joke

Simple Safety Device

Proposed new British, CE, DIN, ANSI standard

"All potentially lethal AI Robotic equipment shall be fitted with a mains lead no longer than 10 metres."

Job Done; Next problem

4
0
