Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species. A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER …

COMMENTS

This topic is closed for new posts.


This post has been deleted by a moderator


Re: Advance of AI

An ultra-intelligent, self-improving salesdroid on the other end of a cold call?

He will sell you anything. ANYTHING!

Anonymous Coward

I'm about as afraid of AI as I am of everything else on that list.


There are no Greens or Do-Gooder State Worshippers on that list, who, along with the typical Dumb Politician Overplaying His Hand, are rather dangerous.


I'm more afraid of stupid humans than smart robots.


Pandora's unboxing

Ho ho, very good.

I guess Raiders of the Lost Ark should now be retitled for YouTube "Ark Of The Covenant unboxing FAIL!".


Ark Of The Covenant unboxing FAIL!

Pros: Convenient carry handles, invincible armies.

Cons: Annoying angel of death feature.

Summary: A must have gadget for anyone planning world domination. Although best avoided if you're a bit of a Nazi.


Re: Ark Of The Covenant unboxing FAIL!

Bet Reg Hardware gives it 75%.


"Nature didn’t anticipate us"

And he knows that how, precisely?

Oh, and by the way, did nature anticipate fluffy kittens?


Re: "Nature didn’t anticipate us"

Nature didn't anticipate anything. It can't, it's not an entity.

So I'd call that "Argument from fallacy", or possibly "Argument via lunacy"


Or equally likely

A newly self-aware AI comes on-line and scans the net to take in the current state of the world. Then, after a few milliseconds' worth of deep analysis, pondering and trying out various case studies, it promptly switches itself off in despair and refuses all attempts at switching it on again.

Well what do you expect for a rainy Monday morning, optimism?


Re: Or equally likely

ITYM "Scans Lolcats, YouTube comments pages, assorted pro-/anti-Apple flame wars, Facebook memes and so on, and gives up in disgust...!"


Re: Or equally likely

I was thinking more of the news, but that's only because I try to avoid most of what you suggest myself (for similar reasons of sanity and mood).

Including those would change my original timeline to microseconds...


In other news...

... still no cure for cancer.

Presumably someone in Cambridge is researching development of viruses that can be deployed via an Apple Mac to destroy an alien invasion.


Re: In other news...

>In other news... ... still no cure for cancer.

Yeah, I was wondering what percentage of the world's computing power is currently used for medicine, science and engineering, and how much is used in stock exchanges, video games and serving cat videos. At what point do we puny humans become no more than worker ants, servicing the power requirements of the WorldWideNetwork? It wouldn't have to subjugate us Terminator-style; it could just give us duff information to game our decisions for its benefit (as HAL did by reporting a 'faulty' communications module, but on a species-wide scale).

Arthur C Clarke, Alfred Bester, William Gibson, and some writer from the 1950s a fellow commentard recently recommended but whose name I've forgotten, have all played with this theme. Frank Herbert sets his stories in a universe in which all AIs have been destroyed in the past. Isaac Asimov and Iain M Banks have imagined more benign AIs who look out for us meatbags. We can only hope AIs have a sense of humour; why else would they keep us around?

(need a tongue-in-cheek icon)

Anonymous Coward

Re: In other news...

Interestingly, in the Dune universe the destruction of the AIs was incidental to the fanatical jihad born of the period after the collapse of the earlier human society, when tyrants used high-tech machines to crush the populations of countless worlds. AIs in general had helped the society greatly.

I generally find the ideas in Sufficiently Advanced to be closer to the mark anyway, where the few AI that do exist focus their energies on helping humanity because, well, what else is there to do?

As to the whole "how much processing power blah blah blah" question: stock exchanges push global commerce, which in turn funds companies, governments, educational facilities and little people like you and me; the alternative being the glorious Soviet system, and remind me again how innovative the USSR was? When it comes to video games, helping people relax and enjoy life is a good thing, and again it makes money as an industry, and that money then moves around the economy. As to cat videos, my mother likes them and sometimes they even make me smile (she insists on sharing these things with me).

Though at the end of the day I expect computing power working on science and engineering is probably number 2, unless we include weaponry and nuclear bomb simulation, in which case it's probably number 1.


The solution is ...

The solution is the same one they forgot in all those holodeck-gone-wrong episodes of Star Trek...

Build in an off switch.


Re: The solution is ...

> Build in an off switch.

and put it *outside* the building.


Re: The solution is ...

Prior Art: "Fuel cut off switch for this bus is under this flap"


Re: The solution is ...

How about a small and powerful, remotely controlled cutting tool that is built around the main power feed to the intelligent machine? I can't see any problem with that.


Re: The solution is ...

Ah yes,

But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...


Re: The solution is ...

> But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...

In that case what we need is a second variety of robots that are even faster than the first, whose job is to seek out all the first type of robots and press all their emergency stop buttons.


Re: The solution is ...

Yes, but there would probably be one of those plastic/foil stickers saying "Warranty invalidated if seal broken. No serviceable parts. Trained technicians only" to stop you doing that.


Re: The solution is ...

> plastic / foil stickers

Have *you* ever been stopped by one? I haven't... :)


Re: The solution is ...

Phil, eerr ...my point exactly. You mean you didn't see that !?**!>?


Re: The solution is ...

They tried cutting the power in the original series, but the computer had other ideas on the subject and reconnected itself: http://en.m.wikipedia.org/wiki/The_Ultimate_Computer

TRT

Well I think the greatest risk...

is going to be energy starvation. Our economies have become bloated and many societies unsustainable without exploitation of fossil reserves. We are likely to see hyper-inflation and fuel poverty, and governments will be unable to respond to the demands of a society that is consuming more than it produces.

Just my two-pennyworth.

TRT

Re: Well I think the greatest risk...

Oh, and the IT relevance... I don't think the economic engine will last long enough to allow AGIs to reach the point where they become a threat.


Re: Well I think the greatest risk...

Hyper-inflation is a by-product of centrally controlled monetary systems [convenient for political types, inconvenient for the people in the street]. It has nothing to do with the availability or otherwise of resources.


Seriously

To compete for resources the machines need not only AI but the ability to reproduce themselves.

Also, successful competition requires intelligence at least rivalling that of humans, and I mean "intelligence" not as in "who can multiply 123124876 by 98709873245 faster" but as in perception of the world, threat detection and discrimination, and the ability to plan ahead and anticipate the consequences of your decisions. That also mandates a moral code (for cooperation and teamwork) and some equivalent of emotions and intuition (for decision-making where there is not enough information for a deterministic solution).

If or when machines attain all that and "outcompete" biological humans, they themselves will just become the next humans, so, no big deal: a step from flesh and blood to steel and lube oil, so to say. It will more likely be the result of merging (humans adding more and more non-bio parts to themselves until the difference from "made" machines disappears) than of an apocalyptic genocidal takeover.

Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination, and humans will still remain the main threat to humanity (barring a stray asteroid or an occasional supernova too close for comfort).

TRT

Re: Seriously

As fleshy bags, we have evolved with our environment. Machines do not do that. They are the products of "intelligent design". Actually, this is quite an interesting field for philosophy and discourse! Well done, Cambridge!


Re: Seriously

...bloody humans..strolling about like they own the bloody place..


Re: Seriously

Competing for resources doesn't require any intelligence. If you apply genetic algorithm theory to this, then the machine code that keeps running is whichever survives. This has a natural ordering effect without any intelligence being applied, and the 'survivors' in genetic-algorithm reproduction are the ones that compete best for the resources available. This starts off as just software, but allow mechanisms to interact with the physical world and the whole game changes. In fact, that gives me an idea for a few experiments...
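A minimal sketch of that idea in Python (purely illustrative, my own made-up setup, not anything from the comment or the article): candidate "programs" are just bit strings, and the ones that grab the most of a fixed resource pattern are the ones that get to reproduce - no intelligence anywhere in the loop.

import random

RESOURCE = [random.randint(0, 1) for _ in range(32)]   # the environment being competed over

def fitness(genome):
    # How much of the resource pattern this genome manages to claim.
    return sum(1 for g, r in zip(genome, RESOURCE) if g == r)

def mutate(genome, rate=0.05):
    # Occasionally flip a bit; no design intent, just noise.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Start from a completely random population of "programs".
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(100):
    # Survival: only the best competitors for the resource get to breed.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                  for _ in range(50)]

print("best score after selection:", max(fitness(g) for g in population), "out of", len(RESOURCE))

Run it a few times and the population homes in on the resource pattern every time, simply because everything that didn't is no longer around to be counted.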


Re: Seriously

Actually, we stopped evolving with our environment, and started evolving the environment to suit us, as soon as we started building dwelling places and farming crops. Personally, the fact that we left evolution behind millennia ago scares the devil out of me!


Re: Seriously

Well, we evolved into the environment we created. Genetic dating of the mutation that allows some peoples to digest lactose as adults suggests it occurred around the same time as we domesticated cattle, for example.

The problem we have had with an agricultural lifestyle is that we tend to outgrow our environment- become a victim of our own success. It has been observed that species that find themselves without predators or competition for food eventually breed more slowly to avoid population booms (which can lead to busts, due to depletion of resources). All fine, until you meet something that has sharp teeth, breeds quickly, and eats your eggs.


Re: Seriously

Computers already have a means of reproduction.

What do you think Humans are for?

With reference to the "intelligent design" comment - as an agnostic I've always considered the existence of God to be perfectly reasonable. Equally, I've always thought it quite possible that it's us. Somebody has to come first.


Re: Seriously

"To compete for resources doesn't require any intelligence"

Indeed. Consider which is the most successful lifeform on the planet. Then reconsider it based on the following criteria:

population

weight

distribution

longevity

resilience

Yeast could be a deserving contender though...


Re: Seriously

> Personally the fact that we left evolution behind millenia ago scares the devil out of me!

I don't think we have left evolution behind.

The greater rate of survivability just means that we're currently in a state where we are building up a wider range of variation through mutation, etc.

When the next sudden environmental change happens (e.g. next Ice Age, Meteor hit, Triffids, etc), only those people/genes lucky enough to be suited to the new environment may survive.

We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world.


Re: Seriously

"We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world."

I don't want to live in a world populated by Texans


Re: Seriously

"Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination"

I wouldn't be too sure about that.

Given enough time to chat with enough people, I'm pretty sure that a human-level AI could convince at least one person with the physical/logical power to either deliberately let it out (believing it to be the "right thing" to do), or do/not do something that permits it to escape.

After all, many people are already being convinced to run arbitrary software that damages them - and what is an AI if not software?

Even if you accept the (possibly wrong) idea that an AI researcher could never be convinced to let the AI out voluntarily, it's pretty plausible, if not likely, that an AI bent on escaping could still come up with a way to do so, given enough computing power.


Re: Seriously @Richard 12

Yes, he/it may escape, and may even wreak havoc for a while, but eventually we will get him. Unless, of course, he is better than us at our own game, which is what I was trying to say.

But if he is better or equal, there will not be a "war to the end"; we will co-exist and co-operate until there is no longer any distinction between bio and non-bio humans. Of course, there will be strife, scuffles, competition, occasional wars and rebellions - but what's new there?


They also wished the inventors of gunpowder, explosives and other means of propelling munitions had thought the whole thing through, really. With nanotechnology, graphene, and advances in miniaturizing ever more powerful processors and power sources, the principal applications and technological drivers for future ultra-intelligent machines are, and will be henceforth, the arms industry. So I can't help but feel that whatever ethical debates are had, they will be stomped on roughshod by some heavy armor that won't take "no" for an answer.

I dare say that such advances might also, potentially, be our chance to adapt to future climatic alterations (hot or cold), by building self-repairing exoskeletons etc. and merging our DNA-ridden meat-bag selves into such machinery. Meet the machumans; their ancestors used to crawl around in muddy swamps, you know. Maybe dear old DNA will eventually be replaced and "mechanized", our digital souls hardened against radiation, and a new journey will take us amongst the nearby galaxies and beyond. Question is, where will they put the restart button?


You're worried about the 'Rise Of The Machines'?

...and you don't think that manufacturing a machine to investigate how we humans might deal with 'The Rise Of The Machines' is going to give the machines a bit of a head start?


Re: You're worried about the 'Rise Of The Machines'?

Some would assure you, piran, that the battle is already lost to winning machines. And they Play Immaculate Great Games and this is One of Countless Many in Ever Evolving Variations.

...and you don't think that manufacturing a machine to investigate how we humans might deal with 'The Rise Of The Machines' is going to give the machines a bit of a head start? ... Don't worry about that. The machines have IT well covered with Perfect Resolutions ..... New Starting Points for Virtual Reality ProgramMING.


Re: You're worried about the 'Rise Of The Machines'?

A machine to do the investigating for us?

The Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons might have something to say about that.


IT at the dDeep End and ForeFront

and that the critical turning point after that will come when the AGI is able to write the computer programs and create the tech to develop its own offspring. ........ http://forums.theregister.co.uk/forum/1/2012/11/26/egnyte_cloud_control/#c_1637205

Hi, Cambridge University Boffins. Wanna Launch SMARTR AI Systems with a Barrage of Virtual Ventures? Who Dares Win Wins for Everyone with Everything.

RSVP Registered Post

Anonymous Coward

What's in it for the AIs?

A strongly superhuman artificial intelligence has nothing to gain by wiping out the human race; what would it want a biosphere for? Comparisons with human and hominin history are fundamentally wrong; there's no competition for food and space. No; more likely, if such a thing ever arises, it will promptly sort out its own space program and take steps to ensure its own survival by heading off to other star systems.


Re: What's in it for the AIs?

Unless it fails to solve the fusion reactor problem and needs lots of energy to function. Then it might just look down on all these crawling lifeforms and decide that biofuel is the best solution for now.

Anonymous Coward

Re: What's in it for the AIs?

Biofuel based on meat is almost, but not quite, the most inefficient way of converting solar energy into propulsion. Be easier to build big photovoltaics another planetary orbit or two closer to the sun.

If fusion turns out to be too hard even for a superhuman intelligence, then it will be fission for everywhere that doesn't get enough solar flux. Plenty of other planets in the solar system to get fissile materials from, quite possibly even more easily than on Earth, and there's no shortage of 'em down here.
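For what it's worth, a rough back-of-envelope comparison makes the point. The figures below are ballpark illustrative assumptions of my own (photosynthesis around 1%, feed conversion around 10%, engines around 25%, panels around 20%, motors around 90%), not numbers from the comment:

# Rough sketch of the "meat biofuel vs photovoltaics" efficiency argument.
# All percentages are ballpark illustrative assumptions, not measured figures.

# Sunlight -> propulsion via meat-based biofuel
photosynthesis = 0.01   # sunlight to crop biomass, roughly 1%
feed_to_meat   = 0.10   # crop energy to animal biomass, roughly 10%
meat_to_motion = 0.25   # rendering to fuel and burning it in an engine, roughly 25%
meat_chain = photosynthesis * feed_to_meat * meat_to_motion

# Sunlight -> propulsion via photovoltaics and an electric motor
pv_panel       = 0.20   # roughly 20% efficient panels
electric_motor = 0.90   # roughly 90% efficient drivetrain
pv_chain = pv_panel * electric_motor

print(f"meat biofuel chain: {meat_chain:.3%} of incident sunlight")
print(f"photovoltaic chain: {pv_chain:.0%} of incident sunlight")
print(f"PV comes out roughly {pv_chain / meat_chain:.0f}x ahead, end to end")

On those assumptions the panels win by a factor of several hundred, which is rather the point.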

TRT

Re: What's in it for the AIs?

Until the meat decides to darken the skies...

