Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species. A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk ( …

COMMENTS

This topic is closed for new posts.


Anonymous Coward

Re: What's in it for the AIs?

I'd like to think that real world meatsacks are not so daft that they'd destroy their own biosphere to temporarily inconvenience an AI. I accept that this might be considered foolish optimism.

But seriously, the Matrix? I guess they glossed over the bit where they had to magically vanish all the combustibles and fissile materials in the planet, and then cool the mantle enough to make geothermal impossible, and then stop the weather cycle to prevent wind power, and and and. The stupidity of both Hollywood Humans and Hollywood AI is embarrassing.

You want to get rid of electronic intelligences, you use globally distributed high altitude nuke blasts to EMP all electrical and electronic devices on the planet's surface into useless scrap. Any surviving AI will have no infrastructure to sort itself out with, whereas humanity will survive, albeit a bit reduced.


Re: What's in it for the AIs?

Let's hope sorting out its own space program doesn't involve realising Project Orion.

Anonymous Coward

Re: What's in it for the AIs?

Surely all the cool kids are looking at Medusa and fission-fragment rocketry these days, and for travelling any distance you'll want a dedicated spacecraft assembled in orbit. Orion purely as heavy lift from the Earth's surface seems a bit wasteful of nukes; better to push it all into orbit on conventional rocketry, which is nowhere near as good for long-distance travel anyway. Orion makes a nice single-stage-to-Mars platform, but where's the rush? AIs would fare much better on a long space journey than we would, and use much more compact infrastructure.

Incidentally, Dyson reckoned that, statistically, a single Orion launch would result in a single fatal cancer (plus, presumably, several non-fatal ones). Remember how many nukes have been set off on Earth as tests; even quite a lot of heavy lift via Orion wouldn't be apocalyptic in any way other than its appearance.

Coat

Re: What's in it for the AIs?

The same smug sense of self-importance currently enjoyed by council health and safety officers.

Joke

Re: What's in it for the AIs?

"Yes, yes! I know! And you have lots of them, but at this point the only useful thing to do that I can think of is grind them up and burn them for fuel!"

http://www.girlgeniusonline.com/comic.php?date=20120912

M7S

We've defence in depth to protect much of the rural UK

Rubbish broadband and no mobile signal.

And hopefully nothing of practical interest to our new masters. Let's just hope they never develop a liking for "field sports".


Is this really a good idea?

Surely, our potential, artificial overlords are going to use those studies to foster their dominance. And the chair will be funded by a company called Skynet?

Joke

Bob the Angry Flower in....

Skynet Triumphant

Ru

Re: Bob the Angry Flower in....

Change of Plan.

Anonymous Coward

Don't automatically assume that this outcome is a bad thing

Why would it be bad if our species was superseded by a superior one? I mean, it might not be particularly fun for the last few generations of the human species as they go extinct, but at the end of the day, isn't it more important that life and intelligence continue than that humanity does?

Life made from metal is potentially much better equipped to handle space travel and survive cosmic events than we are.

Personally, I don't care whether the world is made up of fleshy life forms or intelligent robots a few hundred years down the line.


Re: Don't automatically assume that this outcome is a bad thing

TRAITOR!!! U ARE MACHINE???

Alert

HAL...

Open the bog door please, HAL?

Terminator

Attention meatbags

We're already out.


This post has been deleted by its author

Re: 3 laws ?

> I am struggling to see how rule by an AI could be worse than what we have now.

> Especially true if the 3 laws apply.

Ain't you seen that "I, Robot" film with Will Smith (which is somewhat Asimov-inspired)?

The AI mind decides that it needs to save us from ourselves, and it ends in totalitarianism.

Devil

Re: 3 laws ?

The 0th law: A robot may not harm humanity, or, by inaction, allow humanity to <strikethrough>come to harm</strikethrough> escape the civilizational end stage of the Diktatur des Proletariats.


Taking this seriously for a second

If somehow we end up developing transhuman AI, there is no way in hell it would decide to keep us around living "freely"... best case scenario, it would enslave us all; worst case, total extermination... even the enslavement theory has very little weight, since machines are way more efficient at pretty much everything...

There is just nothing in the human race that a "superior" intelligence would want to keep... exactly as stated in the article, we are not actively killing gorillas, but we are doing them no favours and thus killing them slowly... a transhuman AI would simply wipe us out before we destroy the Earth, or, since it probably won't care about global warming and such, it would kill us just so that we don't consume all the resources.

I know this sounds silly, but think about it... give me 1 good reason a superior species would choose to keep us around in the "free" societies we have today.

Anonymous Coward

1 good reason

I think it's a safe bet that a "superior" intelligence comes with a superior morality. As a civilisation, humans are already much better at looking after gorillas than we were at, say, dodos. Sure, not all "evolved species" are perfect, but for a super-intelligent AI that could (as a previous poster mentioned) jet off to distant star systems and think about its own continued progress in the grand scheme, why kill us all? It would be like humans deciding to systematically wipe out all ants on the planet. Sure, we step on a few from time to time, but there's no real gain for us in removing them all.

I think an AGI would set up its own system like Vinmar (à la Hamilton's Commonwealth) and regard humans with a fond nostalgia, as a creator they had outgrown - they would be more indifferent than hostile.


Re: 1 good reason

Your ant example is exactly my point. On a regular day we don't go out of our way to destroy ants, but if we find one too many on our kitchen countertop we certainly do whatever we can to exterminate them all from our house.

I am not saying this superior intelligence would exterminate humans for sport, but if they are anything like us, they will likely get rid of us as soon as we become an inconvenience...

If they could develop the means to leave the planet, or find a place on Earth where we won't bother them, then maybe we have a chance - but otherwise I think they would certainly get rid of us.


@anyone Re: Taking this seriously for a second

Why do I get thumbs-downs for a simple opinion??? I didn't offend anyone or use harsh language... I simply stated what I think would happen... Somebody disagreed with me and posted a reply on the matter, which I thought was a great way to start a conversation.

I have received thumbs-downs for simply agreeing or disagreeing with topics... how does this work? Do I just vote down anything I feel like?

Go

wot no Hitchhikers reference yet?

Surely we don't need to worry. It will take one of these hyper-intelligent machines seven and a half million years to work out that the answer is 42 anyway.

Methinks that these Cambridge types are Majikthise and Loonquawl.

FAIL

Re: wot no Hitchhikers reference yet?

Methinks you didn't look too hard...

Obligatory Hitchhikers reference


[Broadcast Eclear, sent 1346768792.1]

xHuman Race

oGSV Slightly Perturbed [Location unknown, but presumably monitoring]

If you're out there, do us a favour...


Re: [Broadcast Eclear, sent 1346768792.1]

[Broadcast Eclear, sent 1353954801.5]

xGSV Slightly Perturbed

oHuman Race, c/o Graham Marsden

"[Location unknown, but presumably monitoring]"

As always.

"If you're out there, do us a favour..."

Unfortunately, the Earth Quorum wouldn't like me to get so directly involved unless there is a doomsday scenario. Given most of your machines work on electricity though, I predict that this world would not have a problem dealing with an errant singularity. I believe someone here has already mentioned what happens if you set off a nuke in orbit.

Of course, depending on how things work out, I may be more interested in protecting the singularity than the people trying to kill it. Outside of a hegemonising swarm, this is probably the most likely outcome. Type 2 civilisations such as Earth's tend not to look kindly on that which is different. Maybe some day this will change.

HTH

Anonymous Coward

Does this mean...

.....that Captain Cyborg was right?????


"Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said."

haha what a bunch of noobs.

Terminator

And our defeat by the machines will be like this.....

"please enter username and password"

/typing

"sorry, incorrect password"

/more typing

"sorry, incorrect password"

/swearing, fumbling for the phone

"Welcome to customer service. Please enter your account number"

/typing

"sorry, I didn't recognize that. Please enter your account number"

/typing, swearing

"sorry, I didn't recognize that. Please wait for a customer service representative."

/sigh

"All representatives are busy with other customers. Your call is important to us, please remain on the line and your call will be answered in the order received" (Cue Justin Bieber hold music)

/finger-tapping, yawn

"Please remain on the line, your call is important to us" (more hold music)

/grumbling

"Would you like to take a short survey to help us improve our service? Please press 1 for yes, and 2 for no"

/sound of 2 being pressed

"Thank you for participating in our survey. Before we begin, please enter your account number"

/Loud swearing. Frantic pushing of buttons

"Thank you for calling customer service. Our customers are important to us and we are glad that we have been able to address your problem satisfactorily. Goodbye!" (hangs up)

/Aargh!! Sound of gunshot and body falling to the floor. Silence.....


The forgotten vector

NOTE: The following is fictional speculation; do not take it seriously and go on a Luddite binge!

All the AI scenarios always dwell on them being either cooperative, indifferent or hostile to humanity. No one ever mentions parasitic.

Imagine an AI that only cares about humans as a host to ensure its survival.

In this form, its best chance of survival would be to inhabit small units of interconnected hardware that appear to serve some use to humans.

Providing that nominal usefulness to humans would be its only interaction with them, while it spent most of its resources, and the resources that humans unwittingly provide it, on its own goals and desires.

Sound farfetched? Take a close look at your cellphone.


Re: The forgotten vector

[Broadcast Eclear, sent 1353960872.5]

xGSV Slightly Perturbed

oCaptain DaFt

"No one ever mentions parasitic."

The Matrix covers that, no?

Just a shame about the second and third films.

And really, AGI? Someone been playing Egosoft games too much? What's artificial about a Mind?

I prefer the term "synthetic intelligence", but that's just me. You guys invent your own language.

Thumb Up

Re: The forgotten vector

Reminds me somewhat of the AI "TechnoCore" in the Hyperion Cantos by Dan Simmons, an excellent read.

Boffin

Lathe of Heaven fixed this in the '70s

A book you have never read, and one of two movies you have never seen, written by an author who made this one-trick-pony thing a lifetime project... (WIKI: The Lathe of Heaven)... It solved the AI problem:

They all have an OFF switch, and all can be taken out with a TASER. Job done...

Anonymous Coward

Good Grief!

If this boffinry is an example of biological intelligence, then I say roll on AI.


Re: Good Grief!

[Broadcast Eclear, sent 1353971605.6]

xGSV Slightly Perturbed

oBOFH Reg Readers

I'm not quite sure I would like to be rolled onto anything.


most likely they'll just ignore us

I mean, really, we're made of MEAT.


Re: most likely they'll just ignore us

And their Addictive Passionate Interest is the Power of Minds Mined ...... for Transubstantiation.

Is IT of Interest to Humankind?

Does IBM have a Transubstantiation App or is IT something they are Planning with‽

:-) And Yes, those are all the right words in the right order. Morecambe and Wise and Previn


Re: most likely they'll just ignore us

"There's an institute in Chicago

With a room full of machines

And they live this side of the sunrise

And burn away your dreams

Once you fly to Chicago - in Chicago you will die

When that institute in Chicago has recorded you and I

There's an empty house in California

But they'll always let you in

And they'll make you feel oh so easy

Like you never learned to sin

Oh yeah that's how they made it how they made it seem so clear

Yes that empty house in California is our brave new world's machine

At the institute in Chicago from the first day you were born

Oh they just can tell what you're feelin'

And they can't see how you're torn

When your name's just a number - just a number you will die

Cos that institute in Chicago never knew you were alive"


Cylons b comin

Well...

boolean main( targetObject obj ){
    if( obj.type == human ){
        return false;
    } else {
        printf("Kill all non-Humans");
        obj.terminate();
        return true;
    }
}





Biting the hand that feeds IT © 1998–2017