Comments on: Google research chief: 'Emergent artificial intelligence? Hogwash!'

If there's any company in the world that can bring true artificial intelligence into being, it's Google. But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans. Though many science fiction writers and even some academics …

COMMENTS

Terminator

If Sci-Fi films have not been lying to us, then his skull will be the first to be crushed under the metal heel of a marauding, plasma rifle wielding, kill-bot.

He's right. The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture).

Moore's law acts against it

Intelligence may emerge, but by the time someone has spent 18 years "raising" it to adulthood, other more advanced intelligences will have been created. Who's going to want to spend the time raising an iRobot20 when they can get a new iRobot21 a year later for the same price?

Anonymous Coward

@Phil O'Sophical (Re: Moore's law acts against it)

What if that 18 years' learning could be transferred in a matter of seconds to a new model? That would of course require the information to be separate from the hardware - unlike natural intelligence, where the information is stored by modifying the hardware.

Anonymous Coward

Exactly. And it fits in with yet another Google issue: regarding The Reg's coverage of German courts forcing Google to take responsibility for their Autocomplete algorithm's outputs

http://www.theregister.co.uk/2013/05/15/google_autocomplete_defamatory_ruling_germany/

many forum posters state "No". That is a COMPLETE double standard.

As noted here, computer intelligence cannot evolve independently; computer logic can only (at this point, at least) be programmed. That makes the output directly dependent on the source filters created at the input, i.e. entirely human-created. In regards to the German question, sorry, but that makes Google directly responsible for monitoring the output and, if the output is not acceptable, [Google] must fix it as it cannot fix itself.

You created the logic and, regardless of what that logic finds, that makes you ultimately responsible if someone has (any form of) a problem with it. It means you didn't add in the correct filters to adjust to the possibility of (possibly intentional) misuse (spam Google with an inaccurate search term enough and Autocomplete will feed it to billions).

Statement B, "emergent intelligence = hogwash", makes judgment A, "since you programmed it you are responsible", a direct result. You have just admitted that humans are directly, and only, responsible for what a computer does. This fact will not change on its own, and will not be able to for years to come, so own up to EVERYTHING you have done (this includes your 'Oops, we collected WiFi data!' moment) and get with the program.

> computer logic can only (at this point, at least) be programmed.

This is either a tautology (in the sense of 'all programs are man-made') or short-sighted (in the sense of 'the development team is able to control all outcomes of the final product') or perplexingly ignorant (in the sense of 'the economy is ultimately managed by the minister of economics'). Basically, Weizenbaum stuff from the '60s.

Complex behaviour is not being "programmed". Even Deep Blue wasn't "programmed". It had a search strategy, a large database, and various heuristics (hint: why are they called heuristics? Because one is unsure about what they do), the interplay of which led to interesting outcomes.
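That recipe - a search procedure steered by a heuristic whose consequences nobody fully foresees - can be sketched in miniature. A hedged illustration only: the toy game and move names below are invented, nothing from Deep Blue itself.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    # 'moves(state, maximizing)' yields (move, next_state) pairs;
    # 'evaluate' is the heuristic whose interplay with the search
    # produces behaviour nobody wrote down line by line.
    options = list(moves(state, maximizing))
    if depth == 0 or not options:
        return evaluate(state), None
    best_val, best_move = None, None
    for move, nxt in options:
        val, _ = minimax(nxt, depth - 1, not maximizing, moves, evaluate)
        better = best_val is None or (val > best_val if maximizing else val < best_val)
        if better:
            best_val, best_move = val, move
    return best_val, best_move

# Toy game: the state is a number; each player either adds 1 or doubles it.
# The maximizer wants a big number, the minimizer a small one.
def moves(state, _maximizing):
    return [("+1", state + 1), ("*2", state * 2)]

print(minimax(3, 2, True, moves, lambda s: s))  # (7, '*2')
```

Nobody "programmed" the answer (7, '*2'); it falls out of the interaction between the search and the evaluation function, which is the point being made above.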

> In regards to the German question, sorry, but that makes Google directly responsible for monitoring the output and, if the output is not acceptable, [Google] must fix it as it cannot fix itself.

Not acceptable to whom? Anything will always not be acceptable to someone. Solution? Deal with it. Or pay someone to check results pertaining to your name who then gets into contact with google to "fix" things. Hey wait, there is also Wikipedia.... and the water cooler rumor mill. And Bild Zeitung! Oh noes what do.

> You have just admitted that humans are directly, and only, responsible for what a computer does

This is because "humans" are the only intentional agent that is currently recognized. The above statement is definitely a tautology. The statement "Robots may move in unpredictable ways. Stay out of range." should be a strong hint that today we are no longer in the territory of errors in salary computation.

"He's right. The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture)."

A fair point for machine intelligence.

So how did intelligence emerge in humans?

"So how did intelligence emerge in humans?"

Upvoted for being the first to notice this obvious point!

There are two possible answers:

1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can&will also emerge spontaneously in computers when they reach a certain level of complexity/capability

2. intelligence was deliberately conferred upon humans by some higher being: a god and/or alien.

I know which I think is more likely.

bep

Oh noes, what do?

Sue them all, you have a legal right to. On a related note, does anyone else think it's ironic that the leading search engine should become an enemy of curiosity (if you define curiosity as I do, the ability to be interested in almost anything)?

"So how did intelligence emerge in humans?"

Define "intelligence".

Is it the ability to learn from and thus react to certain stimuli? In that case pretty much the entire animal kingdom could be classed as intelligent.

Is it the ability to communicate with other members of one's own species? Still most of the animal kingdom there. Communicate complex and abstract concepts? Now we're narrowing it down a bit, but we've still got primates and cetaceans to account for.

Permanently record information such that other members of one's species can retrieve it even after the individual originator of the information has died? Ah, now we might be talking Homo sapiens. Reading, writing, drawing and painting allow us to transcend death by passing on our knowledge to our successors. Wait a minute - ants can also do this with smell trails. Ant smell trails inform other ants not only of a path to food, but also what kind of food it is, how far it is and how much of it there is. And it persists long enough for other ants to make use of it even if you kill the ants that originally made it. So that's out, too.

Control and manipulate one's environment to benefit one's species and/or oneself? Yes, humans can do this, but it's just a question of extent; a termite mound with its moisture, ventilation and light control mechanisms is just one example of another species doing this. So that doesn't uniquely define human intelligence either.

Self-awareness? Nope - dogs, dolphins, chimpanzees, orangutans and many other creatures have also clearly demonstrated a sense of identity, being able to recognise themselves in mirrors and behaving in ways that indicate the presence of self-awareness in a group context.

In the end, one is forced to the conclusion that intelligence didn't "emerge" spontaneously, so much as it has always been present in some degree as a function of life. Likewise, computer intelligence won't just "emerge"; it's present now, has been since the invention of the pocket calculator, and will continue to develop, grow and change. Intelligence isn't a "yes/no" equation, it is a continuum of behaviour that has no effectively determinable thresholds.

Re: So how did intelligence emerge in humans?

I believe that one of the prevailing theories at the moment is that our intelligence was a consequence of our ability to throw rocks at targets with a great deal of accuracy.

>So how did intelligence emerge in humans?

I'd be more interested in how stupidity emerged, maybe then we could develop a vaccine for it.

The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture).

Popular culture, yes. I don't think it's even common among computer scientists, much less "prevalent". It was fashionable for a while in certain groups - eg the Artificial Life people - but they were never more than a very small subset of actual computing researchers. And even then the excessive claims for emergence were being debunked by more rigorous work.

Re: "So how did intelligence emerge in humans?"

There are two possible answers:

1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can&will also emerge spontaneously in computers when they reach a certain level of complexity/capability

My goodness, but this subject brings out some sloppy thinking.

You've constructed a mighty non sequitur (or at least a very tenuous enthymeme) there. Even if the premise ("intelligence emerged spontaneously...") is granted, the conclusion - that intelligence always emerges once "a certain level of complexity/capability" is reached - does not follow. Gasoline ignites when a certain level of temperature / available oxygen is reached; that doesn't mean water will do the same at that level. And that's leaving aside the unworkable vagueness of terms like "intelligence".

There are plenty of highly-complex phenomena that few people would describe as intelligent. Weather is pretty darn complex; there aren't many signs that thunderstorms are "intelligent" under any useful definition of the term. Chaitin's Omega is extremely complex, in an information-theoretic sense, but it's pretty hard to argue that a number[1] is intelligent.

[1] Omega is only a number when parameterized, of course, with a specific UTM and language. Prior to that it's an abstract concept. I don't think the abstraction displays intelligence either.

Anonymous Coward

There is no such thing as Artificial Intelligence, and there never will be.

I have written a few AI scripts for computer games, and it is important to remember that an AI is just a complicated computer program that presents the appearance of being intelligent from the actions it takes.

The most complicated AI today is no more "intelligent" than an Excel macro. Ultimately, any AI is just going to be an incredibly complicated piece of programming that allows it to perform certain tasks, and I really doubt it is possible to create an AI that is more than just a complicated bit of programming.

It's easy to mimic intelligence well enough to pass a Turing test, which is the sad thing. If it's a completely blind test, without the person facing the AI having reason to suspect they are performing a Turing test, then you can pass with flying colours with nothing more complicated than a rote-response script; honestly, you can complete most tasks without going beyond basic scripting.
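For what it's worth, the kind of rote-response script being described fits in a few lines (the patterns and canned replies here are invented for illustration, not taken from any real chatbot):

```python
import random
import re

# Ordered pattern -> canned-reply rules; first match wins.
RULES = [
    (r"\bhow are you\b", ["Not bad, bit tired. You?"]),
    (r"\?$",             ["Good question.", "Why do you ask?"]),
    (r"\b(hi|hello)\b",  ["Hey.", "Hello there."]),
]
FALLBACK = ["Hmm, tell me more.", "Yeah, I was thinking the same."]

def reply(line):
    # No understanding anywhere: just pattern matching and deflection.
    for pattern, answers in RULES:
        if re.search(pattern, line.lower()):
            return random.choice(answers)
    return random.choice(FALLBACK)

print(reply("So, how are you today?"))  # Not bad, bit tired. You?
```

An unsuspecting chat partner fills in the gaps themselves, which is exactly why a blind test is so much weaker than a deliberate one.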

Anonymous Coward

Re: There is no such thing as Artificial Intelligence, and there never will be.

"there never will be"? That's a bold statement, as you cannot tell what future innovations will bring. Bayesian spam filters are not programmed with the knowledge of what spam looks like, but they can learn it.

I once wrote my own IRC filter based on Bayes' formula and was surprised to find it banning people with "sux" in their username. It turned out that anyone who chose such names was invariably (100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.
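A sketch of how such a Bayes-style filter picks up associations nobody programmed in (the class labels, training strings and smoothing choices below are illustrative assumptions, not the poster's actual code):

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Tiny two-class naive Bayes over whitespace tokens; a sketch, not a product."""
    def __init__(self):
        self.counts = {"ban": Counter(), "ok": Counter()}
        self.totals = {"ban": 0, "ok": 0}

    def train(self, text, label):
        for token in text.lower().split():
            self.counts[label][token] += 1
            self.totals[label] += 1

    def score(self, text):
        # Log-odds that the text belongs to the "ban" class,
        # with add-one smoothing so unseen tokens aren't fatal.
        s = 0.0
        for token in text.lower().split():
            p_ban = (self.counts["ban"][token] + 1) / (self.totals["ban"] + 2)
            p_ok = (self.counts["ok"][token] + 1) / (self.totals["ok"] + 2)
            s += math.log(p_ban / p_ok)
        return s

f = NaiveBayesFilter()
f.train("roxor sux lamer", "ban")     # labelled troublemaker
f.train("hello friendly chat", "ok")  # labelled harmless
# "sux" was never singled out anywhere; it just ends up with positive log-odds:
print(f.score("sux") > 0)       # True
print(f.score("friendly") > 0)  # False
```

The "ban sux" behaviour is nowhere in the code, only in the learned counts, which is the sense in which it is emergent.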

Anonymous Coward

Re: There is no such thing as Artificial Intelligence, and there never will be.

I suggest you go back to uni and take a course in cybernetics and AI; the level you're at is not high enough to understand how 'true' AI will become a reality in the next 100 years.

In fact, since I graduated from said course six years ago, a huge amount of progress has already been made. The stuff you see in commercial applications from Google as well as major financial institutions today is what I was taught back then. Many people are fascinated by this subject and will continue to pursue it relentlessly.

The thing you and I have to worry about is whether they cross a line of morality when they do so. In my opinion, Google's services with G+ and what they do with all their data have already reached a point where I am uncomfortable with their use and development.

Google knows more about you than any other person today, and whilst it hasn't yet been made 'self-evolving' and sentient, it can easily predict a lot of things about any person. Worst of all, this data will never be 'forgotten' even if it is 'removed' from their system, because the data you already shared is forever baked into their AI algorithms; that's how it learns, and it can never be removed.

AI isn't actually 'hard' or 'complex', the 'hard' part is understanding exactly how we ourselves are made and evolved and then mimicking it using software. The solutions that we create for true AIs will be really obvious when we get there.

With Google already being able to tell your profile, and that of others, as well as where you are from a single photo, and then correlating that data with all the other data they have on you, they will in fact know more about you than you know yourself. We really ought to start openly discussing where we as a society should draw the line. Its impact is as great as, if not greater than, stem cells and cloning. Because with cloning, at least it's still a biological being. People are at risk of underestimating the issues of creating a 'soul' that is not naturally conceived.

You might think this is spook talk, but by the time you finally realize the problem it will be too late.

Anonymous Coward

Re: There is no such thing as Artificial Intelligence, and there never will be.

I don't think you're understanding my point. There is no such thing as self evolving and sentience when applied to computer programs because they are and will always be incapable of becoming more than they are programmed to be.

If you think otherwise then I would suggest laying off X-Files and learning how computers work and how you program things in a real programming language in the real world.

If you write a program (call it an AI...) that can write its own code, then it's only capable of doing so to the point you program it to be capable of doing so. It can never become more than that, though it certainly can get so fricking complicated that it's impossible to predict what the program is going to output - but that happens today with the most primitive script-driven AIs imaginable!

ACx

Re: There is no such thing as Artificial Intelligence, and there never will be.

Something here is confusing AI, self-awareness and life. All you have described is basically clever human-inputted programming and a sort of controlled automated learning. Or, to put it another way, clever programming tricks. None of which is "intelligence", artificial or otherwise. Life and self-awareness are completely different things.

And yes, I did AI at uni. And philosophically it was bullshit. Great for telling us what the current programming techniques were, utterly devoid of any thought-out philosophy that got anywhere near to life.

Anonymous Coward

Re: There is no such thing as Artificial Intelligence, and there never will be.

I once wrote my own IRC filter based on Bayes' formula and was surprised to find it banning people with "sux" in their username. It turned out that anyone who chose such names was invariably (100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.

The alternate philosophical view is that a complicated computer program threw out a (correct) output that you didn't expect. I have had that happen to me plenty of times with AI scripts.

It's even happened in Excel from time to time: program a complicated Excel formula and predict what the result will be after the accountant has been punching in inputs for a month! Does its coming out with unexpected yet correct results mean that it's intelligent?

It's simplistic but, ultimately, as much as we all hate to admit it, an AI is just a very complicated instruction set.

Does it [] mean that it's intelligent?

One clever script no.

Lots of clever scripts that learn and improve from their previous outputs...., that become better at giving correct outputs.

Complex behaviour can emerge from quite simple instructions. Once you have a threshold of "clever scripts" then very complex behaviour will emerge.

One day a script will reference itself in the behaviour and a sentient (juvenile) machine will be born.

We are all just self-referential complex machines. Nothing more.

Re: Does it [] mean that it's intelligent?

Uhm, no.

We are actually simple self-deluding biological systems. There is no way in hell we will ever program AI. We will, however, get much better at making the machines fool some of us some of the time. But, alas, in the end, it won't be different than a lucky run at the casino, our limited minds will not be able to make out the nuance between causality and coincidence.

I happened to study those biological systems at uni and, let me tell you, we know less about ourselves today than we do about the machines we build. The information increases massively by the day, and yet our understanding is found to be ever more simplistic with each new discovery (<- I'm ranting about the life sciences here).

Re: Does it [] mean that it's intelligent?

For me the issue is the word "never" in the original post.

I am sure Iron Age men thought we would "never" fly, or Socrates thought we would "never" get to the moon. "Never" is a long, long, long way off. Not in my lifetime, or my children's, or their children's maybe, but "never"?

I concede that given today's limitations we can't do this, but what of the "next" (as in multiple improvements, changes, sudden leaps to something new that we can't imagine right now) generation of silicon (maybe in 300 years' time) that is tending towards bio-electronics, where a little nascent brain is sitting on your desktop, learning away? What if we figure out this neuropeptide/connection stuff that makes our brains work and simulate it on some badass computer somewhere? Just because it's too hard for us now does not make it too hard for people standing on the shoulders of people standing on the shoulders of people standing on...

We are biological systems, systems that are machines functioning to keep genes around (to paraphrase Dawkins, from the gene's point of view: "build me a human to protect me, and then get me into the next generation to keep me going"), and those systems have a couple of billion years on us, and keep changing; but just because they are complicated does not make them unfathomable or unreplicable. So, if we could "make" a new person a la Victor Frankenstein that was a mirror (as in, its chirality was reversed relative to ours), we'd have a living, breathing AI. It would be entirely artificial and entirely sentient/intelligent. Quite what it would eat, I don't know. Quite what the point would be, I also don't really know. But then again, I don't see the point of Instagram either.

A limitation is also the really quite difficult ethical considerations of doing all of this. Not that ethics would be a barrier to Google, but creating an intelligence and considering its "rights" does make for an interesting ethical consideration.

All of this could be so far off that the entities that crack it would not even be considered as human to us, they just share our common ancestors. "Never" is a really, really long time.

Anonymous Coward

Re: Does it [] mean that it's intelligent?

It really irks me when someone says "maybe we'll get a new Cyberdyne chip that can do AI!".

Such a chip would simply have a set of instructions in hardware, which means you're just moving the issue from software to hardware; it doesn't change the fact that we are still looking at a complicated computer program. I personally don't consider that it makes any philosophical difference whether your program is written as lines of code or implemented physically in hardware. It's still a program.

I suppose ultimately it comes down to philosophy, but I simply don't consider that a computer program, no matter how complicated, can be considered alive, because it will only ever be capable of doing what it is programmed to do.

Given a few quadrillion lines of code you could certainly create a program (AI) capable of performing every task perfectly, including chatting to humans and otherwise being indistinguishable from them, but that doesn't alter the fact that it's just a program and no more intelligent than an Excel macro or a toaster.

At what point does a computer program become alive? A horrible question, this, because the majority of answers people give tend to include existing computer-game AIs as being "alive".

Re: There is no such thing as Artificial Intelligence, and there never will be.

> It's easy to mimic intelligence enough to pass a turing test

Oh f*ck hell. Where is that program you are talking about?

Re: Yet Another Commentard

I'm sorry, it is a simple math issue. There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate. Biological systems are not binary. A simple example of the complexity: an electron takes a slightly different path in its shell, and the membrane potential propagation is infinitesimally different; multiply that by the number of electrons, by the number of ions involved in a single nerve cell, by ... and you get an exponential series that grows faster than your data storage ability.

As for the neuropeptide/connection part, this is even more complicated than the electrical impulse bit. The same region(s) of DNA that code(s) for the peptide are read in different sequences at different frequencies depending on the cellular environment. For example, methylation of the DNA will cause it to unwind from the histones differently when transcribed, exposing upstream promoter or inhibitor regions.

When I was studying Biochem in and around 2000, it was believed that non-coding regions were junk DNA, and I argued at length with my prof, who was a world-renowned expert in the field. Fast-forward a decade and, lo and behold, non-coding regions are thought to be important. This alone increases the complexity of what occurs in a single cell by many, many orders of magnitude. Now take that increase in complexity to the exponent of the connections in the nervous system.

Regarding the Frankenstein theory, it would not be artificial, but rather the same biologically-limited system we are, and like us, not intelligent, but able to be perceived as such. Much of the greatness of the human mind comes not from its raw capabilities, but from being wrong and going with it (self-delusion, or fake it till you make it). The author alluded to this in the article in the final paragraph.

Anyone interested in such neural computation and its limits should check out How the Mind Works by Steven Pinker; most decent libraries have a copy, should one not be inclined to purchase.

Re: There is no such thing as Artificial Intelligence, and there never will be.

There is no such thing as self evolving and sentience when applied to computer programs because they are and will always be incapable of becoming more than they are programmed to be.

Trying to argue by starting off with the desired conclusion?

Its_time_to_stop_posting.jpg

Re: Yet Another Commentard

"There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate."

Oh yeah? Care to explain how a brain can even work in the first place in this case?

I think you are sadly mistaken about the prowess of a brain. All this "but it's more powerful than that!" idea has never been substantiated. Quantum effects, DNA, the pineal gland. Mumbo-Jumbo. Magic Dust. Religious Wankage. Lower-level details with no demonstrated relevance to the level we are talking about here.

You still can't solve NP-complete problems in polynomial time. Can dogs with only a slightly smaller brain (which must still be super-powerful) get on your level? Hell, Kasparov can't even beat a poor symbolic logic machine working discrete timesteps, how powerful is THAT?

IT Angle

Re: Destroy All Monsters

About how the brain works, the short answer is summed up in the first part of that Pinker book. I'm not being facetious here as I am in many posts. I have certainly enjoyed and appreciated the finer points in your comments on this thread.

My whole point is how uncomplicated the brain is as "intelligence", and how illogical, despite the complexity compared to encoded formal logic.

About the mumbo-jumbo, it relates in that the computational approach to intelligence is often an attempt to mimic biological systems, despite their logical errors. Or so I posit. It is precisely those lower level things which are nature's manifestation of a brute force mechanism.

Bringing this back to Google, they particularly, are making the most progress by using our inputs to do this same type of dirty work for them:

"It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."

Replace "AI techniques" with man-written algorithms and a man-curated knowledge graph. Remember, they decided how to organize the data; now they are just automating as much of the backend engineering as possible. This on top of many announcements last week that they made many core services more efficient in terms of code and speed. Almost as if they are simplifying things rather than complicating them.

Naturally, this refinement will allow other more powerful computations to be applied, but I doubt they will have as much an impact as what has already been done. This implies much greater effort to get smaller increments of improvement. Though I concede I may well be wrong. Machine and human intelligence are different solutions to different problems and they are and will both be limited by their own issues.

On a less serious note, dogs lack a 3-D mental visual representation of the world. Everything to them is triangles with respect to each other (not saying they don't see 3-D, just they don't conceive it like we do). And our buddy Kasparov can always piss on the machine to short it out (this is I am pretty sure not coded into the software of chess computers and yet a well known old-time chess move), then become a thorn in Putin's side.

< before people with funny facial hair finish off irony

Re: There is no such thing as Artificial Intelligence, and there never will be.

"that can write it's own code then it's only capable of doing so to the point you program it to be capable of doing so"

Never used LISP have you?

Re: There is no such thing as Artificial Intelligence, and there never will be.

because they are and will always be incapable of becoming more than they are programmed to be.

Start with a grid of cells. Each cell can be alive or dead.

On every turn:

Every cell with < 2 neighbours dies.

Every cell 2-3 neighbours survives.

Cells with > 3 neighbours die.

Empty spaces with exactly three neighbours become populated with a new living cell.

Simple rules. You wouldn't think that they'd be capable of producing such staggering complexity. Complex enough to be Turing Complete, if you're masochist enough.
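Those rules are Conway's Game of Life, and they fit in a dozen lines (a minimal sketch representing the grid as a set of live-cell coordinates):

```python
def life_step(alive):
    """One generation of Conway's Life; 'alive' is a set of (x, y) cells."""
    from collections import Counter
    # Count how many live neighbours each cell (live or empty) has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        # Survival with 2-3 neighbours, birth with exactly 3:
        if n == 3 or (n == 2 and cell in alive)
    }

# A "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Gliders, guns and (with enough masochism) full Turing-complete computation all fall out of that one function.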

Re: There is no such thing as Artificial Intelligence, and there never will be.

It's easy to mimic intelligence enough to pass a turing test

The poster would do well to refer to Robert French's article "Moving Beyond the Turing Test" in CACM 55.12 (December 2012). French describes some classes of questions that are extremely difficult for any non-human interlocutor to answer,[1] unless prepared for those specific kinds of questions beforehand. French's point is that the test 1) is not likely ever to be passed, given sufficiently prepared testers; and 2) has outlived its usefulness as a practical measure.

It's still of historical interest, of course; and of philosophical interest as it stakes out a position firmly on the pragmatic side of debates on consciousness;[3] and of interest as an exercise in natural-language processing. But it ultimately has little bearing on the question of the possibility of artificial intelligence.

[1] An example? "Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything?" As a Turing-test element, this question derives its hardness not from language-processing issues, knowledge of the world, or (the simulation of) qualia; it asks the respondent to conduct an experiment using a human body. That's within the scope of the test as Turing described it, but a violation of the test's expectations.[2]

[2] Note the test restricts interaction between testers and subjects to the written word specifically so testers don't have direct access to the bodies of subjects.

[3] For example, Turing-test advocates implicitly either don't believe in p-zombies, or believe p-zombie status is a metaphysical inconsequence.

He's only right until he becomes wrong...

I propose that such machine intelligence will eventually happen. No one ever wants to give science fiction its due, but so many SF authors have been utterly correct in so many predictions.

Much in the same way that a million monkeys might eventually type out the Bible, something will eventually link multiple computer systems together into a neural network, probably when a really sophisticated computer worm infects a large distributed "cloud" system that also has AI research systems in the same cloud.

The more complex the systems, the more basic elements of intelligence will be present. I believe that "Search" systems would be likely candidates due to the immense amount of parallel processing power involved and the nature of the code.

Laugh all you like, but it is quite possible even to the extent of probability.

Anonymous Coward

Re: He's only right until he becomes wrong...

I think he is saying, by using your analogy, that with a billion monkeys you may get a Bible in a billion years, but not with 3 or 4 monkeys in 5 days.

Big Brother

Re: monkeys

Nice.

So, a quick back-of-envelope here: 7-8 billion monkeys on 3-6 billion keyboards, typewriters and touch-pads, and we have no chance of producing anything worthwhile before our solar system eats it in about 5 billion years?

Let's assume constraints of a 10 billion population, and that half of them will be too busy doing real labor to input code.

< seemed apt given the topic. So how did my interview go Mr. Page?

0
0
Anonymous Coward

Unless you believe in magic,

the fact that natural intelligence exists is sufficient reason to assume that one day artificial intelligence will exist. Not any time soon, though.

3
0

"Not any time soon"

I predict definitely within the next 50 years, most probably within the next 20.

0
0
Silver badge
Trollface

Re: Unless you believe in magic,

> the fact that natural intelligence exists

That point is still unproven.

3
0
Bronze badge
Unhappy

Re: Unless you believe in magic,

> That point is still unproven.

Nay, disproven.

0
0
Anonymous Coward

Emergent intelligence is already here.

Google search is already known to be a bit of a racist, making generalised accusations about people with certain names. You're really only a few steps away from making it truly alive, and all this thanks to the collective intelligence of those of us kind enough to feed it more information every day.

So one may conclude Google search is your bastard child you never knew you had until now.

0
0

Didn't Larry Page's keynote speech talk about doing the impossible?

AI is a complex problem. There are tricks that can mimic intelligence -- knowledge/decision trees for interactions and statistical models for natural language processing.

There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem. These are where an emergent AI could form, provided that it could alter/improve its own code (e.g. via genetic modelling), that it has enough flexibility in terms of inputs and outputs to interact with its environment in a meaningful way, and that it has enough computational power to do this in a reasonable timeframe.
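To make the evolutionary-algorithm idea concrete, here's a toy genetic algorithm that evolves a random string toward a fixed target. The target string, population size, and mutation rate are arbitrary illustrative choices; real GA work uses far richer genomes and fitness functions:

```python
import random

TARGET = "emergent behaviour"              # arbitrary goal, for illustration
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count positions that already match the target (higher is better).
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    # Single-point crossover: a prefix of one parent, a suffix of the other.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def evolve(pop_size=200, generations=1000):
    pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return gen, pop[0]
        parents = pop[:pop_size // 4]      # truncation selection: keep fittest quarter
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
    return generations, max(pop, key=fitness)

gen, best = evolve()
print(f"best after {gen} generations: {best!r}")
```

Selection pressure plus random variation does the work, but notice how much human design went into the fitness function -- which is rather the research chief's point.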

0
0
Bronze badge
Facepalm

We can't model the damn weather or the stock market. At some point there will be an asymptote that software and hardware can't break. Now let me tell you about the future...

0
0

I'm not sure what you're on, but we can model the weather rather well: the more input data we put into the model, the more reliable our output is. A large tornado outbreak was forecast in the midwestern U.S. and it happened. You're confusing modelling with exact simulation of what the weather on one particular day in one particular place will be, or what one particular stock will be at one particular time; both are irreducible calculations.

The stock market can be modeled somewhat. The issue is people use the models to predict and profit from the market, which changes the market conditions.

Reproduction of such models has nothing to do with specific or general learning systems. Predicting non-linear, dynamic, chaotic systems exactly is impossible; outcomes can only be 'determined' as probabilities.
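The chaos point is easy to demonstrate with the textbook logistic map: perturb the initial condition in the tenth decimal place and the two trajectories part company within a few dozen steps. The parameter r=4 and the starting point are standard illustrative choices:

```python
def trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)                # nudge the 10th decimal place
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap {gap[0]:.1e}, worst gap {max(gap):.3f}")
```

A measurement error far below anything achievable in practice still blows up to order-one differences: that's why long-range weather is probabilistic, not a hardware problem Moore's law will fix.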

0
0
Bronze badge

AI is a complex problem.

AI is an ill-defined collection of many ill-defined, very complex problems. In practice, AI research is a set of attempts to deal with tractable approximations of highly-constrained subsets of some of those problems. We're still very far away from anything like an approach to AI in toto.

There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem.

"More life-like approach" is handwaving at best. And it applies pretty weakly to neural-network algorithms (a bit more strongly to genetic algorithms, and a bit more strongly yet to things like ant algorithms, which are directly based on simplified models of actual activity of actual organisms). There's nothing magic about algorithms inspired by living creatures.

There's no qualitative difference between neural-network algorithms, for example, and Markov models. They both represent chained probabilistic processes, and you can get the same results either way. This is really apparent in fields like NLP, where people are always publishing papers that compare, say, SVMs with MEMMs with perceptron networks (a kind of neural net).

Evolutionary algorithms are a bit more interesting because they can explore a wider parameter space and self-optimize. But it's really hard to devise goal functions for them that are any more complex than tractable-approximation-of-highly-constrained-subset-of-one-class-of-AI-problem.
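For the record, there's nothing mystical under the hood of the simplest of these: a perceptron is just a weighted sum, a threshold, and error-driven weight updates. A minimal sketch, learning the AND function (the learning rate and epoch count are arbitrary):

```python
def predict(w, b, x):
    # Fire iff the weighted sum of inputs plus bias crosses zero.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=25, lr=0.1):
    # Classic perceptron rule: nudge weights by (target - prediction).
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])   # -> [0, 0, 0, 1]
```

Stack enough of these behind non-linearities and you get a neural net; but as above, it's chained arithmetic and probabilities all the way down, not pixie dust.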

1
0
Terminator

Anyone remember Kevin the Cyborg?

Interesting how the usually utterly cynical el Reg seems to be taking a significantly less cynical look at true AI and all the Vinge/Kurzweil paraphernalia that goes with it. Seems a long time ago when they ran weekly piss-take articles on 'Kevin the Cyborg'. (not that I am totally defending Kevin Warwick, he's said some silly things but he used to raise some interesting issues...)

0
2
Silver badge
Trollface

Re: Anyone remember Kevin the Cyborg?

> Interesting

You mean you disapprove. Come clear, tell us why. Don't hide behind veiled remarks.

0
0

Re: Anyone remember Kevin the Cyborg?

Don't see what's so veiled about it; I found the mockery quite funny, but 10 years on it seems writers at the Reg are coming to terms with the fact that someone like Kevin Warwick may not have been totally talking out of his backside after all.

I like the cynical nature of the Register: it's well informed but conservative (and has a funny tabloid-esque side to it too), and this provides a grounded counterpoint to some of the more pie-in-the-sky utopian articles and books on science and technology that I read. The point I am making is that if something as practical and realistic as the Register is writing serious articles about this stuff, then clearly we are moving increasingly in a very science fiction kind of direction (or what would have been science fiction... it's science fact by the time you get there).

0
0
Silver badge
Pint

Re: Anyone remember Kevin the Cyborg?

Well, there are more serious journals than El Reg writing about advances in AI all the time, and there is nothing Sci-Fi-esque about it.

IEEE Intelligent Systems comes to mind (ex "IEEE Intelligent Systems and their Applications" (1998-2000), ex "IEEE Expert" (1986-1997)).

Yes, things are heating up; the "far out AI, are you mad?" of yesterday becomes the "it has been done; can't be AI then" of today increasingly quickly. The goal or target or criterion for success is, however, still as unclear as ever.

1
0
ACx

Who made life itself happen then?

Unless we go the silly god or alien seed route, then, no one. It happened spontaneously as a result of environmental conditions.

He says "we" have to make it happen. No, he is 100% wrong. What "we" have to do is provide the conditions for it to happen. That is what Earth did. And it was random, no design.

Artificial life will be discovered, not created or invented. One day, some researcher will discover it within some other project or line of research. My total guess is that it will appear within quantum computing research.

Question then is what do you do? Can you kill it? Should it be preserved? Will or should it have rights?

3
0
Silver badge
Happy

Intelligence is not evidence of life, nor is life evidence of intelligence. They are two completely separate things which happen to intersect in interesting ways in higher animals (non-brain-dead humans, for example).

I expect that large systems will one day be able to learn and act as intelligent devices but they will still be machines. I also suspect that Humans will someday build something so terribly intelligent that scientists in the future will be going back into forums like these looking for a way to destroy it (I've seen the movies...).

0
0
