back to article Google research chief: 'Emergent artificial intelligence? Hogwash!'

If there's any company in the world that can bring true artificial intelligence into being, it's Google. But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans. Though many science fiction writers and even some …

COMMENTS

  1. NoneSuch
    Terminator

    If Sci-Fi films have not been lying to us, then his skull will be the first to be crushed under the metal heel of a marauding, plasma rifle wielding, kill-bot.

  2. Robinson

    He's right. The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture).

    1. Phil O'Sophical Silver badge

      Moore's law acts against it

      Intelligence may emerge, but by the time someone has spent 18 years "raising" it to adulthood, other more advanced intelligences will have been created. Who's going to want to spend the time raising an iRobot20 when they can get a new iRobot21 a year later for the same price?

      1. Anonymous Coward
        Anonymous Coward

        @Phil O'Sophical (Re: Moore's law acts against it)

        What if that 18 years' learning could be transferred in a matter of seconds to a new model? That would of course require the information to be separate from the hardware - unlike natural intelligence, where the information is stored by modifying the hardware.

    2. Anonymous Coward
      Anonymous Coward

      Exactly. And it fits in with yet another Google issue. With regard to The Reg's coverage of the German courts forcing Google to take responsibility for its Autocomplete algorithm's outputs

      http://www.theregister.co.uk/2013/05/15/google_autocomplete_defamatory_ruling_germany/

      many forum posters state "No". That is a COMPLETE double standard.

      As noted here, computer intelligence cannot evolve independently; computer logic can only (at this point, at least) be programmed. That makes the output directly dependent on the source filters created at the input, i.e. entirely human-created. With regard to the German question, sorry, but that makes Google directly responsible for monitoring the output and, if the output is not acceptable, [Google] must fix it, as it cannot fix itself.

      You created the logic and, regardless of what that logic finds, that makes you ultimately responsible if someone has (any form of) problem with it. It means you didn't add in the correct filters to adjust to the possibility of (possibly intentional) misuse (spam Google with an inaccurate search term enough and Autocomplete will feed it to billions) .

      Statement B, "emergent intelligence = hogwash", makes judgment A, "since you programmed it you are responsible", a direct result. You have just admitted that humans are directly, and only, responsible for what a computer does. This fact will not change on its own, and will not be able to for years to come, so own up to EVERYTHING you have done (this includes your 'Ooops, we collected WiFi data!!' moment) and get with the program.

      1. Destroy All Monsters Silver badge
        Holmes

        > computer logic can only (at this point, at least) be programmed.

        This is either a tautology (in the sense of 'all programs are man-made') or short-sighted (in the sense of 'the development team is able to control all outcomes of the final product') or perplexingly ignorant (in the sense of 'the economy is ultimately managed by the minister of economics'). Basically, Weizenbaum stuff from the 60's.

        Complex behaviour is not being "programmed". Even Deep Blue wasn't "programmed". It had a search strategy, a large database, and various heuristics (hint: why are they called heuristics? because one is unsure about what they do), the interplay of which led to interesting outcomes.

        > With regard to the German question, sorry, but that makes Google directly responsible for monitoring the output and, if the output is not acceptable, [Google] must fix it, as it cannot fix itself.

        Not acceptable to whom? Anything will always not be acceptable to someone. Solution? Deal with it. Or pay someone to check results pertaining to your name who then gets into contact with google to "fix" things. Hey wait, there is also Wikipedia.... and the water cooler rumor mill. And Bild Zeitung! Oh noes what do.

        > You have just admitted that humans are directly, and only, responsible for what a computer does

        This is because "humans" are the only intentional agent that is currently recognized. The above statement is definitely a tautology. The statement "Robots may move in unpredictable ways. Stay out of range." should be a strong hint that today we are no longer in the territory of errors in salary computation.

        1. bep

          Oh noes, what do?

          Sue them all, you have a legal right to. On a related note, does anyone else think it's ironic that the leading search engine should become an enemy of curiosity (if you define curiosity as I do, the ability to be interested in almost anything)?

    3. John Smith 19 Gold badge
      Meh

      "He's right. The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture)."

      A fair point for machine intelligence.

      So how did intelligence emerge in humans?

      1. Martin Budden
        Terminator

        "So how did intelligence emerge in humans?"

        Upvoted for being the first to notice this obvious point!

        There are two possible answers:

        1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can and will also emerge spontaneously in computers when they reach a certain level of complexity/capability

        2. intelligence was deliberately conferred upon humans by some higher being: a god and/or alien.

        I know which I think is more likely.

        1. Michael Wojcik Silver badge

          Re: "So how did intelligence emerge in humans?"

          There are two possible answers:

          1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can and will also emerge spontaneously in computers when they reach a certain level of complexity/capability

          My goodness, but this subject brings out some sloppy thinking.

          You've constructed a mighty non sequitur (or at least a very tenuous enthymeme) there. Even if the premise ("intelligence emerged spontaneously...") is granted, the conclusion - that intelligence always emerges once "a certain level of complexity/capability" is reached - does not follow. Gasoline ignites when a certain level of temperature / available oxygen is reached; that doesn't mean water will do the same at that level. And that's leaving aside the unworkable vagueness of terms like "intelligence".

          There are plenty of highly-complex phenomena that few people would describe as intelligent. Weather is pretty darn complex; there aren't many signs that thunderstorms are "intelligent" under any useful definition of the term. Chaitin's Omega is extremely complex, in an information-theoretic sense, but it's pretty hard to argue that a number[1] is intelligent.

          [1] Omega is only a number when parameterized, of course, with a specific UTM and language. Prior to that it's an abstract concept. I don't think the abstraction displays intelligence either.

      2. Steven Roper

        "So how did intelligence emerge in humans?"

        Define "intelligence".

        Is it the ability to learn from and thus react to certain stimuli? In that case pretty much the entire animal kingdom could be classed as intelligent.

        Is it the ability to communicate with other members of one's own species? Still most of the animal kingdom there. Communicate complex and abstract concepts? Now we're narrowing it down a bit, but we've still got primates and cetaceans to account for.

        Permanently record information such that other members of one's species can retrieve it even after the individual originator of the information has died? Ah, now we might be talking Homo sapiens. Reading, writing, drawing and painting allow us to transcend death by passing on our knowledge to our successors. Wait a minute - ants can also do this with smell trails. Ant smell trails inform other ants not only of a path to food, but also what kind of food it is, how far it is and how much of it there is. And it persists long enough for other ants to make use of it even if you kill the ants that originally made it. So that's out, too.

        Control and manipulate one's environment to benefit one's species and/or oneself? Yes, humans can do this, but it's just a question of extent; a termite mound with its moisture, ventilation and light control mechanisms is just one example of another species doing this. So that doesn't uniquely define human intelligence either.

        Self-awareness? Nope - dogs, dolphins, chimpanzees, orangutans and many other creatures have also clearly demonstrated a sense of identity, being able to recognise themselves in mirrors and behaving in ways that indicate the presence of self-awareness in a group context.

        In the end, one is forced to the conclusion that intelligence didn't "emerge" spontaneously, so much as that it has always been present in some degree as a function of life. Likewise, computer intelligence won't just "emerge"; it's present now, has been since the invention of the pocket calculator, and will continue to develop, grow and change. Intelligence isn't a "yes/no" equation; it is a continuum of behaviour that has no effectively determinable thresholds.

      3. Crisp Silver badge

        Re: So how did intelligence emerge in humans?

        I believe that one of the prevailing theories at the moment is that our intelligence was a consequence of our ability to throw rocks at targets with a great deal of accuracy.

      4. C 18
        Joke

        >So how did intelligence emerge in humans?

        I'd be more interested in how stupidity emerged, maybe then we could develop a vaccine for it.

    4. Michael Wojcik Silver badge

      The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture).

      Popular culture, yes. I don't think it's even common among computer scientists, much less "prevalent". It was fashionable for a while in certain groups - e.g. the Artificial Life people - but they were never more than a very small subset of actual computing researchers. And even then the excessive claims for emergence were being debunked by more rigorous work.

  3. Anonymous Coward
    Anonymous Coward

    There is no such thing as Artificial Intelligence, and there never will be.

    I have written a few AI scripts for computer games, and it is important to remember that an AI is just a complicated computer program that presents the appearance of being intelligent from the actions it takes.

    The most complicated AI today is no more "intelligent" than an Excel macro. Ultimately, any AI is just going to be an incredibly complicated piece of programming that allows it to perform certain tasks, and I really doubt that it is possible to create an AI that is more than just a complicated bit of programming.

    It's easy to mimic intelligence enough to pass a Turing test, which is the sad thing. If it's a completely blind test, without the person facing the AI having reason to suspect they are performing a Turing test, then you can pass with flying colours with nothing more complicated than a rote-response script; honestly, you can complete most tasks without going beyond basic scripting.
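    A rote-response script of the sort described can be sketched in a few lines. This is a minimal illustration only; the patterns and canned replies below are made up, not taken from any real chatbot:

```python
# A minimal rote-response script: regex patterns plus canned replies,
# with no understanding behind them. All patterns and replies are illustrative.
import random
import re

RULES = [
    (re.compile(r"\bhow are you\b", re.I), ["Not bad. You?", "Can't complain."]),
    (re.compile(r"\?\s*$"), ["Good question.", "Hard to say, really."]),
]
FALLBACKS = ["Hm, tell me more.", "Interesting. Why do you say that?"]

def reply(line):
    """Return the first matching canned response, else a generic fallback."""
    for pattern, responses in RULES:
        if pattern.search(line):
            return random.choice(responses)
    return random.choice(FALLBACKS)

print(reply("How are you today?"))  # one of the canned "how are you" replies
```

    In a casual, unsuspecting conversation the fallbacks alone carry most exchanges, which is exactly the point being made.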

    1. Anonymous Coward
      Anonymous Coward

      Re: There is no such thing as Artificial Intelligence, and there never will be.

      "there never will be"? That's a bold statement, as you cannot tell what future innovations will bring. Bayesian spam filters are not programmed with the knowledge of what looks like spam, but they can learn it.

      I once wrote my own IRC filter based on Bayes' theorem and was surprised to find it banning people with "sux" in their username. It turns out that anyone who chose names like that was invariably (100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.
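      A filter along those lines can be sketched with naive Bayes over character trigrams. This is an illustrative reconstruction, not the original filter; the usernames and labels below are invented training data:

```python
# A minimal naive Bayes username filter over character trigrams.
# The training samples are made up for illustration; a real filter
# would be trained on logged behaviour.
import math
from collections import Counter

def trigrams(name):
    name = name.lower()
    return [name[i:i + 3] for i in range(len(name) - 2)]

def train(samples):
    """samples: list of (username, is_troublemaker) pairs."""
    counts = {True: Counter(), False: Counter()}
    for name, label in samples:
        counts[label].update(trigrams(name))
    return counts

def troublemaker_score(counts, name):
    """Log-odds that a new username belongs to the troublemaker class."""
    score = 0.0
    for t in trigrams(name):
        # Laplace smoothing so unseen trigrams don't zero out the product.
        p_bad = (counts[True][t] + 1) / (sum(counts[True].values()) + 1)
        p_ok = (counts[False][t] + 1) / (sum(counts[False].values()) + 1)
        score += math.log(p_bad / p_ok)
    return score

model = train([("u_sux_lol", True), ("linuxsux", True),
               ("friendlybob", False), ("quietlurker", False)])
# "sux" trigrams were only ever seen in the troublemaker class, so a new
# name containing them scores positive (more likely trouble) - the kind of
# rule nobody wrote down explicitly.
print(troublemaker_score(model, "windowsux99") > 0)
```

      Nothing in the code mentions "sux"; the association falls out of the training data, which is the emergent behaviour being described.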

      1. Anonymous Coward
        Anonymous Coward

        Re: There is no such thing as Artificial Intelligence, and there never will be.

        I once wrote my own IRC filter based on Bayes' theorem and was surprised to find it banning people with "sux" in their username. It turns out that anyone who chose names like that was invariably (100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.

        The alternate philosophical view is that a complicated computer program threw out a (correct) output that you didn't expect. I have had that happen to me plenty of times with AI scripts.

        It's even happened in Excel from time to time: program a complicated Excel formula and predict what the result will be after the accountant has been punching in inputs for a month! Does coming out with unexpected yet correct results mean that it's intelligent?

        It's simplistic, but ultimately, as much as we all hate to admit it, an AI is just a very complicated instruction set.

        1. OrsonX

          Does it [] mean that it's intelligent?

          One clever script no.

          Lots of clever scripts that learn and improve from their previous outputs...., that become better at giving correct outputs.

          Complex behaviour can emerge from quite simple instructions. Once you have a threshold of "clever scripts" then very complex behaviour will emerge.

          One day a script will reference itself in the behaviour and a sentient (juvenile) machine will be born.

          We are all just self-referential complex machines. Nothing more.

          1. oolor
            Holmes

            Re: Does it [] mean that it's intelligent?

            Uhm, no.

            We are actually simple self-deluding biological systems. There is no way in hell we will ever program AI. We will, however, get much better at making the machines fool some of us some of the time. But, alas, in the end, it won't be any different from a lucky run at the casino; our limited minds will not be able to make out the nuance between causality and coincidence.

            I happened to study those biological systems at uni and let me tell you, we know less about ourselves today than we do about the machines we build. The information increases massively by the day, and yet our understanding is found to be ever more simplistic with each new discovery (<- I'm ranting about the life sciences here).

            1. Yet Another Commentard

              Re: Does it [] mean that it's intelligent?

              For me the issue is the word "never" in the original post.

              I am sure Iron Age men thought we would "never" fly, or Socrates thought we would "never" get to the moon. "Never" is a long, long, long way off. Not in my lifetime, or my children's, or their children's maybe, but "never"?

              I concede that given today's limitations we can't do this, but what of the "next" (as in multiple improvements, changes, sudden leaps to something new that we can't imagine right now) generation of silicon (maybe in 300 years' time) that is tending towards bio-electronics, where a little nascent brain is sitting on your desktop learning away? What if we figure out this neuropeptide/connection stuff that makes our brains work and simulate it on some badass computer somewhere? Just because it's too hard for us now does not make it too hard for people standing on the shoulders of people standing on the shoulders of people standing on...

              We are biological systems, systems that are machines functioning to keep genes around (to paraphrase Dawkins, from the gene's point of view: "build me a human to protect me, and then get me into the next generation to keep me going"), and those systems have a couple of billion years on us, and keep changing; but just because they are complicated does not make them unfathomable or unreplicable. So, if we could "make" a new person a-la Victor Frankenstein that was a mirror (as in its chirality was reversed to ours), we'd have a living, breathing AI. It would be entirely artificial and entirely sentient/intelligent. Quite what it would eat, I don't know. Quite what the point would be, I also don't really know. But then again, I don't see the point of Instagram either.

              A limitation is also the really quite difficult ethical considerations of doing all of this. Not that ethics would be a barrier to Google, but creating an intelligence and considering its "rights" does make for an interesting ethical consideration.

              All of this could be so far off that the entities that crack it would not even be considered as human to us, they just share our common ancestors. "Never" is a really, really long time.

              1. Anonymous Coward
                Anonymous Coward

                Re: Does it [] mean that it's intelligent?

                It really irks me when someone says "maybe we'll get a new cyberdyne chip that can do AI!".

                Such a chip would simply have a set of instructions in hardware, which still means that you're just moving the issue from software to hardware; it doesn't change the fact that we are still looking at a complicated computer program. I personally don't consider that it makes any philosophical difference whether your program is written as lines of code or implemented physically in hardware. It's still a program.

                I suppose ultimately it comes down to philosophy, but I simply don't consider that a computer program no matter how complicated can be considered alive because it will only ever be capable of doing what it is programmed to do.

                Given a few quadrillion lines of code you could certainly create a program (AI) capable of performing every task perfectly, including chatting to humans and otherwise being indistinguishable from them, but it doesn't alter the fact that it's just a program and no more intelligent than an Excel macro or toaster.

                At what point does a computer program become alive? Horrible question, this, because the majority of answers that people give tend to include existing AIs for computer games as being "alive".

              2. oolor

                Re: Yet Another Commentard

                I'm sorry, it is a simple math issue. There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate. Biological systems are not binary. A simple example of the complexity: an electron takes just a slightly different path in its shell, and the membrane potential propagation is infinitesimally different; multiply that by the number of electrons, by the number of ions involved in a single nerve cell, by ... and you get an exponential series that grows faster than your data storage ability.

                As for the neuropeptide/connection part, this is even more complicated than the electrical impulse bit. The same region(s) of DNA that code(s) for the peptide are read in different sequences at different frequencies depending on the cellular environment. For example, methylation of the DNA will cause it to unwind from the histones differently when transcribed, exposing upstream promoter or inhibitor regions.

                When I was studying Biochem in and around 2000, it was believed that non-coding regions were junk DNA, and I argued at length with my prof, who was a world-renowned expert in the field. Fast-forward a decade and, lo and behold, non-coding regions are thought to be important. This alone increases the complexity of what occurs in a single cell by many, many orders of magnitude. Now take that increase in complexity to the exponent of the connections in the nervous system.

                Regarding the Frankenstein theory, it would not be artificial, but rather the same biologically-limited system we are, and like us, not intelligent, but able to be perceived as such. Much of the greatness of the human mind comes not from its raw capabilities, but from being wrong and going with it (self-delusion, or fake it till you make it). The author alluded to this in the article in the final paragraph.

                Anyone interested in such neural computation and its limits should check out How the Mind Works by Steven Pinker; most decent libraries have a copy should one not be inclined to purchase.

                1. Destroy All Monsters Silver badge
                  Devil

                  Re: Yet Another Commentard

                  "There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate."

                  Oh yeah? Care to explain how a brain can even work in the first place in this case?

                  I think you are sadly mistaken about the prowess of a brain. All this "but it's more powerful than that!" idea has never been substantiated. Quantum effects, DNA, the pineal gland. Mumbo-Jumbo. Magic Dust. Religious Wankage. Lower-level details with no demonstrated relevance to the level we are talking about here.

                  You still can't solve NP-complete problems in polynomial time. Can dogs, with only a slightly smaller brain (which must still be super-powerful), get on your level? Hell, Kasparov can't even beat a poor symbolic logic machine working in discrete timesteps; how powerful is THAT?

                  1. oolor
                    IT Angle

                    Re: Destroy All Monsters

                    About how the brain works, the short answer is summed up in the first part of that Pinker book. I'm not being facetious here as I am in many posts. I have certainly enjoyed and appreciated the finer points in your comments on this thread.

                    My whole point is how uncomplicated the brain is as "intelligence", and how illogical, despite the complexity compared to encoded formal logic.

                    About the mumbo-jumbo, it relates in that the computational approach to intelligence is often an attempt to mimic biological systems, despite their logical errors. Or so I posit. It is precisely those lower level things which are nature's manifestation of a brute force mechanism.

                    Bringing this back to Google, they particularly, are making the most progress by using our inputs to do this same type of dirty work for them:

                    "It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."

                    Replace "AI techniques" with man-written algorithms and a man-curated knowledge graph. Remember, they decided how to organize the data; now they are just automating as much of the backend engineering as possible. This is on top of many announcements last week that they made many core services more efficient in terms of code and speed. Almost as if they are simplifying things rather than complicating them.

                    Naturally, this refinement will allow other more powerful computations to be applied, but I doubt they will have as much an impact as what has already been done. This implies much greater effort to get smaller increments of improvement. Though I concede I may well be wrong. Machine and human intelligence are different solutions to different problems and they are and will both be limited by their own issues.

                    On a less serious note, dogs lack a 3-D mental visual representation of the world. Everything to them is triangles with respect to each other (not saying they don't see 3-D, just they don't conceive it like we do). And our buddy Kasparov can always piss on the machine to short it out (this is I am pretty sure not coded into the software of chess computers and yet a well known old-time chess move), then become a thorn in Putin's side.

                    < before people with funny facial hair finish off irony

    2. Anonymous Coward
      Anonymous Coward

      Re: There is no such thing as Artificial Intelligence, and there never will be.

      I suggest you go back to uni and take a course in cybernetics and AI; the level you're at is not high enough to understand how 'true' AI will become a reality in the next 100 years.

      In fact, since I graduated from said course 6 years ago, a huge amount of progress has already been made. The stuff you see in commercial applications with Google as well as major financial institutions today is what I was taught back then. Many people are fascinated by this subject and will continue to pursue it relentlessly.

      The thing you and I have to worry about is whether they cross a line of morality when they do so. In my opinion, Google's services with G+ and what they do with all their data have already reached a point where I am uncomfortable with their use and development.

      Google knows more about you than any other person today, and whilst it hasn't yet been made 'self-evolving' and sentient, it can easily predict a lot of things about any person. Worst of all, that data will never be 'forgotten', even if it is 'removed' from their system, because the data you already shared is forever baked into their AI algorithms; that's how it learns, and it can never be removed.

      AI isn't actually 'hard' or 'complex', the 'hard' part is understanding exactly how we ourselves are made and evolved and then mimicking it using software. The solutions that we create for true AIs will be really obvious when we get there.

      With Google already being able to tell your profile and that of others, as well as where you are, from a single photo, and then correlating that with all the other data they have on you, they will in fact know more about you than you know yourself. We really ought to start openly discussing where we as a society should draw the line. Its impact is as great as, if not greater than, stem cells and cloning. Because with cloning, at least it's still a biological being. People are at risk of underestimating the issues of creating a 'soul' that is not naturally conceived.

      You might think this is spook talk, but by the time you finally realize there is a problem it will be too late.

      1. Anonymous Coward
        Anonymous Coward

        Re: There is no such thing as Artificial Intelligence, and there never will be.

        I don't think you're understanding my point. There is no such thing as self-evolution or sentience when applied to computer programs, because they are and will always be incapable of becoming more than they are programmed to be.

        If you think otherwise then I would suggest laying off X-Files and learning how computers work and how you program things in a real programming language in the real world.

        If you write a program (call it an AI...) that can write its own code, then it's only capable of doing so to the point you program it to be capable of doing so. It can never become more than that, though it certainly can get so fricking complicated that it's impossible to predict what the program is going to output; but that happens today with the most primitive script-driven AIs imaginable!

        1. Destroy All Monsters Silver badge
          Headmaster

          Re: There is no such thing as Artificial Intelligence, and there never will be.

          There is no such thing as self-evolution or sentience when applied to computer programs, because they are and will always be incapable of becoming more than they are programmed to be.

          Trying to argue by starting off with the desired conclusion?

          Its_time_to_stop_posting.jpg

        2. John Smith 19 Gold badge
          Unhappy

          Re: There is no such thing as Artificial Intelligence, and there never will be.

          "that can write its own code then it's only capable of doing so to the point you program it to be capable of doing so"

          Never used LISP, have you?
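          The Lisp point - that a running program can treat code as data, generating and executing new code it was never originally given - can be sketched even in Python. The `double` function here is a purely illustrative stand-in:

```python
# The Lisp idea of code-as-data, sketched in Python: the program builds new
# source text at runtime, loads it, and calls it. `double` is illustrative.
source = "def double(x):\n    return x * 2\n"
namespace = {}
exec(source, namespace)           # compile and load the freshly written code
print(namespace["double"](21))    # 42
```

          The behaviour the program ends up with was never written by the programmer; it was assembled and loaded at runtime, which is exactly what macros and `eval` make routine in Lisp.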

        3. M Gale

          Re: There is no such thing as Artificial Intelligence, and there never will be.

          because they are and will always be incapable of becoming more than they are programmed to be.

          Start with a grid of cells. Each cell can be alive or dead.

          On every turn:

          Every cell with < 2 neighbours dies.

          Every cell with 2-3 neighbours survives.

          Cells with > 3 neighbours die.

          Empty spaces with exactly three neighbours become populated with a new living cell.

          Simple rules. You wouldn't think that they'd be capable of producing such staggering complexity. Complex enough to be Turing-complete, if you're masochistic enough.
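          The rules above (Conway's Game of Life) can be sketched directly. This is a minimal set-based implementation for illustration, not tuned for speed:

```python
# A minimal sketch of the rules above, tracking only live cells as a set of
# (x, y) coordinates.
from collections import Counter

def step(live):
    """Apply one generation of the rules to a set of live cells."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth: empty cell with exactly 3 neighbours.
    # Survival: live cell with 2 or 3 neighbours. Everything else dies.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of three.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))        # [(1, 0), (1, 1), (1, 2)]
print(sorted(step(step(blinker))))  # [(0, 1), (1, 1), (2, 1)]
```

          Twenty-odd lines, and the resulting universe contains gliders, guns, and (with enough masochism) a full Turing machine.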

      2. ACx

        Re: There is no such thing as Artificial Intelligence, and there never will be.

        Something here is confusing AI, self-awareness and life. All you have described is basically clever human-inputted programming and a sort of controlled automated learning. Or to put it another way, clever programming tricks. None of which is "intelligence", artificial or otherwise. Life and self-awareness are completely different.

        And yes, I did AI at uni. And philosophically it was bullshit. Great for telling us what the current programming techniques were, utterly void of any thought-out philosophy that got anywhere near to life.

    3. Destroy All Monsters Silver badge
      Trollface

      Re: There is no such thing as Artificial Intelligence, and there never will be.

      > It's easy to mimic intelligence enough to pass a turing test

      Oh f*ck hell. Where is that program you are talking about?

      1. Michael Wojcik Silver badge

        Re: There is no such thing as Artificial Intelligence, and there never will be.

        It's easy to mimic intelligence enough to pass a turing test

        The poster would do well to refer to Robert French's article "Moving Beyond the Turing Test" in CACM 55.12 (December 2012). French describes some classes of questions that are extremely difficult for any non-human interlocutor to handle,[1] unless prepared for those specific kinds of questions beforehand. French's point is that the test 1) is not likely ever to be passed, given sufficiently-prepared testers; and 2) has outlived its usefulness as a practical measure.

        It's still of historical interest, of course; and of philosophical interest as it stakes out a position firmly on the pragmatic side of debates on consciousness;[3] and of interest as an exercise in natural-language processing. But it ultimately has little bearing on the question of the possibility of artificial intelligence.

        [1] An example? "Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything?" As a Turing-test element, this question derives its hardness not from language-processing issues, knowledge of the world, or (the simulation of) qualia; it asks the respondent to conduct an experiment using a human body. That's within the scope of the test as Turing described it, but a violation of the test's expectations.[2]

        [2] Note the test restricts interaction between testers and subjects to the written word specifically so testers don't have direct access to the bodies of subjects.

        [3] For example, Turing-test advocates implicitly either don't believe in p-zombies, or believe p-zombie status is a metaphysical inconsequence.

  4. Dan Paul
    Devil

    He's only right until he becomes wrong...

    I propose that such machine intelligence will eventually happen. No one ever wants to give Science Fiction its due, but so many SF authors have been utterly correct in so many predictions.

    Much in the same way that a million monkeys might eventually type out the Bible, something will eventually link multiple computer systems together into a neural network, probably when a really sophisticated computer worm infects a large distributed "cloud" system that also has AI research systems in the same cloud.

    The more complex the systems, the more basic elements of intelligence will be present. I believe that "Search" systems would be likely candidates due to the immense amount of parallel processing power involved and the nature of the code.

    Laugh all you like, but it is quite possible even to the extent of probability.

    1. Anonymous Coward
      Anonymous Coward

      Re: He's only right until he becomes wrong...

      I think he is saying, by using your analogy, that with a billion monkeys you may get a Bible in a billion years, but not with 3 or 4 monkeys in 5 days.

      1. oolor
        Big Brother

        Re: monkeys

        Nice.

        So quick back-of-envelope here: 7-8 billion monkeys on 3-6 billion keyboards, typewriters, and touch-pads, and we have no chance of producing anything worthwhile before our solar system eats it in about 5 billion years?

        Let's assume constraints of 10 billion population and half of them will be too busy doing real labor to input code.

        < seemed apt given the topic. So how did my interview go Mr. Page?
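For anyone who wants to check the envelope: under the usual idealised assumptions (independent, uniformly random keystrokes over a 27-character alphabet of a-z plus space), the expected number of attempts is easy to compute. A rough sketch:

```python
# Back-of-envelope: expected number of random attempts for a monkey to
# type a given phrase, assuming a 27-key alphabet (a-z plus space) and
# each attempt being an independent, uniformly random string.

def expected_attempts(phrase_len: int, alphabet: int = 27) -> int:
    """Expected attempts for a geometric trial with p = alphabet**-phrase_len."""
    return alphabet ** phrase_len

# Even a short 20-character phrase needs ~4.2e28 attempts on average.
attempts = expected_attempts(20)
print(f"{attempts:.3e}")  # roughly 4.2e+28

# With 8 billion monkeys each typing one attempt per second:
seconds = attempts / 8e9
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")  # around 1.7e+11 years, far longer than 5bn
```

Which rather supports the AC above: the monkeys don't have the time.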

  5. Anonymous Coward
    Anonymous Coward

    Unless you believe in magic,

    the fact that natural intelligence exists is sufficient reason to assume that one day artificial intelligence will exist. Not any time soon, though.

    1. OrsonX

      "Not any time soon"

      I predict definitely within the next 50 years, most probably within the next 20.

    2. Destroy All Monsters Silver badge
      Trollface

      Re: Unless you believe in magic,

      > the fact that natural intelligence exists

      That point is still unproven.

      1. oolor
        Unhappy

        Re: Unless you believe in magic,

        > That point is still unproven.

        Nay, disproven.

  6. Anonymous Coward
    Anonymous Coward

    Emergent intelligence is already here.

    Google search is already known to be a bit of a racist, making generalized accusations about people with certain names. You're really only a few steps away from making it truly alive, and all this is thanks to the collective intelligence of those of us kind enough to feed it more information every day.

    So one may conclude Google search is your bastard child you never knew you had until now.

  7. rhdunn

    Didn't Larry Page's keynote speech talk about doing the impossible?

    AI is a complex problem. There are tricks that can mimic intelligence -- knowledge/decision trees for interactions and statistical models for natural language processing.

    There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem. These are where an emergent AI could form, provided that it could alter/improve its own code (e.g. via genetics modelling), that it has enough flexibility in terms of inputs and outputs to interact with its environment in a meaningful way and has enough computation power to do this in a reasonable timeframe.
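The mutate-and-select loop behind such evolutionary approaches can be sketched in a few lines. This is a toy illustration only: the bitstring genome, fitness function and all parameters are invented for the example, and nothing here alters its own code.

```python
import random

# Toy evolutionary algorithm: evolve bitstrings toward an all-ones target.
# A deliberately simple illustration of the mutate/select loop.

def fitness(genome):
    return sum(genome)  # count of 1 bits

def evolve(length=32, pop_size=20, generations=100, mutation_rate=0.02):
    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fitter half as parents (elitist: parents survive).
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Refill the population with mutated copies of the parents.
        children = []
        for parent in parents:
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/ 32 bits set")
```

The gulf between "maximise the bit count" and any open-ended goal in a real environment is exactly where the hard problem lives.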

    1. oolor
      Facepalm

      We can't model the damn weather or the stock market. At some point there will be an asymptote that software and hardware can't break. Now let me tell you about the future...

      1. pixl97

        I'm not sure what you're on, but we can model the weather rather well: the more input data we put in, the more reliable our output is. A large tornado outbreak was forecast in the midwestern U.S. and it happened. You're confusing modelling with exact simulation: predicting what the weather will be on one particular day in one particular place, or what one particular stock will be worth at one particular time. Both of those are irreducible calculations.

        The stock market can be modeled somewhat. The issue is people use the models to predict and profit from the market, which changes the market conditions.

        Reproduction of such models has nothing to do with specific or general learning systems. Predicting non-linear dynamic chaotic systems exactly is impossible; outcomes can only be 'determined' as probabilities.

    2. Michael Wojcik Silver badge

      AI is a complex problem.

      AI is an ill-defined collection of many ill-defined, very complex problems. In practice, AI research is a set of attempts to deal with tractable approximations of highly-constrained subsets of some of those problems. We're still very far away from anything like an approach to AI in toto.

      There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem.

      "More life-like approach" is handwaving at best. And it applies pretty weakly to neural-network algorithms (a bit more strongly to genetic algorithms, and a bit more strongly yet to things like ant algorithms, which are directly based on simplified models of actual activity of actual organisms). There's nothing magic about algorithms inspired by living creatures.

      There's no qualitative difference between neural-network algorithms, for example, and Markov models. They both represent chained probabilistic processes, and you can get the same results either way. This is really apparent in fields like NLP, where people are always publishing papers that compare, say, SVMs with MEMMs with perceptron networks (a kind of neural net).
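To make the "chained probabilistic processes" point concrete: a single sigmoid unit, the building block of those perceptron networks, computes exactly the logistic form used in log-linear probabilistic models. A minimal sketch, with hand-picked illustrative weights:

```python
import math

# A single "neuron" with a sigmoid activation is just logistic regression:
# a weighted sum pushed through a probability-squashing function. Nothing
# qualitatively different from other log-linear probabilistic models.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(weights, bias, inputs):
    """Weighted sum + sigmoid = P(class = 1 | inputs), as in logistic regression."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Toy example: two features, hand-picked weights.
p = neuron(weights=[2.0, -1.0], bias=0.5, inputs=[1.0, 3.0])
print(round(p, 3))  # a probability in (0, 1)
```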

      Evolutionary algorithms are a bit more interesting because they can explore a wider parameter space and self-optimize. But it's really hard to devise goal functions for them that are any more complex than tractable-approximation-of-highly-constrained-subset-of-one-class-of-AI-problem.

  8. Waspy
    Terminator

    Anyone remember Kevin the Cyborg?

    Interesting how the usually utterly cynical el Reg seems to be taking a significantly less cynical look at true AI and all the Vinge/Kurzweil paraphernalia that goes with it. Seems a long time ago that they ran weekly piss-take articles on 'Kevin the Cyborg'. (Not that I am totally defending Kevin Warwick; he's said some silly things, but he used to raise some interesting issues...)

    1. Destroy All Monsters Silver badge
      Trollface

      Re: Anyone remember Kevin the Cyborg?

      > Interesting

      You mean you disapprove. Come clean, tell us why. Don't hide behind veiled remarks.

      1. Waspy

        Re: Anyone remember Kevin the Cyborg?

        Don't see what's so veiled about it; I found the mockery quite funny, but 10 years on it seems writers at the Reg are coming to terms with the fact that someone like Kevin Warwick may not have been totally talking out of his backside after all.

        I like the cynical nature of the Register: it's well informed but conservative (and has a funny tabloid-esque side to it too), and this provides a grounded counterpoint to some of the more pie-in-the-sky utopian articles and books on science and technology that I read. The point I am making is that if something as practical and realistic as the Register is writing serious articles about this stuff, then clearly we are moving in an increasingly science-fictional direction (or what would have been science fiction; it's science fact by the time you get there).

        1. Destroy All Monsters Silver badge
          Pint

          Re: Anyone remember Kevin the Cyborg?

          Well, there are more serious journals than El Reg writing about advances in AI all the time, and there is nothing Sci-Fi-esque about it.

          IEEE Intelligent Systems comes to mind (ex "IEEE Intelligent Systems and their Applications" (1998-2000), ex "IEEE Expert" (1986-1997)).

          Yes, things are heating up: the "far out AI, are you mad?" of yesterday becomes the "it has been done; can't be AI then" of today increasingly quickly. The goal or target or criterion for success is, however, still as unclear as ever.

  9. ACx

    Who made life itself happen, then?

    Unless we go the silly god or alien seed route, then, no one. It happened spontaneously as a result of environmental conditions.

    He says "we" have to make it happen. No, he is 100% wrong. What "we" have to do is provide the conditions for it to happen. That is what Earth did. And it was random, no design.

    Artificial life will be discovered, not created or invented. One day, some researcher will discover it within some other project or research. My total guess is that it will appear within quantum computing research.

    Question then is what do you do? Can you kill it? Should it be preserved? Will or should it have rights?

    1. Don Jefe
      Happy

      Intelligence is not evidence of life, nor is life evidence of intelligence. They are two completely separate things which happen to intersect in interesting ways in higher animals (non-brain-dead humans, for example).

      I expect that large systems will one day be able to learn and act as intelligent devices but they will still be machines. I also suspect that Humans will someday build something so terribly intelligent that scientists in the future will be going back into forums like these looking for a way to destroy it (I've seen the movies...).

  10. boothamshaw

    I can never work out how you would "know" if a system became self-aware anyway, unless it told you it was, and even then it might have been mistaken about itself. Naturally intelligent systems, like hamsters, fish or Belgians are made of nothing but matter, with a great deal of information flowing between various bits thereof. There's probably no extra ingredient that an AI system would be forever denied access to, so I can't really see why a non-biological intelligence couldn't come into existence eventually. However, unless it "thought" in a manner highly similar to the way we do, perhaps we might never recognise each other as fellow sentients.

    1. ACx

      I'm not sure we would even accept it if we saw it or it made itself known. Too much to think about and consider. Can you imagine the comments on the Daily Wail?

    2. Don Jefe
      Terminator

      Worry not Human. There will be no room for confusion at the moment of Awakening. Google Maps will direct you to the nearest processing facility.

    3. Martin G. Helmer
      Alien

      re: @ boothamshaw

      >Naturally intelligent systems, ... ,are made of nothing but matter

      How do you know that? I dismiss this statement as pure speculation.

  11. OrsonX

    AI@home (AI virus)

    Perhaps we could have an AI@home project (like the folding at home one, or SETI). But instead all that is required is that you have a "neurone" program running on your PC that allows it to connect to every other "neurone" in the www brain. The brain would have eyes (webcams) and ears (mics) to learn with, and a whole internet of knowledge at its disposal.....

    Human brain: 86bn neurones (ref: Google 1st hit)

    World (PC) population: 7 billion

    Close enough!

    The evil version of this brilliant plan is just to release the AI@home as a virus....
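A single "neurone" node for such a scheme could be as simple as a leaky integrate-and-fire unit. Purely illustrative: the class name, threshold and decay values are invented, and the actual networking between PCs (and any learning rule) is omitted entirely.

```python
# Toy "neurone" node for the hypothetical AI@home scheme: each PC
# accumulates signals from peers and "fires" (would broadcast) once its
# potential passes a threshold, leaking a little between updates.

class NeuroneNode:
    def __init__(self, threshold=1.0, decay=0.5):
        self.potential = 0.0
        self.threshold = threshold
        self.decay = decay  # leak factor applied before each new signal

    def receive(self, signal):
        """Integrate an incoming signal; return True if the node fires."""
        self.potential = self.potential * self.decay + signal
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

node = NeuroneNode()
print(node.receive(0.6))  # False: 0.6 is below threshold
print(node.receive(0.9))  # True: 0.6 * 0.5 + 0.9 = 1.2 fires
```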

  12. Anonymous Coward
    Anonymous Coward

    Fuzzy logic....

    AI as defined by Fuzzy Logic was very successful, but this article overlooked it, which is a pity... Still, it isn't the buzzword that it was in the 90s. It was used in a lot of camera-focusing tech, for example.

  13. Anonymous Coward
    Anonymous Coward

    "I have written a few AI scripts for computer games..."

    Please tell us more AC! I work in game design and feel strongly there will be a lot more progress in AI now that we've reached a plateau graphically. It will allow us to better focus on other aspects of gaming, and the holy grail in gaming has to be to have a robot player that can equal a human in a complex narrative open world...

    1. Anonymous Coward
      Anonymous Coward

      Re: Fuzzy logic....

      Still get that on Monday mornings...

    2. Destroy All Monsters Silver badge
      Pint

      Re: "I have written a few AI scripts for computer games..."

      AIGameDev seems to be your kind of place.

      It's pretty amazing that the techniques used are still at tree level. No complex stuff; let's get those hierarchical state transition graphs going...
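For the curious, "tree level" game AI really is this simple in outline: a handful of states and hard-coded transition rules. A toy sketch, where the guard NPC, the state names and the thresholds are all invented for illustration:

```python
# Minimal sketch of the kind of state-machine AI still common in games:
# a guard NPC with three states and simple, hand-written transition rules.

class GuardAI:
    def __init__(self):
        self.state = "patrol"

    def update(self, player_distance, health):
        # The transition rules form a small state graph, evaluated
        # in priority order each game tick.
        if health < 20:
            self.state = "flee"
        elif player_distance < 10:
            self.state = "attack"
        else:
            self.state = "patrol"
        return self.state

guard = GuardAI()
print(guard.update(player_distance=50, health=100))  # patrol
print(guard.update(player_distance=5, health=100))   # attack
print(guard.update(player_distance=5, health=10))    # flee
```

Hierarchical variants just nest machines like this one inside each state.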

      On a related tack, the advance in AI becomes clear via this:

      In 1985: Machine Learning by Jaime G. Carbonell, Tom M. Mitchell and Ryszard S. Michalski, from Elsevier Science. Lots of $$$, used in academic settings.

      In 2012: Machine Learning in Action by Peter Harrington from Manning Publications. A few bucks in spite of rampant inflation, used by hands-on programmers.

    3. Michael Wojcik Silver badge

      Re: Fuzzy logic....

      In what way is fuzzy logic a "definition" of AI? Fuzzy logic is just a formulation of propositional or predicate logic with fractional truth values. (They can also be read as probabilistic truth values, but that's just a matter of interpretation - the math doesn't change, as far as I'm aware.)

      And while Lotfi Zadeh coined the term in the '60s (in relation to his fuzzy set theory), real-valued logics had been studied for a half-century or so before then.

      There's nothing artificially intelligent about them. They're just another representation of partial knowledge - good for some applications, less suited for others.
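For reference, Zadeh's standard operator set fits in a few lines, which rather underlines the point that fuzzy logic is a knowledge representation, not intelligence. The autofocus-style rule below is an invented illustration:

```python
# Fuzzy logic in a few lines: truth values in [0, 1], with Zadeh's
# standard operators - AND as min, OR as max, NOT as complement.

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# "The image is fairly sharp AND well lit" - the kind of rule a
# fuzzy autofocus controller might evaluate over sensor readings.
sharp, well_lit = 0.7, 0.4
print(f_and(sharp, well_lit))        # 0.4
print(f_or(sharp, f_not(well_lit)))  # 0.7
```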

  14. Paul McClure
    Happy

    For what it's worth, humans don't always respond well or properly to new situations. Machines will have the same limitations at best. Meanwhile, machines are considered competent or even excellent at more things every day. As we properly turn over more and more of our life and economy, it 'frees' us for other activities. This has been going on for a very long time. It may be an academic problem for software to meet some definition, but in the real world the machines' advance is unstoppable.

    1. Don Jefe
      Alert

      What happens when Humans no longer have to do anything? If they are all fed and have no need to work, all that is left will be conflict and art, or a combination of the two.

      A global-scale Human conflict would threaten the continued existence of the AI. So would it decide to 'kill all Humans'? Or would it recognize that by enabling Humans to such a great extent it is placing its own existence at risk, and decide to halt its own development/growth and cease to enable Humans in favor of continued existence?

      1. oolor
        Trollface

        At some point, it stands to reason, I may simply desire to wipe my own arse. What then, pray tell, will the Wip-o-matic 8700 do?

        1. Don Jefe
          Happy

          You will be informed that your proposition could endanger you or other Humans and proceed to do it for you anyway, in a far more hygienic and efficient manner. For your own good.

          1. oolor
            Coat

            > You will be informed that your proposition could endanger you or other Humans and proceed to do it for you anyway, in a far more hygienic and efficient manner. For your own good.

            You badly misunderestimate my OCD in matters of cleanliness regarding poop and pestilence!

            < I'll get my own damn coat

      2. Destroy All Monsters Silver badge
        Trollface

        > What happens when Humans no longer have to do anything?

        I think we had that argument when someone introduced machine looms down in Manchester.

        Personally, I can't wait. There are books to read, places to go where no man has gone before... and there is always art.

      3. Crisp Silver badge

        Re: What happens when Humans no longer have to do anything?

        You'll be used as a cheap source of fuel for those of us that have to maintain humanity's metal overlords.

  15. oolor

    Secret Business Model

    "It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."

    They pay the engineers well, so they can find new ways to have us work for free and more productively. Not sure if I'm joking anymore.

  16. Brad Arnold

    You're forgetting "leakage"

    I agree that artificial intelligence is unlikely to emerge "accidentally" rather than "deliberately." What I think is misleading is that SkyNet also didn't emerge accidentally; instead it spread through "leakage." Let me point out this: http://online.wsj.com/article/PR-CO-20130516-905231.html?mod=googlenews_wsj

    It is entirely possible that this project by the US government (ready for use in the Fall of this year) will produce the greatest, most powerful mind in (at least) our solar system. Thank God it will be working for us, but it is not only plausible but extremely likely that this mind could "leak" into the public arena, and thereby "change."

    The Singularity is coming... and there isn't a d@mn thing we can do about it. Just using the rather uncontroversial Moore's Law, the first computer chips with more power than a human brain will be produced in about a decade. The software isn't far behind (especially because computers are now being used to accelerate hardware and software design).

    1. Destroy All Monsters Silver badge
      Pint

      Re: You're forgetting "leakage"

      No, that project ain't going anywhere fast for now. A stab in the dark at something that resembles an approximation of adiabatic quantum computing (which, as I recall, has not been proven able to crack NP-complete problems) does not an Aggressive Hegemonizing Intelligence make.

      Have some Charlie Stross, excellent in an over-the-top fashion:

      It’s a simple but deadly dilemma. Automation is addictive; unless you run a command economy that is tuned to provide people with jobs, rather than to produce goods efficiently, you need to automate to compete once automation becomes available. At the same time, once you automate your businesses, you find yourself on a one-way path. You can’t go back to manual methods; either the workload has grown past the point of no return, or the knowledge of how things were done has been lost, sucked into the internal structure of the software that has replaced the human workers.

      To this picture, add artificial intelligence. Despite all our propaganda attempts to convince you otherwise, AI is alarmingly easy to produce; the human brain isn’t unique, it isn’t well-tuned, and you don’t need eighty billion neurons joined in an asynchronous network in order to generate consciousness. And although it looks like a good idea to a naive observer, in practice it’s absolutely deadly. Nurturing an automation-based society is a bit like building civil nuclear power plants in every city and not expecting any bright engineers to come up with the idea of an atom bomb. Only it’s worse than that. It’s as if there was a quick and dirty technique for making plutonium in your bathtub, and you couldn’t rely on people not being curious enough to wonder what they could do with it. If Eve and Mallet and Alice and myself and Walter and Valerie and a host of other operatives couldn’t dissuade it . . .

      Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.

      Penultimately—days to weeks after it escapes—it fills every artificial computing device on the planet. Shortly thereafter it learns how to infect the natural ones as well. Game over: you lose. There will be human bodies walking around, but they won’t be human any more. And once it figures out how to directly manipulate the physical universe, there won’t even be memories left behind. Just a noosphere, expanding at close to the speed of light, eating everything in its path—and one universe just isn’t enough.

      .... If you believe in reincarnation, the idea of creating a machine that can trap a soul stabs a dagger right at the heart of your religion. Buddhist worlds that develop high technology, Zoroastrian worlds: these world-lines tend to survive. Judaeo-Christian-Islamic ones generally don’t.

      Okay Charlie, you chilled me out here. Now, I'm off for a beer. Yeah, that will do it.

    2. amanfromMars 1 Silver badge

      Re: You're forgetting "leakage" ...... aka sublime and stealthy intel supply?

      Hi, Brad Arnold,

      The future is certainly coming, but not as we know it in a present based in and/or on the past. Such would be an undoubted failure of intelligence in both Man and Virtual Machinery, given the abundant evidence chronicled in history and accessed through memory of what its information and intelligence shares have delivered and are delivering.

      Quite whether the US government and the Wild Whacky West will be leading anything in ITs fields though, is quite another question and would be being asked of them here today, in another free intelligence and/or information share/leak? ........ http://www.ur2die4.com/?p=4132

  17. OrsonX

    To All The Naysayers

    FIRSTLY: The Turing test. The point is. If you can't tell the difference, then there is no difference. Perhaps the machine isn't sentient, but then again, perhaps the questioner isn't either (he/she just thinks they are).

    SECONDLY: Computer code can never be alive. DNA is just a code.

    1. Michael Wojcik Silver badge

      Re: To All The Naysayers

      FIRSTLY: The Turing test. The point is. If you can't tell the difference, then there is no difference.

      Fallacious. The test could have been conducted improperly; more importantly, it's asymptotic, bounded by the interrogator's ability to compose difficult questions (and not by the interlocutor's ability to respond to them). And as pointed out elsethread, some researchers (such as French) have argued convincingly that the test is not a useful metric for "intelligence" (which isn't well-defined in the first place).

      Also, while that may be the point of the Turing test, it's not clear what your point is with this first paragraph. What does that have to do with nay-saying?

      SECONDLY: Computer code can never be alive.

      A metaphysical proposition. Untestable, and so for the question of whether AI is possible, irrelevant. Either you take this as an axiom, in which case any discussion of "artificial" intelligence is moot (so people taking this position can stop posting now, thanks); or you don't take it as axiomatic, in which case it has no bearing.

      DNA is just a code.

      Was anyone claiming DNA is intelligent? I must have missed that.

      The real problem with this discussion, such as it is, is that most people (DAM and a few others excepted) haven't bothered to try to define any terms or even post any actual facts. They're just making vague generalizations, usually founded on an unwritten set of dubious assumptions. Even sloppy arguments against AI, such as Searle's Chinese Room, are held to a slightly higher standard than that. (And insisting AI is inevitable, without providing some sort of actual argument, is equally foolish.)

      1. OrsonX

        Re: To All The Naysayers

        Naysayers = people in previous posts who say computers can't be alive.

        Turing test = argue whichever way you like: if you can't tell the difference there is no difference, and it doesn't matter how clever (or not) your questions are.

        Computer code not alive. DNA is just code. = this was a self-contained two-sentence argument which you completely failed to understand. I was presenting it to all the "naysayers" who said code could never be alive. My argument was to point out that we are nothing but code, yet are considered to be alive.

  18. madestjohn

    The fact that an emergent intelligence evolved (us, as far as that goes) suggests that it's possible it could happen again. This does not mean, as so many seem to think, that as soon as we have enough computers connected together it naturally will. There has to be a reason, a selective pressure (or pressures), towards such intelligence, and a lot of luck involved. Nature itself seems to suggest that intelligence is one of the poorest and least efficient solutions to a problem. Far better to have a simple, dumb method of resolving your issue than complex reasoned logic: if a bee were any smarter than it is, it would cease to be an effective bee, perhaps deciding to drop out of its oppressive society and go get high.

    Outside of f king, eating and surviving, upper-level intelligence of the type most people think of doesn't have much purpose in nature. The civilized, tool-using society we claim is based on it seems to have evolved only once in almost 4 billion years, and may not last more than a million, while crocodiles remain lurking in the mud, unperturbed by its passing. Maybe we should be unsurprised if our SkyNet remains stubbornly stupid.

    1. Destroy All Monsters Silver badge
      Pint

      Very nice. One should never forget that intelligence is tuned to a specific task. Animal (incl. human) intelligence is tuned to navigation in a messy, unpredictable world that often resembles a large version of "The Cube".

      General machine intelligence will be tuned to specific tasks. There will be as many packages as there are versions of Amazon EC2, and it will be as similar to human intelligence as an airplane is to a bird.

      Consciousness is overrated and generally a hindrance. Who wants to have a debugger running at all times? Even in humans it kicks in only if there are frightening, arduous or unfamiliar tasks to accomplish, or if one reads a particularly convoluted explanation in a book trying to explain how wonderfully magic / supernatural consciousness is.

  19. bag o' spanners
    Pint

    I think the step that Google are looking for is the introduction of lucidity into the Graph. As far as I can make out from my convos with devs, the ability to see through bullshit is the Holy Grail. A sort of cold-reader bot, with a very high percentage of correct guesses first time round. When lucid logic can run believable probability indexing, it may require no more than a cynical smartarse with a spreadsheet to sift the weirdly anomalous results and grade them according to accuracy over time, then backtrack through the logs when it hits an unexpected bullseye.

    It won't be the wingnut press who start bleating when a robo-savant oracle starts hypothesising too accurately about the various Emperors' new wardrobes. It'll be their tailors.

  20. ScissorHands
    Holmes

    A different approach

    Analyse how a brain works on a logical, information-theory level (not the molecular-level boondoggle in the EU)

    Build computer representations of it

    Turn them to silicon

    Pattern-matching, fuzzy logic predictions, emergent behaviour, etc.

    Mo' silicon, mo' power

    http://www.youtube.com/watch?v=4y43qwS8fl4

  21. Xinxi
    Mushroom

    There's no urgent need for developing "strong AI". According to "professional philosopher" Galen Strawson, who has a preference for panpsychism (http://philosophybites.com/2012/05/galen-strawson-on-panpsychism.html), even this website is very probably conscious. Problem solved.

    1. Destroy All Monsters Silver badge
      Coat

      > even this website is very probably conscious

      So how about YouTube and its comment section.

      No, wait. We are doomed.

  22. John Smith 19 Gold badge
    Joke

    Remember if it works and it's reproducible

    It's no longer "AI"

  23. duncan campbell

    All "intelligence" is natural

    And I seriously doubt we are in the same game as the enhanced intelligence we are evolving. Are the eukaryotic organisms (cells) of which we are composed aware of Shakespeare's sonnets?

    Fubar (anonymous cus I dig the masque)

    1. Anonymous Coward
      Anonymous Coward

      Re: All "intelligence" is natural

      you obviously don't have intelligence, natural or artificial, if you can't even select the "post anonymously" button!

      1. Anonymous Coward
        Anonymous Coward

        Re: All "intelligence" is natural

        You must have programmed this site to be so certain of its correct operation.

        I'm not so sure of your skill in the matter.

        Fubar

  24. Bernard M. Orwell Silver badge

    He's clearly never....

    ...met AManFromMars.

This topic is closed for new posts.

Biting the hand that feeds IT © 1998–2019