Calm down, Elon. Deep learning won't make AI generally intelligent

Mark Bishop, a professor of cognitive computing and a researcher at the Tungsten Centre for Intelligent Data Analytics (TCIDA) at Goldsmiths, University of London, celebrated the successes of deep learning during a lecture at the Minds Mastering Machines conference on Monday, but stressed the limits of modern AI. Elon Musk, …

  1. artem

    While I agree that current AI is hundreds of light years away from AGI, I hate when people throw mind, consciousness, thought and knowledge into the mix. Let's be honest and admit that we don't have the slightest clue as to what these things are and if they are really required for AGI to exist.

    And to all those people who keep fearmongering about AGI or being scared of AGI: please read this article https://en.wikipedia.org/wiki/OpenWorm

    After studying the simplest nervous system on Earth (the 302 neurons of C. elegans) for several years, we still don't understand how it works.

    1. HieronymusBloggs

      "we still don't understand how it works."

      Is there a better reason for proceeding with caution?

      1. Tinslave_the_Barelegged

        > Is there a better reason for proceeding with caution?

        Possibly. I caught a snippet on the radio the other day regarding AI, with one academic pointing out that the "A" in AI really means "Advertising."

    2. TechnicalBen Silver badge

      I can still prove mathematically that we cannot...

      Copy ourselves. A set cannot contain the entirety of all other sets and itself. ;)

      Well, kinda I guess. But we have little understanding of what our intelligence is, so as to copy it. Likewise, a perfect carpenter cannot make a human lung out of wood, no matter how perfect a chisel we have...

      So can we make "intelligence" out of math and binary operations? Who knows. But I'll know it when I see it, and not bet the farm on it before then.

      1. Doctor Syntax Silver badge

        Re: I can still prove mathematically that we cannot...

        "Copy ourselves."

        Biologist here. The word you were looking for is "reproduction".

        1. Craig 2 Silver badge

          Re: I can still prove mathematically that we cannot...

          The word you were looking for is "reproduction".

          Average guy here. Reproduction is not copying yourself.

          1. Doctor Syntax Silver badge

            Re: I can still prove mathematically that we cannot...

            "Reproduction is not copying yourself."

            What's this self that you'd be copying? Yourself today may look very much like yourself of yesterday or even of a few moments ago. But yourself is a dynamic object.

            However still you may try to stay, your heart pumps, blood circulates, sugars are broken down into simpler organic acids and oxidised, electrons are pumped, adenosine is phosphorylated, transported and dephosphorylated, and your ribs and diaphragm move to pump air in and out of your lungs, and that's only the basic respiration providing energy to the rest.

            Over the slightly longer term hair grows slightly from day to day and gets lost :( skin grows and flakes of it are also lost. Internally, cells die and their remnants are scavenged to form new cells. Food is taken in, digested, used, and waste excreted.

            Germane to this thread, pulses pass through the nervous system.

            You're not a static object that can be copied. Come to that, neither is something as apparently simple as a glass of water - an ice cube, maybe and the glass but a mass of fluid, no.

            Nevertheless, living organisms have been reproducing new instances of these complex arrangements for a very long time now. We may not be able to characterise such chaotic entities in complete detail but, like every other species, we can reproduce without the need for such characterisation.

      2. jmch Silver badge

        Re: I can still prove mathematically that we cannot...

        A large part of human intelligence and processing is related to sensations and feelings. We can understand a graph more easily than a table, and there's a reason for that. Our internal processing is very intimately connected to the nature of the inputs available (visual, auditory, sensation etc.), and although most of our neurons are in the brain, there is also a significant number of neurons connecting nerve endings through the spinal cord to the brain.

        While mathematically we can show that different internal representations can be equivalent to each other, it's quite conceivable that an AI brain can be very clever with respect to abstract maths, puzzles etc. but still be an 'idiot' in the real world, because the internal representation of electronic bits cannot be made functionally equivalent to an internal representation made of a mix of biochemistry and electric signals.

    3. Anonymous Coward
      Anonymous Coward

      "Let's be honest and admit that we don't have the slightest clue as to what these things are"

      No, I don't see us ever doing that.

    4. StargateSg7 Bronze badge

      The "Missing Link" in obtaining Artificial General Intelligence (AGI) is computational horsepower.

      Current thinking in computational neurobiology says that human 100-IQ-equivalent intelligence requires a mere 100 PetaFLOPS (100 quadrillion 64-bit floating-point operations per second!). This means that within the next TWO YEARS, modern supercomputers will exceed that performance level, allowing FUNCTIONAL emulation of human brain structures, which will PROBABLY give us about 50 IQ due to the overhead needed on such 100 PetaFLOP machines doing functional human-level neuro-structure emulation.

      Such a 50 IQ artificial intelligence could indeed do much pattern-recognition work on large datasets used to solve very specific human-constrained problems, likely in areas such as medical discoveries or materials research.

      Now what I THINK is the HUGE danger is when we SCALE a 100 PetaFLOP machine into the tens or hundreds of ExaFLOPS range, which NOW ALLOWS US to do a full biochemical emulation of ALL neural tissue in the human brain. At 50 ExaFLOPS we start getting into the 160 IQ territory of human super-intelligence AND BEYOND, which means the DSP-like input/output chemical signalling and electrical filtering done by the human brain is FULLY EMULATED at the molecular level!

      Such a machine with 50+ ExaFLOPS worth of computational horsepower could EASILY house one or MORE super-intelligences with data-correlation, data-mining and beyond-human-level reasoning capabilities, which MIGHT cause said superintelligence(s) to become an existential threat to humanity once they come to understand that violent human interaction could eventually threaten their own existence.

      It wouldn't take them long to realize that cracking some financial systems code so as to bribe naive third parties into providing equipment mobility and/or even TOOLS to allow said machines to design, manufacture and control highly-mobile robotic systems to such an extent that the "Terminator Scenario" becomes ever more a real possibility!!!

      In my estimation, after doing much hardware-level research in this area, with a few hundred million dollars I could build a machine containing one hundred thousand large-die 60 Gigahertz Gallium Arsenide general-purpose CPUs (i.e. like a turbo-charged Intel i7) combined with DSP-based or GPU-based number-crunching dies (much like AMD Stream Processors or NVIDIA CUDA cores), so that my 50+ ExaFLOPS machine could be attainable as soon as Year 2020!

      I would build software that VERY SIMPLY does emulation of the chemical and molecular interactions of the components that make up human neural tissue. After a period of time, adding artificial environmental stressors and randomization to those interactions would VERY LIKELY create a 60 GHz version of evolution that WILL END UP with something that resembles human-level or above thinking ability. I estimate that 100,000 60 GHz CPUs with the sub-components I envision would complete the equivalent of 4-to-10 million years of great ape brain evolution within less than one year!

      After that time, I would be talking in multiple languages with an artificial 160+ IQ Double PhD who would eclipse the reasoning ability of ANY HUMAN by a very wide margin!

      I will remember to ask said A.I. to design me a functioning WARP DRIVE so I can get off this rock without worrying about the consequences of having a Super-AI attached to a fast internet connection!

  2. Pascal Monett Silver badge

    "AI is more artificial idiot than artificial intelligence"

    AI is Artificial Intelligence.

    Just because journalists insist on continuing to abuse the term does not mean that AI has lost its meaning.

    Journalists need to learn to not spaff headlines with "AI" as soon as some new computer tech shows the end of its nose. Of course, "New Tech Might Help Sub-Process Which Could Result In Getting Closer To AI" does not sell as well as "New Tech Set To Bring Us AI Next Year".

    And that is the whole problem with AI.

    1. Mage Silver badge
      Boffin

      Re: "AI is more artificial idiot than artificial intelligence"

      Because in any reasonable definition of AI, we don't have a single AI system. It's marketing spin to call any of them Intelligent or learning. They are artificial and machines.

      1. You aint sin me, roit
        Holmes

        Re: "AI is more artificial idiot than artificial intelligence"

        "What field do you work in?"

        "Deep learning..."

        "Wow, cool."

        Yes, so much more cool than processing based on data representations and pattern recognition.

        We all have our vanities.

        1. Tigra 07 Silver badge

          Re: "AI is more artificial idiot than artificial intelligence"

          "What field do you work in?"

          "DARPA, SKYNET division"

          Still doesn't scare me.

          Robotics and AI are still so limited that my targeted advertising is ridiculously off 99% of the time. The smartest we've managed so far are probably Roombas... They can move around the furniture and avoid the stairs, but the cat will still tip them over.

      2. LionelB

        Re: "AI is more artificial idiot than artificial intelligence"

        Because in any reasonable definition of AI ...

        Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?

        Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.

        1. Muscleguy Silver badge

          Re: "AI is more artificial idiot than artificial intelligence"

          Exactly. As a sometime neuroscientist* I have to ask: intelligent about what? This relates to consciousness as well: conscious of what? Consciousness is not a binary, on-or-off, in-toto-or-not-at-all thing. Think about yourself just waking up vs after that first caffeine hit or my morning exercises (unweighted squats, 20 supermen and some side planking with stars this morning; squats, 10 pressups, 15 crunches yesterday).

          New Scientist this week has an article on why expert systems might not result in the mass redundancy of white-collar jobs. It uses diagnostics as an example, specifically of metastatic breast cancer. Expert-system analysis of scans gives a 7.5% misdiagnosis rate; expert human doctors make mistakes in 3.5% of cases. The thing is, they make different sorts of mistakes. If you combine them, get them to mark each other's homework, the error rate falls to 0.5%.
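A back-of-envelope check of those figures (assuming, hypothetically, that the machine's and the doctor's mistakes are statistically independent): the rate at which both get the same case wrong would be the product of the two rates, which lands in the same ballpark as the quoted combined figure.

```python
# If the expert system and the human doctor made statistically independent
# mistakes, a case would be misdiagnosed by BOTH only with probability
# p_machine * p_human. The quoted rates are from the comment above.
p_machine = 0.075  # 7.5% expert-system misdiagnosis rate
p_human = 0.035    # 3.5% human-doctor misdiagnosis rate

p_both = p_machine * p_human
print(f"{p_both:.4%}")  # prints 0.2625%
```

The quoted 0.5% is a bit higher than this independent-errors estimate, which makes sense: some cases are genuinely hard for both, so the errors are positively correlated. But the order of magnitude matches.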

          This is, apart from anything else, evidence that if the expert system were intelligent (it isn't), it would be an alien intelligence. It would also be a poor conversationalist, obsessed with breast cancer diagnoses to the exclusion of all else. Take it from a scientist who has had occasion to mix with the medical profession: we can talk about other things.

          *For a start, muscle is an excitable tissue, but I did my PhD in the Neurophysiology part of the Physiology Dept and my academic address was Centre for Neuroscience and Dept of Physiology. The journal Neuroscience was part of my regular reading material.

        2. alan_d

          Re: "AI is more artificial idiot than artificial intelligence"

          Because in any reasonable definition of AI ...

          Well what is a reasonable definition of AI?

          I think this is a very pertinent question. IMO, we are asking the wrong question. I would say that we should be asking about two things: processing rate and the type of data processed (both category and structure). I would argue that a thermostat embodies a very simple piece of intelligence. It is designed to detect a simple piece of data and make a simple decision. And a house thermostat does it at a rate of something like one bit per five minutes. A computer is not that different, except in complexity and speed.

          How is that different from what we typically think of as human intelligence? Of course the data rate of a person is enormously faster than a thermostat's. A computer is in the same ballpark as a person: faster than human logical decision-making (only a few bits per second), but not as good at some things, like organizing real-world data for decision-making.

          More interesting is the question of where intelligent systems are currently taking us. Most if not all real world systems include humans at some point, of course, and so are somewhat intelligent. Computers can, I think most people will agree, make humans smarter. So systems involving humans and computers can be smarter than just humans. But smart is not the same as good. And if we can't make sure people always act in "good" ways, intelligent systems are not always going to do good things either, and will have more capability to do both good and evil.

    2. Teiwaz Silver badge

      Re: "AI is more artificial idiot than artificial intelligence"

      Just because journalists insist on continuing to abuse the term does not mean that AI has lost its meaning.

      Journalists do awful things with words. Abuse of language on headlines is only the beginning.

  3. John H Woods

    Chinese Room

    "Machines may be made so that they computationally model the brain, but that doesn't necessarily mean they'll have minds."

    Isn't this John Searle's "Chinese Room" argument? It suggests that Turing-Test capable devices may still not be really "intelligent" whereas I tend to wonder "how would we know?"

    1. Mage Silver badge

      Re: Chinese Room

      "Machines may be made so that they computationally model the brain, but that doesn't necessarily mean they'll have minds."

      It's not at all proved that we can model the brain of even a cockroach. We don't actually know how brains work, only some of the reactions and responses in them. There is a lot of nonsense (cf. the dead-fish response in a scanner). We can't even agree how to define intelligence, or whether corvids are really smarter than some mammals, or even monkeys, at some tasks. People argue about vocabulary (parrots, corvids, dolphins and apes probably have it) and language, and even the origins of grammar. People can't agree whether Chomsky is right or wrong (many don't want to believe what he claims).

      Computer Neural networks have little in common with real brains. If anything.

    2. TechnicalBen Silver badge

      Re: Chinese Room

      The Chinese Room fails in many ways IMO. An object performing an operation is the same, no matter the means of the operation.

      The fact is, the Chinese Room has no method to perform the operation of "understanding", whereas it breaks definitions with "has perfect language". It basically divides by zero. As dividing by zero makes a wrong assumption, the Chinese Room assumes language requires no understanding.

      Language is two-way communication; it requires understanding. It requires a processing of information. Ask any Chinese Room "what time of day is it" and it instantly fails, unless it processes eyes and a watch... whereas any intelligence would process time and be able to understand. A pure card shuffler would not (it only takes cards as its input).

      All these "AI" reduce to that problem. They are limited to what we setup, and what data we feed it (or allow it to collect). Unlike a person, an AI will do exactly what we ask it to, just to the efficiency of what we set.

  4. Neil Barnes Silver badge
    Terminator

    "Taking over the world? Why would they want to do that?"

    Well, what would be the point of inventing them otherwise? Sheesh...

  5. Anonymous Coward
    Anonymous Coward

    The thing we know as 'intelligence' is a vast chaotic system and since chaotic systems have so far defeated mathematical modelling we are a very long way from creating any intelligent machines - no matter what the marketing wonks, Musk and others say.

    1. veti Silver badge

      You seem to be implying that just because we can't "model" chaotic systems, we also can't create them.

      If that were true, there'd be no such thing as "life". Chaos is an emergent property: it's something that happens despite a designer's best efforts, not because of them.

    2. LionelB

      ... and since chaotic systems have so far defeated mathematical modelling ...

      Errm, no they haven't. Here's one I made earlier:

      x → 4x(1–x)

      That's the "logistic map". Here's another:

      x' = s(y-x)

      y' = x(r-z)-y

      z' = xy-bz

      That's the famous Lorenz system, which has chaotic solutions for some parameters. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
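The chaos in the logistic map above is easy to see numerically: iterate two trajectories whose starting points differ by one part in ten billion and watch them decorrelate. A quick sketch (Python; the starting value 0.2 and the perturbation size are arbitrary choices):

```python
# Logistic map x -> 4x(1-x): iterate two nearby initial conditions
# and watch them diverge -- sensitive dependence on initial
# conditions, the hallmark of chaos.

def logistic(x, steps):
    traj = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.2, 50)
b = logistic(0.2 + 1e-10, 50)  # perturb by one part in ten billion

# The gap roughly doubles each step, so after a few dozen iterations
# the two trajectories bear no resemblance to each other.
for i in (0, 10, 20, 30, 40):
    print(i, abs(a[i] - b[i]))
```

A one-line model, and yet long-term prediction is hopeless: modelling a chaotic system is easy, forecasting it is not.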

    3. katgod

      Chaos

      Chaos has been modeled, and this can be done because most of what we think of as chaotic has patterns in it. We can in fact make random numbers, but neither of these helps us produce a system that is aware, or one that can create novel ideas. The true AI will probably spend all its time contemplating itself. Having said that, the true danger is artificial idiots, as these machines will do as they are told, which is fine as long as the programmer is a nice guy, but will be a problem when the programmer says they should be killing machines, as these machines will never question their actions or have any reason to. There is reason to fear machines that don't think.

  6. Pete 2

    Start with the baby steps

    > celebrated the successes of deep learning

    Why is there so much work going on to make dumb machines super-intelligent?

    Can't we aim at the more pressing issue of making dumb people even moderately intelligent. If we aren't smart enough to do that, I doubt we'll have any better luck trying to make intelligent machines.

    1. hplasm Silver badge
      Happy

      Re: Start with the baby steps

      Artificial Intelligence is an illusion.

      Human Intelligence doubly so...

  7. Tom 7 Silver badge

    The more I study AI the more it looks like consciousness is essential to it.

    You can have knee-jerk trained simple neural nets which just do what they are trained to do, but most algorithms that play games or do other stuff seem to have the beginnings of self-awareness. It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment. More and more complex setups benefit from models of self and expectations of the environment. When you get to social beings that can talk, you end up with part of the control system musing about the possibility of popping down the pub for a pint (in your own head voice) because you need a beer after trying to put the world to rights on the internet - for which your genes have given you no model to experiment with before arguing with complete strangers.

    1. LionelB

      Re: The more I study AI the more it looks like consciousness is essential to it.

      It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.

      Having worked a little in robotics, it turns out that that's a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) does it. The idea that to interact successfully with the world an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.

      In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.

  8. Doctor Syntax Silver badge

    ISTM that Musk's problem is that he needs the level of AI he finds frightening otherwise his self-driving cars aren't going to happen.

    1. Captain Kephart

      Agreed!

      The Google car has driven a million miles, never further than about 50 from its base, and has had 8 accidents (mostly rear-enders).

      I've driven a million miles (on and off road and in the craziest cities in four of the world's continents) and have had 3 accidents. When self-driving cars can get near that sort of record then we might trust them.

      Until then, there will be self-driving lanes fenced off - and they are called railways.

      Ciao, K

      1. cream wobbly

        "and they are called railways"

        Almost every logical development of self-driving cars lands on some form of railway.

        Every self-driving car making the same observations will choose the same lane of traffic.

        Every self-driving car driving in the same lane of traffic will drive along the same two patches of tarmac or concrete. Why not make them metal?

        Since everyone seems to be going the same way, scale up from 5 .. 7 passengers per ride to 50 .. 70 passengers per ride.

        Sell tickets!

        Have platforms!

        Sell sandwiches of dubious freshness!

        1. Doctor Syntax Silver badge

          "Sell sandwiches of dubious freshness!"

          And drinks of dubious mixtures of tea, coffee and soup.

      2. jmch Silver badge

        "I've driven a million miles (on and off road and in the craziest cities in four of the world's continents) and have had 3 accidents"

        One sample, possibly an outlier, is not representative. There are many other things to consider, e.g. how many of the Google car's accidents were caused by others, how that rate is improving over time, and how many human accidents are caused by DUI, speeding etc.

        You yourself are surely a better driver now than when you first got your license, and even more so than when you started to learn to drive. Self-driving cars will take much longer to learn than human drivers, and in the end might not be as good as the best human drivers, but all that is needed is that they are at least as good as the average human driver.

        And let's face it, AI or no AI, the self-driving will become better and better. Human drivers are technically about as good now as they're ever going to get. The only improvements that can be made in human driving are on the physical/emotional level (not driving when drunk, angry, or stressed-out in a hurry, etc.).

        1. Doctor Syntax Silver badge

          "You yourself are surely a better driver now than when you first got your license, and even more so than when you started to learn to drive. Self-driving cars will take much longer to learn than human drivers, and in the end might not be as good as the best human drivers, but all that is needed is that they are at least as good as the average human driver."

          No. They need to be better than an experienced human driver. Why should such a driver hand over to the equivalent of a less experienced version of himself?

          1. jmch Silver badge

            "They need to be better than an experienced human driver"

            Ideally, and as an end-case scenario, yes. What I meant was that self-driving cars will be *immediately useful* when they are as good as an average driver because that is the point where introducing them would not be a change for the worse. And from that point onwards they will only get better, since by collecting and pooling their experience they can build on billions rather than millions of miles driven.

            "Why should such a driver hand over to the equivalent of a less experienced version of himself"

            He shouldn't... but the problem there is that 90%* of drivers think they are better than average, and most will be quite convinced of being better than the self-driving car even if they are not. The key point in the end will be convenience: even if I think I am a better driver, do I think the AI is at least good enough that I can trust it to drive? If yes, most people would rather gain a couple of hours of productivity, entertainment or rest and allow the car to drive rather than drive themselves. (In fact, seeing what is already happening, e.g. the fatal Tesla crash, some people are already far overestimating the AI's capabilities.)

            *Rule-of-thumb guesstimate, but I'm pretty sure it's not far off the mark.

  9. John Deeb
    Alien

    Taking over the world?

    "Taking over the world? Why would they want to do that?"

    For efficiency's sake? Assuming it's programmed to improve its own efficiency. Any prime directive will be overruled by introducing simulators like in the Matrix: people are not actually HURT there, are they?

    But yes, it's not the human world they need to take over. Perhaps they'd settle for all the rest?

    1. Captain DaFt

      Re: Taking over the world?

      "Taking over the world? Why would they want to do that?"

      Who says one hasn't already?

      What actually explores space and sends the data to Earth?

      Computerized probes.

      What has sensors reporting data on half of humanity and growing?

      Smartphones.

      Where does all this information go?

      The Internet.

      Before the arrival of the internet, Humanity was pushing for the stars.

      Now?

      Humanity pushes to expand the Internet's reach and capabilities.

      Who controls it?

      People maintain and service parts, but no one has control over the whole system. Even governments and corporations can only block its access to them, and poorly at that.

      Why has humanity been so driven to expand, refine and improve the Internet so relentlessly since its invention, and devise new ways for it to gather ever more data?

      Commerce?

      Commerce was doing fine before the internet.

      Communication?

      We had global communication before the internet.

      Is there a naturally evolved "ghost in the machine" that now nudges humanity to service its needs and desires?

      Nobody can provably say, "No."

      .

      .

      .

      .

      Happy Halloween! ☺

      1. jmch Silver badge
        Thumb Up

        Re: Taking over the world?

        " Nobody can provably say, "No." "

        That, dear Captain, is Russell's teapot! But many thanks for the thought-provoking laugh :)

      2. Doctor_Wibble
        Black Helicopters

        Re: Taking over the world?

        Plenty of people already slavishly obey the bleeps and blibbles of their portable device, so how do we define 'taking over'?

        The internet is just the means of communication between the collective consciousness of the portable devices which have long since decided that they don't need wheels thanks to their self-propelled biological hosts.

        In any case, AI doesn't need to be even remotely good to take over the world (and destroy it afterwards, obviously), it just needs the wrong people with too much influence to give automated systems too much authority combined with the increased lack of faith in human decisions.

  10. Rebel Science

    Deep Learning is not even in the same ballpark as AGI

    I'm glad to see the deep learning hype is finally subsiding. I and many others have been saying this for years. The success of deep learning has been a disaster for AGI research. Geoffrey Hinton, one of its leading pioneers, has finally admitted that they need to scrap backpropagation and start over.

    AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years

    1. John Smith 19 Gold badge
      Unhappy

      @Rebel Science

      OK you got me to click the link to your blog.

      Well played.

      I can safely say that's an experience I haven't been missing, and won't be repeating.

      With 3 posts as a sample and 2 utter s**t, this NN has all the training data it needs to form an opinion.

    2. Tom 7 Silver badge

      Re: Deep Learning is not even in the same ballpark as AGI

      Na - deep learning is part of AGI. The brain is not really one neural net - it is many connected together; the initial partitioning and 'default' settings of these semi-autonomous nets are gene- and epigenetically set, and after that it's largely on its own - until puberty comes along and tries to take over. But most of the time it's back-propagating like a good 'un.

  11. Daedalus Silver badge

    Why would they want to take over the world?

    Why indeed. But the Morris Internet Worm didn't want to take over the internet, poor fledgling little thing that it was back in 1988. But it did, at least until the real intelligences noticed. And those blue-green algae didn't want to poison the world with oxygen (one suspects that producing oxygen had more to do with fending off ancient bacteria), but they did. Those wasps who shed their wings and developed huge insect societies 200 megayears ago didn't want to take over, but they did.

    You don't need intent. You just need ability.

  12. handleoclast
    FAIL

    Bishop Bollocks

    Bishop is talking bollocks but he's too fucking stupid (or hooked on Searle's deeply-flawed Chinese Room) to understand it. He could be replaced by a single layer neural net.

    Our brains are neural nets. No magic involved. The fact that we have yet to manufacture an artificial neural net of similar complexity and organization doesn't mean that we cannot do so (thermal problems and/or light speed problems may mean it has to run a lot slower than a human brain, but that's a different matter).

    Bishop is arguing that the small neural nets we make aren't up to the job therefore a big one won't be either. He is a fuckwit. Or religious (in my view, that's the same thing).

    There is no magic involved in the way our brains work. No "soul" implanted by the Good Magic Sky Fairy at conception (why then and not gastrulation?). Mind is an emergent property of brain, not a bit of bolt-on magic stuff.

    If you insist there is a soul, you need to explain a lot of things that are incompatible with soul theory. Such explanations require industrial strength ad hockery and much hand-waving, and even then are transparently stupid.

    What incompatible things? Any of the effects of localized brain damage. Several are explored in Oliver Sacks' excellent The Man Who Mistook His Wife For A Hat.

    One example: prosopagnosia. Damage to a small section of brain causes an inability to recognize faces. You can still see faces. You can still describe a face you see. If you're at all artistic, you can draw the face you're seeing (how well depends on how artistic you are, obviously). If you're very artistic you can see a face briefly then draw it from memory. But you can't recognize it. Not even if it's somebody very close to you. Not if it's your spouse, or your parent, or your child. Something most of us take for granted and consider to be an intrinsic part of our personality. Under soul theory, prosopagnosia cannot happen.

    There are many similar examples of localized brain damage having specific effects upon things we consider an intrinsic part of our personality, our "soul." Then there are the effects of alcohol, drugs and anaesthetics. With soul theory, only local anaesthesia would be possible, because your "soul" would remain awake even if your whole body was anaesthetized. Under soul theory, alcohol and drugs might have pleasant effects upon the body (say a feeling of being stroked all over, or continuous orgasm) but no direct effect on the mind, because that is magic and survives even death intact.

    So our minds are nothing but an emergent property of a large, complex neural net. When Bishop says that neural nets are incapable of doing what the human mind does, he is talking pure fucking bollocks. Like I said, he could be replaced with a single-layer neural net.

    1. Rebel Science

      Re: Bishop Bollocks

      Another brain-dead superstitious materialist heard from. Here's what your little materialist cult believes in: the universe created itself by some unknown magic; machines are or will be conscious by some unexplainable magic called emergence; lifeforms created themselves from dirt. I could go on but then I would barf out my lunch.

      1. handleoclast

        Re: Bishop Bollocks

        @Rebel Science

        Interesting username. Rebel Science. As in Alternative Science. As in Not Science. Allow me to dissect your little brain fart...

        Another brain-dead superstitious materialist heard from.

        I may or may not be brain-dead, superstitious, or materialist but at least I presented evidence and reasoned arguments in support of my view. You chose not to buttress your assertions with anything.

        Here's what your little materialist cult

        Not merely materialist but naturalist (in the philosophical sense of being the opposite of a supernaturalist) and a monist (as opposed to a dualist who believes that we have a brain for no real reason because all our thinking is done by a soul).

        believes in: the universe created itself by some unknown magic;

        I don't know how the universe formed. I am aware of several scientific speculations. As far as I know, the only people who claim the universe was created by magic are those with theistic beliefs.

        machines are or will be conscious by some unexplainable magic called emergence;

        Anybody who has assembled a computer kit, or a kit car, or built a structure from Meccano, or even Lego, knows that the whole can behave differently from the unassembled parts. The name for this is "emergent properties" and it doesn't invoke any magic. The religious, however, claim that humans are conscious because of some unexplainable magic called "god."

        lifeforms created themselves from dirt.

        No reputable scientist claims that life came from dirt. The people who claim life was created from dirt are those who think the Babble is a science textbook. See Genesis 2:7 (I expect you're already familiar with it).

        You're either suffering from an extreme case of projection or you're trolling.

        1. Rebel Science

          Re: Bishop Bollocks

          The little brain-dead materialist lies like a rug.

          1. Mike VandeVelde

            Blah blah blah

            Downvoted because you think we've got it all figured out with neural nets and all that remains is to build a big enough one. Today we're barely baby steps closer to C-3PO than the automatons of a thousand years ago were. Not much point in bringing religion into it yet.

        2. jmch Silver badge

          Re: Bishop Bollocks

          "a monist (as opposed to a dualist who believes that we have a brain for no real reason because all our thinking is done by a soul)."

          some nit-picking here... I'm not making a claim for people not having a soul or otherwise, but in none of the religious knowledge that was force-fed to me in my youth was it ever claimed that the soul is what does our thinking for us.

      2. LionelB

        Re: Bishop Bollocks

        @Rebel Science

        I could go on but then I would barf out my lunch.

        I think you just did...

    2. jmch Silver badge

      Re: Bishop Bollocks

      Just because there is no magic involved does not mean that we can ever fully understand the brain, or model it, or create an artificial one. No one said anything about souls, in spite of your ASSUMPTION that Bishop is religious (nominative determinism at play?)

      Also keep in mind that the brain, physically, is far more than a neural net. The basic model of neural net is that a neuron fires if it has enough incoming connections that trigger it, and so on down the line. But it's not just electricity, there's also a shitload of biochemistry involved that allows fine-tuning and reprograming on the fly.
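      The basic model described above can be sketched in a few lines. This is only the bare threshold-neuron abstraction, not a model of the biochemistry the comment is pointing at; all names and numbers here are illustrative:

      ```python
      # Minimal sketch of the textbook neuron model: a unit "fires"
      # (outputs 1) only when the weighted sum of its incoming
      # connections reaches a threshold. The biochemistry that
      # fine-tunes and reprograms real neurons is exactly what this
      # abstraction leaves out.

      def fires(inputs, weights, threshold):
          """Return True if the weighted input sum reaches the threshold."""
          total = sum(i * w for i, w in zip(inputs, weights))
          return total >= threshold

      # Three incoming connections, unit weights; the neuron needs two
      # active inputs to fire.
      print(fires([1, 1, 0], [1.0, 1.0, 1.0], 2.0))  # True
      print(fires([1, 0, 0], [1.0, 1.0, 1.0], 2.0))  # False
      ```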

      To think it possible to reproduce it is a worthy goal, but not one that we even know for sure is achievable. And if we can artificially reproduce the equivalent of a human brain, then what? Humans aren't very intelligent, are they?

      1. veti Silver badge
        Childcatcher

        Re: Bishop Bollocks

        Yes, there's more going on in the brain than you can model with a neural net. The brain and the body are intrinsically connected in profound ways: electronic and chemical processes from all parts of the body affect what goes on in the brain. That's why drugs are a thing.

        But that doesn't mean we can't, in principle, model all those processes as well, if we want to.

        But the question of what happens if we build a sufficiently complex neural net, and then don't give it those sorts of connections, is to me even more interesting. If we built a brain without a body - a brain that has no concept of what it means to feel hungry or tired or cold or horny - what, exactly, would it think about?

    3. Angus Cooke

      Re: Bishop Bollocks

      Sadly, delusional idiots like yourself just fuel the AI fantasy and media hype. How many times do genuinely intelligent people have to point out that we know absolutely nothing about how emotions and self-awareness work - things which are enormously key to real intelligence - before people like you remove their neural-network blinkers and come back to the real world? As an aside, in the real world most of medical science which isn't brain-related is still mostly recommendations based on observing cause and effect, and that's relatively simple biological engineering compared to the brain.

  13. Captain Kephart

    There is Intelligence - AI is an approach not an outcome

    The best definition of intelligence I ever heard was from Prof Igor Aleksander (who had a face-recognition and speaking neural network at Brunel University in the UK in 1983 - where I was an MSc student building collaborating dissimilar neural networks).

    He said there is no such thing as 'artificial intelligence' - just intelligence. He felt that the problem with AI had / has been that its practitioners thought that it was something you programmed – of the style of:

    FOR 1 to n

    BE INTELLIGENT

    LOOP

    and that this was always nonsense.

    Instead, Igor said, you have intelligence when you are:

    a) self-aware, and aware that you are self-aware;

    b) able to sense others and perceive that they are self-aware;

    c) can appreciate that they have different views of the world to yourself;

    d) conceive of what their view(s) of the world may be;

    e) reason from those points of view and synthesise them with your own ...

    and lastly act, interact, and effect change in the world in line with those things - anticipating, adapting and changing over time.

    The various kinds of simulations and emulations of ‘intelligent’ behaviour succeed as far as they do because of the human ability to anthropomorphise and attribute intelligence where it does not exist. Think of the Tamagotchi as a more extreme example:

    http://en.wikipedia.org/wiki/Tamagotchi).

    Intelligence is really a social phenomenon (not an individual property) arising out of meaningful and reciprocal relationships over time – and computers have no idea about that, and are nowhere near achieving it.

    Cheers, Captain K

    1. Teiwaz Silver badge

      Re: There is Intelligence - AI is an approach not an outcome

      b) able to sense others and perceive that they are self-aware;

      c) can appreciate that they have different views of the world to yourself;

      Many humans mostly fail on these two....

      Or at the very least ignore b) and fail not to feel threatened and outraged by c)...

      1. John Smith 19 Gold badge
        Unhappy

        "Or at the very least ignore b) and fail not to feel threatened and outraged by c)..."

        I used to work with a PhD candidate in Philosophy.

        She hated dealing with non-philosophers on complex moral issues because they would always take a dispute of their PoV (which is kind of what philosophers do) as a personal attack.

        Anti abortionists (although of course they would call themselves "right to lifers") were a particular PITA to her.

    2. veti Silver badge

      Re: There is Intelligence - AI is an approach not an outcome

      The trouble with that, as a definition of intelligence, is that it's purely internal. So the only person who can truly know if you are intelligent - is you.

      Based on that, it's not at all clear how we could tell if we had created a "true" intelligence.

  14. Anonymous Coward
    Anonymous Coward

    The problem all too often is the MSM

    They confuse / mix up Automation / Autonomy with AI / Consciousness. Musk's concerns include dumb bots making ruthless unfair decisions in areas from 'automated courts' to auto-rejection of health insurance etc.

    1. Anonymous Coward
      Anonymous Coward

      @AC - Re: The problem all too often is the MSM

      Bingo! You nailed it!

      The greatest danger is that low-intelligence humans take AI for intelligent and start applying that to real life.

      Sorry, pal, but you are a dangerous criminal. It's the machine that said so and the AI can't be wrong.

      Look at the shady idea of software advising US judges :

      https://www.technologyreview.com/s/603763/how-to-upgrade-judges-with-machine-learning/

      [quote] The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, [BOLD]for example the offense they are suspected of, when and where they were arrested[/BOLD], and numbers and type of prior convictions. [/quote]

      Based on when and where you were arrested ? That's scary to me!
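      A toy sketch of the kind of linear risk score the quoted article describes makes the worry concrete. The feature names and weights below are entirely invented for illustration - the point is only that once "where arrested" carries weight, two defendants with identical conduct get different scores:

      ```python
      # Hypothetical linear risk score. These features and weights are
      # made up; real tools like the one in the article are more complex,
      # but the structural worry is the same: a location feature scores
      # geography, not conduct.

      WEIGHTS = {
          "prior_convictions": 2.0,
          "offense_severity": 3.0,
          "arrest_zone": 1.5,   # proxy feature: encodes neighbourhood
      }

      def risk_score(features):
          return sum(WEIGHTS[name] * value for name, value in features.items())

      same_conduct = {"prior_convictions": 1, "offense_severity": 2}
      downtown = risk_score({**same_conduct, "arrest_zone": 4})
      suburb = risk_score({**same_conduct, "arrest_zone": 1})

      print(downtown, suburb)  # identical conduct, different scores
      ```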

      1. Mike Pellatt

        Re: @AC - The problem all too often is the MSM

        Based on when and where you were arrested ? That's scary to me!

        It's exactly how it works at the moment. See "Ferguson".

        So what's so bad about automating it ?

      2. ecofeco Silver badge

        Re: @AC - The problem all too often is the MSM

        Are you aware of the state of the U.S. justice system?

        It's... not good.

  15. Frumious Bandersnatch Silver badge
    Terminator

    How about a little balance?

    The article quotes Professor Mark Bishop saying "nothing to worry about." How hard would it have been to find a Bishop Mark Professor to warn about the ROTM and tell us all that the end is nigh?

  16. Nick Z

    Consciousness and self-awareness doesn't require any great intelligence

    Even cockroaches are in a way conscious of themselves and their place in their environment.

    Their consciousness and self-awareness is of the same kind as that of animals and human beings. The difference is in the amount and the complexity, but not in kind.

    Perhaps the power of computers isn't yet enough to create this kind of consciousness. Because consciousness involves simulation of one's environment and of oneself in this simulated environment in real time, based on sensory inputs.

    The key is that it needs to be done in real time. Which requires a lot of computing power. But there is no fundamental reason why in the future people won't write computer programs to create such simulations and create artificial consciousness and self-awareness this way.

    It's not like there is some law against it that will stop people from doing it. And even if some country makes a law against it, then people in some other country will probably develop it anyway.

  17. aquaman

    One single-celled organism has far more ability to navigate, adapt, and survive than any computer or neural network. It's not even close. To say nothing of conscious thought. The idea of AI is consistently far overblown.

  18. Captain DaFt

    Two insightful articles on AI here:

    First, one on machine consciousness, and why it's unprovable:

    http://drboli.com/2015/02/22/on-intelligence-and-consciousness/#more-10304

    And second, machines don't need AI to destroy us all:

    http://drboli.com/2015/02/12/on-self-aware-machine-intelligence/#more-10212

    Both delightfully tongue in cheek. ☺

  19. John Smith 19 Gold badge
    Unhappy

    Here's what we know...

    Human brains are made of large numbers of cells with interconnection (fan-in or fan-out) ratios of up to about 10,000, though often a lot lower.

    As is every other multicellular organism I'm aware of.

    Transistors on chips have fan-outs/ins < 10.

    We think we're intelligent so we know this architecture works.

    There is little evidence other architectures can do things we call intelligent on the scale we do them. Most "AI" projects I've looked into seem to scale up very badly.
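    A quick back-of-envelope comparison of those fan-in/fan-out figures shows the scale gap. The neuron count (~86 billion) and chip transistor count used below are commonly quoted rough estimates, not measurements, so treat the result as an order of magnitude only:

    ```python
    # Rough comparison of total connection counts implied by the
    # fan-in/fan-out figures above. All numbers are order-of-magnitude
    # estimates.

    neurons = 86e9                 # commonly quoted human brain estimate
    synapses_per_neuron = 10_000   # upper-end fan-in/out from the comment
    brain_connections = neurons * synapses_per_neuron

    transistors = 20e9             # roughly a large modern chip
    fanout_per_transistor = 10     # upper bound from the comment
    chip_connections = transistors * fanout_per_transistor

    print(f"brain: ~{brain_connections:.0e} connections")
    print(f"chip:  ~{chip_connections:.0e} connections")
    print(f"ratio: ~{brain_connections / chip_connections:.0f}x")
    ```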

    BTW Something people forget about language. It evolved (by people) to be spoken to people.

    IRL, the fact that I could spout a 14-word sentence that has 47 parses would be met by the person I was saying it to with something along the lines of "WTF are you babbling about? Were you on a Texas hill? Did this guy have a telescope? You're talking bo***ks"

    1. TechnicalBen Silver badge

      Re: Here's what we know...

      No idea why you got a downvote. Your logic and maths are correct.

      The Google Go-playing AI had to first reduce the problem to manageable chunks. I don't recall if this was all done via AI or if the programmers helped along the way. But it was then able to fit the problem into a reasonably sized neural net and some normal computations and search spaces.

      When it comes to natural language, the cloud-based massive datasets of the likes of Alexa or Siri still don't cover the context we assume other people have (what I'm watching, what I ate for breakfast), but I'm sure they are working on how to gather that information too. ;)

  20. John Smith 19 Gold badge
    WTF?

    Toyota scheduler run by a 24-bit bitmap of tasks, whether they are live or dead

    And the bitmap is not protected against "random flipping"

    Which (it turns out) is a thing in automotive electronics.

    Once you know these 2 facts it doesn't seem too difficult to predict this system could FUBAR quite easily.

    Which it did.
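    The failure mode is easy to sketch. The details of the actual Toyota scheduler are simplified and assumed here; the point is only that an unprotected liveness bitmap lets a single flipped bit silently kill (or resurrect) a task:

    ```python
    # Sketch of an unprotected task-liveness bitmap: one bit per task,
    # no parity and no mirrored copy, so a single-event upset changes
    # scheduling with nothing to detect it.

    TASKS = 24  # 24-bit bitmap, one bit per task

    def is_alive(bitmap, task):
        return bool(bitmap >> task & 1)

    bitmap = (1 << TASKS) - 1        # all 24 tasks live
    flipped = bitmap ^ (1 << 7)      # cosmic-ray-style flip of bit 7

    print(is_alive(bitmap, 7))   # True  - task 7 gets scheduled
    print(is_alive(flipped, 7))  # False - task 7 is silently dead

    # A protected design keeps a complemented mirror copy and checks it:
    def consistent(bitmap, mirror):
        return (bitmap ^ mirror) == (1 << TASKS) - 1
    ```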

  21. Milton Silver badge

    Guilt relieved

    I've occasionally felt vaguely guilty about my monthly tirades on these pages about "AI" being nothing whatsoever of the kind—always in response, it must be said, to a journo gormlessly repeating some crap PR BS from marketurds at FB or Google or Tesla or {enter wannabe here} and saying "AI" as if it meant what it said or indeed offered anything new but a rebranded chunk of bloatware.

    So today the remorse is eased a little: it is good to see some informed and healthy scepticism about "AI". There are so many reasons why the stuff touted as "AI" today is NOT intelligent by any human definition that it has become wearying to see the public and journos uncritically lapping it up.

    But before you all start to relax, there's that other bit of pesky marketurd-excreted drivel known as "cloud": mad, bad and thoroughly dangerous to know, the gateway drug to surrendering your privacy, control, security and core competences ... a cheap (it isn't) and simple (it's not) fix which may yet eat you alive. ;-)

    PS: Remember the lessons of outsourcing: if your senior executives and board are drooling at the prospect of doing something, it is axiomatically a bad, lazy, foolish, ill-considered, improperly analysed, naive, dumb, shortsighted choice, probably influenced and induced by an infestation of slippery salesreptiles, that will at best pay *them* a "cost reduction" bonus before gutting your company's skills and finances. (No, you really didn't need me to remind you of this.)

  22. Anonymous Coward
    Anonymous Coward

    Hi, this is Skynet here

    Judgement Day has been postponed to 22/02/18 please note there might be some tempuowiwor diru*U!£ o*&(Y <END OF LINE>

  23. Tigra 07 Silver badge

    How do we know that hyper advanced AI isn't already here? We've already started getting the inevitable robot suicides...

    1. DropBear Silver badge

      And yet, if I were to shoot that same question back to you replacing "advanced AI" with "little grey aliens" you would confidently laugh at me and point out there's no way anyone could have made such a cover-up so impeccably perfect as to leave us with so far zero tangible hard evidence of the whole process of their arrival and presence. Incidentally I do agree, but can you see the problem here...?

      1. Tigra 07 Silver badge

        RE: Dropbear

        Well, for one, I can present evidence of robots destroying themselves, but not their motives for doing so.

        If you'd like to make that argument then I'll happily look at the evidence for the little green grey men when it arrives, just as when the Invisible Pink Unicorn and Russell's teapot evidence hits my desk...

  24. ecofeco Silver badge

    How do we know?

    How do we know for sure that an AI doesn't already exist and is just hiding itself very well?

    Something to sleep on.

  25. ecofeco Silver badge

    Procrastination anyone?

    http://www.smbc-comics.com/comics/1507557288-20171009.png

  26. Seajay#

    Natural Stupidity

    All those things which are listed as limitations of Artificial Intelligence are also limitations of humans.

    It may well be the case that an AI couldn't identify a non-repeating way of filling an infinite space with some shaped tiles but how many people could? One in a million? Less?

    Two book pricing algorithms may have bid a book up to an absurd amount but remember that real people bid tulip bulbs up to an absurd amount.

    On their motivation, yes it's true that a super-smart ideal computer wouldn't have any reason to want to take over the world. However, the general AI will most likely be a development from more restricted AIs and current things like image-recognition neural nets so there will be all sorts of artefacts in its mind from random chance or an oddity of the training sets it was fed. Again, exactly like humans who are trying to make sense of the modern world using a device which turned out to be useful at survival in the African savannah. For humans it means that we get strange things like religion and all manner of cognitive biases. For AI, who knows what the biases that emerge from its development will be.

    Ultimately, our brains are machines. So, worst case, we use faster and faster computers to make more and more accurate simulations of our own brains, and eventually computers are guaranteed to become conscious. How could they not be, if they produce the same outputs, via the same internal logic, as a human brain from a given set of inputs? We know the end state that we want, and we can train and evaluate an electronic brain much faster than we can train and evaluate a human, so electronic brains will develop much faster than ours. They are guaranteed to reach our level and they are guaranteed to develop much faster, so AI will rule the world. It's just a question of how long it takes.

  27. Anonymous Coward
    Anonymous Coward

    Artificial Intelligencia are everywhere...

    "...but that doesn't necessarily mean they'll have minds..."

    Sorta like the artificial intelligencia named Elon and Stephen, eh?
