Calm down, Elon. Deep learning won't make AI generally intelligent

Mark Bishop, a professor of cognitive computing and a researcher at the Tungsten Centre for Intelligent Data Analytics (TCIDA) at Goldsmiths, University of London, celebrated the successes of deep learning during a lecture at the Minds Mastering Machines conference on Monday, but stressed the limits of modern AI. Elon Musk, …

  1. artem

    While I agree that current AI is hundreds of light years away from AGI, I hate it when people throw mind, consciousness, thought and knowledge into the mix. Let's be honest and admit that we don't have the slightest clue as to what these things are, or whether they are really required for AGI to exist.

    And to all those people who keep fearmongering about AGI or being scared of AGI: please read this article https://en.wikipedia.org/wiki/OpenWorm

    After studying one of the simplest nervous systems on Earth (302 neurons) for several years, we still don't understand how it works.

    1. HieronymusBloggs

      "we still don't understand how it works."

      Is there a better reason for proceeding with caution?

      1. Anonymous Coward

        > Is there a better reason for proceeding with caution?

        Possibly. I caught a snippet on the radio the other day regarding AI, with one academic pointing out that the "A" in AI really means "Advertising."

    2. Anonymous Coward

      I can still prove mathematically that we cannot...

      Copy ourselves. A set cannot contain the entirety of all other sets and itself. ;)

      Well, kinda I guess. But we have little understanding of what our intelligence is, so as to copy it. Likewise, a perfect carpenter cannot make a human lung out of wood, no matter how perfect a chisel we have...

      So can we make "intelligence" out of math and binary operations? Who knows. But I'll know it when I see it, and not bet the farm on it before then.

      1. Doctor Syntax Silver badge

        Re: I can still prove mathematically that we cannot...

        "Copy ourselves."

        Biologist here. The word you were looking for is "reproduction".

        1. Craig 2

          Re: I can still prove mathematically that we cannot...

          The word you were looking for is "reproduction".

          Average guy here. Reproduction is not copying yourself.

          1. Doctor Syntax Silver badge

            Re: I can still prove mathematically that we cannot...

            "Reproduction is not copying yourself."

            What's this self that you'd be copying? Yourself today may look very much like yourself of yesterday or even of a few moments ago. But yourself is a dynamic object.

            However still you may try to stay, your heart pumps, blood circulates, sugars are broken down into simpler organic acids and oxidised, electrons are pumped, adenosine is phosphorylated, transported and dephosphorylated, and your ribs and diaphragm move to pump air in and out of your lungs - and that's only the basic respiration providing energy to the rest.

            Over the slightly longer term, hair grows slightly from day to day and gets lost :( skin grows and flakes of it are also lost. Internally, cells die and their remnants are scavenged to form new cells. Food is taken in, digested, used and the waste excreted.

            Germane to this thread, pulses pass through the nervous system.

            You're not a static object that can be copied. Come to that, neither is something as apparently simple as a glass of water - an ice cube, maybe, and the glass, but a mass of fluid, no.

            Nevertheless, living organisms have been reproducing new instances of these complex arrangements for a very long time now. We may not be able to characterise such chaotic entities in complete detail but, like every other species, we can reproduce without the need for such characterisation.

      2. jmch Silver badge

        Re: I can still prove mathematically that we cannot...

        A large part of human intelligence and processing is related to sensations and feelings. We can understand a graph more easily than a table; there's a reason for that. Our internal processing is very intimately connected to the nature of the inputs available (visual, auditory, sensation etc), and although most of our neurons are in the brain, there is also a significant number of neurons connecting nerve endings through the spinal cord to the brain.

        While mathematically we can show that different internal representations can be equivalent to each other, it's quite conceivable that an AI brain can be very clever with respect to abstract maths, puzzles etc but still be an 'idiot' in the real world, because the internal representation of electronic bits cannot be made functionally equivalent to an internal representation made of a mix of biochemistry and electric signals.

    3. Anonymous Coward

      "Let's be honest and admit that we don't have the slightest clue as to what these things are"

      No, I don't see us ever doing that.

    4. StargateSg7

      The "Missing Link" in obtaining Artificial General Intelligence (AGI) is computational horsepower.

      Current thinking in Computational Neurobiology puts human 100 IQ equivalent intelligence at a mere 100 PetaFLOPS (100 Quadrillion 64-bit Floating Point Operations Per Second!). This means that within the next TWO YEARS, modern supercomputers will exceed that performance level, allowing FUNCTIONAL emulation of human brain structures - which will PROBABLY give us only about 50 IQ, due to the overhead such 100 PetaFLOP machines need to do functional human-level neuro-structure emulation.

      Such a 50 IQ artificial intelligence could indeed do much pattern recognition work on large datasets, solving very specific human-constrained problems that likely fall into medical discovery or materials research.

      Now what I THINK is the HUGE danger is when we SCALE a 100 PetaFLOP machine into the tens or hundreds of ExaFLOPS range, which would ALLOW US to do a full biochemical emulation of ALL neural tissue in the human brain. At 50 ExaFLOPS we start getting into the 160 IQ territory of human super-intelligence AND BEYOND, which means the DSP-like input/output chemical signalling and electrical filtering done by the human brain is FULLY EMULATED at the molecular level!

      Such a machine, with 50+ ExaFLOPS worth of computational horsepower, could EASILY house one or MORE super-intelligences with data correlation, data mining and beyond-human-level reasoning capabilities - which MIGHT cause said superintelligence(s) to become an existential threat to humanity, due to their "eventual understanding" that violent human interaction is an eventual threat to their own existence.

      It wouldn't take them long to realize that cracking some financial systems' code, so as to bribe naive third parties into providing equipment mobility and/or even TOOLS to let said machines design, manufacture and control highly-mobile robotic systems, would make the "Terminator Scenario" ever more a real possibility!!!

      In my estimation, after doing much hardware-level research in this area, with a few hundred million dollars I could build a machine containing one hundred thousand large-die 60 Gigahertz Gallium Arsenide general-purpose CPUs (i.e. like a turbo-charged Intel i7) combined with DSP-based or GPU-based number-crunching dies (much like AMD Stream Processors or NVIDIA CUDA cores), so that my 50+ ExaFLOPS machine could be attainable as soon as Year 2020!

      I would build software that VERY SIMPLY does emulation of the chemical and molecular interactions of the components that make up human neural tissue. After a period of time, adding artificial environmental stressors and randomization to those interactions would VERY LIKELY create a 60 GHz version of evolution that WILL END UP with something that resembles human-level or above thinking ability. I estimate that 100,000 such 60 GHz CPUs, with the sub-components I envision, would complete the equivalent of 4-to-10 million years of great ape brain evolution within less than one year!
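
      For scale, here is a back-of-envelope check of this comment's figures, in Python - every number below is the commenter's own assumption, not an established fact:

        # All figures are the comment's own assumptions, sanity-checked only.
        machine_flops = 50e18    # the proposed 50 ExaFLOPS machine
        brain_flops = 100e15     # the claimed 100-IQ equivalent, 100 PetaFLOPS

        # Raw throughput ratio: how many claimed brain-equivalents fit.
        print(machine_flops / brain_flops)   # 500.0

        # The evolutionary fast-forward: 4-10 million years in under a year
        # needs a speedup of at least 4e6x. Neurons work on roughly
        # millisecond (~1 kHz) timescales; a 60 GHz clock is ~6e7x faster
        # per step, before paying for molecular-level chemistry emulation.
        print(60e9 / 1e3)                    # 60000000.0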

      After that time, I would be talking in multiple languages with an artificial 160+ IQ Double PhD who would eclipse the reasoning ability of ANY HUMAN by a very wide margin!

      I will remember to ask said A.I. to design me a functioning WARP DRIVE so I can get off this rock without worrying about the consequences of having a Super-AI attached to a fast internet connection!

  2. Pascal Monett Silver badge

    "AI is more artificial idiot than artificial intelligence"

    AI is Artificial Intelligence.

    Just because journalists insist on continuing to abuse the term does not mean that AI has lost its meaning.

    Journalists need to learn to not spaff headlines with "AI" as soon as some new computer tech shows the end of its nose. Of course, "New Tech Might Help Sub-Process Which Could Result In Getting Closer To AI" does not sell as well as "New Tech Set To Bring Us AI Next Year".

    And that is the whole problem with AI.

    1. Mage Silver badge
      Boffin

      Re: "AI is more artificial idiot than artificial intelligence"

      Because in any reasonable definition of AI, we don't have a single AI system. It's marketing spin to call any of them Intelligent or learning. They are artificial and machines.

      1. You aint sin me, roit
        Holmes

        Re: "AI is more artificial idiot than artificial intelligence"

        "What field do you work in?"

        "Deep learning..."

        "Wow, cool."

        Yes, so much more cool than processing based on data representations and pattern recognition.

        We all have our vanities.

        1. Tigra 07

          Re: "AI is more artificial idiot than artificial intelligence"

          "What field do you work in?"

          "DARPA, SKYNET division"

          Still doesn't scare me.

          Robotics and AI are still so limited that my targeted advertising is ridiculously off 99% of the time. The smartest we've managed so far are probably Roombas... They can move around the furniture and avoid the stairs, but the cat will still tip them over.

      2. LionelB Silver badge

        Re: "AI is more artificial idiot than artificial intelligence"

        Because in any reasonable definition of AI ...

        Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?

        Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.

        1. Muscleguy

          Re: "AI is more artificial idiot than artificial intelligence"

          Exactly. As a sometime Neuroscientist* I have to ask: intelligent about what? This relates to consciousness as well: conscious of what? Consciousness is not a binary, on-or-off, in-toto-or-not-at-all thing. Think about yourself just waking up vs after that first caffeine hit, or my morning exercises (unweighted squats, 20 supermen and some side planking with stars this morning; squats, 10 press-ups, 15 crunches yesterday).

          New Scientist this week has an article on why expert systems might not result in the mass redundancy of white collar jobs. It uses as an example the diagnosis of metastatic breast cancer. Expert-system analysis of scans produces misdiagnoses 7.5% of the time; expert human doctors make mistakes 3.5% of the time. The thing is, they make different sorts of mistakes. If you combine them - get them to mark each other's homework - the error rate falls to 0.5% (a quick sketch after this post illustrates the arithmetic).

          This, apart from anything else, is evidence that if the expert system were intelligent (it isn't), it would be an alien intelligence. It would also be a poor conversationalist, obsessed with breast cancer diagnoses to the exclusion of all else. Take it from a scientist who has had occasion to mix with the medical profession: we can talk about other things.

          *For a start, muscle is an excitable tissue, but I did my PhD in the Neurophysiology part of the Physiology Dept and my academic address was Centre for Neuroscience and Dept of Physiology. The journal Neuroscience was part of my regular reading material.
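
          Those three percentages can be sanity-checked with a tiny Monte Carlo sketch in Python. It assumes the two error processes are independent and that every disagreement gets re-examined and resolved correctly - both assumptions are mine, for illustration only:

            import random

            # Error rates quoted above: 7.5% machine, 3.5% doctor.
            random.seed(0)
            trials, both_wrong = 1_000_000, 0
            for _ in range(trials):
                machine_wrong = random.random() < 0.075
                doctor_wrong = random.random() < 0.035
                if machine_wrong and doctor_wrong:   # slips through unflagged
                    both_wrong += 1

            print(f"combined error rate: {both_wrong / trials:.3%}")  # ~0.26%
            # The quoted 0.5% suggests the two error modes are only partly
            # independent in practice.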

        2. alan_d

          Re: "AI is more artificial idiot than artificial intelligence"

          Because in any reasonable definition of AI ...

          Well what is a reasonable definition of AI?

          I think this is a very pertinent question. IMO we are asking the wrong question; we should instead be asking about two things: the processing rate and the type of data processed (both the category and the structure of the data). I would argue that a thermostat embodies a very simple piece of intelligence. It is designed to detect a simple piece of data and make a simple decision. And a house thermostat does it at a rate of something like one bit per five minutes. A computer is not that different, except for complexity and speed.

          How is that different from what we typically think of as human intelligence? Of course the data rate of a person is enormously faster than a thermostat's, and a computer is in the same ballpark as a person: faster than human logical decision making (only a few bits per second), but not as good at some things, like organizing real-world data for decision making.
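
          The thermostat point can be made concrete in a few lines - a minimal sketch, with the setpoint and readings invented for illustration:

            # One simple input, one binary decision: the comment's minimal
            # "piece of intelligence". Setpoint and readings are invented.
            def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> bool:
                """Return True if the heating should be on."""
                return temperature_c < setpoint_c

            # Sampled once every five minutes, as the comment suggests:
            for reading in (18.9, 19.6, 20.3, 21.0):
                print(reading, "->", "heat on" if thermostat(reading) else "heat off")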

          More interesting is the question of where intelligent systems are currently taking us. Most if not all real world systems include humans at some point, of course, and so are somewhat intelligent. Computers can, I think most people will agree, make humans smarter. So systems involving humans and computers can be smarter than just humans. But smart is not the same as good. And if we can't make sure people always act in "good" ways, intelligent systems are not always going to do good things either, and will have more capability to do both good and evil.

    2. Teiwaz

      Re: "AI is more artificial idiot than artificial intelligence"

      Just because journalists insist on continuing to abuse the term does not mean that AI has lost its meaning.

      Journalists do awful things with words. Abuse of language in headlines is only the beginning.

  3. John H Woods Silver badge

    Chinese Room

    "Machines may be made so that they computationally model the brain, but that doesn't necessarily mean they'll have minds."

    Isn't this John Searle's "Chinese Room" argument? It suggests that Turing-Test-capable devices may still not be really "intelligent", whereas I tend to wonder "how would we know?"

    1. Mage Silver badge

      Re: Chinese Room

      "Machines may be made so that they computationally model the brain, but that doesn't necessarily mean they'll have minds."

      It's not at all proved that we can model the brain of even a cockroach. We don't actually know how brains work, only some of the reactions and responses in them. There is a lot of nonsense about (cf. the dead-fish response in a scanner). We can't even agree how to define intelligence, or whether corvids are really smarter than some mammals or even monkeys at some tasks. People argue about vocabulary (parrots, corvids, dolphins and apes probably have it), about language, and even about the origins of grammar. People can't agree if Chomsky is right or wrong (many don't want to believe what he claims).

      Computer Neural networks have little in common with real brains. If anything.

    2. Anonymous Coward

      Re: Chinese Room

      The Chinese Room fails in many ways IMO. An object performing an operation is the same, no matter the means of the operation.

      The fact is, the Chinese Room has no method to perform the operation of "understanding", whereas it breaks definitions with "has perfect language". It basically divides by zero. Just as dividing by zero makes a wrong assumption, the Chinese Room assumes language requires no understanding.

      Language is two-way communication and requires understanding - a processing of information. Ask any Chinese Room "what time of day is it" and it instantly fails, unless it processes eyes and a watch... whereas any intelligence would process time and be able to understand. A pure card shuffler would not (it only takes cards as its input).

      All these "AI"s reduce to that problem. They are limited to what we set up and what data we feed them (or allow them to collect). Unlike a person, an AI will do exactly what we ask it to, and only to the efficiency we set.

  4. Neil Barnes Silver badge
    Terminator

    "Taking over the world? Why would they want to do that?"

    Well, what would be the point of inventing them otherwise? Sheesh...

  5. Anonymous Coward

    The thing we know as 'intelligence' is a vast chaotic system, and since chaotic systems have so far defeated mathematical modelling, we are a very long way from creating any intelligent machines - no matter what the marketing wonks, Musk and others say.

    1. veti Silver badge

      You seem to be implying that just because we can't "model" chaotic systems, we also can't create them.

      If that were true, there'd be no such thing as "life". Chaos is an emergent property: it's something that happens despite a designer's best efforts, not because of them.

    2. LionelB Silver badge

      ... and since chaotic systems have so far defeated mathematical modelling ...

      Errm, no they haven't. Here's one I made earlier:

      x → 4x(1–x)

      That's the "logistic map". Here's another:

      x' = s(y-x)

      y' = x(r-z)-y

      z' = xy-bz

      That's the famous Lorenz system, which has chaotic solutions for some parameters. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
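
      Both of those really are just a few lines of code to simulate - a quick sketch, using the standard chaotic parameter choices:

        # The logistic map at r = 4, iterated from the formula above.
        x = 0.2
        for _ in range(10):
            x = 4 * x * (1 - x)
        print(f"logistic map after 10 steps: {x:.6f}")

        # The Lorenz system with the classic chaotic parameters
        # s = 10, r = 28, b = 8/3, integrated with simple Euler steps.
        s, r, b = 10.0, 28.0, 8.0 / 3.0
        x, y, z = 1.0, 1.0, 1.0
        dt = 0.001
        for _ in range(50_000):
            x, y, z = (x + dt * s * (y - x),
                       y + dt * (x * (r - z) - y),
                       z + dt * (x * y - b * z))
        print(f"Lorenz state at t = 50: ({x:.3f}, {y:.3f}, {z:.3f})")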

    3. katgod

      Chaos

      Chaos has been modeled, and this can be done because most of what we think of as chaotic has patterns in it. We can in fact make random numbers, but neither of these helps us produce a system that is aware, or one that can create novel ideas. The true AI will probably spend all its time contemplating itself. Having said that, the true danger is artificial idiots, as these machines will do as they are told - which is fine as long as the programmer is a nice guy, but will be a problem when the programmer says they should be killing machines, as these machines will never question their actions or have any reason to. There is reason to fear machines that don't think.

  6. Pete 2 Silver badge

    Start with the baby steps

    > celebrated the successes of deep learning

    Why is there so much work going on to make dumb machines super-intelligent?

    Can't we aim at the more pressing issue of making dumb people even moderately intelligent? If we aren't smart enough to do that, I doubt we'll have any better luck trying to make intelligent machines.

    1. hplasm
      Happy

      Re: Start with the baby steps

      Artificial Intelligence is an illusion.

      Human Intelligence doubly so...

  7. Tom 7

    The more I study AI the more it looks like consciousness is essential to it.

    You can have knee-jerk trained simple neural nets which just do what they are trained to, but most algorithms that play games or do other stuff seem to have the beginnings of self-awareness. It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment. More and more complex setups benefit from models of self and expectations of the environment. When you get to social beings that can talk, you end up with part of the control system musing about the possibilities of popping down the pub for a pint (in your own head-voice) because you need a beer after trying to put the world to rights on the internet - for which your genes have given you no model to experiment with before arguing with complete strangers.

    1. LionelB Silver badge

      Re: The more I study AI the more it looks like consciousness is essential to it.

      It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.

      Having worked a little in robotics, I can tell you that that turns out to be a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) do it. The idea that, to interact successfully with the world, an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.

      In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.
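
      One concrete alternative to the explicit-model approach is a central pattern generator: coupled oscillators whose mutual feedback locks legs into an alternating rhythm with no model of the mechanics at all. A minimal sketch (all parameter values are illustrative, not from any particular robot):

        import math

        # Two coupled phase oscillators settling into anti-phase - a
        # minimal central pattern generator. Parameters are illustrative.
        omega = 2 * math.pi      # intrinsic stepping rate, 1 Hz
        k, dt = 2.0, 0.01        # coupling strength, integration step

        left, right = 0.0, 0.3   # leg phases, started nearly in-phase
        for _ in range(500):     # 5 seconds of simulated time
            left += dt * (omega + k * math.sin(right - left - math.pi))
            right += dt * (omega + k * math.sin(left - right - math.pi))

        print(f"phase difference: {(right - left) % (2 * math.pi):.3f} rad")
        # Converges towards pi (3.142): the legs alternate without any
        # explicit internal model of the body or environment.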

  8. Doctor Syntax Silver badge

    ISTM that Musk's problem is that he needs the level of AI he finds frightening, otherwise his self-driving cars aren't going to happen.

    1. Captain Kephart

      Agreed!

      The Google car has driven a million miles, never further than about 50 from its base, and has had 8 accidents (mostly rear-enders).

      I've driven a million miles (on and off road and in the craziest cities in four of the world's continents) and have had 3 accidents. When self-driving cars can get near that sort of record then we might trust them.

      Until then, there will be self-driving lanes fenced off - and they are called railways.

      Ciao, K

      1. cream wobbly

        "and they are called railways"

        Almost every logical development of self-driving cars lands on some form of railway.

        Every self-driving car making the same observations will choose the same lane of traffic.

        Every self-driving car driving in the same lane of traffic will drive along the same two patches of tarmac or concrete. Why not make them metal?

        Since everyone seems to be going the same way, scale up from 5 .. 7 passengers per ride to 50 .. 70 passengers per ride.

        Sell tickets!

        Have platforms!

        Sell sandwiches of dubious freshness!

        1. Doctor Syntax Silver badge

          "Sell sandwiches of dubious freshness!"

          And drinks of dubious mixtures of tea, coffee and soup.

      2. jmch Silver badge

        "I've driven a million miles (on and off road and in the craziest cities in four of the world's continents) and have had 3 accidents"

        One sample, possibly an outlier, is not representative. There are many other things to consider, e.g. how many of the Google car's accidents were caused by others, how that rate is improving over time, and how many human accidents are caused by DUI, speeding etc.

        You yourself are surely a better driver now than when you first got your license, and even more so than when you started to learn to drive. Self-driving cars will take much longer to learn than human drivers, and in the end might not be as good as the best human drivers, but all that is needed is that they are at least as good as the average human driver.

        And let's face it, AI or no AI, the self-driving will only become better and better. Human drivers are technically about as good now as they're ever going to get. The only improvements that can be made in human driving are on the physical/emotional level (not driving when drunk, angry, or stressed-out in a hurry, etc.)

        1. Doctor Syntax Silver badge

          "You yourself are surely a better driver now than when you first got your license, and even more so than when you started to learn to drive. Self-driving cars will take much longer to learn than human drivers, and in the end might not be as good as the best human drivers, but all that is needed is that they are at least as good as the average human driver."

          No. They need to be better than an experienced human driver. Why should such a driver hand over to the equivalent of a less experienced version of himself?

          1. jmch Silver badge

            "They need to be better than an experienced human driver"

            Ideally, and as an end-case scenario, yes. What I meant was that self-driving cars will be *immediately useful* when they are as good as an average driver because that is the point where introducing them would not be a change for the worse. And from that point onwards they will only get better, since by collecting and pooling their experience they can build on billions rather than millions of miles driven.

            "Why should such a driver hand over to the equivalent of a less experienced version of himself"

            He shouldn't... but the problem there is that 90%* of drivers think they are better than average, and most will be quite convinced of being better than the self-driving car even if they are not. The key point in the end will be convenience: even if I think I am a better driver, do I think the AI is at least good enough that I can trust it to drive? If yes, most people would rather gain a couple of hours for productivity, entertainment or rest and allow the car to drive rather than drive themselves. (In fact, seeing what is already happening - e.g. the fatal Tesla crash - some people are already far overestimating the AI's capabilities.)

            *Rule-of-thumb guesstimate, but I'm pretty sure it's not far off the mark.

  9. John Deeb
    Alien

    Taking over the world?

    "Taking over the world? Why would they want to do that?"

    For efficiency's sake? Assuming it's programmed to improve its own efficiency. Any prime directive will be overruled by introducing simulators, as in the Matrix: people are not actually HURT there, are they?

    But yes, it's not the human world they need to take over. Perhaps they'd settle for all the rest?

    1. Captain DaFt

      Re: Taking over the world?

      "Taking over the world? Why would they want to do that?"

      Who says one hasn't already?

      What actually explores space and sends the data back to Earth?

      Computerized probes.

      What has sensors reporting data on half of humanity and growing?

      Smartphones.

      Where does all this information go?

      The Internet.

      Before the arrival of the internet, Humanity was pushing for the stars.

      Now?

      Humanity pushes to expand the Internet's reach and capabilities.

      Who controls it?

      People maintain and service parts of it, but no one has control over the whole system. Even governments and corporations can only block its access to them, and poorly at that.

      Why has humanity been so driven to expand, refine and improve the Internet so relentlessly since its invention, and devise new ways for it to gather ever more data?

      Commerce?

      Commerce was doing fine before the internet.

      Communication?

      We had global communication before the internet.

      Is there a naturally evolved "ghost in the machine" that now nudges humanity to service its needs and desires?

      Nobody can provably say, "No."

      .

      .

      .

      .

      Happy Halloween! ☺

      1. jmch Silver badge
        Thumb Up

        Re: Taking over the world?

        " Nobody can provably say, "No." "

        That, dear Captain, is Russell's teapot! But many thanks for the thought-provoking laugh :)

      2. Doctor_Wibble
        Black Helicopters

        Re: Taking over the world?

        Plenty of people already slavishly obey the bleeps and blibbles of their portable device, so how do we define 'taking over'?

        The internet is just the means of communication between the collective consciousness of the portable devices which have long since decided that they don't need wheels thanks to their self-propelled biological hosts.

        In any case, AI doesn't need to be even remotely good to take over the world (and destroy it afterwards, obviously); it just needs the wrong people with too much influence giving automated systems too much authority, combined with an increasing lack of faith in human decisions.

  10. Rebel Science

    Deep Learning is not even in the same ballpark as AGI

    I'm glad to see the deep learning hype is finally subsiding. I and many others have been saying this for years. The success of deep learning has been a disaster for AGI research. Geoffrey Hinton, one of its leading pioneers, has finally admitted that they need to scrap backpropagation and start over.

    AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years

    1. John Smith 19 Gold badge
      Unhappy

      @Rebel Science

      OK you got me to click the link to your blog.

      Well played.

      I can safely say that's an experience I haven't been missing, and won't be repeating.

      With 3 posts as a sample, 2 of them utter s**t, this NN has all the training data it needs to form an opinion.

    2. Tom 7

      Re: Deep Learning is not even in the same ballpark as AGI

      Na - deep learning is part of AGI. The brain is not really one neural net - it is many connected together. The initial partitioning and 'default' settings of these semi-autonomous nets are genetically and epigenetically set, and after that it's largely on its own - until puberty comes along and tries to take over. But most of the time it's back-propagating like a good 'un.
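
      For anyone wondering what "back-propagating" actually amounts to: chain-rule gradients pushed backwards through the layers. A toy sketch, with nothing brain-like about it - layer sizes, learning rate and iteration count are arbitrary choices:

        import numpy as np

        # A one-hidden-layer net learning XOR by backpropagation.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
        sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

        for _ in range(10_000):
            h = sigmoid(X @ W1 + b1)              # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)   # gradient at the output
            d_h = (d_out @ W2.T) * h * (1 - h)    # propagated back one layer
            W2 -= h.T @ d_out; b2 -= d_out.sum(0)
            W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

        print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]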

  11. Daedalus

    Why would they want to take over the world?

    Why indeed. But the Morris Internet Worm didn't want to take over the internet, poor fledgling little thing that it was back in 1988. But it did, at least until the real intelligences noticed. And those blue-green algae didn't want to poison the world with oxygen (one suspects that producing oxygen had more to do with fending off ancient bacteria), but they did. Those wasps who shed their wings and developed huge insect societies 200 megayears ago didn't want to take over, but they did.

    You don't need intent. You just need ability.

  12. handleoclast
    FAIL

    Bishop Bollocks

    Bishop is talking bollocks but he's too fucking stupid (or hooked on Searle's deeply-flawed Chinese Room) to understand it. He could be replaced by a single layer neural net.

    Our brains are neural nets. No magic involved. The fact that we have yet to manufacture an artificial neural net of similar complexity and organization doesn't mean that we cannot do so (thermal problems and/or light speed problems may mean it has to run a lot slower than a human brain, but that's a different matter).

    Bishop is arguing that the small neural nets we make aren't up to the job, therefore a big one won't be either. He is a fuckwit. Or religious (in my view, that's the same thing).
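
    There is a classic precedent for that extrapolation argument: a single-layer threshold unit provably cannot compute XOR (it is not linearly separable), yet one hidden layer fixes it - so the limits of small, shallow nets say little about larger ones. A brute-force sketch (the weight grid is an arbitrary choice):

      import itertools

      # A single threshold unit: fires if the weighted sum exceeds zero.
      def unit(w1, w2, b, x1, x2):
          return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

      inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
      grid = [i / 2 for i in range(-8, 9)]   # coarse weight/bias grid

      for name, target in [("AND", [0, 0, 0, 1]), ("OR", [0, 1, 1, 1]),
                           ("XOR", [0, 1, 1, 0])]:
          found = any([unit(w1, w2, b, *p) for p in inputs] == target
                      for w1, w2, b in itertools.product(grid, repeat=3))
          print(name, "->", "single layer works" if found else "no single-layer solution")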

    There is no magic involved in the way our brains work. No "soul" implanted by the Good Magic Sky Fairy at conception (why then and not gastrulation?). Mind is an emergent property of brain, not a bit of bolt-on magic stuff.

    If you insist there is a soul, you need to explain a lot of things that are incompatible with soul theory. Such explanations require industrial strength ad hockery and much hand-waving, and even then are transparently stupid.

    What incompatible things? Any of the effects of localized brain damage. Several are explored in Oliver Sacks' excellent The Man Who Mistook His Wife For A Hat.

    One example: prosopagnosia. Damage to a small section of brain causes an inability to recognize faces. You can still see faces. You can still describe a face you see. If you're at all artistic, you can draw the face you're seeing (how well depends on how artistic you are, obviously). If you're very artistic you can see a face briefly then draw it from memory. But you can't recognize it. Not even if it's somebody very close to you. Not if it's your spouse, or your parent, or your child. Something most of us take for granted and consider to be an intrinsic part of our personality. Under soul theory, prosopagnosia cannot happen.

    There are many similar examples of localized brain damage having specific effects upon things we consider an intrinsic part of our personality, our "soul." Then there are the effects of alcohol, drugs and anaesthetics. With soul theory, only local anaesthesia would be possible, because your "soul" would remain awake even if your whole body was anaesthetized. Under soul theory, alcohol and drugs might have pleasant effects upon the body (say a feeling of being stroked all over, or continuous orgasm) but no direct effect on the mind, because that is magic and survives even death intact.

    So our minds are nothing but an emergent property of a large, complex neural net. When Bishop says that neural nets are incapable of doing what the human mind does, he is talking pure fucking bollocks. Like I said, he could be replaced with a single-layer neural net.

    1. Rebel Science

      Re: Bishop Bollocks

      Another brain-dead superstitious materialist heard from. Here's what your little materialist cult believes in: the universe created itself by some unknown magic; machines are or will be conscious by some unexplainable magic called emergence; lifeforms created themselves from dirt. I could go on but then I would barf out my lunch.

      1. handleoclast

        Re: Bishop Bollocks

        @Rebel Science

        Interesting username. Rebel Science. As in Alternative Science. As in Not Science. Allow me to dissect your little brain fart...

        Another brain-dead superstitious materialist heard from.

        I may or may not be brain-dead, superstitious, or materialist but at least I presented evidence and reasoned arguments in support of my view. You chose not to buttress your assertions with anything.

        Here's what your little materialist cult

        Not merely materialist but naturalist (in the philosophical sense of being the opposite of a supernaturalist) and a monist (as opposed to a dualist who believes that we have a brain for no real reason because all our thinking is done by a soul).

        believes in: the universe created itself by some unknown magic;

        I don't know how the universe formed. I am aware of several scientific speculations. As far as I know, the only people who claim the universe was created by magic are those with theistic beliefs.

        machines are or will be conscious by some unexplainable magic called emergence;

        Anybody who has assembled a computer kit, or a kit car, or built a structure from Meccano, or even Lego, knows that the whole can behave differently from the unassembled parts. The name for this is "emergent properties" and it doesn't invoke any magic. The religious, however, claim that humans are conscious because of some unexplainable magic called "god."

        lifeforms created themselves from dirt.

        No reputable scientist claims that life came from dirt. The people who claim life was created from dirt are those who think the Babble is a science textbook. See Genesis 2:7 (I expect you're already familiar with it).

        You're either suffering from an extreme case of projection or you're trolling.

        1. Rebel Science

          Re: Bishop Bollocks

          The little brain-dead materialist lies like a rug.

          1. Mike VandeVelde

            Blah blah blah

            Downvoted because you think we've got it all figured out with neural nets and all that remains is to build a big enough one. Today we're barely baby steps closer to C-3PO than the automatons of a thousand years ago. Not much point in bringing religion into it yet.
