ARM daddy simulates human brain with million-chip super

While everyone in the IT racket is trying to figure out how many Intel Xeon and Atom chips can be replaced by ARM processors, Steve Furber, the main designer of the 32-bit ARM RISC processor at Acorn in the 1980s and now the ICL professor of engineering at the University of Manchester, is asking a different question, and that is …

COMMENTS

This topic is closed for new posts.

  1. Nigel 11
    Boffin

    Brain: Classical or Quantum computing device?

    Some people think that the real question that needs addressing is whether brains are classical computing devices, or quantum computing devices.

    If the former, then once the right interconnect and neuron code is arrived at, this simulator might be as smart as a cat.

    If the latter, it hasn't got a hope - you'd need that much computing to simulate a single synapse (and even then, only after making a lot of approximations).

    Brain as quantum computer is a minority view. However, a synapse is small enough and sufficiently low-energy that quantum effects must be of significance there. The eye, which is a sensor-extension of the brain, demonstrably is a single-quantum detector. And personally, I would be very surprised if evolution had not found a way to exploit quantum effects, rather than just treating them as a source of noise to be beaten into submission.

    An even more minority view is that consciousness is a quantum effect.

    As a parting shot, where is the code in a solitary spider-hunting wasp, for identifying appropriate prey, stinging in exactly the right place to paralyze it while avoiding becoming prey of the spider, selecting an appropriate site to dig a burrow, entomb spider, lay egg, etc? It is built-in, not learnt. Ditto in a honeybee or termite, for complex colony formation, though in these cases there may be some form of learning or "culture". None of these insects boasts more than a million neurons.

    1. Oppressed Masses
      Thumb Up

      Is Consciousness a Quantum phenomenon?

      Quoting Wikipedia: Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine. We say that the halting problem is undecidable over Turing machines.

      As humans can solve halting problems, we must assume the brain is not a collection of Turing machines, and that a simulation of the brain by Turing machines can never be conscious. It seems to me that true artificial intelligence is impossible until this problem is resolved. It is not my area of expertise; what do the experts think about this?
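
      For anyone who wants the flavour of Turing's argument quoted above, here is a minimal sketch of the diagonal construction in Python; the halts() oracle is hypothetical, which is exactly the point of the proof.

      ```python
      # Sketch of Turing's 1936 diagonal argument, as summarised above.
      # Suppose, for contradiction, a perfect halting oracle existed:
      def halts(program, data):
          """Hypothetical: True iff program(data) eventually halts."""
          ...  # no total, always-correct implementation can exist

      def paradox(program):
          # Do the opposite of whatever the oracle predicts about
          # running 'program' on its own source.
          if halts(program, program):
              while True:
                  pass        # loop forever
          else:
              return          # halt at once

      # Feed paradox to itself: if halts(paradox, paradox) returns True,
      # paradox loops forever; if it returns False, paradox halts.
      # Either answer is wrong, so no such halts() can be written.
      ```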

      1. Nigel 11

        We can't solve the halting problem

        Humans can't solve the halting problem either. There are many hypotheses in mathematics lacking a proof, such as the Goldbach Conjecture(*). We just give up on a too-hard problem (just as a programmable computer with a proper operating system can be interrupted by its real-time clock and devices, and ultimately by its frustrated programmer).

        Gödel proved that some of these hypotheses will in fact be undecidable within the accepted (finite) framework (set of axioms) of mathematics: no complete, self-consistent system rich enough to express arithmetic can rest on a finite set of axioms. One may have to extend mathematics by admitting the statement, or its negation, as an axiom ... but of course, doing so for something that is in fact decidable risks defining as true that which is provably false, or vice versa.

        (*) That every even number greater than two can be expressed as the sum of two primes in at least one way. So "obvious", yet still unproven more than 250 years after it was first stated.
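
        As an aside, the conjecture is trivial to test by brute force and stubbornly resistant to proof. A minimal Python sketch (the function names are my own):

        ```python
        # Brute-force check of the Goldbach Conjecture for small even numbers.
        # This is verification, not proof -- the conjecture remains open.
        def is_prime(n):
            if n < 2:
                return False
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    return False
            return True

        def goldbach_pair(n):
            """Return a pair of primes summing to even n > 2, or None."""
            for p in range(2, n // 2 + 1):
                if is_prime(p) and is_prime(n - p):
                    return (p, n - p)
            return None

        # Every even number up to 10,000 checks out -- evidence, not proof.
        assert all(goldbach_pair(n) for n in range(4, 10001, 2))
        print(goldbach_pair(100))   # e.g. (3, 97)
        ```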

      2. Anonymous Coward
        Thumb Up

        interesting distraction

        Hmm, a tricky but fascinating area for Reg commentards on a slow Friday. It was Roger Penrose (think Hawking, but less famous and more clever) who pointed out that, whether or not the brain is a quantum computer, it possesses many of a quantum computer's key properties. He co-authored a paper exploring how this might be biologically realised within neuron microtubules. http://www.cs.indiana.edu/~sabry/teaching/b629/f06/QuantumComputationInBrainMicrotubules.pdf

        I think he has backed away from it, half due to a screechy reductionist onslaught, and half due to "woo-woo mystics" sidling up to him at conferences. More here: http://www.scottaaronson.com/democritus/lec10.5.html

        The Japanese were onto a similar brain-scale computer, their sixth-generation computer programme - whatever happened to that?

        1. nyelvmark
          Boffin

          Penrose

          ...is getting old, and perhaps a bit eccentric. "The Emperor's New Mind" is still a good debunking of the AI myth, however. It's not something you can skim in an hour, though. You actually have to read it. Furthermore, if you're not prepared to accept Penrose's word, there are many, many more hours needed to read all his references. If you're still not happy, you need to read more widely about mathematics, logic and computer science.

          If you don't want to do all that, you could just believe whoever has the loudest voice, which is probably the organisation that spends the most money promoting their view.

    2. Anonymous Coward
      Anonymous Coward

      @Nigel11

      "Brain as quantum computer is a minority view. However, a synapse is small enough and sufficiently low-energy that quantum effects *MUST* be of significance there".

      I would suggest that if *MUST* (my emphasis) is correct then this would not be a minority view. I could easily get behind *COULD*.

      1. Nigel 11

        Levels of meaning

        My "must" referred to the physics. I could equally well have said that quantum effects must be of significance to the design of a state-of-the-art CPU (20nm gates, 0.8V supply voltage, etc).

        In the CPU, the significance is that they cause things regarded as bad by the designers, like charge leakage (leading to higher power consumption and waste heat) and electronic noise (meaning extra efforts have to be made to keep the circuitry reliably binary, again leading to increased power consumption).

        In a brain, it's presently very unclear what Nature has done with the quantum effects that must be present at the synaptic level. Worked around them as in the CPU? Or, the minority view, embraced them and worked out how to build a much better platform for thinking with? Or, the minority-minority view, allowed consciousness to arise as an essentially quantum phenomenon?

    3. Destroy All Monsters Silver badge
      Boffin

      "An even more minority view is that consciousness is a quantum effect."

      Unfortunately this assumption is useless.

      "Quantum effects" is just code for "some magic happens that gives you additional power; I will leave for the reader to imagine what that could be". In other words: low-level religious feel-good crap packed in modern jargon [yes, I'm looking at you, Penrose!]

      Even honest-to-god quantum computers are not believed to solve NP-complete problems in polynomial time. In fact, the problems in BQP [bounded-error quantum polynomial time] do not seem to be of any interest for daily jobs. You do not need fast factorization for getting milk jugs out of the fridge.

    4. Ken Hagan Gold badge

      Shh!

      "As a parting shot, where is the code in a solitary spider-hunting wasp [...] None of these insects boasts more than a million neurons."

      But nyelvmark tells us that no-one is interested, so despite the fact that you'd probably deserve a Nobel prize for answering this question, you'd better go and waste money on something sexier.

  2. HMB
    Alert

    And they called it John Henry

    So you get the neuron count up to 85bn and well done! You have a baby, so who's going to raise the first AI and bloody hell, what's it going to be like as a teenager?!

    It's nuclear power all over again.

    For your entertainment over the next few decades, witness the almighty clash of the Luddites vs the technocrats!

    On the Luddites' team are millions of people, all in complete denial and with little gratitude for how much technology has made their lives better. In this camp, the more ignorant you are, the better you can argue!

    On the technocrats' team are a much smaller number of technically minded people who understand what's going on. Let's hope they're far-sighted enough to avoid any unpleasantness that could spring up with the increasingly powerful and potentially world-changing technologies of tomorrow.

    1. Disco-Legend-Zeke
      Pint

      Will We Become As...

      ...gods?

      As the machines we construct multiply and become more like us, will they begin to argue over Intelligent Design?

      More importantly, will there be some electronic equivalent of beer?

      1. tpm (Written by Reg staff)

        Re: Will We Become As...

        I sure hope so. Perhaps to stay in their good graces, I will come up with that beer alternative now. Perhaps something based on plasma....

  3. Mike 137 Silver badge

    Which one per cent?

    "...only going to be able to simulate about 1 per cent of the complexity inherent in the human brain"

    So which one per cent are they going to choose? The classic error of artificial intelligence gurus that keeps the intelligence truly artificial is to consider the brain as a single entity with intellect as its prime purpose. The brain is actually an integrated assemblage of several organs - evolved independently and performing multiple separate functions (admittedly with many local overlaps). But fundamentally, as Robt. Ornstein pointed out some 30 years ago, the brain is a body controller. The rest is extra. So an arbitrary replica of one per cent of the number and interconnections of synapses in an average brain dedicated to intellectual processing is not a model of the brain - nor would a similar replica of 100% of the number be.

    So this is all good fun, but really has nothing to do with a decent quality human brain. The Californian definition of artificial intelligence draws from the Californian definition of intelligence, and that of Southampton similarly - but they may not be identical, and neither may be all that representative of the real thing. Mr. Spock was a non-feasible fantasy too.

    Plus, think of the energy consumption. I can work till lunchtime on a couple of slices of toast and some coffee. A million CPUs (even ARM CPUs) are going to clock up some serious electricity bills. Then you have maintenance - we'll be back to the reliability problems of the early vacuum tube computers. So why bother, Steve?

    1. Nigel 11
      Thumb Down

      Maintenance issues

      A brain is highly fault-tolerant, at least with respect to well-distributed failures. Neurons die as we age. One of the things we need to do is work out how to make networks of millions of CPUs similarly fault-tolerant.

      Why bother? Curiosity, and an assumption that at some point in the future the electricity bills will be reducible. The Met Office always has a next-generation weather-forecasting program to hand, which they can prove is better than the program currently in use in all respects except one. That one respect is that on the currently available hardware, it forecasts tomorrow's weather several days after tomorrow!

    2. David Lester
      Pint

      Reply to Mike 137

      The topic for discussion next Monday with Kevin Gurney (Computational Neuroscience, Sheffield) is: "The striatum: what do we know? Can SpiNNaker model it convincingly?"

      But the point about robots is well taken. We have already made contact with potential robotics partners in both the UK and the EU. What Tim has not focused on (there's rather a lot of work behind the press release) is that the system runs in real time. And it's low power --- each chip consumes 1A at 1V (for 1W power consumption), and for neural simulation it has the computing power of a typical high-end desktop. The full system runs at less than 50kW (depending on workload).
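
      For anyone checking the power figures, a back-of-the-envelope sketch; the 18 ARM cores per SpiNNaker chip is my assumption from the published chip design, not something stated above:

      ```python
      # Rough sanity check of the quoted power figures.
      cores = 1_000_000              # the million-core target
      cores_per_chip = 18            # assumed from the published SpiNNaker chip
      watts_per_chip = 1.0           # "1A at 1V" as quoted above

      chips = cores / cores_per_chip                 # ~55,600 chips
      peak_power_kw = chips * watts_per_chip / 1000
      print(f"{chips:.0f} chips, ~{peak_power_kw:.0f} kW at full tilt")
      # ~56 kW worst case; since chips rarely draw the full watt, this squares
      # with "less than 50kW (depending on workload)".
      ```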

      Still, the question we all have is: just what do you have to do to get an article filed under RiseOfTheMachines? Buy all the London-based staff beers?

  4. Pantelis
    Thumb Up

    Rejoice...

    "and how much drinking and brown acid they have done"

    Don't know about the brown acid but according to recent studies the old notion of alcohol killing brain cells seems to be a myth...

    http://health.howstuffworks.com/human-body/systems/nervous-system/10-brain-myths9.htm

  5. AceRimmer1980
    Terminator

    AI powered by a million ARM processors

    and it still can't control the ship in Zarch.

  6. nederlander
    FAIL

    Brain in a jar?

    OK time for my usual AI rant..

    1) There is nothing inherent in neural networks that skews towards any particular type of behaviour (such as pair bonding, aggression, communication with other agents, territoriality, pain response, etc.) If such behaviours were desired, a lot of work would be required in the design of the network topology and learning algorithm in order to increase the likelihood of them arising. So don't worry sci-fi nuts - there is no chance of accidentally creating either a monster or a ickle baby.

    2) Yet again the importance of embodiment has been overlooked. Until you have a billion sensor, billion actuator robot body there is no point in creating a billion neuron brain.

    3) Finally, self awareness isn't anything special. A thermostat is self aware.

    Rant complete. Thank you for your patience.

  7. Nigel 11
    Thumb Down

    A thermostat is self aware - really?

    How do you prove that?

    1. nederlander
      Pirate

      being the bat

      @Nigel 11

      The thermostat can sense the results of its own actions. Therefore it is self aware.

      There is no point (outside art, or worse - philosophy) in defining self awareness in a way that requires one to _be_ the subject in order to determine the extent of its self awareness. A useful definition of self awareness is one whose presence can be determined by an independent observer.
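
      By that observer-based definition the claim is easy to make concrete. A minimal sketch, with a toy room model of my own invention:

      ```python
      # A thermostat "sensing the results of its own actions": the value it
      # reads back already reflects what its heater did on earlier steps.
      class Room:
          def __init__(self, temp=15.0):
              self.temp = temp
          def step(self, heater_on):
              self.temp += 0.5 if heater_on else -0.2   # crude thermal model

      class Thermostat:
          def __init__(self, setpoint=20.0):
              self.setpoint = setpoint
          def act(self, sensed_temp):
              # Closed loop: the decision depends on a reading this same
              # device has been influencing all along.
              return sensed_temp < self.setpoint

      room, stat = Room(), Thermostat()
      for _ in range(20):
          room.step(stat.act(room.temp))
      print(round(room.temp, 1))   # hovers around the setpoint
      ```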

  8. Anonymous Coward
    Facepalm

    Yeah, but...

    We already know the answer is 42.

  9. Matt Bucknall

    Can't help feel...

    ...that this kind of exercise is akin to implementing PC virtualization with a SPICE simulator. It's going to be massively inefficient any which way you look at it. If anyone is qualified to implement a super-efficient neural network in real hardware, it's Furber! I hope this work leads him to such a solution someday.

  10. Goatan
    Trollface

    Lookalike

    Am I going crazy, or does Steve Furber look like John Lithgow (the 3rd Rock from the Sun dude)?

  11. Steve Martins
    Boffin

    Brains aren't synonymous with storage

    It sounds like they are trying to replicate the synchronous firing of neurons as a result of data, rather than the synchronous firing of neurons resulting in data. I've found the best way to think of it is as an event-driven cascade resulting in further events.

    As a cascade moves through the brain, the neurons along the paths it follows fire. If enough of them fire in synchrony, the electric potential in that localised area rises, and at a certain level it triggers a reaction with an enzyme which makes that path stronger, so in future an impulse is more likely to take that path. Each pathway represents information that may or may not be correlated. Over time the correlations form patterns, so they fire in synchrony, leading to associative memory (which is why the best learning techniques use association).

    Maybe I'm wrong, but I believe mimicking the brain's activity will need a drastic step change in the design of processors before they can truly replicate such behaviour!
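
    The strengthen-paths-that-fire-together idea above is essentially a Hebbian rule, and it is easy to sketch. The toy network below is my own illustration of the principle, not SpiNNaker's spiking-neuron model:

    ```python
    # Event-driven cascade with a Hebbian-style update: connections between
    # units that fire together are strengthened, so later cascades prefer
    # those paths. Thresholds and rates are arbitrary toy values.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    weights = rng.uniform(0.0, 0.1, size=(n, n))    # weak initial connections
    np.fill_diagonal(weights, 0.0)

    learning_rate = 0.05
    threshold = 0.1

    state = np.zeros(n, dtype=bool)
    for _ in range(50):
        state[0] = state[1] = True          # external stimulus each step
        drive = weights @ state             # summed input from firing units
        new_state = drive > threshold       # fire if the drive crosses threshold
        # Hebbian step: strengthen the connections that just carried the cascade.
        weights += learning_rate * np.outer(new_state, state)
        np.clip(weights, 0.0, 1.0, out=weights)
        state = new_state

    print(weights.round(2))   # paths used together have grown stronger
    ```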

  12. Martin Usher
    Unhappy

    ICL?

    A web search for ICL generates all sorts of results, nothing in the computing line. Alas, like every other engineering venture in the UK, it either generates overnight profits or it's gone -- in this case, to become Fujitsu, a name I associate with laptops and incredibly overpriced government software.

    Even the Manchester bit is just a nod towards history. Manchester used to be at the centre of computing, but... whatever, the future's all marketing, financial services and the like, isn't it?

  13. John Smith 19 Gold badge
    Thumb Up

    Why nature is ahead.

    Well, volume-wise it helps if you can do *true* 3D packaging.

    The best I'm aware of in this line was a Hughes project for some kind of missile guidance system (SDI?), stacking bare *wafers* on top of each other with feed-through connectors made by putting drops of tin on the surface and using a temperature gradient to "drive in" the tin, creating a high-conductivity path front to back. Top layer sensors (vision?), then multiple layers of processing and memory.

    Today a process called "Smart Cut" uses hydrogen ion implantation to create a weak layer <10 micrometres below the surface. Build the circuit on a regular-thickness wafer, slice off the top and repeat. That's roughly a 30x thickness reduction, putting very substantial computing power into a standard chip package. If you can get the heat out.

    However this still leaves you *fundamentally* in a 2D world. Neurons are simply *not* restricted in this way. They also allow fan-outs of up to 10,000 other neurons, while conventional transistor gates hit about 10 by design.

    Power-wise, the brain's asynchronous architecture saves a *lot* of power and eliminates the whole clock-distribution problem. Today *half* of all CPU transistors are dedicated to transmitting the clock, restoring its rise/fall times, or re-synching the local clock with the chip-wide clock because of clock skew.

    To *really* get to human-brain power levels they would have to go the way of Carver Mead's Caltech group, using custom logic elements (but built on conventional foundry processes) as *analog* components of the simulation. They've gone rather quiet lately, but part of what they found was that the design has to *incorporate* noise, not fight it (digital logic aims to swamp noise).

    They also found that "Computation" flowed in waves across their arrays of devices.

    BTW the torus is a good architecture for supercomputers, as a message gets to its target *whatever* x,y direction it's sent in - even the long way round.
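
    A quick illustration of that wrap-around property (the coordinates and hop counting below are my own toy example, not SpiNNaker's actual router):

    ```python
    # On an n x n torus every axis wraps round, so a packet reaches its target
    # travelling in either direction; the router can take the shorter way
    # (or the other way, if a link is down).
    def torus_hops(src, dst, n):
        """Minimum hops between (x, y) nodes on an n x n torus."""
        def axis(a, b):
            d = abs(a - b)
            return min(d, n - d)    # direct route vs the wrap-around route
        return axis(src[0], dst[0]) + axis(src[1], dst[1])

    n = 8
    print(torus_hops((0, 0), (7, 7), n))   # 2 hops via wrap-around, not 14
    print(torus_hops((0, 0), (4, 4), n))   # 8 hops -- same distance either way
    ```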

    An interesting point about this project will be whether the processors will be black-box neurons, with *only* the connections and initial data values settable (i.e. like a *real* brain), or whether they will tinker with the simulator code on the nodes once built. IRL that would be more akin to supplying drugs, or replacement cells grown in vitro from stem cells.

    It's an interesting project, and I'm not sure how much work has been done on the bridging stages between actual neurobiochemistry and the big-picture stuff. Thumbs up.

  14. Iain
    Go

    Will it run 17 times slower?

    Anyone get the reference to a mind-bending SF book?

    1. John Smith 19 Gold badge
      Happy

      @Iain

      When HARLIE Was One?

  15. Anonymous Coward
    Anonymous Coward

    less intelligent than a man?

    As long as they don't simulate the parts of the brain to do with thinking about beer and girls, it should work out about 1000% more intelligent by my reckoning.

  16. Anonymous Coward
    Anonymous Coward

    womb envy

    I sometimes wonder if male artificial intelligence developers have womb envy. Much as male-bashing futurologist articles consider a world where men are not needed, perhaps this is the opposite retaliation. An underlying resentment of the opposite sex perhaps?
