ARM chip OG Steve Furber: Turing missed the mark on human intelligence

"Brains are massively parallel. We each have just under 100 billion neurons inside our heads, all running at the same time. And they are hugely connected, with 1015 synapses connecting the neurons together. The way forward in computing is parallelism. There is no other option." Professor Steve Furber, one of the designers of …

  1. Anonymous Coward
    Anonymous Coward

    Marvellous

    That's the sort of content that keeps me coming back to the Reg - great science, great tech, lucidly and amusingly presented. Thank you.

    1. Korev Silver badge
      Pint

      Re: Marvellous

      Me too

      One for the author ->

      1. Will Godfrey Silver badge
        Pint

        Re: Marvellous

        ... and a crate for Steve Furber's team.

        1. Adrian 4
          Pint

          Re: Marvellous

          Partially-Ordered Event-Triggered Systems

          And a friday afternoon's worth for Andrew Brown.

    2. Stoke the atom furnaces

      Re: Marvellous

      ARM Chip OG (Original Gangsta).

      Who came up with such a brilliantly succinct headline?

  2. Voland's right hand Silver badge

    Two parameters involved

    1. Compute elements. That we have solved - our compute elements vastly exceed the capacity of your average neuron. A single neuron is not even a 4004 - it is less - just a few logic gates.

    2. Network density. That is something we have actually failed to figure out. "Creative Wiring" does not cut it. While the I/O bandwidth of a brain is nothing to shout about, the interconnect (especially in some birds) is orders of magnitude above anything we have come up with.

    We will not get anywhere near solving the AI problem (not even to mouse brain levels) until we crack that one. Sure, we can use neural nets and other AI-style approaches to solve specific problems. Getting a mouse brain to function, however, is outside of our abilities. Even if the mouse is not called Algernon.

    1. Anonymous Coward
      Coat

      Re: Two parameters involved

      If your mouse Algy has rhythm, he might be working on a different problem.

      1. defiler

        Re: Two parameters involved

        Everyone knows Rastamouse has rhythm.

  3. jake Silver badge

    So ...

    ... The folks currently making/spending massive bucks in AI/ML are either cluelessly following a trail to nowhere, or they are shysters?

    1. Anonymous Coward
      Anonymous Coward

      Re: So ...

      "The folks currently making/spending massive bucks in AI/ML are either cluelessly following a trail to nowhere, or they are shysters?"

      Neither. Given that we simply don't understand how biointelligence works, mainstream AI/ML development has focussed upon a statistics-based approach that we know does work, at least to a degree. This statistics-based approach, though, isn't capable of self-learning and requires exhaustive training just to be able to solve each specific type of problem; it is never going to spontaneously deduce the existence of rice pudding and income tax. As Steve Furber points out, biointelligence doesn't work like that; "you could take a two-year-old human, show them one cat, and they'll recognise cats for the rest of their life."

      I also think that he makes a very good point when he highlights the energy and efficiency considerations; this goes far beyond the simple matter of 'jelly' vs. silicon and points to a completely different paradigm.

      1. DropBear
        WTF?

        Re: So ...

        Actually, I have enormous issues with how he highlights the energy and efficiency considerations. The current units we (they) use to simulate neurons have nothing in common with actual neurons, which are more akin to a simple logic gate. A bazillion of "cores" takes megawatts to run because each of them is incomparably more complex (and more active) than a neuron. Do neurons go "ping" billions of times per second? No? Well then. On the other hand, a bazillion of logic gates takes only watts to run - it's called "one single core". It's infinitely less interconnected internally than an equivalent number of neurons would be of course (and it's not wired for parallel processing), which is why that single core is not much use for AI; but to compare efficiency numbers like this is not even wrong. It's just fucking meaningless.

        1. Primus Secundus Tertius

          Re: So ...

          @DropBear

          I doubt that a neuron is 'just a simple logic gate'. Even a bacterial cell, without a nucleus, embodies feedback mechanisms without which it would not survive. Cells with a nucleus, including neurons, are much more complicated than bacteria.

          I therefore believe that until we understand the whole evolutionary history of cells and brains we will not properly understand how the brain works. Current AI will, I expect, produce useful machines and some lessons, but not that full understanding.

        2. Anonymous Coward
          Anonymous Coward

          Re: So ...

          Meaningless? I've really got to disagree here. Like I said above, this goes far beyond comparisons of hardware and wetware. The key thing is the amount of 'work' achieved within the energy budget.

          Sure, you may be able to power a lot of simple logic gates on a low energy budget but you won't be able to make them do very much with existing paradigms.

          It's basic and fundamental physics; you can't change anything without energy being involved, whether it's flipping a logic gate or firing a synapse, and the bottom line is that our brains need a lot less energy to do the vast range of things that they do than the simplest silicon system.

        3. the spectacularly refined chap

          Re: So ...

          A bazillion of "cores" takes megawatts to run because each of them is incomparably more complex (and more active) than a neuron. Do neurons go "ping" billions of times per second? No? Well then.

          Well done on completely missing the very point he was making. Of course neurons don't operate at those kinds of speeds; the whole point is that throughput is achieved via parallelism instead of one big core running at unimaginable speed. His observation was that this isn't what happens, so why are we proceeding on that basis?

          Furber is far smarter in this area than you or I will ever be. You do not arrive at some profound insight by calling him a dick, taking a tiny line of argument he uses, and then developing that line in the same way he himself proceeds.

          1. handleoclast

            Re: So ...

            His observation was that this isn't what happens, so why are we proceeding on that basis?

            Fundamental theorem of computing: any problem that can be solved using multiple CPUs can be solved by a single CPU using context switching. It's just slower.

            So yes, you can emulate millions of neurons with a single CPU, it's just slower. Even when the CPU is clocked in the Gigahertz it's still a lot slower than a million neurons, because the CPU has a lot of extra complexity to allow it to perform general computing problems. The architecture isn't optimized for emulating neurons, it's optimized for being able to handle many different types of problem programmatically. The same CPU could run a game, or let you browse the internet, or compile some code, or...

            Long ago I realized that neurons have some similarities to how overflow-rate multipliers were used in the early days of CNC machines. Not identical, but maybe enough to point the way to an optimized neuron emulator. Or maybe I'm talking bollocks, there. Then again, the earliest hardware emulations of neural nets used even simpler circuitry and managed primitive optical recognition. Plessey, I seem to recall, had something to do with that research.

            Anyway, using a CPU is almost certainly the wrong way to emulate neurons in bulk and at speed. The architecture is completely wrong. But it's very configurable, which is what you want at this stage of the game. If you came up with a better hardware architecture it would require custom chips at great expense, and when you put enough of them together you'd probably find your architecture had problems. CPUs let you research the problem enough that one day you'll be able to figure out what you want well enough to go to a completely different hardware architecture with a degree of confidence.
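
            To make the single-CPU point concrete, here's a minimal sketch (Python/NumPy, assuming toy leaky integrate-and-fire units with invented constants - nothing to do with any real simulator): one general-purpose core simply time-slices across a million "neurons" every step.

              import numpy as np

              # One core emulating N "neurons" by time-slicing: a toy leaky
              # integrate-and-fire model stepped with a fixed dt.
              N, STEPS, DT = 1_000_000, 100, 0.001       # a million units, 100 ms total
              TAU, V_THRESH, V_RESET = 0.02, 1.0, 0.0    # membrane time constant etc.

              v = np.zeros(N)                            # membrane potentials
              drive = np.random.rand(N) * 60.0           # crude constant input per unit

              for _ in range(STEPS):
                  v += DT * (-v / TAU + drive)           # leak + input, one slice per unit
                  fired = v >= V_THRESH                  # which units spike this step
                  v[fired] = V_RESET                     # reset the ones that fired
                  # a real simulator would now deliver each spike along its fan-out,
                  # which is exactly the part one serial core does painfully slowly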

          2. DropBear

            Re: So ...

            "Well done on completely mission the very point he was making."

            I must return the compliment - way to miss my point too. I was commenting on power requirements alone, and as long as boffins are using entire cores as "units" instead of something more or less equivalent to a logic gate - which, based on our current limited knowledge, is what a neuron is functionally most similar to, regardless of the inner complexity that keeps it alive - it's ludicrously pointless to even mention consumption side by side. No, we're not using parallel architecture now the way a brain does. We use stuff that does one single thing at a time, working very fast. Which is why it needs so much energy, especially if we go on to build huge clusters of them trying to mimic a brain. If we were using much more parallel but relatively SLOW stuff, akin to many-input gates, it would consume very little power even today, even if the resulting device would appear to process massive amounts of data quickly due to its parallel structure and the sheer number of "gates".

            TL;DR: neurons are, as far as I know, NOT ultra-fast oscillating units, which is the thing that makes electronics power-hungry. Any slow-switching electronics simulating whatever it is they actually do would similarly have a LOW power consumption, unlike the myriad of super-fast cores we build our brain simulators out of today. Comparing those abominations to a brain's power consumption is still not even wrong.
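
            For what it's worth, the usual back-of-envelope for the frequency point is the CMOS dynamic-power rule P ≈ α·C·V²·f; the numbers below are invented round figures (the capacitance, voltage and activity factor are pure assumptions), purely to show the frequency term doing the damage.

              def dynamic_power(alpha, c_farads, v_volts, f_hz):
                  """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
                  return alpha * c_farads * v_volts ** 2 * f_hz

              # Same (made-up) switched capacitance and voltage; only the rate differs.
              fast = dynamic_power(0.2, 1e-9, 0.9, 3e9)   # GHz-class switching
              slow = dynamic_power(0.2, 1e-9, 0.9, 100)   # ~100 Hz, neuron-ish pace

              print(f"fast: {fast:.2f} W, slow: {slow:.2e} W, ratio: {fast / slow:.0e}")
              # the ratio is simply 3e9 / 100 - frequency is what eats the power budget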

      2. Anonymous Coward
        Anonymous Coward

        Re: Wheel vs Legs.

        Current computers and AI are like a car and wheels. A car goes from a to b with wheels. So does it simulate a human, who goes from a to b with legs?

        Almost. It does the task, but in a different way. Thus current AI may have some of the aspects of a brain, or intelligence of a person, but not often and not completely.

        PS: as to energy use, some things can be changed without using much energy... it's just that we are not very good at it artificially just yet, whereas neurons can do a switch of potential very energy-efficiently. :D

      3. Long John Brass
        Terminator

        Re: So ...

        "you could take a two-year-old human, show them one cat, and they'll recognise cats for the rest of their life."

        Hmmm ... Humans have an awful lot of wiring in place straight off the bat thanks to evolution, so it's not really a fair comparison to an AI that's starting from scratch. What's the embedded cost in watts over a billion years?

  4. J I

    Ancient history

    For those who like their computer history, it's perhaps worth mentioning that Acorn did actually do a prototype tablet device, the NewsPAD, around 1996 as part of an EU project, but it never got further than that:

    http://chrisacorns.computinghistory.org.uk/Computers/NC.html

    https://en.wikipedia.org/wiki/Acorn_Computers#NewsPad

    I tried one out at the time - it was a bit clunky, but it did point the way to what we have today.

    1. Chris Evans

      Acorn NEWSPad Re: Ancient history

      I recall hearing from Acorn that when Larry Ellison visited them about the NC reference design, they showed him the NEWSPad. He was impressed and wanted to take one back to the States; when they said they couldn't give him one, he replied, "But I might buy the company!" They then explained that they only had two prototypes.

    2. Anonymous Coward
      Anonymous Coward

      Re: Ancient history

      So that's Apple up creek without a paddle?

  5. Michael H.F. Wilkinson Silver badge
    Coat

    So basically, we need very many machines that go "ping"

    Sorry, couldn't resist. I'll get my coat. Mine's the one with the DVD of Monty Python's Meaning of Life in the pocket

  6. psyq

    Equivalent to the brain of...?

    "Put four chips on a board and you get 72 ARM cores, which equates to the brain of a pond snail. Put 48 chips on it and you get 864 cores, equivalent to the brain of a small insect."

    I am sorry, but no: until we have a satisfactory model of neural computation, stating that XYZ ARM (or any other) cores are somehow equivalent to the brain of >any< living being is preposterous.

    Needless to say, at this moment we do not have such a model, so the actual required compute power is still an unknown. Should we model networks, spikes, membrane dynamics, ionic channels, proteins, molecules...? What is the appropriate level of abstraction, if any? Nobody has yet found the answer, so, no, a bunch of CPU cores is not equivalent to biological anything.

    1. matjaggard

      Re: Equivalent to the brain of...?

      It was specified first that this was ONLY about numbers of ARM cores vs numbers of neurons.

  7. Milton

    Suspect assumptions

    All in favour of the science and I'm sure there will be much to learn from these massively parallel endeavours.

    That said, there are at least two glaringly suspect assumptions here:

    1. That because the human brain works with a lot happening in parallel, a computer must do so to the same level. This ignores the fact that silicon and the qubits that will eventually arrive on the scene have matchless power and many strengths that the squishy grey jelly simply does not. One reason the brain works with such parallelism is because it cannot clock at, say, 5 GHz. Jelly cannot do it. Silicon can. Insofar as the brain's parallelism is a compensation for its many other weaknesses, it is unwise to become too obsessed with parallelism for its own sake. This runs the risk of learning the wrong lessons from the human brain and can easily become a blind alley.

    2. That the animal brain is something we should faithfully emulate ... but why? Animal brains are evolved, not designed, and include a great many of the errors, inefficiencies, redundancies and circuitously superfluous kludges that evolution produces because it does not and cannot think ahead. You wouldn't design a robo-giraffe with a wasted length of neural wiring in its neck, as evolution caused to happen: you'd think ahead, *design*, and do it better. The human brain is shockingly easy to deceive and manipulate, constantly forgets and makes mistakes, and is quite capable of holding beliefs contradicted by objective fact and rationality: what's the point of including all the weaknesses and bad stuff? Why try to replicate the human multiple-reinforced-connections way of storing memories (which gradually summarises, simplifies, erodes and sometimes completely fictionalises them) when technology can put ever-tinier terabytes of RAM and petabytes of storage in your hands, to be managed by software that will store far more data more accurately than a person ever could?

    If you do succeed in creating something with the processing power and *processing style* of a human brain, it will have to have emotions: fear, hunger and lust being near the top of the list, since they keep an organism alive and provide it with motivation. Without feeling, you have a computer, not a mind. Even assuming you can implement this in a non-organic substrate, and even assuming that this is not merely a software emulation of those feelings (therefore, still a computer), what do you do next? Answer: you're either a son of a bitch who's imprisoning an innocent child, or you spend the next 20 years getting stuck in an ethical thicket, because you've created a consciousness, something which probably ought to have freedom and citizenship and agency ... and the latter will by definition include the capacity to decide to do harm or good.

    In sum, attempting to build a truly human brain is probably impossible and almost certainly horribly unwise. Yes, by all means let's continue creating awesomely powerful computing devices, they may be our salvation. But where brain and mind is concerned, the ambition is in more than one way quite doomed.

    (And yes, I am purposely conflating brain and mind in this comment, which in this context is not necessarily a reductive fallacy.)

    1. Charles 9

      Re: Suspect assumptions

      Regarding (2), part of the reason for modeling a living brain, foibles and all, is to get a better understanding of how OUR brains work, of which concrete data is sparse at best. We can't model around something we don't understand yet; we could easily take a wrong turn.

  8. Anonymous Coward
    Anonymous Coward

    He's an interesting chap, Steve. He was my first year tutor. Brain the size of a planet. Despite (this being 2009) being an expert in ARM, low-power networking and distributed device-based computing, he'd never so much as used a smartphone as he "couldn't see the point".

    Made for an interesting conversation with a tutorial group of 19 year olds.

    1. Anonymous Coward
      Anonymous Coward

      'he'd never so much as used a smartphone as he "couldn't see the point".'

      If you're working in this sort of area then sometimes it's better to be disconnected from the results of your work .... is it really a good idea to realize that your life's work is to enable people to have a 24-7 connection to Facebook etc!

      I remember it hit me years ago when we were being asked to almost double the performance of the processor we were designing, and when pressed on why this extra performance was needed, the answer seemed to be "the customer wants to add 3-d shadows to the text on the on screen menus".

      1. Primus Secundus Tertius

        "the customer wants to add 3-d shadows to the text on the on screen menus"

        I guess that's the time to hand it over to the B team.

        1. defiler

          "I guess that's the time to hand it over to the B team."

          s/team/ark

    2. Korev Silver badge
      Boffin

      I met a very well known figure in machine learning recently who'd only just got his first smart phone (and barely knew how to use it) on the basis that Google and Apple use his technology so he ought to see it in use.

  9. Tom 7

    Missing 500 million years of structure.

    I think a major slowdown will be the simple fact that we are at the end of 500 million years of evolution. Our brain is not a neural net - it's a shitload of them put together in a specific way, with a considerable collection of initial conditions and connections that gives us a considerable headstart (sic) over a bunch of processors put together with what for now is guesswork.

    However, now that we have AI that can learn for itself and beat the best man-made Go machine, I think it could be interesting to watch development over the next few years as machines develop angst with no alcohol to fix it.

  10. Mage Silver badge
    Boffin

    72 ARM cores, which equates to the brain of a pond snail

    No, it doesn't.

    Maybe it's as many connections, but it's not at all like a brain.

    Even in the 1970s we knew the "future" of computers was parallelism. Programming, rather than hardware, has been the problem.

    It's very interesting research and I hope the "real" work is more about how to program parallel systems than chasing unicorns.

  11. amanfromMars 1 Silver badge

    And the Applications for Way Forward Parallelism, Professor?

    The way forward in computing is parallelism. There is no other option. .... Professor Steven Furber

    Well, no other more intelligent option, Professor. And the results and entangling will be dazzling and quite supernaturally disruptive and disturbing to moribund petrified status quo systems administrations.

    amanfromMars Oct 17, 2017 2:45 PM ..... [1710171945] ....... following opportunities on http://www.zerohedge.com/news/2017-10-17/russia’s-crypto-ruble-just-changed-game ...... or thinking to create them?

    Putin is openly inviting investment capital into Russia that is legal and above board. Russia wants legitimate businesses to operate in Russia in whatever currency they like as long as that business is transparent.

    Here's a SMARTR Joint AIBusiness Venture, methinks worthy of Putin Presidential Consideration ..... A Safe Harbour for Russia Crypto-Rubles be their very own CyberIntelAIgent Network of Global Operating Devices Live Active BetaTesting with Future Augmented Virtual Reality Productions for NEUKlearer HyperRadioProActive Live Operational Virtual Environments. ....... Quite Alien Space Places.

    Is anyone able to Offer and Deliver More, Even Better or Different and Working in a Parallel Dimension ....... which we can from here deeper explore and further examine with simple complex searching questions looking at forthright answers for dynamic future secured solutions.

    And just to make sure that there is no misunderstanding, any and/or all of that is readily available to any and/or all who would recognise their Need and Desire its Advanced IntelAIgent Feeds/Seeds/Magical Sources. I wouldn't want any national to be thinking they are excluded.

    1. Anonymous Coward
      Anonymous Coward

      Re: Applications, Professor?

      Yet a simple human brain, just maybe any of which go in bulk markets for a Penny Per Pack, is incomparable with Nanometricons in its effectiveness in Consumption to Work. Each of them can generate a unique universe, which none of the known, and being in project, supercomputers, can do.

      But who needs such a cheap universe? Does Budding the Handles That Fit IT rise its Anything to be Valued/Estimated?

      Solutions, SomeTHInG from which SomeOnE anywhere could have repeated, implemented, gained any kind of profit. Just anything given to our senses, that can be extracted from any of 8 bn of universes - that's the only value they can produce for those into accounting books/Mankind/etc.

      A simple supercomputer makes this extractionist behavio(u)r taking much less effort. Or - gives 815 minutes to edit this post, while a grey banner above the postbox, placed once by some lucky universe-maker, tells that one has only 10 (-;

      1. amanfromMars 1 Silver badge

        Application, Professor? NEUKlearer HyperRadioProActive Silk Road Ways/Quantum Communications Waves

        Howdy, AC,

        Methinks, Go East to the Middle Kingdom/Republic of China, is where all the NeuReal Surreal Flash Work is most likely to be very highly valued and regarded nowadays, AC .......

        Meanwhile, foreign investors can benefit from strong government support in emerging tech industries like VR device R&D and manufacturing. ..... Newly encouraged industries. R&D and manufacturing of virtual reality (VR) and augmented reality (AR) devices .... China’s 2017 Foreign Investment Catalogue Opens Access to New Industries

        Especially whenever the West is so nobbled to server old systems propping up failing capital markets for corrupted vested interests and thus absolutely terrified of that which emerges in/from the future which they neither comprehend nor command and control.

  12. Philip Stott

    I can't help thinking that Steve Furber should have a chat with Jeff Hawkins of Palm & Numenta fame (which I also initially learnt from another excellent Reg article).

    Between them I reckon we could expect SkyNet to come online in short order.

  13. Paul 195

    The point Mr Furber makes about power consumption is a very good one, and gives us a very good clue about just how far away we are from emulating human intelligence. It's something for all those people who expect to merge with the singularity to think about. Even with your big heavy meat body attached, you are about 100,000 times more energy efficient than today's best technology, even if we knew how to upload you. A thousandfold improvement would get your energy cost down to 20 kW, so Sizewell B would be able to power about 63,000 people, about 3/4 of the population of Basingstoke.
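
    Taking the comment's own 20 kW-per-person figure, and Sizewell B as roughly 1.25 GW (an assumed round number - pick your own reactor rating), the arithmetic comes out like this:

      BRAIN_W = 20                 # rough power draw of a human brain, watts
      UPLOAD_W = 20_000            # the 20 kW per-upload figure from the comment above
      SIZEWELL_B_W = 1.25e9        # ~1.25 GW output - an assumption

      people = SIZEWELL_B_W / UPLOAD_W        # how many uploads one reactor could feed
      still_worse = UPLOAD_W / BRAIN_W        # even then, far hungrier than the meat version

      print(int(people), int(still_worse))    # 62500 1000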

    1. Charles 9

      But if you give it (and physics) some additional thought, you begin to realize that perhaps the REAL reason the brain is as "efficient" as we think is that we're overlooking the idea that the brain is a bodge job. IOW, it's full of shortcuts and assumptions. It's as simple as taking a very good look at how the brain interprets the signals from our eyes (which BTW is rather incomplete). Extrapolate from that and you begin to wonder just how many of these bodges are built into our brain.

      1. Anonymous Coward
        Anonymous Coward

        Look at structure...

        A "bodge job" collapses. Like a poorly built house.

        A "network" is sprawling, but you will find each of those knots a requirement to get from a-b efficiently without blocking the other.

        Just look at plants. While a garden is the opposite to a jungle, each individual plant will *always* go towards the light source for efficiency.

        Thus the assumption that the human brain is a "bodge job" may be because, as a whole, it looks like a jungle, but at the neuron level etc. it is super efficient. It uses "assumptions" only where required or where failure is not a problem (see the blind spot in the eye's image processing etc).

        1. Charles 9

          Re: Look at structure...

          No, a bodge (or kludge) simply means it's assembled haphazardly. Evolution tends to do that sometimes because it tends to be reactive. Has no meaning as to whether or not it actually works, just that it was designed on the spot (trust me, I've watched Scrapheap Challenge--now those were some bodge jobs; some just worked better than others). After all, not everything that comes out of evolution makes sense (like yawning).

          1. Paul 195

            Re: Look at structure...

            "haphazard" rather ignores just how efficient evolution is at engineering good structures. Those random mutations which create small improvements become part of the gene pool, and those which don't get lost. The process is one of continual iterative improvement with ruthless whittling of functionality that doesn't help you survive long enough to have offspring - and long enough to help your offspring survive tool.

            The fact is that we are nowhere near building machines which work as well as the thing you are describing as a "bodge". Good engineering is all about only building as much as you need; the information we throw away simply isn't needed most of the time. If we knew how the brain was so good at discarding the irrelevant to concentrate on the important, we might be able to build better machines.

            With lots of effort we can optimize machines to perform specialized tasks far better than we can, but we are still an incredibly long way from creating anything as adaptable and smart as a human. Or even a cat.

  14. davcefai

    Number of cores

    Not all of the brain is concerned with reasoning. A goodly portion is "engine management" of the body. Without entering the other arguments in this thread I would venture that, based on the author's calculations, they will end up with a "brain" bigger than a human's.

  15. DropBear

    "The way forward in computing is parallelism. There is no other option."

    I seriously doubt it. Parallelism is only good for "data flow" processing, which actually approximates humans acceptably - perceptions going in, actions going out, emotions rattling around inside. Given enough runtime, enough state might even accumulate inside for the occasional "I think therefore I am"; but as far as current general-purpose computing goes, it's incomparably better for anything rigorous and precise even now than any "parallelised" (or even our own, "state-of-the-art") brain will ever be. We're being beaten by any pocket calculator for that sort of thing. Yes, parallelism-powered AI is what you'll need for mollycoddling the apparently endlessly ageing first-world population. But it will be useless* as soon as you need a CAD package, or a VR simulation or, you know, serving up a webpage...

    * Bear in mind that in this context "parallelism" is typically understood as "a large number of interconnects between processing units, a large number of which are affected by any information diffusing through the system" and NOT "a large number of specialized processing units performing the same well-defined operation on many pieces of data simultaneously" the way we have in GPUs today.

  16. Anonymous Coward
    Anonymous Coward

    AI Getting Nowhere

    "It turns out human intelligence is not about that. We're still not quite sure what it is about. However, we do know the brain is formidably power efficient."

    So he is admitting that he, the rest of the AI community, and biologists have given up on doing the actual basic research - the SCIENCE - needed to solve the problem, and are instead just throwing dead rats or memory, processors and interconnections at the problem until they find intelligence (or the funding dries up).

    1. Slx

      Re: AI Getting Nowhere

      Very few computers can run on a cheese sandwich.

      1. magickmark
        Thumb Up

        Very few computers can run on a cheese sandwich.

        Or a really hot cup of fresh tea

  17. Korev Silver badge
    Joke

    An easy problem to solve

    If only they used brain processors instead of ARM ones then they'd probably find it a lot easier...

  18. fluffybunnyuk

    It's a nice idea, unless like me you subscribe to the view that conscious thought is non-computable...

    1. David Nash Silver badge

      the non-computability of conscious thought...

      Well, we can't know whether that is right or not without doing the research.

      It doesn't help much to "subscribe to a view" without showing that it is either correct, or not.

      1. fluffybunnyuk

        Re: the non-computability of conscious thought...

        "Well, we can't know whether that is right or not without doing the research.

        It doesn't help much to "subscribe to a view" without showing that it is either correct, or not."

        I refer you to the book "Shadows of the mind" by Penrose as starter material.

        At university I wrote a paper on the subject matter, concluding that machine intelligence is not and can never be "human", and that any machine intelligence must in and of itself be strictly "machine". I do not subscribe to the view that consciousness in an animal (human beings being one such sub-category) can ever be simulated by various computational or mathematical processes alone.

        So yes I have done my research, and yes I reached my conclusion based on 3 years of study.

        Whether or not it is correct remains to be seen, but certainly not in our lifetime without significant advances in computer science and mathematics, the likes of which we have not seen yet.

  19. Bronek Kozicki

    general purpose CPU ...

    .. for the simulation of a neural network? Well of course, if power utilization and space are not a concern, do count me in. There are surely many hobby projects where this could work. However, for a large simulation, I would certainly use a different approach. Something along the lines of Google's TPU, perhaps.

    1. Bronek Kozicki

      Re: general purpose CPU ...

      To elaborate, the only calculation that a single node of such a network needs to do fast can be illustrated as here. It follows that in order to keep your power requirements optimal, you need to have this (or a similar - it is one of the variants) calculation embedded in hardware, with minimum data flow on input and output. For example, currently the biggest chunk of the power budget of any general-purpose CPU is consumed on shuffling data to and from DRAM, and in a neural network this should not be necessary most of the time.
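
      Presumably that calculation is the usual weighted-sum-plus-activation step; here is a minimal sketch on that assumption (the logistic squashing function is just one of the variants), with everything the node needs held locally so the hot path never touches DRAM.

        import math

        def node_update(weights, inputs, bias=0.0):
            """One node's fast path: weighted sum of its inputs, squashed by an
            activation. In dedicated hardware this loop sits right next to the
            stored weights, so no external memory traffic is needed per event."""
            total = bias
            for w, x in zip(weights, inputs):
                total += w * x
            return 1.0 / (1.0 + math.exp(-total))   # logistic activation

        print(node_update([0.5, -1.2, 0.3], [1.0, 0.0, 1.0]))   # ~0.69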

      1. Anonymous Coward
        Anonymous Coward

        Re: general purpose CPU ...

          I often wondered where the difference between memory and computation goes in a pure neural network system?

        1. Bronek Kozicki
          Boffin

          Re: general purpose CPU ...

            Each node of a neural network maintains very little state, but nevertheless there is some state - at the very least the weight of the input (e.g. the "y" parameter on the graph above). Arriving at the useful combination of this state for all nodes is what neural network training is about, and in a classical simulation it would be an important part of the data file describing the network. However, since there is so little state, a pure neural network would not need separate RAM to store it to (or read it from) during active work, as the required memory should easily fit alongside the computation part. Still, in the interest of maintaining neural network state between machine suspends, it would be desirable to occasionally dump this state elsewhere - hence RAM could be used as a buffer for writes. Perhaps alongside a description of the network topology.

  20. Anonymous Coward
    Anonymous Coward

    intelligent design

    I'm curious how many people actually believe one of the systems being designed could have crawled out of the mud - and I am being facetious here, seeing as a lot of these designs are being copied from biological self-repairing systems already running around that fuel themselves and do the dishes.

  21. Aladdin Sane
    Terminator

    This explains

    Why the machines needed humans in The Matrix. Not for power, but for processing.

    1. Anonymous Coward
      Anonymous Coward

      Re: This explains

      It was the original storyline. It would make more sense, and IMO I just replace the words when watching. ;)

  22. CrazyOldCatMan Silver badge

    Put 48 chips on it and you get 864 cores, equivalent to the brain of a small insect.

    Double that and you'll have the functional equivalent of a politician..

    1. Spacedinvader

      Re: Put 48 chips on it and you get 864 cores, equivalent to the brain of a small insect.

      Half, Shirley?

  23. Chris Evans

    Great article

    Great article and good to read the correct original etymology of ARM

    "the original Acorn RISC Machine (better known as the ARM chip)".

    When they set up ARM PLC this was changed to "Advanced RISC Machine" though I think it has for many years been just plain ARM.

    On their website I see "arm, ARM and Arm" on the same page!

  24. HmmmYes

    Why not start off with a slug brain and work your way up?

    I remember the 80s and the fight between top-down and bottom-up AI.

    Both promised humanlike intelligence in 2000-ish.

    In all things brain + neurony it might be better to have a bit more honesty, starting any claim with 'We are not sure, but ...'

  25. JimmyPage Silver badge
    Thumb Up

    Plus ...

    an awful lot of mammalian intelligence isn't in the brain anyway. Every single cell in the body has an input into it. Vision, for example. Your eyes and optic nerves are WAY more than just cameras and cables.

    Hearing ? Well, we know that the ear processes sounds before letting the brain know what they are.

    and so on.

  26. Slx

    The thing we forget about biological computers is that all the wiring also appears to be active processors. We only relatively recently discovered that the dendrites that connect the neurones are fully active signal processors.

    Also brains aren't binary, they can have umpteen different complex electrical and biochemical nuances between 0 and 1 and they can combine all of those in vast numbers of complex ways.

    So, I would suspect that 1 million ARM processors is still probably drastically less than 1% of a brain's processing power.

    We're still a long way from mimicking what wetware does!

    1. YARR

      I've also read that neurons exhibit quantum behaviour (which could be considered a form of parallelism), which might allow neurons to communicate at a distance.

      If the observed behaviour of a neuron is so complex, how can scientists be confident that the classic model neuron they simulate truly represents how a brain works? Maybe AI researchers are barking up the wrong tree and should return to researching how individual neurons and networks of neurons behave?

  27. Rebel Science

    Efficient, unsupervised spiking neural nets are the future of AGI

    Great article.

    "And so we built a hardware-software system that has good support for sparse connectivity. We're very focused on spiking networks whereas machine learning almost completely ignores spikes."

    Wonderful. Now that deep learning guru Geoffrey Hinton has finally acknowledged that we must abandon backpropagation and start over, it is time to promote the correct paradigm that will replace backpropagation. Deep neural nets will soon become obsolete. The future of machine learning will be based on the precise timing of discrete sensory signals, aka spikes. Welcome to the new age of unsupervised spiking neural networks.

    Unsupervised Machine Learning: What Will Replace Backpropagation?

  28. Rebel Science

    The timing of the spikes is what's important

    "Depending on its role in the brain, that timing may or may not be significant. It's clear you can't completely ignore it."

    Are you kidding? You guys need to completely forget about spiking rate. Timing is everything. EVERYTHING. Spiking rate is a red herring, a complete waste of time (no pun intended). It is true that the retina uses rank order encoding to compress visual information (~200 to 1) but the cortex is entirely driven by the precise timing of the spikes.

    Fast Unsupervised Pattern Learning Using Spike Timing
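
    For anyone wondering what a purely timing-driven rule even looks like, spike-timing-dependent plasticity (STDP) is the textbook example - not necessarily what the linked piece proposes, just a standard illustration with made-up constants.

      import math

      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=0.020):
          """Weight change for one pre/post spike pair (times in seconds).
          Pre-before-post strengthens the synapse, post-before-pre weakens it;
          only the relative timing matters - no firing rate anywhere in sight."""
          dt = t_post - t_pre
          if dt >= 0:
              return a_plus * math.exp(-dt / tau)
          return -a_minus * math.exp(dt / tau)

      print(stdp_dw(0.010, 0.015))   # pre fires 5 ms before post -> weight goes up
      print(stdp_dw(0.015, 0.010))   # post fires 5 ms before pre -> weight goes down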

  29. John Smith 19 Gold badge
    Unhappy

    Spikes and timing. This was worked on by Carver Mead's team in 1989

    But it seems no one has taken this work any further

  30. anothercynic Silver badge

    Human Brain Project

    We had the opportunity to listen to Steve's colleague at the HBP, Karlheinz Meier, talking about the HBP and what they are up to. It makes this supercomputer in our heads seem just absolutely incredible... We're a 1 kW battery walking around, powering that 20 W computer, and doing other things, but we aren't anywhere near the ability to recreate that same functionality in silicon. Biology is a marvellous thing...

  31. Slx

    Brain without a body ?

    The other thing you have to remember is that the brain did not evolve as a stand-alone computer in a box. It's an integral part of an animal which is a body that is incredibly well adapted to its surroundings - it can move around with extreme agility and also can sense, feel, experience and is basically a deeply integrated part of that environment.

    An artificial computer in a box does not have that multi-billion year evolutionary history of having literally evolved out of the environment that it is part of. Rather, it's a quite abstract creation built by the biological entities that did just that. So it is starting from a very different position.

    So, it will be very interesting to see how this develops over the decades and centuries ahead. Also whether it's possible to ever make the jump to sentience and consciousness. We could be missing a trick with that and we will just end up with more and more intelligent computers that are still not really 'alive'.

    I think, however, humans are pretty arrogant in assuming that we're the only animals that possess those two features too. When you look around the animal world, we aren't a hell of a lot different other than we've developed the ability to express and communicate abstract thoughts as sophisticated language.

    Does that necessarily mean that other animals don't have them? I don't really buy that argument at all.

    We're all just versions of the same basic vertebrate evolutionary model so it would make sense that we share a lot of the same mental faculties, just developed in different degrees and directions.

    I think we just like to mentally separate ourselves from the other animals because we have a sense of superiority and a big ego, but also probably because it allows us to eat them. If we thought about every hunted / farmed animal in a cuddly way, we'd probably have issues doing that.

  32. Anonymous Coward
    Anonymous Coward

    Re. quantum neural nets

    Yes, this is indeed believed to be the case.

    Interestingly quite simple systems can exhibit complex behaviors.

    Someone designed a BEAM core which uses (IIRC) six gates on a 74xx IC, yet it emulates something like cockroach behaviour simply using analogue feedback.

    I looked into exploiting quantum effects in DDR3 a while back, and perhaps this is a possible avenue of research? Design a memory/CPU that is intentionally "fuzzy", using leaky dielectrics and inter-cell gaps containing an active medium that encourages virtual pathways to form where frequently used cells interact, using HTSCs and high-K dielectrics like the old technology used in analogue storage chips.

    This can be in three dimensions as well by layering thinned chips, power usage would be very low.

    I already found that simply overclocking the memory can identify areas that might show quantum behavior even in a brand new module, obviously with use they get better.

  33. John Smith 19 Gold badge
    Unhappy

    It's been clear for decades you won't get brain power consumption with digital logic.

    Digital --> transistors hard driven to conduction or cut-off. Definite 1 or 0. Up to GHz clocks

    --> Fan in / fan out < 10:1

    --> Stages driven by clocks and transfers controlled through latches

    Brain --> Much more probabilistic. Multiple inputs trigger, or prevent, output firing. mV (not volt) switching levels

    --> Fan in / fan out < 10,000:1

    --> No central clocking. More like an event-driven system. Maximum frequency 10-15 Hz
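
    The "no central clocking, event driven" contrast can be sketched with an event queue - work only happens when a spike arrives, nothing gets polled every cycle. The wiring, delay and threshold below are made-up toy values.

      import heapq

      # Each event is (arrival_time, target_neuron, weight); no global clock ticks.
      fan_out = {0: [(1, 0.6), (2, 0.9)], 1: [(2, 0.5)], 2: []}   # toy wiring
      potential = {n: 0.0 for n in fan_out}
      THRESHOLD, DELAY = 1.0, 0.001

      events = [(0.0, 0, 1.5)]                        # an initial kick into neuron 0
      while events:
          t, target, w = heapq.heappop(events)
          potential[target] += w
          if potential[target] >= THRESHOLD:          # the neuron fires...
              potential[target] = 0.0
              for nxt, wt in fan_out[target]:         # ...and schedules deliveries
                  heapq.heappush(events, (t + DELAY, nxt, wt))
              print(f"neuron {target} spiked at t = {t * 1000:.1f} ms")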

    1. Slx

      Re: It's been clear for decades you won't get brain power consumption with digital logic.

      Frequencies don't necessarily make sense in brain tech.

      Think about it: billions of processors operating at low frequencies and all subtly offset a little bit - that gives you incomprehensibly high frequencies even though the speed of individual circuits might not look fast.

      It could also be exploiting some biochemistry that is giving it very high speeds we can't detect electrically. We are just picking up the big snaps which could be resets or recharges or anything tbh ...

  34. Sssss

    Well, somebody beat me to my multi-million-processor setup. He is barking up the wrong tree with ARM, though; the leaders in low-energy processing are GreenArrays chips. They are the ones to work with. My own stuff is unaffordable for me to do. But why ARM processors when talking about neural networks? (It's after 3am here and I am finding it hard to read.) Neural networks are about as ill-suited to general processing as general-purpose processors are to AI. Networks are good for recognition, and the brain slows down in the various kinds of thinking that a GP processor could emulate. As far as following a list of instructions compared to a GP processor goes, how quickly can you follow instructions? So, brains are even slower. So a reduced recognition environment (like a virtual text world) is more doable for a general-purpose computer than the real world. But in a way he is on the right path: it is how you use those 1 million processors. I've come up with a conventional scheme, but it had occurred to me that a similar scheme based on intelligence data could also be done. Say he creates a personality with such a system - let's call it Moron - which, as the article implies, will only have a small fraction of the human brain. It obviously then must either run out of capacity and have to recycle memory, or be dumberer (of course I'm being satirical, in the fashion of the Register writers here). The issue is, when they make it into a personality and declare it to be a living being, what are they going to do as the ARM chips break down and lose memory and function, and who is going to pay the electricity bill - or does this machine have to hit the streets looking for a job? :)

  35. Slx

    I think what we're more likely to end up with is a very cool processor tech, but not a brain.

    I actually suspect that we'll probably mimic a brain using some combination of biotech and nanotech, not traditional semiconductor technologies.

    Neurology has levels of subtle control over signals that digital electronics aren't really near. We are still processing data with switches, while your brain is processing data with living cells and biochemistry.

    Even the references to processor frequencies don't necessarily make sense, as it's not necessarily using sampling of signals and seems to have an ability to deal with analogue inputs in pure analogue form without needing to quantise them.

    It also isn't a general processor and uses specialist signal processing "technology" incredibly tightly adapted to handle specific sensory inputs.

    There's a *lot* more research to be done to hack brain technology but I just think we have a history of assuming that brain systems and whatever the cutting edge of contemporary computer systems is should be directly comparable.

    As someone described it before - it's a bit like being presented with an alien computer system and a multimeter.

    Even though our brains are us, they're far more alien in technology terms as we didn't design or build them, they have no particular reason to be easy to understand or follow as they're not "designed" and they are self repairing / not repairable and they're painfully complex. The "wiring" doesn't even necessarily follow any logic that would make sense to someone analysing it as it "happened" upon solutions in evolutionary steps.

    My view of it is that it's a problem that will be solved in a technology radically different to semiconductor switching processors.
