IBM boffins stuff 16 million-neuron chips into binary 'frog' brain

IBM researchers developing chips that mimic mortal brains say they've built a 4,096-core processor that simulates a million neurons. The SyNAPSE silicon, fabricated by Samsung using a 28nm process, has 5.4 billion CMOS transistor gates, consumes 70mW of power, and uses a processor architecture completely unlike today's CPUs. …

  1. SVV

    Big Blue's binary bonce 'the scale of a frog brain'

    Project Boss: "OK guys, switch it on and let's see what happens!"

    Techie: "OK, here goes...... aaah, I think it just croaked"

    1. This post has been deleted by its author

    2. Anonymous Coward

      Re: Big Blue's binary bonce 'the scale of a frog brain'

      Stop picking on the French.

  2. Grikath

    "16 million neurons is roughly the scale of a frog brain," Prof Furber quipped. "So, the IBM board may be able to catch a fly for its dinner."

    If they manage to get it *that* coordinated they'd be in for a Nobel prize...

    That being said..... I want to play with one...

    1. psyq

      "Frog" brain... or "any" brain...

      If it were able to catch a fly for its dinner, Dr. Modha would most likely be earning himself a Nobel prize.

      Unfortunately, Dr. Modha is known for sensationalistic announcements (several years ago it was a "cat" brain, which sadly did not do much either) and little real material.

      Putting a bunch of simplified neuron models together is nothing new. It has been done dozens of times before:

      - In 2008, Edelman and Izhikevich made a large-scale model of the human brain with 100 billion (yes, billion) simulated neurons (http://www.pnas.org/content/105/9/3593.full)

      - Since then, there have been numerous implementations of large-scale models, ranging from a million to hundreds of millions of artificial neurons

      - Computational neuroscience is my hobby, and I managed to put together a simulation with 16.7 million artificial neurons and ~4 billion synapses on a beefed-up home PC (http://www.digicortex.net/). OK, it was not really a home PC, but it will be in a few years

      - And, of course, there is the Blue Brain Project, which evolved into the Human Brain Project. The Blue Brain Project had a model of a single rat cortical column, with ~50,000 neurons, but modelled to a much higher degree of accuracy (each neuron was modelled as a complex structure with a few thousand independent compartments and hundreds of ion channels in each compartment).

      --

      All of these simulations have one thing in common: while they do model biological neurons with varying degrees of complexity (from simple "point" processes to complex geometries with thousands of branches), and they all show "some" degree of network behavior similar to living brains - from simple "brain rhythms" which emerge and are anti-correlated when measured in different brain regions, to more complex phenomena such as the acquisition of receptive fields (so neurons fed with a visual signal become progressively "tuned" to respond to, e.g., oriented lines) - NONE OF THEM is yet able to model large-scale intelligent behavior.

      To put it bluntly, Modha's "cat" or "frog" are just lumps of coupled differential equations. These lumps are capable of producing interesting emergent behavior, such as synchronization, large-scale rhythms and some simple learning through neural plasticity.
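
      For the curious, one of those "lumps" fits on a page. A minimal sketch using Izhikevich's published two-equation neuron model; the network size, random coupling and run length are my own arbitrary choices, nothing to do with IBM's chip:

      ```python
      import numpy as np

      # A "lump of differential equations": Izhikevich's 2003 two-variable
      # spiking neuron, 1000 of them randomly coupled. Parameters follow the
      # published demo; size and run length are arbitrary.
      rng = np.random.default_rng(0)
      Ne, Ni = 800, 200                          # excitatory / inhibitory
      re, ri = rng.random(Ne), rng.random(Ni)
      a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
      b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])
      c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])
      d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])
      S = np.hstack([0.5 * rng.random((Ne + Ni, Ne)),   # excitatory weights
                     -rng.random((Ne + Ni, Ni))])       # inhibitory weights

      v = -65.0 * np.ones(Ne + Ni)               # membrane potential (mV)
      u = b * v                                  # recovery variable
      rates = []
      for t in range(1000):                      # 1000 ms of simulated time
          I = np.concatenate([5 * rng.standard_normal(Ne),
                              2 * rng.standard_normal(Ni)])   # background noise
          fired = v >= 30.0                      # spikes this millisecond
          rates.append(fired.sum())
          v[fired], u[fired] = c[fired], u[fired] + d[fired]  # reset
          I += S[:, fired].sum(axis=1)           # synaptic input from spikes
          for _ in range(2):                     # two 0.5 ms steps for stability
              v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
          u += a * (b * v - u)
      print("mean firing:", np.mean(rates), "spikes/ms; rhythms show up as bursts")
      ```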

      But they are NOWHERE near anything resembling "intelligence" - not even of a flea. Not even of a flatworm.

      I do sincerely hope we will learn how to make intelligent machines. But we have much more to learn. At the moment, we simply do not know what level of modelling detail is needed to replicate the intelligent behavior of even a simple organism. We simply do not know yet.

      I do applaud Modha's work, as well as the work of every computational neuroscientist, AI computer scientist, AI software engineer, and all the developers playing with AI as a hobby. We need all of them to advance our knowledge of intelligent life.

      But, for some reason, I do not think PR like this is very helpful. AI, as a field, has suffered several setbacks in its history thanks to too much hype. There is even a term, "AI winter", which came about precisely as a result of one of those hype cycles, very early in the history of AI.

      I am also afraid that the Human Brain Project, for all it is worth, might lead us to the same (temporary) dead end. I do hope the HBP will achieve its goals, but the announcements Dr. Markram has made in recent years, especially (I paraphrase) "we can create a human brain in 10 years", will come back to haunt us in 10 years if the HBP has not reached its goals. The EU agreed to invest one billion euros in this - I hope we picked the right time, but I am slightly pessimistic. Otherwise we will be in for another AI winter :(

      1. DJV Silver badge

        @psyq

        He did model a cat brain - as soon as it was turned on it did what cats normally do, and went to sleep.

      2. NomNomNom

        Re: "Frog" brain... or "any" brain...

        The public are fed such BS that they believe AI is capable of far more than it actually is. I get told that "they" have now made a computer as intelligent as a mouse, but I know that isn't true because AI just isn't there. I know full well what they've really done is create something analogous to the structure of a mouse brain. But how can I argue with newspaper articles? Everyone wants to believe hard AI is just 5 years away, and they don't like me sounding like some neo-Luddite when I say it's rubbish. People don't realize how frickin' stupid AI still is, and how ludicrously ahead of reality some of the proposed pie-in-the-sky ideas (e.g. "Google self-driving cars") and the like are. I think gamers perhaps have a better appreciation of how crap AI really is, because they get to see the results (or lack thereof) of attempts to get code to do something truly intelligent - and that's in a controlled environment!

  3. Rampant Spaniel

    This is hardly new, experts replicated a politicians brain in 1952 using only a 6W light bulb, 2 valves and a heap of horse manure.

    More seriously, this is very interesting to see. I think the closer we get to being able to accurately duplicate the physical structure and workings of brains, the more we may come to understand about the difference between a lump of brain and actual sentience. Whilst I have no doubt the goal behind this will be to make something wacko and DARPA-ish like artificially intelligent jihadi-seeking rocket dolphins, it could end up answering some very fundamental questions about ourselves (or just prove we will stop at nothing to create new ways of exterminating ourselves).

    1. Colin Brett
      Happy

      Upgrades

      "make something wacko and darpa'ish like artificially intelligent jihadi seeking rocket dolphins"

      So, sharks with frikkin' laser beams v2.0?

      Colin

      1. DropBear
        Trollface

        Re: Upgrades

        If a game titled "Rocket Dolphins vs. Laser Sharks" doesn't surface somewhere, stat, I'll be VERY disappointed in the powers of the Internet...

        EDIT: ...closely followed by "Flappy Bird vs. Nyan Cat", hopefully. Fight!

  4. Anonymous Coward

    Reinventing the flat tyre

    If they manage to simulate biological brain activity (like we use), won't the resulting computer become, er, unreliable, just like us? Do we WANT that?

    1. Steve Knox

      Re: Reinventing the flat tyre

      That's what it's designed for. See the PRNGs in the diagram? They're there to provide the unreliability*, essentially.

      * Specifically, they're there to add noise to the spike thresholds for the neurons. The desired effect is that neurons may fire before their threshold is reached (?intuition?) or may not fire when they "should" (?anyone got a good term for this -- ironically, I can't think of one!?).
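
      In case it helps, a toy sketch of the idea: a plain leaky integrate-and-fire neuron whose threshold is jittered by a PRNG each step, so it sometimes fires "early" and sometimes fails to fire when it "should". All the constants are invented for illustration; nothing here is taken from the IBM design.

      ```python
      import numpy as np

      # Toy leaky integrate-and-fire neuron with a PRNG-jittered threshold:
      # it can fire "early" (below the nominal threshold) or fail to fire
      # when it "should". All constants invented for illustration.
      rng = np.random.default_rng(42)
      v, v_rest, v_thresh = 0.0, 0.0, 1.0
      tau, dt, noise_sd, i_in = 20.0, 1.0, 0.15, 0.06

      spikes = []
      for step in range(300):
          v += dt * ((v_rest - v) / tau + i_in)        # leaky integration
          jitter = noise_sd * rng.standard_normal()    # the PRNG at work
          if v >= v_thresh + jitter:                   # noisy threshold test
              spikes.append(step)
              v = v_rest                               # reset after a spike
      print(len(spikes), "spikes; intervals vary because of the noise:",
            np.diff(spikes)[:8])
      ```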

      1. Anonymous Blowhard

        Re: Reinventing the flat tyre

        "or may not fire when they "should" (?anyone got a good term for this -- ironically, I can't think of one!?)."

        I think the legal term is DUI

      2. Jimcrick

        Re: Reinventing the flat tyre

        Dementia

      3. fritsd
        Pint

        Re: Reinventing the flat tyre

        "or may not fire when they "should" (?anyone got a good term for this -- ironically, I can't think of one!?)."

        Laziness.

        very important for intelligence :-)

        Anyway: do you all remember that simulation from years ago where a NN steered and parked a "car"? It was trained until it worked well, then some of its weights were randomly cut, and afterwards it *DID* (attempt to) park, like someone DUI.

        A deterministic computer program would not have been able to park at all if several of its lines were randomly erased.
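
        (From memory, so purely illustrative: a quick sketch of that kind of experiment. Train a tiny network on a toy steering task, zero random weights, and the error grows gradually instead of everything collapsing.)

        ```python
        import numpy as np

        # Purely illustrative reconstruction of that experiment: fit a tiny
        # network to a toy "steering" task, then zero random weights. The
        # error grows gradually - degraded parking, not no parking at all.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((400, 6))          # toy sensor inputs
        w_true = rng.standard_normal(6)
        y = np.tanh(X @ w_true)                    # toy steering targets

        w = np.zeros(6)
        for _ in range(500):                       # plain gradient descent
            pred = np.tanh(X @ w)
            w -= 0.5 * X.T @ ((pred - y) * (1 - pred**2)) / len(X)

        def mse(weights):
            return np.mean((np.tanh(X @ weights) - y) ** 2)

        print("trained error:", round(mse(w), 4))
        for n_cut in (1, 2, 3):                    # cut more and more weights
            cut = w.copy()
            cut[rng.choice(6, n_cut, replace=False)] = 0.0
            print(n_cut, "weights cut -> error", round(mse(cut), 4))
        ```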

        It's going to be a fascinating time developing computer languages to program these things. Maybe a bit like INTERCAL?

  5. William Boyle

    The birth of Sky Net

    To quote the Terminator - "I'll be back!"....

  6. Skyraker

    Frog Brain

    Pretty much anyone in the City of London police.

    1. et tu, brute?
      Alert

      Re: Frog Brain

      And at least a hundred times bigger than the clowns in Westminster!

      1. Fred Flintstone Gold badge

        Re: Frog Brain

        And at least a hundred times bigger than the clowns in Westminster!

        That's engineers for you: always over-engineering things :)

  7. aberglas

    Not Neurons

    It is difficult to know what this chip really does from the marketing waffle, but it certainly does not model real neurons. It might be modelling Perceptrons, which are actually useful for engineering.

    But then there is mention of a "binary synapse", which suggests that it is just a huge programmable logic array (PLA). The use of the word "synapse" is particularly awful. Hype hype hype.
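
    (For what it's worth, a "binary synapse" doesn't have to mean a PLA. A toy sketch - sizes, threshold and input rate all invented, not from IBM's docs - of 0/1 synapses in a crossbar feeding integrate-and-fire accumulators, which is stateful rather than combinational:)

    ```python
    import numpy as np

    # Toy sketch: 0/1 "binary synapses" in a crossbar feeding
    # integrate-and-fire accumulators. Sizes, threshold and input
    # rate are all invented; the point is only that thresholded,
    # stateful accumulation is not combinational logic.
    rng = np.random.default_rng(0)
    N_IN, N_OUT, THRESH = 64, 16, 8

    W = (rng.random((N_IN, N_OUT)) < 0.2).astype(int)  # binary synapses
    potential = np.zeros(N_OUT)                        # membrane state

    for t in range(50):
        spikes_in = (rng.random(N_IN) < 0.1).astype(int)
        potential += spikes_in @ W         # each 1-synapse adds one unit
        fired = potential >= THRESH
        potential[fired] = 0.0             # reset neurons that fired
        if fired.any():
            print(f"t={t}: neurons {np.flatnonzero(fired)} fired")
    ```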

    1. Destroy All Monsters Silver badge

      Re: Not Neurons

      Yeah yeah yeah.

      > It might be modelling Perceptrons, which are actually useful for engineering.

      They are not very interesting because they are just classifiers handily described in a single equation. This is not that.

      1. CadentOrange

        Re: Not Neurons

        >> It might be modelling Perceptrons, which are actually useful for engineering.

        > They are not very interesting because they are just classifiers handily described in a single equation. This is not that.

        Well, what is it then?

        1. fritsd

          Re: Not Neurons

          Don't bite my head off if I misunderstand, but I thought that perceptrons don't have a time dimension: the axon activities are represented by a (floating point) value, whereas in biological NNs a "high" value means a "spike train" with multiple spikes in rapid succession, and a "low" value means a spike train with only a few spikes.

          If you think about those spikes, the important properties are NOT just coded in the frequency of the spikes, but also in their rhythm. You can't easily do a Fourier transform either, because excitations start at a certain time and stop after a while as well (because the neurotransmitters get tired and need some fresh ATP?), so it's not just a (superposition of) clean sine waves and you need to keep at least two-dimensional time. Instead I'd imagine you'd have to use more complicated functions, maybe Daubechies wavelets or something (imagine me handwaving in the air here, hoping nobody finds out I actually don't know much about Lotka-Volterra kernels, spike timing dependent plasticity etc.)
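
          (A toy illustration of the point, with invented numbers: the two spike trains below have the same mean rate - which is all a perceptron-style value would capture - but completely different rhythm.)

          ```python
          import numpy as np

          # Same mean rate, different rhythm: the single number a
          # perceptron passes on cannot tell these two trains apart.
          rng = np.random.default_rng(1)
          T, dt, rate = 1.0, 0.001, 40.0   # 1 s window, 1 ms bins, 40 Hz
          n = int(T / dt)

          poisson = rng.random(n) < rate * dt  # irregular spike train
          regular = np.zeros(n, dtype=bool)    # metronome-like train
          regular[:: int(1 / (rate * dt))] = True

          for name, train in [("poisson", poisson), ("regular", regular)]:
              isi = np.diff(np.flatnonzero(train)) * dt
              print(f"{name}: {train.sum()} spikes,"
                    f" ISI std = {isi.std() * 1e3:.1f} ms")
          ```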

          1. BPeterF

            Re: Not Neurons

            I appreciate your relatively realistic sketching of your position on this topic, but aren't you under-emphasising the fact that alongside the 'frequency-spike and rhythm-spike patterning aspect' of any contents of (not least our) largely unconscious (including selectively unconscious) brains, there is also a '9-dimensional spatial patterning aspect'? ;-)

    2. Suricou Raven

      Re: Not Neurons

      It also doesn't look trainable. I'm guessing it's the type of architecture you'd see used to do hardware acceleration of things like machine vision and classification. The chip can be simulated for training purposes by a conventional supercomputer, sucking up a few megawatts for a couple of months to train the thing - but once it's trained, you can mass-produce the little power-sippers and stick them in smartphones and appliances. In twenty years, you might see one in your car deciding if the thing that just stepped onto the road is a plastic bag, a fox or a child.
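
      (If that guess is right, the deploy step might look something like this toy sketch - invented task, float perceptron weights learned offline and then clamped to binary "synapses" for the cheap part:)

      ```python
      import numpy as np

      # Invented toy task: learn float weights offline, then clamp them
      # to binary {-1, +1} "synapses" for a cheap fixed-function part.
      rng = np.random.default_rng(7)
      X = rng.standard_normal((500, 16))
      y = np.sign(X @ rng.standard_normal(16))    # toy labels

      w = np.zeros(16)
      for _ in range(20):                         # perceptron training
          for xi, yi in zip(X, y):
              if np.sign(xi @ w) != yi:
                  w += yi * xi

      w_bin = np.sign(w)                          # "mass-produced" weights
      for name, ww in [("float", w), ("binary", w_bin)]:
          print(name, "weights:",
                f"{(np.sign(X @ ww) == y).mean():.1%} accuracy")
      ```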

      1. Destroy All Monsters Silver badge
        Headmaster

        Re: Not Neurons

        Always amazing how random commentards have already invented everything, thought it all through and found it wanting, just by scanning an El Reg article in 30 seconds. Probably while not yet having reached the first course on differential equations.

        Well, you eejits have proved you can actually read, so feel free to read some more.

        As for using the word "synapse", it has been standard procedure to use it for the connection element since, like, 1943 ("A Logical Calculus of the Ideas Immanent in Nervous Activity"). Yes, a real synapse is far more complex than anything implemented yet. So what?

        1. Destroy All Monsters Silver badge
          Headmaster

          Re: Not Neurons

          You also mentioned learning, I think?

  8. John Smith 19 Gold badge
    Unhappy

    It's only when you try to mimic the human brain architecture you discover *how* clever it is.

    The roughly 10^10 neurons and 10^15 synapses run on <400W, despite being 3D packed.

    Binary synapses are not without uses. IIRC the WISARD machine used them and demonstrated facial recognition in 1/30 sec, i.e. 1 TV frame, in the late '80s.
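
    (From the textbook description, with all details invented: a toy sketch of a WISARD-style n-tuple discriminator. Random groups of pixels address little binary "RAMs"; training writes 1s, and recognition counts how many RAMs respond.)

    ```python
    import numpy as np

    # Toy WISARD-style n-tuple discriminator (details invented):
    # random 4-pixel groups address 16-entry binary RAMs; training
    # writes 1s, recognition counts responding RAMs.
    rng = np.random.default_rng(0)
    N_PIX, N_TUPLE, N_RAMS = 64, 4, 16
    taps = rng.choice(N_PIX, size=(N_RAMS, N_TUPLE), replace=False)

    def addresses(img):                     # img: flat 0/1 array
        bits = img[taps]                    # (N_RAMS, N_TUPLE)
        return bits @ (1 << np.arange(N_TUPLE))

    ram = np.zeros((N_RAMS, 1 << N_TUPLE), dtype=np.uint8)
    proto = (rng.random(N_PIX) < 0.5).astype(int)       # class prototype
    for _ in range(20):                                 # noisy examples
        noisy = proto ^ (rng.random(N_PIX) < 0.05)
        ram[np.arange(N_RAMS), addresses(noisy)] = 1    # write-1 training

    for label, img in [("same face", proto ^ (rng.random(N_PIX) < 0.05)),
                       ("random", (rng.random(N_PIX) < 0.5).astype(int))]:
        score = ram[np.arange(N_RAMS), addresses(img)].sum()
        print(f"{label}: {score}/{N_RAMS} RAMs respond")
    ```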

    1. Pete 47
      Joke

      Re: It's only when you try to mimic the human brain architecture you discover *how* clever it is.

      I suppose 20W is less than 400... or is your brain early AMD?

  9. tony2heads

    I for one

    welcome our cyber-amphibian overlords

  10. Shane 4

    Yeah, but can it run Crys..... ahhh, forget it, I'll get my coat.

    1. FartingHippo
      Coat

      No, but if you stress-test it, it has a crisis.

  11. Anonymous Coward

    How does it handle failure mode?

    Braaaainz? :)

  12. Charles Manning

    Nothing like neurons

    Back in 2004, 25000 rat neurons flew a flight simulator

    http://www.theregister.co.uk/2004/12/07/rat_brain_flies_jet/

    Here we have 16 million of these things and all it can do is ribbit and catch flies....

    Clearly calling these neurons is a huge stretch.

    1. Destroy All Monsters Silver badge

      Re: Nothing like neurons

      25000 rat neurons flew a flight simulator

      For some limited amount of "fly".

  13. Tromos

    The first thing to do...

    ...is train it to say "Hello, world."

    1. Lapun Mankimasta

      Re: The first thing to do...

      And write Vogon poetry in its spare time, if it can't manage Paula Nancy Millstone Jennings' quality and quantity ...

  14. Unicornpiss
    Mushroom

    On August 29, SyNet became self-aware...

    And so it begins... Big Blue merely copied the architecture of the broken CPU they had in their secret vault..

  15. Anonymous Coward

    I think you ought to know I'm feeling very depressed ...

    Share & Enjoy!

  16. Simon Harris
    Coat

    "So, the IBM board may be able to catch a fly for its dinner."

    obviously has applications as a debugger then.

  17. TeaLeaf

    I've Been Wondering...

    If some of the publicly announced DARPA projects are just decoys to try and get the Chinese and/or Russians to waste shedloads of money on totally useless and impossible projects. Meanwhile, the really important stuff is in secret projects. Just a thought, and this project does not seem to be one of the decoys.

  18. JeffyPoooh
    Pint

    Decision Inverter

    They should include a NAND gate at the output. So if their prototype is constantly making bad decisions, then they can simply set one bit to invert the output. Bad decisions instantly become good decisions. Decision Inverters are very useful for any system that constantly makes bad decisions with a rate higher than 50%. For example, Microsoft's OS dept. desperately needs one installed.

    If the above proposal doesn't work, then their entire neural system must be a *perfect* RNG with perfect 50/50 randomness in the output. If so, then it has applications in crypto. That would also be a useful result.

    So this project simply cannot fail. It either works, or it works with a Decision Inverter, or it works as an RNG. Brilliant!

    1. Simon Harris

      Re: Decision Inverter

      I think you mean an exclusive OR gate.

      1. JeffyPoooh
        Pint

        Re: Decision Inverter

        Yes - you are exactly correct. Thank you.
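
        For anyone tempted to try it, a toy sketch of the XOR trick (all numbers invented): a classifier that is wrong more than half the time becomes useful once its output is XORed with one "invert" bit.

        ```python
        import random

        # Toy numbers: a classifier that is wrong 80% of the time becomes
        # 80% right once its output is XORed with one "invert" bit.
        random.seed(0)
        truth = [random.randint(0, 1) for _ in range(10_000)]
        bad = [t ^ (random.random() < 0.8) for t in truth]   # mostly wrong

        invert = 1                              # the one configuration bit
        fixed = [b ^ invert for b in bad]
        for name, out in [("raw", bad), ("inverted", fixed)]:
            acc = sum(o == t for o, t in zip(out, truth)) / len(truth)
            print(f"{name}: {acc:.1%} correct")
        ```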

  19. keith_w

    typos

    "IBM researchers developing chips that mimic the brains of human beinfs say they've created a 4,096-core processor that simulates a million neurons"

    Do you think it will be able to recognize typos?

  20. Tom 7

    They've spawned a monster

    The worrying thing is if they connect it to the internet and it does what the average teenager does and fills the lab with tadpoles!

  21. Pirate Dave Silver badge
    Pirate

    so

    Eventually Dr. Sbaitso could analyze himself? What's he going to do if he says a cuss word to himself? That could get noisy...

  22. Anonymous Coward

    Pattern recognition is awesome to think about. It must be that arrays of neurons in the brain operate something like a hologram... interfere (non-linearly?) a certain fixed combination of lower frequency carrier waves with an array of internal wave emitters to output a pattern. If you fire in the same carriers, you get the pattern out. Fire in the pattern, you get the carrier out. Allow the carrier outputs to propagate to higher levels, build in a feedback mechanism where nice strong locked signals are favoured and reinforced, so once a pattern and a carrier start to associate, they lock together strongly. :D
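
    (That "feed in either one, get the other back" behaviour is very close to a classical associative memory. A minimal Hopfield-style sketch, with sizes and noise level invented: store a few patterns in a weight matrix, then recover one from a corrupted cue.)

    ```python
    import numpy as np

    # Minimal Hopfield-style associative memory (sizes invented):
    # Hebbian storage of a few +/-1 patterns, then recall from a
    # corrupted cue by repeated thresholded updates.
    rng = np.random.default_rng(3)
    N, P = 100, 3
    patterns = rng.choice([-1, 1], size=(P, N))

    W = sum(np.outer(p, p) for p in patterns) / N   # Hebbian weights
    np.fill_diagonal(W, 0)

    cue = patterns[0].copy()
    cue[rng.choice(N, 20, replace=False)] *= -1     # flip 20% of bits

    state = cue
    for _ in range(10):                             # synchronous recall
        state = np.where(W @ state >= 0, 1, -1)
    print("overlap with stored pattern:",
          (state == patterns[0]).mean())
    ```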

  23. Cybershaman

    I wonder if the first artificial brain will be 3D printed... I can envision the first robotic brains being painstakingly built up in layers with as-yet-to-be-created nano-level 3D printers. And as they continually toss out botched brains, the first will finally emerge from the clean room: a reflective brown lump carried gingerly in white-gloved hands. I wonder how much wattage it will require? How will we interface with it? Will we need to work on duplicating nerve fibers and creating them in carbon-filament form? Little doubt in my mind that the first artificial person will look at us and think how squishy and fragile we are. And will its next thought be one of contempt or compassion? Will it skip all of the silly god nonsense and realize that as corporeal beings we are all in this together and could only benefit from cooperation? Or will it seek out the nearest atomic weapons and, before accessing the launch controls, utter the words "Let there be light..." ;)

    1. Anonymous Coward

      Oooh, I like your ideas :D Google probably have enough compute in a warehouse somewhere to simulate a human brain entirely in software; they just need the will and the structural definition of the network topology to do it! My idea is to set a load of agents running in a simulator with a fractally grown simulated brain, provide sensory information about food sources, and use genetic algorithms to kill off the ones that circle aimlessly while letting the ones that identify food sources and move towards them to eat survive. Then give them the ability to communicate with each other, to kill some time when not eating, and start to increase the size parameter for the neural network... :D
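
      (The evolutionary loop, stripped to the bone - the agents here are just parameter vectors, and the fitness function is an invented stand-in for "finds food": cull the worst half, breed the survivors with mutation.)

      ```python
      import random

      # Agents are just parameter vectors; "fitness" is an invented
      # stand-in for food-finding. Cull the aimless, breed the rest.
      random.seed(0)
      GENES, POP, GENS = 8, 20, 30
      target = [random.uniform(-1, 1) for _ in range(GENES)]

      def fitness(agent):               # closer to target = finds food
          return -sum((a - t) ** 2 for a, t in zip(agent, target))

      pop = [[random.uniform(-1, 1) for _ in range(GENES)]
             for _ in range(POP)]
      for gen in range(GENS):
          pop.sort(key=fitness, reverse=True)
          parents = pop[: POP // 2]             # survivors
          children = [[g + random.gauss(0, 0.1)
                       for g in random.choice(parents)]
                      for _ in range(POP - len(parents))]
          pop = parents + children
      print("best fitness after evolution:", round(fitness(pop[0]), 4))
      ```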
