Scientists build largest ever computerized brain

Canadian scientists have built a functioning computer simulation of the human brain, dubbed Spaun, that's able to recognize characters and perform simple actions. The team at the University of Waterloo's Centre for Theoretical Neuroscience built the brain from 2.5 million computer-simulated neurons, compared to the average …

COMMENTS

This topic is closed for new posts.
  1. Allan George Dyer
    Facepalm

    Can I borrow it?

    For those, you know, baaaaad mornings.

    1. Scott Broukell

      Re: Can I borrow it?

      No need for such measures in my case I fear, but i knows your feel bro (or something wicked like that).

    2. Katie Saucey
      Thumb Up

      Re: Can I borrow it?

      It would be useful to have a plugin for those hangover Mondays. Also, despite the slow processing speed it could probably 1-up my boss on his best days.

  2. Anonymous Coward
    Anonymous Coward

    How about ..

    ...going for a simpler but still meaningful target? An ant brain is way, way smaller than a human one, yet it can still perform many complex tasks. I'd like to see how well that simulation stacks up against the reality. It might give a more objective view of the state of the art.

    1. Anonymous Coward
      Anonymous Coward

      Re: How about ..

      "a simpler but still meaningful target?

      David Cameron might be less challenging than a complex colony animal.

      1. GitMeMyShootinIrons

        Re: How about ..

        By that measure, it should be possible to simulate our illustrious deputy PM using only a matchbox and a peanut.

        As for the respected leader of the opposition, he can be simulated using only a petulant expression.

      2. BlueGreen

        Re: How about ..

        So the inevitable idiot comments on politicians already. They are a mirror of the electorate. As such, I suppose their stated position is an output, the input being their perception of what the electorate wants - an n-to-1 summary function. Sound familiar?

        Anyway if you want to change things get involved in politics, don't whinge, it isn't hard.

        1. Anonymous Dutch Coward
          Holmes

          Re: Politicians vs Electorate

          "They are mirror of the electorate" Or a subselection.

          Depends on whether you're an optimist or pessimist.

        2. Anonymous Coward
          Anonymous Coward

          Re: How about ..

          "Anyway if you want to change things get involved in politics, don't whinge, it isn't hard."

          For the most part it is actually hard, unless you're prepared to work for the Labour or Conservative parties, who have undue influence and control of the system, have entrenched voting bases amongst the hard of thinking, and who run the systems to reduce the chances of non-conformist opinions being elected.

          I live in a marginal constituency, and the last Labour MP was a (sadly unconvicted) expenses fraudster, the current one represents only the Tory party in whose name she stood. Now does that sound familiar?

          1. veti Silver badge
            FAIL

            Re: How about ..

            @AC 18:03: The idea that you can only influence politics by working through one of the two main parties is a pernicious myth.

            It doesn't take any party affiliation at all to submit evidence to a select committee, which can translate directly into changes in law. I've done it, both in the UK and in New Zealand. If there's a subject you actually care enough about to educate yourself on, to the point where you can say something worthwhile, then you too can speke ur branes to the people who are in the process of revising laws. All without having to suck up to anyone, and regardless of how safe or otherwise your constituency is.

    2. Anonymous Coward
      Anonymous Coward

      Re: How about ..

      "It might give a more objective view of the state of the art"

      State of the ant, you mean..

      1. Charles Manning

        More humble goals are more useful

        Targeting the processing capability of an insect would be more useful.

        A fly or a bee is incredibly capable when compared with a robot. Achieving fly-like capability in robotics would be quite remarkable.

        But no doubt funding etc. comes into play. It is going to be hard to attract funding to replicate a fly brain. So instead AI chases rainbows, as it has been doing for the last 60 years...

  3. danR2
    Black Helicopters

    Scale it up

    Remember, this is using a conventional supercomputer.

    National governments have to be silently taking this sort of development seriously. Throw 10 billion dollars' worth of specialists, petabytes, mega-CPUs, fast languages, etc. at the problem and you've got your:

    "Colossus: The Forbin Project"

    If anyone remembers the movie.

    1. John Brown (no body) Silver badge
      Thumb Up

      Re: Scale it up

      Um, yeah. ElReg just did an article on world domination by SF computer "characters".

    2. John Angelico
      Big Brother

      Re: Scale it up

      Yes, I remember it - with an impossibly young chief scientist in charge of the project, and other laughable implausibilities...

    3. Blitterbug
      Unhappy

      Re: Remember, this is using a conventional supercomputer...

      Yeah... then I spat my drink out when I reached this bit: "since the team is approaching the limits to how far you can scale the Java software." The mind boggles. Just - boggles.

  4. Steve Knox
    Trollface

    So far the model code is reaching the limits of what can be done...

    ...since the team is approaching the limits to how far you can scale the Java software

    Java? Well, there's your problem...

    At least it's not JavaScript, though.

    1. tcstewar

      Re: So far the model code is reaching the limits of what can be done...

      Yup, Java's definitely holding us back right now on this version (I'm the Terrence Stewart quoted in the article). We've been focused so much on the research aspect (how can you connect up realistic neurons to perform the tasks we think different parts of the brain are supposed to do) that we haven't spent much time on the pure computer science question of "how fast can we make it go".

      I tossed together a quick Theano version of the core inner loop (Python compiled to C, optimized for matrix operations) a few weekends ago and got a 100x speedup on a simplified model without much problem. We've also got some work on a CUDA implementation, and a student working on an FPGA version. Oh, and we've been in close contact with Steve Furber at Manchester, so we can make use of his SpiNNaker project (a giant supercomputer made of one million custom ARM processors).

      So I think there are tons of easy things we can do to speed this up, now that we've demonstrated that it can work.
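
      For anyone curious what that "core inner loop" actually is, here's a minimal sketch of the kind of update step involved, written in plain numpy rather than Theano for brevity. It assumes simple leaky integrate-and-fire neurons and a dense weight matrix, and the names are made up for illustration - it's not our actual Nengo code, just the shape of the computation:

      import numpy as np

      def step(v, spiked, weights, external, dt=0.001, tau=0.02, v_thresh=1.0):
          """One step for a population of leaky integrate-and-fire neurons.

          v        -- membrane voltages, shape (n,)
          spiked   -- boolean spike vector from the previous step, shape (n,)
          weights  -- dense synaptic weight matrix, shape (n, n)
          external -- external input current, shape (n,)
          """
          # Synaptic input: one matrix-vector product dominates the cost,
          # which is why a compiled / vectorised backend pays off so much.
          current = weights @ spiked.astype(v.dtype) + external
          # Leaky integration of the membrane voltage.
          v = v + dt * (current - v) / tau
          # Threshold, record spikes, and reset the neurons that fired.
          spiked = v >= v_thresh
          v = np.where(spiked, 0.0, v)
          return v, spiked

      # Usage: 2,000 illustrative neurons driven by noise for 1,000 steps.
      rng = np.random.default_rng(0)
      n = 2000
      weights = rng.normal(0.0, 0.05, size=(n, n))
      v = np.zeros(n)
      spiked = np.zeros(n, dtype=bool)
      for _ in range(1000):
          v, spiked = step(v, spiked, weights, rng.normal(0.0, 1.5, size=n))

      Swapping that matrix product out for Theano, CUDA or an FPGA pipeline is where the easy speedups come from.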

    2. Werner McGoole
      Thumb Up

      Java ain't so bad

      I have number-crunching Java code running almost 24/7 and it's no slouch. Several of the inner loops have been re-written in C, unrolled, and all the optimiser flags tweaked to suit the CPUs it runs on... and... well after all that effort they do run about 30% faster. But given that the Java code required no optimisation effort whatsoever, I don't think it's doing too badly.

      Dedicated hardware is what you want for real speed improvements. I happen to have some, but it's in use just now, thinking.

  5. Anonymous Coward
    Anonymous Coward

    Java?

    HAHAHAHAHA Java? Seriously? Oh boy......that is a fail right there.

    I hope they did not use Adobe Flash to simulate visual memories. Just the loading time of some animations would make the brain halt like in a coma.

    At least it is Science. And we all know that in the end, it works bitches!

    They just started with the wrong infrastructure, that is all.

    1. Kebabbert

      Re: Java?

      Do you mean that Java is slow? Maybe you should read about adaptive optimizing compilers? They can in theory be faster than an ordinary, say, C/C++ compiler. The thing is, every time the JVM runs the program, it can gradually optimize. If you use gcc, then you cannot use vector instructions, because not all x86 CPUs have those. Therefore you just deliver a common binary. But the JVM can optimize to use vector instructions if your CPU has them, or in other ways optimize your code to suit your very own CPU. Gcc cannot do that, as it only optimizes once. It is not difficult to see why the JVM can be faster than C/C++.

      NASDAQ's stock exchange system, called INET, is developed in Java. And the NASDAQ system is among the fastest in the world, with sub-100ns latency and extreme throughput. If Java suffices for one of the world's largest and fastest stock exchanges, then it suffices for most needs.

      But we must remember that Java is best on the server side. Not on the client side - sure, you can use it for developing games and such (Minecraft), but Java is built mainly for servers.

      1. tcstewar

        Re: Java?

        The Java JIT definitely helps a lot in our model, since the vast majority of the processing time is a simple core inner loop.

        The nice thing is, though, that the core inner loop is something that's pretty straightforward to do a custom re-write in whatever language we feel like, so we can do that sort of special-case optimization to run on particular hardware if we want to. But, we still like having our main software in Java, for two reasons.

        First, it's incredibly easy for anyone to download and use. We run tutorials at various conferences on it (and there are online tutorials at http://nengo.ca ), and there's a big advantage to having it just work on everyone's machine. The second reason is that it's a research tool where we tend to want to try out new algorithms and different neuron models, and for that sort of work it's more important to have a clear API than for it to run quickly.

        But, now that we know it works, we can turn to speeding it up, and there are tons of good ways to do so. There are lots of other projects out there focusing on how to simulate neurons really quickly. What we're doing that's new is showing what the connections should be between neurons in order to do different tasks. So we can take that and apply it to lots of other people's hardware (for example, we've been working with Steve Furber's group, getting our models to run on their SpiNNaker hardware: http://www.theregister.co.uk/2011/07/07/arm_project_spinnaker_super/ ).

      2. Woza
        WTF?

        Re: Java?

        1. Are you seriously claiming that you cannot use vector instructions in C code? With the proper switches, gcc can output vector instructions, as can Intel's icc and doubtless others.

        2. gcc can do better than 'only optimises once' - I direct your attention to profile-guided optimisations. I will concede that an adaptive optimisation scheme would do better at optimising for runtime-variant behaviour.

      3. kukreknecmi

        Re: Java?

        100ns is impossible - are you sure you don't mean 100ms?

  6. Anonymous Coward
    Anonymous Coward

    I'll be really impressed

    when they can get it to duplicate the behaviour of a worker bee, without requiring a room full of hardware. Although no doubt the military would be first in line to build missiles that seek out and "pollinate" targets that have the appropriate visual signature.

    Reminds me of Dark Star. "Let there be light."

    1. Bleepme
      Thumb Up

      Re: I'll be really impressed

      Dark Star. Wow that was a great movie.

    2. BlueGreen

      Re: I'll be really impressed

      > duplicate the behaviour of a worker bee

      I don't know how this project works, but bear in mind there's likely to be a hell of a lot of inherited, "hardwired" behaviour in specialised structures in a bee - and not just in its brain, but quite possibly processed somewhat before input even reaches the brain (IIRC there is processing in the eyes of male horseshoe crabs which helps them recognise females - if anyone can ref that I'd be grateful).

      Neurons alone are (probably) not going to get you that far.

      1. veti Silver badge
        Holmes

        Re: I'll be really impressed

        Yep, this point is consistently glossed over whenever journalists get on the 'artificial intelligence' bandwagon:

        Brains don't develop in isolation. Your brain is tied into your body by a lot more than mere nerve ends. It gets input (prefiltered) from your senses, and every goal and idea it has is geared in some way to serving the needs of your body.

        To put it another way: if the words "hungry", "cold", "hot" , "tired", "sleepy", "painful" or "horny" meant no more to you than the names of the seven dwarfs - what exactly would you think about all day?

  7. Arthur 1

    As had been said... WTF Java?

    I understand that OO is going to make this kind of task a lot easier (a neuron class and a good IoC container would go a looong way) but why make this in any language with a VM? You're adding framework limits that don't have to exist. This is at least nominally HPC work, isn't it?

    I wonder how many neurons the C++ implementation on the same hardware could simulate?

    1. Anonymous Coward
      Anonymous Coward

      Re: As had been said... WTF Java?

      Sure, random garbage collection is an overhead, but I really doubt that switching from JIT to, say, C++ would get more than a doubling in performance. They need to improve performance by many orders of magnitude. Merely doubling performance is nowhere near as important as refining their algorithms and data structures until they are perfected, without shooting themselves in both feet through silly errors.

      Real performance improvements will come after the neuron design is perfected and implemented in silicon.

      1. Arthur 1

        Re: As had been said... WTF Java?

        The guy above me correctly pointed out that your over-simplification isn't necessarily true, but let me follow up with two things:

        1) C++ has many of the same OO facilities as Java, especially ones that would be relevant to this kind of work.

        2) At no point was I talking about performance or ease of use; I was talking about scaling, which you seem to have missed entirely. The VM is a constraint on massive scaling, and that's what this kind of project ultimately needs in order to be economical.

        Bonus: I doubt very much the breakeven for ASICs would be here, and the inability to recode bits would pretty much defeat the entire purpose of a research cluster like this.

    2. Anonymous Coward
      Anonymous Coward

      Re: As had been said... WTF Java?

      In terms of raw speed, for any given algorithm, Java < C++ < C < assembly language < FPGA < ASIC. Ease of use/low risk goes in the other direction. If your main goal is to learn/create prototypes, then Java seems like a good choice. If you want a serious product, then eventually an ASIC is probably the ideal goal.

      1. This post has been deleted by its author

      2. tcstewar

        Re: As had been said... WTF Java?

        I definitely agree (I'm one of the main developers on the project). But there's already tons of people working on how to do high-speed simulations of neurons, and we'll just make use of whatever they come up with. What we're doing that's new is saying how to connect neurons to do particular tasks. That's a much less well-studied problem, especially with questions like "how do you selectively route information between brain areas"? We're using Java for our default simulator just to let us focus on the problem of figuring out what the connections between the neurons should be, as it lets us flexibly try different things.

    3. Fred Flintstone Gold badge

      Re: As had been said... WTF Java?

      Honestly, guys, you're missing the point.

      When the MACHINE starts dissing your choice of programming language, you know you're on your way to a proper AI. If you get a screen message during bootup like "FFS, did you HAVE to use LISP?" you're definitely getting there. And you have control - the threat to rewrite the whole beast in BASIC ought to be enough to stop any AI from becoming too uppity.

      1. danR2

        Re: As had been said... WTF Java?

        FORTH.

        1. Chemist

          Re: As had been said... WTF Java?

          I'll start you off :

          : BRAIN HALFABRAIN HALFABRAIN ;

    4. JDX Gold badge

      Re: As had been said... WTF Java?

      Seriously Arthur, if you're thinking "a neuron class would be good" then you're reallllly not the person to be making criticisms over the software chosen.

      The VM doesn't impact scaling; there are JVMs which can span multiple PCs, for instance.

  8. Cliff

    Honey boo boo

    Come on Reg, don't demonize kids, that isn't on. The poor thing is a product of her environment, and may yet grow up to take her place in the world. I mean, how would you like to be judged now for how you were aged 10 or whatever? People grow up and change.

    1. Tom 35

      Re: Honey boo boo

      The sad thing is making a TV show out of it.

      I thought TLC was The Learning Channel (not living in the US, or having cable TV) but looking at their website it clearly has very little to do with learning. "The number-one nonfiction media company" they say... the whole thing looks like a load of crap to me. People pay for that stuff?

      1. Michael Wojcik Silver badge

        Re: Honey boo boo

        I thought TLC was The Learning Channel (not living in the US, or having cable TV) but looking at their website it clearly has very little to do with learning.

        How can you say that? Did you know about Honey Boo Boo before TLC revealed her to the world? We're not all savants, you know.

        People pay for that stuff?

        Not by choice. "Extended cable" channels like TLC aren't offered a la carte; they come as part of a bundle, typically the entire "extended cable" bundle. And often the bundles are only sold as tiers; you can only get a given tier if you also get all the lower ones. With my provider (one of the highest-rated in the US in independent surveys), I get TLC if I want anything more than the most limited package, which only includes local stations, shopping channels, and a few mysterious extras like WGN (a Chicago-based "superstation") and C-SPAN2.

        That said, TLC does occasionally provide some amusement, even if it is generally of the circus-sideshow variety. Just last night we were watching one of their shows on Christmas-themed collections that are somewhat ... excessive. And it was entertaining enough, for forty-five minutes.

  9. John Smith 19 Gold badge
    Boffin

    Keep in mind the human brain runs at <15 Hz.

    That's about the highest frequency of brain signal picked up.

    So everything humans do is down to a) Massive parallelism b) Huge fanout/fanin in the neurons (up to 10 000:1 Vs maybe 10:1 in silicon).

    AFAIK still the only serious neural net computer effort was WIZARD, which is going back a bit.

    Interesting the first 4 posts were all moderator deleted.

    1. Efros
      Pint

      Re: Keep in mind the human brain runs at <15 Hz.

      Probably detected as AI, ROTM marches on!

    2. Anonymous Coward
      Anonymous Coward

      Re: Keep in mind the human brain runs at <15 Hz.

      Seriously : why were the posts removed? It seems a bit arbitrary and I'm beginning to feel nervous and vulnerable now that one of the certainties of life (elReg moderation) is looking unpredictable.

      1. Mage Silver badge
        Childcatcher

        Re: Keep in mind (Deleted Posts)

        Maybe loads of posts are deleted all the time and they don't tell us and this is new openness from El Reg.

        Sometimes a post was deleted because it was a correction (which then got incorporated). This was before there was a Corrections link. I know I had such a post "deleted" once. It didn't mean there was anything evil.

        Some threads seem to be pre-moderated and some post-moderated, and it may vary by account. Or it could have been spam.

        1. NomNomNom

          Re: Keep in mind (Deleted Posts)

          Me too. If you spot a spelling mistake and point it out, they correct the post and delete your comment, which is good - otherwise, after they correct it, loads of people would reply to your comment saying "it IS spelled correctly u is stupid!!11"

        2. Anonymous Coward
          Anonymous Coward

          Re: Keep in mind (Deleted Posts)

          > Maybe loads of posts are deleted all the time and they don't tell us and this is new openness from El Reg.

          A number of us commented early about it being written in "javascript". It was a typo, so the comments were deleted since they wouldn't make sense after the typo was corrected.

          1. This post has been deleted by its author

    3. Anonymous Coward
      Anonymous Coward

      Re: Keep in mind the human brain runs at <15 Hz.

      So everything humans do is down to a) Massive parallelism b) Huge fanout/fanin in the neurons (up to 10 000:1 Vs maybe 10:1 in silicon)

      Errm, AFAIK you're casually omitting one of the most important contributors to how we tick: we're analogue creatures and do things by approximation.

      1. Nick Ryan Silver badge

        Re: Keep in mind the human brain runs at <15 Hz.

        There's no such thing as analogue. At some point the measurement is discrete; it's just that the scale is very fine. E.g. you can't have half a photon or atom. OK, so you can, but things get a bit interesting at that point, and we're generally interested in stable structures.

        The human brain is a massively parallel pattern matching machine - emulating this in a procedural computational environment is never going to be optimal.

        1. Anonymous Coward
          WTF?

          Re: Keep in mind the human brain runs at <15 Hz.

          > There's no such thing as analogue. At some point the measurement is discrete...

          I don't know why you got the downvotes.

          A neuron firing is a non-analogue operation. The input to it might be analogue, or at least more finely nuanced but the computation is certainly binary.

  10. Mage Silver badge
    Holmes

    "This is nothing like as quick as the human brain,"

    It's probably nothing like the human Brian either.

    I'm not sure what research like this can achieve, but at least it's research and may achieve something, though probably not AI.

    Studying neurons may be as related to brain function as pigment chemistry is to art. Also, while we know particular areas are used when we do or sense various things, in many cases you can relearn those things when that part of the brain is destroyed. And we still have no useful definition of intelligence (or stupidity - perhaps you need a degree of intelligence to be stupid), nor any reasonable method of measurement (IQ tests and psychometric testing test certain memory and reasoning features that seem to be somehow related to intelligence; they don't actually measure intelligence).

    Neuron simulation is good research in its own right, but may be a dead end for creating real AI. All existing AI systems are actually not AI at all by any meaningful non-Computer definition of Intelligence, though useful.

    1. David Dawson
      Facepalm

      Re: "This is nothing like as quick as the human brain,"

      I think that we should include Brian in the conversation directly, if you want to start comparisons.

    2. Anonymous Coward
      Anonymous Coward

      Re: "This is nothing like as quick as the human brain,"

      > Neuron simulation is good research in its own right, but may be a dead end for creating real AI.

      I disagree.

      When learning about the art of flying, people studied birds. Then they tried to create wings as part of the process of understanding flight. Then, once the principle was understood, they made aeroplanes.

      The neural net field is very theoretical and highly mathematical, but at some stage you have to put something together to probe its properties. From this process will emerge a greater understanding of the underlying mechanisms.

      At the end of the day the mid-term goal is not the recreation of the human brain but the creation of a machine that can think, which will look and act practically nothing like the human brain, in much the way that a jumbo jet is not like a bird. As technology advances, and power systems, processing systems and fabrication enable it, we will see smaller, leaner and more specialized forms, much as we now see with tiny flying drones used for spying.

  11. NotDevo
    Pint

    Spare brain

    But it is smarter than many politicians... it can remember.

  12. NomNomNom

    Should this research even be allowed without regulation? I am shocked by the attitude these so-called "researchers" have towards developing this thinking machine. I read the article over a couple of times because I thought I must be making a mistake, but no, there is no mention at all of any security measures they've put in place to de-activate the AI if it should become too powerful. What if the AI learns to use its massive arm to tear the lab apart and escape into the wild and start splicing innocent people? They seem to be ignorant of the dangers, almost jovial about the whole thing.

  13. Anonymous Coward
    Facepalm

    Haha! I was scratching my head wondering why it's only 2 and a half million neurons, and why it's as slow as a dog, when the author casually mentioned that it is written in Java!! ROFL!

    1. psyq
      Headmaster

      Java or not...

      Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).

      The biggest problem is the brain "architecture", which is >massively< parallel. For example, a typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons with 10,000x as many synapses. Although mother nature did lots of optimization in the wiring, so that the network is actually of a "small world" type (most connections between neurons are local, with a small number of long-range connections so that wiring, and therefore energy, is conserved), it is still very unsuitable for the von Neumann architecture and its bus bottleneck.

      For example, you can try this:

      http://www.dimkovic.com/node/7

      This is the small cortical simulator I wrote, which is highly optimized for the Intel architecture (heavily using SSE and AVX). It uses a multi-compartment model of neurons which is not biophysical but phenomenological (designed to replicate desired neuron behavior - that is, spiking - very accurately without having to calculate all the intrinsic currents and other biological variables we do not know).
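
      (If you want a feel for what "phenomenological" means in practice: the best-known single-compartment example of this style is the Izhikevich (2003) model, which fits a quadratic membrane equation to reproduce cortical spiking patterns with just a handful of parameters. A bare-bones sketch, purely as an illustration of the style rather than the exact model my simulator uses:)

      import numpy as np

      def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
          """One step (dt in ms) of the Izhikevich phenomenological neuron.

          v is the membrane potential (mV), u a recovery variable, I the
          input current. The a, b, c, d parameters select the firing
          pattern (these defaults give regular spiking); no ionic currents
          are modelled, only the observed spiking behaviour.
          """
          fired = v >= 30.0          # spike threshold
          v = np.where(fired, c, v)  # reset the neurons that fired
          u = np.where(fired, u + d, u)
          # Quadratic membrane dynamics fitted to cortical spike shapes.
          v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
          u = u + dt * a * (b * v - u)
          return v, u, fired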

      Still, to simulate 32768 neurons with ~2 million synapses in real time you will need ~120 GB/s of memory bandwidth (I can barely do it with 2 Xeon 2687Ws and heavily overclocked DDR 2400 RAM!). You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, but the memory bandwidth requirements are still nothing short of staggering.

      Then, there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), which were discovered in the late '90s, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, post-synaptic voltage-dependent plasticity by Clopath et al.), but they are still not able to account for some experimentally observed facts. And then, how many phenomena are still not discovered experimentally?
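
      (To show how simple the classic pair-based STDP rule actually is - and therefore how much it leaves out - here is a sketch of it. The weight change depends only on the timing difference between one pre- and one post-synaptic spike, which is exactly why it misses the rate and voltage effects that models like Clopath's try to add. The parameter values below are just typical illustrative numbers:)

      import math

      def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
          """Pair-based STDP weight change for a single spike pair.

          dt_ms = t_post - t_pre. Pre-before-post (dt_ms > 0) potentiates,
          post-before-pre (dt_ms < 0) depresses, both decaying exponentially
          with the size of the timing difference.
          """
          if dt_ms > 0:
              return a_plus * math.exp(-dt_ms / tau_plus)
          elif dt_ms < 0:
              return -a_minus * math.exp(dt_ms / tau_minus)
          return 0.0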

      Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have many more glial cells, which were thought to be just supporting "material", but now we know that glia actively contribute to the working of the neural networks and have signalling of their own...

      Then, we still do not have much of a clue about how the neurons actually wire up. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation, with violations already discovered in vivo. As we do know that neural networks mature and evolve depending on their inputs, figuring out how the neurons wire together is of the utmost importance if we are really to figure out how this thing works.

      In short, today we are still very far away from understanding how the brain works in the detail required to replicate it or fully understand it down to the level of neurons (or maybe even ions).

      However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the von Neumann architecture of computing machines, and emulation of brains on von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.

      Java or C++ - it does not really matter that much on those scales :(

      1. Steve Knox
        Thumb Up

        Re: Java or not...

        Thanks for the info, psyq. While I enjoy making trolling remarks about languages, I also do appreciate relevant, informative posts.

        1. Anonymous Coward
          Anonymous Coward

          Re: Java or not...

          Me too! Also a nod to Terence Stewart for his posts. Don't let the buggers get you down!

      2. tcstewar

        Re: Java or not...

        Hi, this is Terrence Stewart, one of the researchers on the project.

        The parallelism point is exactly why we've been working with a group at Manchester building a system to deal with exactly this problem: http://www.theregister.co.uk/2011/07/07/arm_project_spinnaker_super/

        Once we use our software to determine the connectivity between neurons that is needed to perform a particular task, we can download that connectivity into SpiNNaker (or any other high-speed neural simulator) and run it there. That's a big part of why we haven't worried about simulation speed all that much so far.

        One way to think of what we're doing is that we have a neural compiler. You specify the algorithm, and it figures out the optimal way to organize neurons to compute that function. The compiler (and tutorials) is all available here: http://nengo.ca
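
        To give a stripped-down flavour of the "compiler" idea: assign each neuron a random tuning curve over the value you want the population to represent, then solve a least-squares problem for decoding weights that read the value (or any function of it) back out of the population's activity. This toy sketch is illustrative only - made-up names, rate-based rather than spiking, and not the actual Nengo API:

        import numpy as np

        rng = np.random.default_rng(1)
        n_neurons, n_samples = 50, 200

        # Random rectified-linear "tuning curves" over a scalar x in [-1, 1].
        gains = rng.uniform(0.5, 2.0, n_neurons)
        biases = rng.uniform(-1.0, 1.0, n_neurons)
        encoders = rng.choice([-1.0, 1.0], n_neurons)

        x = np.linspace(-1, 1, n_samples)
        activity = np.maximum(0.0, gains * (encoders * x[:, None]) + biases)

        # The function we want the population to compute, e.g. x squared.
        target = x ** 2

        # Least-squares decoders so that activity @ decoders ~= target.
        decoders, *_ = np.linalg.lstsq(activity, target, rcond=None)
        estimate = activity @ decoders
        print("RMS error:", np.sqrt(np.mean((estimate - target) ** 2)))

        Connection weights between two populations then come from combining one population's decoders with the next population's encoders, and that connectivity is what we can hand off to hardware like SpiNNaker.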

  14. Peter Johnstone
    Joke

    It's not all that intelligent...

    ...I've heard that it has become a member of the Westboro Baptist Church.

  15. Arachnoid
    FAIL

    But why begin by modelling a human brain? Surely the complexity required for a computer brain would be overly obscured, with this technique, by the additional use of sensory functions that are irrelevant to the requirements of the experiment?

    1. Steve Knox

      Because...

      Here are a few of the many reasons for modeling a human brain before attempting to create a computer brain:

      • First off, we have human brains to compare to, so we can measure our progress.
      • Second, there are still aspects of biological brains which we don't understand, any of which could result in the creation of a computer brain so dysfunctional as to be useless.
      • Third, as these results show, our systems simply aren't capable of modeling a neural network of significant size.

      Finally, I question your assertion that the sensory functions are irrelevant. Even a computer brain will have to interact with the world. Even if you only connect a terminal up as I/O, that has to be mapped as sensory input somehow.

  16. Jason Hindle

    So, not exactly Orac then

    As others have said, using Java wouldn't exactly help. This needs C or C++ (or better still, that machine language stuff that well-hard programmers used in the 80s - preferably with a hex editor they'd written for themselves in their computer's BASIC).

    1. Chemist

      Re: So, not exactly Orac then

      "preferably with a hex editor they'd written for themselves in their computer''s basic)"

      Well I did write an 6800 assembler once in BASIC but that took hours to assemble 1K of code - raw hex is much better written on bare hardware and a hex keypad ( no- it's NOT )

      1. Anonymous Coward
        Anonymous Coward

        Re: 6800

        I wrote a 6802* cross-assembler and simulator in BASIC on the Amstrad CPC464. I thought everybody else had more sense than to do that! Never got round to building the hardware to use it tho'

        *If anyone under 50 is interested, the 6802 was a 6800 with 128 bytes of RAM and a built-in clock generator.

        1. Chemist

          Re: 6800

          "I wrote a 6802* cross-assembler "

          Actually I used it for a 6802 - it was the first cpu project I did and the end-result was a burglar alarm. Luckily I only needed to run the assembler twice as it was so slow. I had to shoehorn a PIA into the Dragon I was using to run a home-made EPROM programmer. I rapidly shifted to 6809s and Forth assembler after that !

    2. psyq

      Re: So, not exactly Orac then

      Hmm, machine language would be a huge waste of time as you could accomplish the same with an assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up with slower code compared to an optimizing C/C++ compiler.

      What could make sense is assembly-code optimization of critical code paths, say synaptic processing. But even then, you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.

      However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you would need at least 2 floating point variables per synapse and several floating point variables per neuron compartment.

      Now, if your simulation accuracy is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of) - you need to do 1000 * number_of_synapses * N (N=2) floating point reads per second, the same number of writes, and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically-plausible neural networks.
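
      (Plugging in round numbers makes the point - back-of-the-envelope only, taking a bee brain as very roughly a million neurons and a billion synapses, with single-precision floats:)

      # Rough synaptic memory traffic for a bee-scale network.
      # Round-number assumptions: ~1e9 synapses, a 1 ms time step,
      # 2 single-precision floats read and written per synapse per step.
      synapses = 1e9
      steps_per_second = 1000          # 1 ms resolution
      floats_per_synapse = 2
      bytes_per_float = 4

      reads = synapses * steps_per_second * floats_per_synapse * bytes_per_float
      writes = reads
      print(f"{(reads + writes) / 1e12:.0f} TB/s")   # ~16 TB/s of raw traffic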

      1. John Smith 19 Gold badge
        Thumb Up

        Re: So, not exactly Orac then

        "Now, if your simulation accuracy is 1 ms (and that is rather coarse as 0.1 ms is not unheard of) - you need to do 1000 * number_of_synapses * N (N=2) floating point reads, same number of writes - and several multiplications and additions for every single synapse. Even for a bee brain-size, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale bilogically-plausible neural networks."

        Interesting to put some brackets on what kind of hardware you're looking at.

        For "neuron" density you would seem to want some kind of fine grained parallelism. You might look at some of the ICL DAP processors of the early 70's. SIMD mostly but with some controlled instruction bypass at the processor level. I suspect the connectivity (especially) the fan out is the big problem. All done long before any kind of PAL architecture was available.

        Perhaps an architecture where every node is the same and can support message passing, but whose intermediate results and node identifier can migrate. In time, highly connected nodes would also become physically close within the array.

        1. psyq

          Re: So, not exactly Orac then

          My bet is on the truly neuromorphic hardware.

          Anything else introduces unnecessary overhead and bottlenecks. While initial simulations for model validation are OK to be done on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require elimination of overheads and bottlenecks in order not to consume megawatts.

          The problem with neuromorphic hardware is of the "chicken and egg" type - to expect large investments there needs to be a practical application which clearly outperforms the traditional von Neumann sort, and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level needed to make something really useful.

          Recognizing basic patterns with lots of tweaking is cool - but for a practical application it is not really going to cut it as the same purpose could be achieved with much cheaper "conventional" hardware.

          If cortical modelling is to succeed, I'd guess it needs to achieve goals which would make it useful for military/defense purposes (it can be something even basic mammals are good at - recognition, say, since today's computers still suck big time when fed with uncertain / natural data). Then the whole discipline will certainly get a huge kick to go to the next level.

          Even today, there is a large source of funding - say, for the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - coupled with the pumping-up of the general public's and politicians' expectations, the consequences of failure would be disastrous and could potentially lead to another "winter" similar to the AI winters we had.

          This is why I am very worried about people making claims that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline due to cuts in funding. I, for one, would much prefer smaller goals - and if we exceed them, so much the better.

  17. Alan Denman

    There is Dumb and then there is Dumber

    It might be as dumb as a bag of hammers, but all those predicting brain-like intelligent computers might well compete for dumbness.

  18. James 100

    Better platform needed

    Never mind software: to scale, it would need an (analogue) ASIC - a mix of op-amps and analogue sample/hold buffers, presumably. A thousand transistors per neuron, a billion transistors per chip (lower than production chips right now), and you'd need a hundred-thousand-chip array to match a human brain's complexity ... somewhere in the hundred-million mark for cost? Beyond any plausible research grant, but it might just about happen as a major government research project.
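
    (Checking that arithmetic, with one extra assumption of my own - a notional $1,000 per packaged chip, which is purely a guess:)

    # Back-of-the-envelope, using the figures above plus an assumed
    # (hypothetical) cost of roughly $1,000 per packaged chip.
    transistors_per_neuron = 1_000
    transistors_per_chip = 1_000_000_000
    neurons_in_brain = 100_000_000_000

    neurons_per_chip = transistors_per_chip // transistors_per_neuron   # 1 million
    chips_needed = neurons_in_brain // neurons_per_chip                 # 100,000
    print(chips_needed, "chips, about", chips_needed * 1_000, "dollars")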

    Give it a couple of years and a hundred-billion-transistor chip, and I can imagine that actually being feasible.

    Or, of course, you go the biological route: grow a few hundred million neurons in a little vat of nutrient, see what happens... There have already been some small experiments like this with insects, haven't there?

    1. psyq

      Re: Better platform needed

      There is still the tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest percentage of connections are local), there is still a staggering number of long-range connections that go across the brain. The average human brain contains on the order of hundreds of thousands of kilometers (nope, that is not a mistake) of synaptic "wiring".

      Currently our technologies for wiring things over longer distances are not yet comparable to mother nature's. Clever coding schemes can somewhat mitigate this (but then you need to plan space for mux/demux, and those things will consume energy) - but still, the problem is far from tractable with today's tech.

  19. Paul Hovnanian Silver badge

    Canadian Scientists build computer ....

    .... dumb as a bag of hammers.

    Got to be a McKenzie brothers tie in somehow.

  20. Dropper
    Terminator

    Time Issue

    Don't forget you've got less than 3 weeks to get this thing finished... Will you be using it to figure out how to deflect a rogue planet smashing into the earth? Is that why it has an arm?

    1. Rattus Rattus

      Re: Time Issue

      *sigh* Really? Can we please stop giving air to that December 21st lunacy?

  21. kukreknecmi

    Isn't this bigger?

    http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-100-trillion-synapses-on-worlds-fastest-supercomputer

  22. Anonymous Coward
    Anonymous Coward

    For comparison:-

    http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons

  23. Duncan Macdonald

    A better target animal might be a Portia spider

    Portia spiders are small but seem to exhibit intelligent hunting behaviour. This system might just be big enough to simulate the brain of a Portia.

  24. Stephen 27
    Unhappy

    Emotional connection

    Is it strange that I feel empathy for this poor wee brain? Trapped with limited ability to express itself. It might be in pain. Or it might be hungry....... FOR BRAINS!
