ARM daddy simulates human brain with million-chip super

While everyone in the IT racket is trying to figure out how many Intel Xeon and Atom chips can be replaced by ARM processors, Steve Furber, the main designer of the 32-bit ARM RISC processor at Acorn in the 1980s and now the ICL professor of engineering at the University of Manchester, is asking a different question, and that is …

COMMENTS

This topic is closed for new posts.
  1. GSV Slightly Perturbed

    Interesting Times

    [broadcast Eclear, sent 1310073565.0]

    xGSV Slightly Perturbed

    oBOFH Reg Readers

    So if you manage to simulate a human-level consciousness, would you consider that it stops being a simulation? Does the Human Rights Act not apply if it's not human?

    Inquiring, and indeed Enquiring Minds, wish to know.

    oo

    1. Heff

      Ethics

      XGCU Grey Area (apt, no?)

      I once had a great chat with an AI-focussed prof about the ethics of creating a parallel human mind incapable of forgetting mistakes, of unlearning the pain sensation and merely storing it as "bad". Good brainfood times.

      Whilst you can probably simulate 1 billion spiking neurons with this array, what's the time-frame? I can simulate a protein folding on my graphics card, but a couple of nanoseconds of modelling takes a few hours of compute time: are we simulating 1% of a human brain at 1% of the speed, 1% at a 1:1 speed ratio, or 1% at faster-than-human speed?

      The point: the "a million monkeys can write Shakespeare" idea: what if you had one monkey (this 1% brain model) and just ran it a million times faster than a real monkey?

      TL;DR: if it's quick, does it have to be vast?

      1. Jon Double Nice
        Coat

        If you've ever seen a monkey on amphetamines

        You'll realise that just making them go faster doesn't really help very much.

        1. Ian Yates
          Coffee/keyboard

          Aagghh!

          I can't get the image of a monkey on amphetamines out of my head!!

        2. Burch
          Windows

          Well

          ...they can type faster.

      2. Rosco

        Sounds like it's real time

        "What it does mean is that the simulated neurons can fire off a pulse to any other simulated neuron in the million-core system in about 1 millisecond, which just so happens to be about as fast as your neurons do it."

        It seems to be "real time". Sadly, the architecture doesn't sound like it can do 100% of a human brain at 1% speed, which would surely be interesting. The analysis of the system behaviour won't be done in real time anyway, just on logs, so it doesn't really matter if it runs really slowly.
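
[Editor's note] The quoted 1 ms update is also the natural timestep of the textbook point-neuron model. A minimal sketch of that idea — my own illustration with invented constants, assuming a standard leaky integrate-and-fire neuron, not SpiNNaker's actual neuron code:

```python
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau_ms=20.0, r_m=10.0, dt_ms=1.0):
    """Leaky integrate-and-fire neuron stepped at dt_ms (1 ms, matching
    the article's quoted update rate). Returns spike times in ms."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of tau * dv/dt = -(v - v_rest) + R * I
        v += dt_ms * (-(v - v_rest) + r_m * i_in) / tau_ms
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset          # fire, then reset the membrane
    return spikes

# A constant drive of 2.0 pushes the steady state 20 mV above rest,
# past the 15 mV threshold gap, so the neuron fires repeatedly.
print(simulate_lif([2.0] * 200))
```

Whether the array runs such a model at 1:1 or faster is then just a question of how many of these 1 ms updates each core can complete per wall-clock millisecond.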

    2. Ian Michael Gumby
      Boffin

      @GSV Slightly Perturbed

      "So if you manage to simulate a human-level consciousness, would you consider that it stops being a simulation? Does the Human Rights Act not apply if it's not human?"

      Only if it could show that it had more intelligence than the average commentard. :-)

      Seriously?

      It's only a simulation until you can show that it can pass a Turing test. Even then, if you cut power, does that mean death, or just sleep, if you can persist the last known state?

      If you can persist the last known state, then when you restore power and restart the simulation, you just continue where you left off. So no need to invoke the Human Rights Act.

  2. Anomalous Cowturd
    Alien

    Brown acid?

    Woodstock survivor Tim?

    I'll stick to 'shrooms I've picked myself thanks!

    Have you seen the state of this guy's eyes?

  3. Will Godfrey Silver badge
    Angel

    Fascinating

    Does anyone know at what level it might become self aware, how to tell if it has, and what the ethical implications would be?

    1. BristolBachelor Gold badge
      Joke

      Perhaps...

      "...might become self aware, how to tell if it has"

      Because it hacks into the robotic car plants and reprograms them to build flying, killing machines (or just hacks into the Predator network)

      "...and what the ethical implications would be?"

      It names itself "SKYNET" because it has developed a sense of humour, and kills everyone that it can.

    2. Destroy All Monsters Silver badge
      Terminator

      Of course not

      If anyone knew that, it would mean it had already been done.

      This is also just a coarse-grained simulation. Getting structure and interesting behaviour into it is another problem.

      And in the end, I bet a smart machine won't be a large neuron simulator at all. Planes are not hundreds of flapping wings either. Some cross between Watson and Cog, I reckon.

      1. John Smith 19 Gold badge
        Happy

        @Destroy All Monsters

        "And in the end, I bet a smart machine won't be a large neuron simulator at all."

        Well you're a large collection of neurons and you seem to simulate intelligence quite well.

        So the question is are you a smart machine?

        1. Kamal Hashmi

          1

          > Well you're a large collection of neurons and you seem to simulate intelligence quite well.

          It's all that is needed...

          > So the question is are you a smart machine?

          We are all complex machines - but not very smart.

          The only proper ending for the Terminator series of films is for the future AIs to make one that's much smarter than humans, instead of just as smart but faster. It would end the war rapidly, and the remaining humans would be contented pets.

  4. Anonymous Coward
    Joke

    That guy's picture looks like...

    ...the father on Third Rock from the Sun.

    Coincidence?

    I think not!

    1. LaeMing
      Go

      I thought he looked like

      the nuclear researcher / mum's potential love interest from 'The Manhattan Project'.

    2. Robert Ramsay
      Joke

      John Lithgow

      "Laugh while you can, monkey boy!"

  5. Filippo Silver badge

    sigh

    Again with the performance comparisons of real neural networks versus simulated neural networks? The human brain isn't more powerful than a supercomputer. It's an apples to oranges comparison. They are two drastically different devices that perform drastically different tasks. Yeah, it takes an unfeasibly powerful computer to simulate a human brain in real time. But, guess what, it would take an unfeasibly large number of human brains (if they could be coordinated) to simulate a meager cellphone in real time. It's pretty stupid to compare the performance of a real system to that of a software-simulated one.

    Call me when someone finally makes a neural network in hardware, then we can talk about comparisons.

    1. Anonymous Coward
      Thumb Up

      And while you're busy sulking, moaning, and being pedantic

      He is busy pushing forward the frontier, and moving closer to that goal.

      I think this is to be applauded, and is an excellent allocation of resources that I am happy to contribute tax money to.

    2. Tom Maddox Silver badge
      Stop

      I know, right?

      They didn't make a complete neural network from scratch, they just built a system to simulate a small part of one, so this effort is obviously a complete failure. After all, if you can't succeed completely in a gargantuan task in one go, you might as well not even start it.

      </sarcasm> for the <sarcasm>-impaired.

      1. wayward4now
        Linux

        Joe Lunchbucket and his brain

        Isn't this about as much of our own brains that we actually use?? Of course, I use far more of my own, but from what I've observed on the news, regarding politicians, district attorneys, chairman of the IMF, news pundits and all of the "silly people" we see in adverts, films and music videos, a couple of old radio tubes and a 9 volt battery ought to do the job quite nicely. Oh yes, add some pulleys too, on the off chance that actual work could be performed.

        1. Joe Cooper

          @how much brain use

          "Isn't this about as much of our own brains that we actually use??"

          Only if you've been struck in the head too many times; the "you only use 10% of your brain" line is a myth founded on a misunderstanding of how the brain works. Not only do you use all your neurons, but there's also a whole chemical "volume transmission" of information, hormones and more.

    3. Anonymous Coward
      Terminator

      And in circa 2048

      when Filippo trundles up in his electric wheelchair to have his brain uploaded so he can outlast his physical body, they will cite the above post before sending him off to the Soylent Green plant.

      They can't allocate 25kW of ongoing power to EVERYONE you know!

    4. BristolBachelor Gold badge

      @Filippo

      I'm sorry; where did you see a comparison of supercomputers to human brains?

      I just read an article about simulating a section of brain using a computer. Similar simulations, such as weather and protein folding, are run on computers all the time; is that comparing rain with supercomputers?

      What they are doing here is building a simulator that simulates a bigger section of brain for less money. The idea is to understand how the brain works on a macro rather than micro scale. Hopefully this may then lead to better medicine and healthcare.

  6. nyelvmark

    We've cracked it boys!

    We've figured out something we're calling the 'D-type flip flop'. We think it's how computers are able to remember stuff. We'll be able to replicate Doom 2 in a matter of months.

    Or,

    So, you're the first simulated human brain. Can I ask you a few questions?

    - No. Go screw yourself.

  7. Anonymous Coward
    Terminator

    i for one...

    ...welcome our new ARM powered, neural networked overlords.

  8. Jaggies
    Trollface

    Will...

    ...it run RISC OS?

    1. Chika
      Devil

      Will it run RISC OS?

      And which version? Castle's 5, or ROL's 4 or 6? Or will we get another variation?

  9. Eddy Ito

    Wonderful!

    "the human brain has somewhere on the order of 80 to 90 billion neurons"

    Obviously you're discounting politicians. Given the scale of the project is "1 million processors to simulate the activities of around 1 billion neurons" they clearly have double the equivalent of the US Congress and all the lobbyists in DC. Perhaps they could run a quick sim to explain that clusterfuck.

  10. Disco-Legend-Zeke
    Pint

    The Brain Handles Information...

    ...in the form of a hologram, the phase modulation being the arrival times of neural pulses.

    The engram of memory modifies the firing time of a given neuron.

    Many drugs also modify firing timing.

    1. Ken Hagan Gold badge
      Unhappy

      Re: The Brain Handles Information

      You may be right, in which case this simulation will discover absolutely nothing. Or it may be some *other* feature of a neuron's behaviour that we don't yet realise is significant, in which case (again) this simulation will discover absolutely nothing.

      And when it doesn't work, we won't have a clue why not.

      I'm all for blue-sky research, but this does seem to be a /complete/ shot in the dark. Would it not be smarter for these guys to take some lesser organism (like an insect), faithfully model everything that they believe is important, and then see if the simulation actually reproduces the observed behaviour? That experiment is doable and guaranteed to teach us something.

      1. nyelvmark
        FAIL

        Would it not be smarter...

        No, it wouldn't, because who gives a fuck what insects think? Hence, no funding.

        1. Ken Hagan Gold badge
          Unhappy

          But...

          I'm much more interested in the possibility of simulated thoughts of an insect than in the real thoughts of you.

  11. Anonymous Coward
    Boffin

    Nature will still be winning

    But when the day comes that artificial brains can be grown (and I'm sure that day will come), what happens when we mass-produce brains many times the size of our own to use as computers?

    That is one age I would love to be able to live to see - the possibilities are endless.

    1. nyelvmark
      Facepalm

      the day comes that artificial brains can be grown (and I'm sure that day will come)

      Why on earth do you imagine that artificial brains might replace computers? Might they also replace hammers, screwdrivers, garden gates, kitchen sinks, guns, anti-personnel mines, nuclear power stations, garden shears, or chain-saws?

      A computer is a machine that is capable of reading a list of instructions and executing them. It has no more intelligence than a screwdriver, never has done, and never will have.

      As I've said before, this entire confusion is caused by the ambiguity of the word "intelligent" and its careless use by IT folks.

      1. Anonymous Coward
        Facepalm

        Intelligence

        I didn't say anything about intelligence, buddy. But show me a computer with similar processing power, size, power usage and self-repairing ability to a brain.

        A lot of money is being spent on organic computer research, and with good reason. If you can't see the potential of it, then fair enough.

  12. Anonymous Coward
    Alert

    I for one.....

    ..... am not sure about creating our new skynet overlord........

  13. Anonymous Coward
    Terminator

    And with other scientists working on time machines.....

    Behold the birth of Skynet.

    The only question is: have the TOKs he sent back to prepare stayed loyal or have they started to like humans?

  14. Anonymous Coward
    Anonymous Coward

    Good morning Dave

    ttfn

  15. ZenCoder

    You can't randomly generate the works of Shakespeare.

    The end result is a huge pile of random junk data. To pick the correct string of characters out of that junk you have a couple of options:

    A) You compare it to an existing document. In this case you are not creating anything; you are simply copying the original in a very roundabout way.

    B) You have an intelligent observer read the junk and pick out meaningful content. In this case the intelligent observer is actually creating the meaning through the process of selection.
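
[Editor's note] Option B is easy to demonstrate numerically. A rough sketch of my own (loosely after Dawkins' "weasel" program; the phrase and 27-key alphabet are arbitrary choices): blind typing needs an astronomical number of attempts, while an observer who merely keeps the letters that already match converges almost instantly — the selection step is where the meaning comes from.

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "   # 27 equally likely "keys"
PHRASE = "to be or not to be"             # 18 characters

# Pure chance: expected attempts before one 18-key burst matches.
blind_attempts = len(ALPHABET) ** len(PHRASE)   # 27**18, about 5.8e25

def observed_typing(target, seed=0):
    """Monkey types randomly, but an observer locks in correct letters."""
    rng = random.Random(seed)
    guess = [rng.choice(ALPHABET) for _ in target]
    steps = 0
    while "".join(guess) != target:
        steps += 1
        for i, ch in enumerate(target):
            if guess[i] != ch:            # only retype the misses
                guess[i] = rng.choice(ALPHABET)
    return steps

print(blind_attempts)            # hopeless without selection
print(observed_typing(PHRASE))   # converges quickly with selection
```

The "million times faster monkey" upthread only divides the first number by a million; it is the observer's selection that changes the game.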

    1. Anonymous Coward
      Anonymous Coward

      bollocks

      If I hide a bar of gold in a rubbish tip, does that bar of gold cease to exist until it is found?

      1. Anonymous Coward
        Anonymous Coward

        @MrChriz

        If you post spurious irrelevant gibberish on a forum, does it become intelligent?

        No it doesn't. The OP is correct about randomly generated content - it is just the symbols without the meaning.

      2. Nigel 11
        Boffin

        It might do

        It's possible (but thermodynamically improbable) that the gold might dissolve while you weren't looking, at a rate very many orders of magnitude greater than for any other chunk of gold ever observed.

        If someone else dumps cyanide waste, or a mixture of nitric and hydrochloric acids, on top of your gold, the likelihood of its disappearance is considerably enhanced ;-)

        In either case, it's an inanimate Schrödinger's cat until you observe it.

        1. TeeCee Gold badge
          Coat

          Re: It might do

          "....it's an inanimate Schrödinger's cat....."

          Is that inanimate as in dead or as in asleep?

      3. Ken Hagan Gold badge

        Re: hiding a bar of gold

        If you just *say* you've hidden a bar of gold in the rubbish tip, then perhaps it *doesn't* exist.

        Similarly, I bet there really is tons of gold at the bottom of the Pacific but (as we were all told earlier this week) that doesn't help a bundle because it is mixed up in a whole load of rubbish.

  16. Anonymous Coward
    Trollface

    Asynchronous processes

    They're building custom silicon. A company that was started to do asynchronous interconnects is doing the design work. They're based out of the Manchester CS department. I wonder why they chose ARMs over Amulet.

    1. Anonymous Coward
      Anonymous Coward

      to get

      funding from ARM?

    2. John Smith 19 Gold badge
      Thumb Up

      AC@09:53

      In fact Steve Furber *also* ran the Amulet project.

      But you're right: if you want to cut the power bill, asynchronous is *the* way to go.

      I suspect this might have something to do with being able to observe and log the state of all the processors at the *same* time so you can establish what they are doing.

      A question that gets *very* tricky when things aren't tied to some kind of central clock.

  17. Mips
    Childcatcher

    90bn neurons..

    .. is that the whole brain, or just the bit which does the thinking? I seem to remember that 90% of the brain is occupied with bodily functions.

    Human rights. Do you not think we might be moving to create a God?

    1. Joe Cooper

      10% is myth

      Don't try to rationalize it or understand it; it's mentally void.

    2. Peter Ford

      Done that already

      We already created a God. Quite a few of them, actually.

      This is a far more rational project!

  18. Nigel 11
    Boffin

    Brain: Classical or Quantum computing device?

    Some people think that the real question that needs addressing is whether brains are classical computing devices, or quantum computing devices.

    If the former, then once the right interconnect and neuron code is arrived at, this simulator might be as smart as a cat.

    If the latter, it hasn't got a hope - you'd need that much computing to simulate a single synapse (and even then, only after making a lot of approximations).

    Brain as quantum computer is a minority view. However, a synapse is small enough and sufficiently low-energy that quantum effects must be of significance there. The eye, which is a sensor-extension of the brain, demonstrably is a single-quantum detector. And personally, I would be very surprised if evolution had not found a way to exploit quantum effects, rather than just treating them as a source of noise to be beaten into submission.

    An even more minority view is that consciousness is a quantum effect.

    As a parting shot, where is the code in a solitary spider-hunting wasp, for identifying appropriate prey, stinging in exactly the right place to paralyze it while avoiding becoming prey of the spider, selecting an appropriate site to dig a burrow, entomb spider, lay egg, etc? It is built-in, not learnt. Ditto in a honeybee or termite, for complex colony formation, though in these cases there may be some form of learning or "culture". None of these insects boasts more than a million neurons.

    1. Oppressed Masses
      Thumb Up

      Is Consciousness a Quantum phenomenon?

      Quoting Wikipedia: Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine. We say that the halting problem is undecidable over Turing machines.

      As humans can solve halting problems, we must assume the brain is not a collection of Turing machines, and that a simulation of the brain by Turing machines can never be conscious. It seems to me that true artificial intelligence is impossible until this problem is resolved. It is not my area of expertise; what do the experts think about this?
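
[Editor's note] The diagonal argument behind Turing's proof is short enough to sketch. This is my own illustration, not from the comment: assume a perfect oracle `halts(f)` existed, build a program that does the opposite of whatever the oracle predicts about it, and note that both possible answers come out wrong.

```python
def make_contrarian(halts):
    """Given a supposed halting oracle, build its refutation."""
    def contrarian():
        if halts(contrarian):
            while True:       # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately
    return contrarian

# Whichever answer the oracle gives about contrarian, it is wrong.
# We can walk both cases without running the (possibly endless) loop:
def oracle_is_wrong(oracle_answer):
    contrarian_halts = not oracle_answer   # contrarian inverts the answer
    return contrarian_halts != oracle_answer

print(oracle_is_wrong(True) and oracle_is_wrong(False))
```

Note this only rules out a *total, infallible* decider — which is also why Nigel's reply below the fold is apt: humans fail the same test.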

      1. Nigel 11

        We can't solve the halting problem

        Humans can't solve the halting problem either. There are many hypotheses in mathematics lacking a proof, such as the Goldbach Conjecture(*). We just give up on a too-hard problem (just as a programmable computer with a proper operating system will be interruptible by its real-time clock and devices and, ultimately, by its frustrated programmer).

        Gödel proved that some of these hypotheses will in fact be undecidable within the accepted (finite) framework (set of axioms) of Mathematics. No consistent system based on a finite set of axioms can be complete. One may have to extend mathematics by admitting the theorem, or its opposite, as an axiom ... but of course, doing so for something that is in fact decidable risks defining as true that which is provably false, or vice versa.

        (*) That every even number greater than two can be expressed as the sum of two primes in at least one way. So "obvious", yet still unproven more than 250 years after it was first stated.
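
[Editor's note] The asymmetry Nigel describes — any single instance is trivial to check, yet no finite amount of checking proves the conjecture for all evens — is easy to see by machine. A quick sketch of my own using naive trial division:

```python
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return one (p, q) with p + q == n, both prime, for even n >= 4."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None    # a single None here would disprove the conjecture

# Every even number up to 10,000 checks out -- but no run of this loop,
# however long, settles the conjecture for ALL even numbers.
print(all(goldbach_pair(n) is not None for n in range(4, 10_000, 2)))
```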

      2. Anonymous Coward
        Thumb Up

        interesting distraction

        Hmm, a tricky but fascinating area for Reg commentards on a slow Friday. It was Roger Penrose (think Hawking, but less famous and more clever) who pointed out that, whether or not the brain is a quantum computer, it possesses many of the key properties of one. He co-authored a paper exploring how this might be biologically realised within neuron microtubules. http://www.cs.indiana.edu/~sabry/teaching/b629/f06/QuantumComputationInBrainMicrotubules.pdf

        I think he has backed away from it, half due to a screechy reductionist onslaught, and half due to "woo-woo mystics" sidling up to him at conferences. More here: http://www.scottaaronson.com/democritus/lec10.5.html

        The Japanese were onto a similar brain-scale computer, their sixth-generation computer programme - whatever happened to that?

        1. nyelvmark
          Boffin

          Penrose

          ...is getting old, and perhaps a bit eccentric. "The Emperor's New Mind" is still a good debunking of the AI myth, however. It's not something you can skim in an hour, though. You actually have to read it. Furthermore, if you're not prepared to accept Penrose's word, there are many, many more hours needed to read all his references. If you're still not happy, you need to read more widely about mathematics, logic and computer science.

          If you don't want to do all that, you could just believe whoever has the loudest voice, which is probably the organisation that spends the most money promoting their view.

    2. Anonymous Coward
      Anonymous Coward

      @Nigel11

      "Brain as quantum computer is a minority view. However, a synapse is small enough and sufficiently low-energy that quantum effects *MUST* be of significance there".

      I would suggest that if *MUST* (my emphasis) is correct then this would not be a minority view. I could easily get behind *COULD*.

      1. Nigel 11

        Levels of meaning

        My "must" referred to the physics. I could equally well have said that quantum effects must be of significance to the design of a state-of-the-art CPU (20nm gates, 0.8V supply voltage, etc).

        In the CPU, the significance is that they cause things regarded as bad by the designers, like charge leakage (leading to higher power consumption and waste heat) and electronic noise (meaning extra efforts have to be made to keep the circuitry reliably binary, again leading to increased power consumption).

        In a brain, it's presently very unclear what Nature has done with the quantum effects that must be present at the synaptic level. Worked around them as in the CPU? Or, the minority view, embraced them and worked out how to build a much better platform for thinking with? Or, the minority-minority view, allowed consciousness to arise as an essentially quantum phenomenon?

    3. Destroy All Monsters Silver badge
      Boffin

      "An even more minority view is that consciousness is a quantum effect."

      Unfortunately this assumption is useless.

      "Quantum effects" is just code for "some magic happens that gives you additional power; I will leave for the reader to imagine what that could be". In other words: low-level religious feel-good crap packed in modern jargon [yes, I'm looking at you, Penrose!]

      Even honest-to-god quantum computers would not help you solve NP-complete problems in polynomial time. In fact, the problems in BQP [bounded-error quantum polynomial time] do not seem to be of any interest for daily jobs. You do not need fast factorization to get milk jugs out of the fridge.

    4. Ken Hagan Gold badge

      Shh!

      "As a parting shot, where is the code in a solitary spider-hunting wasp [...] None of these insects boasts more than a million neurons."

      But nyelvmark tells us that no-one is interested, so despite the fact that you'd probably deserve a Nobel prize for answering this question, you'd better go and waste money on something sexier.

  19. HMB
    Alert

    And they called it John Henry

    So you get the neuron count up to 85bn and, well done, you have a baby. So who's going to raise the first AI? And bloody hell, what's it going to be like as a teenager?!

    It's nuclear power all over again.

    For your entertainment over the next few decades, witness the almighty clash of the Luddites vs the technocrats!

    On the Luddite team are millions of people, all in complete denial, with little gratitude for how much technology has made their lives better. In this camp, the more ignorant you are, the better you can argue!

    On the technocrat team are a much smaller number of technically minded people who understand what's going on. Let's hope they're far-sighted enough to avoid any unpleasantness that could spring up with the increasingly powerful and potentially world-changing technologies of tomorrow.

    1. Disco-Legend-Zeke
      Pint

      Will We Become As...

      ...gods?

      As the machines we construct multiply and become more like us, will they begin to argue over Intelligent Design?

      More importantly, will there be some electronic equivalent of beer?

      1. tpm (Written by Reg staff)

        Re: Will We Become As...

        I sure hope so. Perhaps to stay in their good graces, I will come up with that beer alternative now. Perhaps something based on plasma....

  20. Mike 137 Silver badge

    Which one per cent?

    "...only going to be able to simulate about 1 per cent of the complexity inherent in the human brain"

    So which one per cent are they going to choose? The classic error of artificial intelligence gurus that keeps the intelligence truly artificial is to consider the brain as a single entity with intellect as its prime purpose. The brain is actually an integrated assemblage of several organs - evolved independently and performing multiple separate functions (admittedly with many local overlaps). But fundamentally, as Robt. Ornstein pointed out some 30 years ago, the brain is a body controller. The rest is extra. So an arbitrary replica of one per cent of the number and interconnections of synapses in an average brain dedicated to intellectual processing is not a model of the brain - nor would a similar replica of 100% of the number be.

    So this is all good fun, but really has nothing to do with a decent quality human brain. The Californian definition of artificial intelligence draws from the Californian definition of intelligence, and that of Southampton similarly - but they may not be identical, and neither may be all that representative of the real thing. Mr. Spock was a non-feasible fantasy too.

    Plus, think of the energy consumption. I can work till lunchtime on a couple of slices of toast and some coffee. A million CPUs (even ARM CPUs) are going to clock up some serious electricity bills. Then you have maintenance - we'll be back to the reliability problems of the early vacuum tube computers. So why bother Steve?

    1. Nigel 11
      Thumb Down

      Maintenance issues

      A brain is highly fault-tolerant, at least with respect to well-distributed failures. Neurons die as we age. One of the things we need to do is work out how to make networks of millions of CPUs similarly fault-tolerant.

      Why bother? Curiosity, and an assumption that at some point in the future the electricity bills will be reducible. The Met Office always has a next-generation weather-forecasting program to hand, which they can prove is better than the program currently in use in all respects except one. That one respect is that, on the currently available hardware, it forecasts tomorrow's weather several days after tomorrow!

    2. David Lester
      Pint

      Reply to Mike 137

      The topic for discussion next Monday with Kevin Gurney (Computational Neuroscience, Sheffield) is: "The striatum: what do we know? Can SpiNNaker model it convincingly?"

      But the point about Robots is well-taken. We have already made contact with both UK and EU robotics potential partners. What Tim has not focused on (there's rather a lot of work behind the press release) is that the system runs in real time. And it's low power --- each chip consumes 1A at 1V (for 1W power consumption), and for the neural simulation it has the computing power of a typical high-end desktop. The full system runs at less than 50kW (depending on work-load).

      Still, the question we all have is: just what do you have to do to get an article filed under RiseOfTheMachines? Buy all the London-based staff beers?

  21. Pantelis
    Thumb Up

    Rejoice...

    "and how much drinking and brown acid they have done"

    Don't know about the brown acid but according to recent studies the old notion of alcohol killing brain cells seems to be a myth...

    http://health.howstuffworks.com/human-body/systems/nervous-system/10-brain-myths9.htm

  22. AceRimmer1980
    Terminator

    AI powered by a million ARM processors

    and it still can't control the ship in Zarch.

  23. nederlander
    FAIL

    Brain in a jar?

    OK time for my usual AI rant..

    1) There is nothing inherent in neural networks that skews towards any particular type of behaviour (such as pair bonding, aggression, communication with other agents, territoriality, pain response, etc.). If such behaviours were desired, a lot of work would be required in the design of the network topology and learning algorithm in order to increase the likelihood of them arising. So don't worry, sci-fi nuts - there is no chance of accidentally creating either a monster or an ickle baby.

    2) Yet again the importance of embodiment has been overlooked. Until you have a billion sensor, billion actuator robot body there is no point in creating a billion neuron brain.

    3) Finally, self awareness isn't anything special. A thermostat is self aware.

    Rant complete. Thank you for your patience.

  24. Nigel 11
    Thumb Down

    A thermostat is self aware - really?

    How do you prove that?

    1. nederlander
      Pirate

      being the bat

      @Nigel 11

      The thermostat can sense the results of its own actions. Therefore it is self aware.

      There is no point (outside art, or worse - philosophy) of defining self awareness in a way that requires one to _be_ the subject in order to determine the extent of its self awareness. A useful definition of self awareness is one that can be determined by an independent observer.

  25. Anonymous Coward
    Facepalm

    Yeah, but...

    We already know the answer is 42.

  26. Matt Bucknall

    Can't help feel...

    ...that this kind of exercise is akin to implementing PC virtualization with a SPICE simulator. It's going to be massively inefficient any which way you look at it. If anyone were even qualified to implement a super efficient neural network in real hardware, it would be Furber! I hope this work leads him to such a solution someday.

  27. Goatan
    Trollface

    Lookalike

    Am I going crazy, or does Steve Furber look like John Lithgow (the 3rd Rock from the Sun dude)?

  28. Steve Martins
    Boffin

    Brains aren't synonymous with storage

    It sound like they are trying to replicate the synchronous firing of neurons as a result of data, rather than the synchronous firing of neurons resulting in data. I've found the best way to think of it is an event driven cascade resulting in further events.

    As a cascade moves through the brain, the neurons along the paths it follows fire. If sufficient neurons fire in synchronicity, the electric potential in that localised area rises, and at a certain level causes a reaction with an enzyme which makes that path stronger, so in the future an impulse is more likely to take that path. Each pathway represents information that may or may not be correlated. Over time the correlations form patterns so they fire in synchronicity, leading to associative memory (which is why the best learning techniques use association).

    Maybe I'm wrong, but I believe mimicking the brain's activity will need a drastic step change in the design of processors before they can truly replicate such behaviour!
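    The cascade idea above can be sketched in a few lines (a toy model with made-up thresholds and weights - nothing to do with SpiNNaker's actual neuron model): an event propagates only where enough synchronous input crosses a threshold, and the synapses that just carried a spike get stronger.

```python
# Toy event-driven cascade with a Hebbian strengthening step.
# Neuron 2 fires only when 0 AND 1 fire together (0.6 + 0.6 >= 1.0).

THRESHOLD = 1.0
weights = {(0, 2): 0.6, (1, 2): 0.6, (2, 3): 1.2}   # synapse -> strength

def cascade(stimulus):
    """Propagate one event-driven cascade from the stimulated neurons."""
    fired = set(stimulus)
    frontier = list(stimulus)
    while frontier:
        potential = {}
        for src in frontier:
            for (s, dst), w in weights.items():
                if s == src and dst not in fired:
                    potential[dst] = potential.get(dst, 0.0) + w
        frontier = [n for n, p in potential.items() if p >= THRESHOLD]
        for n in frontier:
            fired.add(n)
            for (s, dst) in weights:          # Hebbian step: paths that just
                if dst == n and s in fired:   # carried a spike get stronger
                    weights[(s, dst)] += 0.1
    return fired

print(cascade({0}))      # {0} - one input alone stays below threshold
print(cascade({0, 1}))   # {0, 1, 2, 3} - synchronous inputs cascade through
```

    Note the model is event-driven, not clocked: nothing is computed except where a spike actually arrives, which is the same economy the comment above attributes to the brain.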

  29. Martin Usher
    Unhappy

    ICL?

    A web search for ICL generates all sorts of results, nothing in the computing line. Alas, like every other engineering venture in the UK, it either generates overnight profits or it's gone -- in this case, to become Fujitsu, a name I associate with laptops and incredibly overpriced government software.

    Even the Manchester bit is just a nod towards history. Manchester used to be at the centre of computing, but... whatever, the future's all marketing, financial services and the like, isn't it?

  30. John Smith 19 Gold badge
    Thumb Up

    Why nature is ahead.

    Well volume wise it helps if you can do *true* 3d packaging.

    The best I'm aware of in this line was a Hughes project for some kind of missile guidance system (SDI?), stacking bare *wafers* on top of each other, with feed-through connectors made by putting drops of tin on the surface and using a temperature gradient to "drive in" the tin to create a high-conductivity path front to back: top layer sensors (vision?), then multiple layers of processing and memory.

    Today a thing called "SMART Cut" uses H ion implantation to create a weak layer <10 micrometres below the surface. Build the circuit on a regular-thickness wafer, slice off the top and repeat. Thickness reduction of roughly 30x, putting very substantial power into a standard chip package. If you can get the heat out.

    However this still leaves you *fundamentally* in a 2D world. Neurons are simply *not* restricted in this way. They also allow fan-outs to as many as 10,000 other neurons, while conventional transistor gates hit about 10 by design.

    Power-wise, the brain's asynchronous architecture saves a *lot* of power and eliminates the whole clock-distribution problem. Today *half* of all CPU transistors are dedicated to transmitting the clock, restoring its rise/fall times, or re-synching the local clock with the chip-wide clock due to clock skew.

    To *really* get to human-brain power levels they would have to go the way of Carver Mead's Caltech group, using custom logic elements (but built on conventional foundry processes) as *analog* components of the simulation. They've gone rather quiet lately, but part of what they found was that the design has to *incorporate* noise, not fight it (digital logic aims to swamp noise).

    They also found that "Computation" flowed in waves across their arrays of devices.

    BTW the torus is a good architecture for supercomputers, as a message gets to its target *whatever* x,y direction it's sent in, even passed the long way round.
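    A quick sketch of that torus property (sizes are illustrative, not SpiNNaker's actual router): on a W-node ring, a message sent in *either* direction eventually reaches its target, and a router can simply pick the shorter of the two arcs. Python's `%` on a negative operand returns a non-negative result, which makes the wraparound arithmetic one line per direction.

```python
# Hop counts round one dimension of a torus: both directions arrive.

W = 8  # nodes per ring dimension

def hops(src, dst, direction):
    """Hop count from src to dst travelling one way (+1 or -1) round the ring."""
    return (dst - src) * direction % W

def shortest(src, dst):
    """Minimal-path distance: the shorter of the two arcs."""
    return min(hops(src, dst, +1), hops(src, dst, -1))

print(hops(1, 6, +1))   # 5 - the long way round still arrives
print(hops(1, 6, -1))   # 3 - the short arc
print(shortest(1, 6))   # 3
```

    The same per-dimension choice extends to a 2D torus, which is why the topology tolerates a congested or failed link: the message just takes the other arc.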

    An interesting point about this project will be whether the processors will be black-box neurons with *only* the connections and initial data values settable (i.e. like a *real* brain), or whether they will tinker with the simulator code on the nodes once built. IRL that would be more akin to supplying drugs or replacement cells through in-vitro grown stem cells.

    It's an interesting project and I'm not sure how much work has been done on the bridging stages between actual neurobiochemistry and the big-picture stuff. Thumbs up.

  31. Iain
    Go

    Will it run 17 times slower?

    Anyone get the reference to a mind-bending SF book?

    1. John Smith 19 Gold badge
      Happy

      @Iain

      When HARLIE Was One?

  32. Anonymous Coward
    Anonymous Coward

    less intelligent than a man?

    As long as they don't simulate the parts of the brain to do with thinking about beer and girls, it should work out about 1000% more intelligent by my reckoning.

  33. Anonymous Coward
    Anonymous Coward

    womb envy

    I sometimes wonder if male artificial intelligence developers have womb envy. Much as male-bashing futurologist articles consider a world where men are not needed, perhaps this is the opposite retaliation. An underlying resentment of the opposite sex perhaps?

This topic is closed for new posts.

Other stories you might like