Memristors can maybe learn like synapses

Memristors can potentially learn like synapses and be used to build human-brain-like computers, according to Wei Lu, a University of Michigan scientist. He thinks that two CMOS circuits connected by a memristor are analogous to two neurons in the brain connected by a synapse. It is thought that synaptic connections strengthen as …

COMMENTS

This topic is closed for new posts.
  1. Gordon Pryra
    WTF?

    very good, but

    There's a reason why it won't be cats that destroy the human race.

    Cats are stupid

    Super Computers are Evil Geniuses

    Why copy how the cat brain works?

    He should spend his grant looking at how the Evil Super Computers' brains work.

    1. Graham Marsden

      On the contrary...

      ... cats are smart enough to know that their human owne^H^H^H^H slaves are useful enough to keep around to feed them, empty the litter tray, stroke them and so on.

      They're also smart enough not to let on that they already rule the world...!

  2. Jelliphiish
    Thumb Up

    this

    is one of those odd discoveries that's been around for about 30 years. Apparently the maths predicted it ages ago, and they've been making them accidentally in various processes too.

    There was something about it on Material World on Radio 4 the other night.

  3. Eddie Edwards

    A tad dismissive

    "This again seems a little forced to us; simulating an atomic explosion is a simple task with limited variables?"

    No, it's a complex simulation task that runs much slower than real-time.

    Equally, one could simulate any network of memristors on a supercomputer, but it will be a fuck of a lot slower than the actual memristor circuit.

    Given the existence of simulators, it's silly to say a memristor circuit can do anything a supercomputer can't. But it may be able to do it MUCH faster, especially in the arena of neural simulation where there is a direct correspondence between the neural circuit and the memristor circuit.
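    Out of curiosity, here's a minimal sketch of what simulating just one memristor costs in software (Python, using the linear ionic-drift model from the HP paper; the parameter values are invented for illustration): two hundred thousand Euler steps for a fifth of a simulated second, for a single device, before you've wired up any network at all.

    # Minimal memristor simulation: linear ionic-drift model.
    # Forward-Euler integration of the state variable w under a 1 V, 50 Hz drive.
    import math

    R_ON, R_OFF = 100.0, 16e3    # doped/undoped resistance (ohms), illustrative
    D = 10e-9                    # device thickness (m)
    MU_V = 1e-14                 # dopant mobility (m^2 / (V*s))
    DT = 1e-6                    # timestep (s)

    w = 0.1 * D                  # width of the doped region
    for step in range(200_000):  # 0.2 s of simulated time
        t = step * DT
        v = math.sin(2 * math.pi * 50 * t)          # drive voltage
        m = R_ON * (w / D) + R_OFF * (1 - w / D)    # instantaneous memristance
        i = v / m
        w += MU_V * (R_ON / D) * i * DT             # linear dopant drift
        w = min(max(w, 0.0), D)                     # clamp at the boundaries

    print(f"final memristance: {R_ON * (w / D) + R_OFF * (1 - w / D):.0f} ohms")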

  4. Lionel Baden

    like the human brain = bad

    Supercomputers with learning difficulties. Oh, great!

  5. Graham Dawson Silver badge

    Forced eh?

    "simulating an atomic explosion is a simple task with limited variables?"

    Actually, yes. It may be surprising, but the big thing about simulating a nuclear explosion isn't the variables, which are relatively few and well understood, but the sheer volume of interactions.

    Pattern recognition is a whole different ballgame, and it's something computers are still pretty much crap at, in part because pattern recognition requires a certain amount of abstract understanding of the world. It requires context. Computers can't do context (which, incidentally, is why we're getting all this crap about the semantic web being reduced to people sticking tags on everything as a substitute for context), at least partly because, however massively parallel they might be, each processor thread is isolated from the others.

    So whereas a nuclear simulation might have each thread controlling the evolution of a single particle, you can't apply that model to the recognition of an image and expect the performance of an actual brain, simply because the brain doesn't work like that.
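    To put that in toy-code terms (purely illustrative, all numbers invented): a particle update touches only its own state, so each particle can go to its own worker, while a recurrent neural step can't be carved up that way, because every unit needs every other unit's previous output before anyone can move on.

    # Toy contrast: per-particle updates are independent and parallelise
    # trivially; a recurrent neural update is not, since each step needs
    # every unit's output from the step before.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def step_particle(state):
        pos, vel = state
        return pos + vel * 1e-3, vel * 0.999    # reads/writes only its own state

    particles = [(float(i), 1.0) for i in range(1000)]
    with ThreadPoolExecutor() as pool:          # schematic; real codes use MPI/GPUs
        particles = list(pool.map(step_particle, particles))

    # Recurrent step: every output depends on every input, so workers
    # would have to synchronise at each and every timestep.
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, (1000, 1000))
    x = rng.normal(0.0, 1.0, 1000)
    for _ in range(10):
        x = np.tanh(W @ x)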

  6. Identity

    Interesting...

    Seems consistent with the unproven-but-generally-observable tenet of chaos theory called entrainment.

  7. Mike Bell
    Coat

    Easy Peasy

    I'm growing more and more suspicious of the academic community and their outrageous suggestions about face recognition and how to do it.

    I mean, when I go into a bar, I see a whole bunch of instructions like LDA #$A9, TXA and PHA rolling down the periphery of my vision. I don't have a clue what that means, but then I get these lovely white meshes that pop briefly into existence, nicely overlaid on people's clothes and body parts. And if I get lucky, the word MATCH flashes in front of my eyes.

    Come on, boffins, it can't be that difficult, surely.

    - Mine's the one that belongs to a pissed-off biker

  8. Eddy Ito
    Terminator

    "I'm sorry, Dave."

    I don't know if it will work, as there isn't much of an imperative for the computer to recognize faces and read body language. It may be that those things are hard-wired from birth. It's to the benefit of a cat, or a person for that matter, to know the difference between a vicious dog and a kind person, or to read a single face for anger or happiness. I just don't know what's in it for a metal box. Really, what can happen? It gets shut off?

    "Just what do you think you're doing, Dave?"

  9. Anonymous Coward
    WTF?

    Oh GREAT

    Now we're getting supercomputers that'll shit on our carpets and rip our curtains and furniture to shreds.

    And they call this progress?

  10. Anonymous Coward
    Anonymous Coward

    The more I learn about this, the more

    I don't know. I'd rather study banks of SCRs in portable boxes. Hello, disposable camera, step one?

  11. Anonymous Coward
    FAIL

    Okay, that was bad. Don't get banks of portable SCRs...

    heh heh

    Too much Art Bell, I guess.

  12. Stuart Halliday
    Happy

    How?

    Just how does a bunch of neurons decide that they are going to be the specific memory of an object?

    What limits the memory held by this group to that object, and not, say, to another bunch next door which is storing the memory of something else?

    Can someone explain that?

    1. Steve Roper

      Memories aren't limited to a group of neurons

      The human brain (and presumably other animals' brains) stores information such that memories are linked to each other via the various synapse groups. This gives rise to mnemonic association: the smell of a particular type of coffee might trigger a memory of meeting your wife for the first time in a coffee shop years ago, the sound of a slamming flyscreen door might remind you of a hot summer's day 20 years ago, or the sight of a particular shade of yellow might remind you of colouring in a particular picture when you were a child. So memory is not in itself limited to a group of neurons; what defines our particular recall and behaviour patterns is the set of associative pathways by which our memories are linked.
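      A crude toy of that kind of associative recall (a minimal Hopfield-style sketch in Python, nothing like real neuroanatomy, sizes invented): several memories live superimposed in one shared weight matrix, and a corrupted cue settles onto the nearest stored pattern.

      # Hopfield-style associative recall: a partial cue converges to the
      # closest stored memory, because memories share one set of weights.
      import numpy as np

      rng = np.random.default_rng(0)
      patterns = rng.choice([-1, 1], size=(3, 64))     # three stored "memories"

      W = sum(np.outer(p, p) for p in patterns).astype(float)
      np.fill_diagonal(W, 0.0)                         # Hebbian outer products

      cue = patterns[0].copy()
      cue[:24] = rng.choice([-1, 1], size=24)          # corrupt the first 24 bits

      x = cue
      for _ in range(10):                              # let the network settle
          x = np.sign(W @ x)
          x[x == 0] = 1

      print("recovered memory 0:", np.array_equal(x, patterns[0]))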

  13. John Smith 19 Gold badge
    Boffin

    Igor Aleksander, Imperial College

    Worked with two others to build the facial pattern recognition system WISARD. Facial recognition in one TV frame (1/25 sec).

    Used standard DRAMs (lots of small ones, IIRC) to build a digital neural net. It probably would have been amazing if they'd secured funding and put it into some ASICs.

    This stuff has a *lot* of history attached to it.
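    For the curious, the trick behind WISARD was the n-tuple method: binary pixels are grouped into small tuples, each tuple addresses its own RAM, training writes a 1 at each addressed location, and recognition simply counts how many RAMs recognise their address. A toy sketch of the idea in Python (illustrative only, not the original design; one discriminator per class, highest score wins):

    # Toy n-tuple (WISARD-style) discriminator: each RAM node is a set of
    # seen addresses; training marks addresses, scoring counts matches.
    import random

    TUPLE_SIZE, IMAGE_BITS = 4, 64
    random.seed(1)
    wiring = random.sample(range(IMAGE_BITS), IMAGE_BITS)   # fixed random map
    groups = [wiring[i:i + TUPLE_SIZE] for i in range(0, IMAGE_BITS, TUPLE_SIZE)]

    def addresses(image):
        return [tuple(image[i] for i in g) for g in groups]

    class Discriminator:
        def __init__(self):
            self.rams = [set() for _ in groups]
        def train(self, image):
            for ram, addr in zip(self.rams, addresses(image)):
                ram.add(addr)
        def score(self, image):
            return sum(a in ram for ram, a in zip(self.rams, addresses(image)))

    face = [random.randint(0, 1) for _ in range(IMAGE_BITS)]
    disc = Discriminator()
    disc.train(face)
    noisy = face[:]
    noisy[0] ^= 1                             # flip one pixel
    print(disc.score(noisy), "of", len(groups), "RAMs fired")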

  14. Ammaross Danan
    Terminator

    Programming

    Facial recognition is limited by the programming put into it. Boffins are doing fairly well so far, but throw a bit of hair in the way and it can confuse their software. I think we're more likely to get a vehicle that can drive down a road by "seeing" than a good "pick out a face from this picture and recognize who it is" program.

    And, as a journo note: "curcuits"? How did that even make it past a spell/grammar check? Do these articles seriously get written in a comment box with a submit button? Anyone have a count of the typos in this piece?

    1. Anonymous Coward
      Jobs Horns

      Real world

      Surely the programming put into it is limited by how fast the computer running the software can physically process it, and by how cost-effective the storage, memory and buses are?

      That's why the computers that came with Windows 98 came with Windows 98 and not Windows 7. It isn't just about the time it takes to develop; Windows 98 had design goals limited by the hardware it was expected to run on.

      Memristors could have a bigger effect on computing than maths co-processors, GPUs and multi-core chips did in the past.

  15. 2993858723
    Stop

    wording and such

    "spike timing plastic dependency" should probably be "spike-timing-dependent plasticity" (STDP).

    "He now wants to build prototype devices with tens of thousands of such CMOS circuit/memristor elements, and see if the collection can start responding to stimuli, such as an image of a face, in coherent and dependable ways."

    Using memristors? Forgive me if I sound skeptical, but there seems to be a disconnect between what electrical engineers, computer scientists, and biologists think is important in modeling neural networks, and Wei Lu seems to fit right in with the electrical engineers. Before investing in production, perhaps he should talk to a computer scientist who could write a neural network model using memristors with STDP as the sole learning mechanism.

    I suspect the scientists would learn that STDP is an incomplete model: it is accurate in what it describes, but it lacks any mechanism to regulate the system or to learn according to basic evolutionary reward/punishment rules. The first two things they would probably discover are that, using STDP alone, it is easy to make a system that learns to 'die' (go silent) when not enough stimuli are present, and just as easy to make one that is overactive. In the second case you could add some sort of desensitizing resistor (desent-sister?), but my point is simply that STDP is incomplete, and a simple computer model can show this very quickly, without needing hundreds of thousands of processors. Animals also go through 'massive synaptic growth and pruning' stages, which are tough to reproduce in electronics.
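    For what it's worth, here's roughly the kind of minimal model I mean (a toy leaky integrate-and-fire neuron in Python with pair-based additive STDP and no homeostasis; every parameter value is invented): tip the potentiation/depression balance one way and the weights saturate and the cell runs away; tip it the other way and it goes silent.

    # Additive STDP with no regulation: the weights drift to an extreme
    # rather than a balance, illustrating that STDP alone is incomplete.
    import numpy as np

    rng = np.random.default_rng(0)
    N, DT, T = 200, 1e-3, 20.0               # synapses, timestep (s), duration (s)
    TAU_M, TAU_STDP = 20e-3, 20e-3           # membrane / trace time constants
    A_PLUS, A_MINUS = 0.02, 0.01             # potentiation-dominated -> runaway
    RATE = 10.0                              # input Poisson rate (Hz)

    w = rng.uniform(0.2, 0.4, N)             # weights, clipped to [0, 1]
    v, post_trace, spikes = 0.0, 0.0, 0
    pre_trace = np.zeros(N)

    for _ in range(int(T / DT)):
        pre = rng.random(N) < RATE * DT              # Poisson input spikes
        v += (DT / TAU_M) * -v + 0.1 * (w @ pre)     # leaky integration + drive
        pre_trace = pre_trace * np.exp(-DT / TAU_STDP) + pre
        post_trace *= np.exp(-DT / TAU_STDP)
        w[pre] -= A_MINUS * post_trace               # pre after post: depress
        if v >= 1.0:                                 # postsynaptic spike
            w += A_PLUS * pre_trace                  # pre before post: potentiate
            post_trace += 1.0
            v = 0.0
            spikes += 1
        np.clip(w, 0.0, 1.0, out=w)

    # Swap A_PLUS and A_MINUS and the same loop drives the cell silent instead.
    print(f"{spikes} output spikes, mean weight {w.mean():.2f}")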

  16. Anonymous Coward
    Headmaster

    No quantum superposition?

    Unless a cat-brain simulacrum takes into account Penrose's objective reduction (the Penrose-Hameroff Orch-OR model), it stands no chance of emulating consciousness. Without consciousness, mammalian-level learning may be a difficult goal. Yes, I suspect cats have consciousness.

    "From the point of view of consciousness theory, an essential feature of Penrose's objective reduction is that the choice of states when objective reduction occurs is selected neither randomly, as are choices following measurement or decoherence, nor completely algorithmically. Rather, states are proposed to be selected by a 'non-computable' influence embedded in the fundamental level of spacetime geometry at the Planck scale."

    "In the case of the electrons in the tubulin subunits of the microtubules, Hameroff has proposed that large numbers of these electrons can become involved in a state known as a Bose-Einstein condensate. These occur when large numbers of quantum particles become locked in phase and exist as a single quantum object. These are quantum features at a macroscopic scale, and Hameroff suggests that through a feature of this kind quantum activity, which is usually at a very tiny scale, could be boosted to be a large scale influence in the brain."

    http://www.quantumconsciousness.org/publications.html

    http://en.wikipedia.org/wiki/Orch-OR

  17. Anonymous Coward
    Terminator

    Robots walk among us

    Heh... maybe these wee beasties will make "I, Robot" possible within my lifetime.

    Maybe Asimov's "positronic" brain meant Positive Reinforcement Logic,

    i.e. a simulation of the behaviour of organic neurons.

    I did wonder if it is possible to make an "uncommitted memristor array" in much the same way as a CPLD: an array of microcontrollers (maybe based on a smaller, more compact version of the PIC 10F core) interconnected by a memristor network and preprogrammed with the algorithms necessary to duplicate the behaviour of a neuron in the brain region of interest. A rough sketch of what I mean follows.
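    A behavioural sketch of what each node might run (Python for readability; real firmware on something PIC-sized would be fixed-point C, and the integer weights here just stand in for memristor conductances):

    # One "neuron node": integer leaky integrate-and-fire, as might run on
    # a small microcontroller polling its memristor-coupled inputs.
    def neuron_step(v, inputs, weights, leak_shift=4, threshold=1 << 12):
        """One timestep; all quantities are integers, as on a tiny MCU."""
        v -= v >> leak_shift                    # leak: v *= (1 - 1/16)
        for spike, g in zip(inputs, weights):   # g ~ a memristor conductance
            if spike:
                v += g
        if v >= threshold:
            return 0, True                      # reset and emit a spike
        return v, False

    # Tiny demo: three inputs, one always firing, one firing every third step.
    v, weights = 0, [300, 500, 900]
    for t in range(20):
        v, fired = neuron_step(v, [1, 0, t % 3 == 0], weights)
        if fired:
            print("spike at step", t)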

    AC, because I don't want the machines to know where I live when they take over.. :)

  18. gimbal
    Stop

    He thinks ... it's maybe....

    Newsworthy? Is it, though?

    If wishes were fishes, we'd all be at the lake with pockets full of dynamite. If they come out with anything concrete, that might be nice to hear. The playing-to-the-sci-fi-market thing gets kind of old in news about supposed AI, you know?

This topic is closed for new posts.
