New AI chip from MIT gives Skynet a tenfold speed boost

A team from MIT has demonstrated a new type of deep-learning chip that dramatically speeds up the ability of neural networks to process and identify data. In a presentation at the International Solid-State Circuits Conference in San Francisco, the researchers showed off Eyeriss, a chip designed specifically for deep learning. …

  1. John Smith 19 Gold badge
    Unhappy

    OMG processors mixed with memory. Chapel Hill NC 1977?

    Whatever will they think of next?

    Again.

    Let's see if this effort makes it out of the labs.

    TBH I've no idea what this "deep learning" they speak of is.

    Multi-layer neural nets like real human brains have?

    1. DropBear
      Trollface

      Re: OMG processors mixed with memory. Chapel Hill NC 1977?

      Oh, "deep" is just the new buzzword in AI ever since the dog-dreaming Google AI reveal...

    2. Bc1609

      Re: what is deep learning?

      Yes, it is sort of like a human brain - simplified in some ways and made more complex in others, obviously. The "deep" bit of "deep learning" normally refers to the fact that you have multiple layers of neurons between input and output (there's a toy sketch of this at the end of this comment). As you increase the number of layers you increase the potential performance of the network, but you also massively increase the difficulty of tuning the network to get that performance. In many networks you also have lots of feedback loops of varying degrees of sophistication, as well as lots of different types of neurons and connections and so on.

      We don't yet have a full understanding of really complex neural networks, and there's no real consensus on the "best" or "right" way of designing them. We have a lot of techniques that work fairly well, but we don't know whether they're the best way or how to prove whether they are or not. The maths and scientific understanding are quite a long way behind the empirical engineering (for now), and much of the design is trial, error, instinct and proprietary "black box" stuff (which is almost certainly 1% real progress to 99% "we did this and it worked but we don't know why so we'll keep quiet about it").

      Custom hardware is therefore important for researchers because, in the absence of a really sturdy mathematical base on which to build, progress is mostly made by trial and error, and faster chips like this allow more rapid iteration. If you're really curious about this stuff there's an excellent primer over at http://neuralnetworksanddeeplearning.com/.
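
      To make the "layers between input and output" bit concrete, here's a toy sketch in plain NumPy. It has nothing to do with Eyeriss or any real framework, and the layer sizes are made up purely for illustration:

        # Toy forward pass: each extra weight matrix between input and output
        # is one more "layer" of depth. Sizes are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(0)

        def relu(x):
            return np.maximum(0.0, x)

        # Made-up sizes: 784 inputs (think a 28x28 image), two hidden layers, 10 outputs.
        layer_sizes = [784, 128, 64, 10]
        weights = [rng.standard_normal((m, n)) * 0.01
                   for m, n in zip(layer_sizes, layer_sizes[1:])]
        biases = [np.zeros(n) for n in layer_sizes[1:]]

        def forward(x):
            for w, b in zip(weights[:-1], biases[:-1]):
                x = relu(x @ w + b)               # hidden layers
            return x @ weights[-1] + biases[-1]   # raw scores for 10 classes

        print(forward(rng.standard_normal(784)).shape)   # (10,)

      Tuning those weights so the scores actually mean something is where the difficulty mentioned above comes in.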

      1. John Smith 19 Gold badge
        Unhappy

        Re: what is deep learning?

        "The "deep" bit of "deep learning" normally refers to the fact that you have multiple layers of neurons between input and output."

        Pretty much what I thought.

        I'm aware there is a step change between single-layer and multi-layer NNs.

        But I'm not sure that what it results in could be called "deep" in the sense of understanding.

  2. Pascal Monett Silver badge
    Windows

    Anyone else find the concept a bit frightening ?

    So, my future phone is to have Cortana/Siri locally, check my face/fingerprints and decide if I can use it, maybe even decide to forward parts of my conversations automatically to some "interested" 3rd-party.

    All this just so people can continue their incessant jabbering whether or not there's anyone around to listen.

    Humbug.

    1. Bc1609

      Re: Anyone else find the concept a bit frightening ?

      This is almost exactly the opposite of what you're describing. Currently, the "scary" bit of Cortana/Siri/GNow (from a privacy perspective, not a RotM one) is the web connectivity. Sadly, this connectivity is currently required for many of their functions, because the speech recognition, image recognition and so on are actually done on the massive servers of $US_TECH_FIRM, using the raw data from your phone (mic recordings, etc.) as input. These chips could allow all that processing to be done on the phone instead of on the server, reducing the need to send all your data back home (there's a toy sketch of the difference at the end of this comment). From the article:

      "You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications."

  3. Anonymous Coward
    Anonymous Coward

    Re. Skynet v10

    I'm actually all for having some AI on a phone; it would make life a lot easier.

    Add to this ultra-compressing user data and automagically backing it up to the cloud whenever an open WiFi network is found, but only if the battery is above 60%.

    Also useful would be data deduping, such as finding the 20 or so identical or near-identical pictures of the same thing (I'm looking at you, SO) and compressing them by context, so if you do a search for "trees" it will group them for you in order of relevance (there's a toy sketch of that grouping at the end of this comment).

    The even more useful feature would be mobile malware protection: one of the big problems facing phone users is that malware evolves faster than antivirus.

    Algorithms that go beyond signature-based scanning, say by running a suspect program with simulated user input in a VM on a physically separate core with isolated memory... that would be nearly unbreakable.
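
    On the deduping point, here's a toy sketch of how grouping near-identical photos might look: a simple "average hash" over fake 8x8 greyscale arrays, with a distance threshold I picked arbitrarily. A real gallery would decode actual JPEGs first, and the file names are invented; this just shows the grouping idea.

      import numpy as np

      rng = np.random.default_rng(1)

      def average_hash(img: np.ndarray) -> int:
          # Threshold each pixel against the image's mean to get a 64-bit fingerprint.
          bits = (img > img.mean()).flatten()
          return int("".join("1" if b else "0" for b in bits), 2)

      def hamming(a: int, b: int) -> int:
          return bin(a ^ b).count("1")

      # Fake "photos": one base shot, two near-identical copies, one unrelated image.
      base = rng.random((8, 8))
      photos = {
          "IMG_001": base,
          "IMG_002": base + rng.normal(0.0, 0.01, (8, 8)),
          "IMG_003": base + rng.normal(0.0, 0.01, (8, 8)),
          "IMG_004": rng.random((8, 8)),
      }

      hashes = {name: average_hash(img) for name, img in photos.items()}
      ref = hashes["IMG_001"]
      for name, h in sorted(hashes.items()):
          tag = "near-duplicate of IMG_001" if hamming(ref, h) <= 5 else "distinct"
          print(name, tag)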
