Intel teases geeks with 2017 AI hyper-chip: Xeon Phi Knights Mill

Intel is working on a powerful Xeon Phi processor for servers and workstations that is "optimized" for artificial-intelligence software – and it's codenamed Knights Mill. Chipzilla's data center group boss Diane Bryant flashed up this slide during this morning's Intel Developer Forum keynote in San Francisco: The chip is …

  1. Kevin McMurtrie Silver badge

    A chart with an arrow shooting up into infinity! Where's my wallet?

    1. Destroy All Monsters Silver badge

      It's just the inflation curve.

  2. Sebastian A

    Optimised for something that doesn't exist yet. You can tell that the marketing drones have been busy.

    1. Dead Parrot

      Eh?

      I was tinkering with neural networks long enough ago to be coding the bloody things on a VT-100, back when Inmos transputers were still hot stuff (I'm fucking ancient). The tricky part has always been finding viable applications that fit within the processing power you can chuck at it...

      1. Anonymous Coward
        Anonymous Coward

        Re: Eh?

        As one Transputering old man to another, it is amazing to see a lot of those old ideas and architectures coming to fruition.

        As you said,

        "The tricky part has always been finding viable applications that fit within the processing power you can chuck at it"

        And the amount of processing power available to those of us who "think parallel" has continued to grow.

        With around 3 TFlops available from a single chip, we've finally got to a point where a variety of applications are viable.

        Couple that with the personal-data-slurping/online-advertising gold rush and the decline of traditional PC sales, and neural networks have become the Next Big Thing.

        1. Ian Michael Gumby

          Re: Eh?

          VT-100? LOL...

          Everything old is new.

          When you have networks that are now fast enough to support distributed memory, and CPUs that are powerful enough and have enough memory to retain state?

          Yeah, old ideas are now being tested.

          It's a good thing... for those of us who've been in this game for a long time (30+ years) but are still too young to retire... dust off all of those old texts and ACM/IEEE Symposium notes... :-)

          1. Destroy All Monsters Silver badge
            Holmes

            Re: Eh?

            > dust off all of those old texts and ACM/IEEE Symposium notes

            Seriously, don't (unless you want to take an overview trip through the history). Start with the leading-edge textbooks and papers. The vocabulary, the maths and the approaches, as well as the practical knowledge about what works and what doesn't, have all changed.

            1. Dead Parrot
              Coat

              Re: Eh?

              Well, I'd agree that a lot has changed. But next time you're killing a few hours waiting for a Windows update, ask yourself if it has all changed for the better, or if there might have been something useful in those old notes.

              Mine's the one with a copy of Harel's 'Algorithmics' in the pocket.

              1. Destroy All Monsters Silver badge

                Re: Eh?

                Certainly. I often like to get old books from Amazon for not a lot of money. Springer-issued telephone books of proceedings are sometimes amazingly cheap (and sometimes outrageously dear).

                Plus, one can admire papers that have been typed on a real typewriter. DING!

  3. Katie Saucey
    Terminator

    An AI Hyper-chip eh?

    As long as Intel doesn't buy up Boston Dynamics and stuff the chips into this:

    https://www.youtube.com/watch?v=rVlhMGQgDkY

    humanity will be safe.

    1. David 132 Silver badge
      Joke

      Re: An AI Hyper-chip eh?

      humanity will be safe.

      Ah, the old joke is relevant again:

      We are Pentium of Borg. You will be Approximated.

      (Seriously, though - this is excellent work by the HPC team. Xeon Phi doesn't have the marketing dollars behind it that the consumer processors do - no dancing bunny people, blue men, or other gimmicks - but no matter, because customers in the segments which need Xeon Phi already know about it.)

  4. This post has been deleted by its author

    1. Anonymous Coward
      Anonymous Coward

      The problem is that shared-memory systems run out of steam due to memory contention.

      DDR memory is already pathetically slow compared to CPU speeds, and a cache miss can stall the CPU for something of the order of 100 cycles.

      Basically, shared memory is good for around 8 CPUs; after that it becomes increasingly attractive to use a distributed-memory programming style to minimise resource contention, so that performance continues to scale with the number of cores.

      Once one has done the hard work of partitioning the problem, the step to a different system architecture (interconnection of CPUs) is relatively simple.

      Now, regarding Moore's law scaling: IMO the pace has slowed down and each move to a new process node has become incrementally more expensive.

      And as noted above, for PC purposes there are still few applications that will take full advantage of 2 cores, let alone 4 or 8. So a lot of the Moore's law benefits have been spent on other features, especially pulling the memory controller onto the die and increasing the amount of cache memory available (attempts to mitigate the RAM bottleneck). There has also been an increase in parallelism within the CPU: vector-processing enhancements like Intel's AVX2.
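
      For anyone who hasn't poked at the vector units: here is a minimal sketch (my own toy example, nothing Intel ships) of the kind of data parallelism those 256-bit registers give you, summing floats eight at a time. The float adds shown only need AVX; AVX2 is what widened the integer operations to 256 bits.

      // Toy example: sum an array eight floats at a time with AVX intrinsics.
      // Compile with something like: gcc -O2 -mavx2 sum.c
      #include <immintrin.h>
      #include <stdio.h>

      static float sum_avx(const float *a, size_t n)
      {
          __m256 acc = _mm256_setzero_ps();               // eight partial sums
          size_t i = 0;
          for (; i + 8 <= n; i += 8)
              acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

          float lane[8];
          _mm256_storeu_ps(lane, acc);                    // spill the lanes and reduce
          float total = 0.0f;
          for (int k = 0; k < 8; k++)
              total += lane[k];
          for (; i < n; i++)                              // scalar tail
              total += a[i];
          return total;
      }

      int main(void)
      {
          float data[1000];
          for (int i = 0; i < 1000; i++)
              data[i] = 0.5f;
          printf("%f\n", sum_avx(data, 1000));            // prints 500.000000
          return 0;
      }

      The point being, of course, that none of this helps if every load misses cache; the vector unit just stalls along with everything else, which is why the memory bottleneck matters so much.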

  5. Novex
    Paris Hilton

    Er, I have a genuine question...

    If the RAM is stacked on top of the die, just how well are the CPU cores going to get cooled? Or is this package going to be running at very low frequencies?

    (Paris, because she's the only icon with a question mark in it. Oh, and I feel stupid that I don't know the answer to this)

    1. Ken Hagan Gold badge

      Re: Er, I have a genuine question...

      The RAM isn't an insulator. Even a 1mm thick layer of silicon isn't going to prevent the waste heat from the CPUs going straight through. Further, the multi-core nature of this beast means that the CPU heat is being generated fairly evenly over the whole die, so the thermal problem is probably easier than it was a decade or so ago when the CPU die probably had hot-spots.

      1. Ken Hagan Gold badge

        Re: Er, I have a genuine question...

        (Edit: in support of this, Wikipedia reports that the thermal conductivity of silicon is 149 watts per metre-kelvin. I think this means you can pump 14.9 watts across a 1mm thick slice of silicon that is 1cm square with a temperature drop of only 1 kelvin. My estimate of 1mm thick for the RAM slice is probably generous. Presumably each layer is a *few* times thicker than the feature size, but the latter is measured in nanometres, so I think there are a few orders of magnitude to play with.)
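
        (For the sceptical, the back-of-envelope working, using Fourier's conduction law with my assumed dimensions above: Q = k × A × ΔT / d = 149 W/(m·K) × (0.01 m × 0.01 m) × 1 K / 0.001 m = 14.9 W.)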

  6. This post has been deleted by its author

  7. Rafael 1
    Coat

    Re: Well, at least we know what our future cyber overlords will be called

    The Knights who say Phi?

    1. Anonymous Coward
      Anonymous Coward

      Re: Well, at least we know what our future cyber overlords will be called

      They want an Intel Shrubbery, and then they want you to cut down the biggest tree in the forest with a Red Herring!

  8. jms222

    Ah yes, the 20+ year old P54C.

  9. NanoMeter

    Too bad

    it can't be used with ordinary x86 code.

  10. Destroy All Monsters Silver badge
    Windows

    Looks like powerful bovine excrement points

    Either it's "optimized for neural network operations" OR it's Intel x86.

    CHOICE TIME!
