A chart with an arrow shooting up into infinity! Where's my wallet?
Intel teases geeks with 2017 AI hyper-chip: Xeon Phi Knights Mill
Intel is working on a powerful Xeon Phi processor for servers and workstations that is "optimized" for artificial-intelligence software – and it's codenamed Knights Mill. Chipzilla's data center group boss Diane Bryant flashed up this slide during this morning's Intel Developer Forum keynote in San Francisco: The chip is …
COMMENTS
-
Wednesday 17th August 2016 21:30 GMT Dead Parrot
Eh?
I was tinkering with neural networks long enough ago to be coding the bloody things on a VT-100, back when Inmos transputers were still hot stuff (I'm fucking ancient). The tricky part has always been finding viable applications that fit within the processing power you can chuck at it...
-
Thursday 18th August 2016 08:49 GMT Anonymous Coward
Re: Eh?
As one Transputering old man to another, it is amazing to see a lot of those old ideas and architectures coming to fruition.
As you said,
"The tricky part has always been finding viable applications that fit within the processing power you can chuck at it"
And the amount of processing power available to those of us that "think parallel" has continued to grow.
With around 3 TFlops available from a single chip, we've finally got to a point where a variety of applications are viable.
Couple that with the personal-data-slurping/online-advertising gold rush and the decline of traditional PC sales, and neural networks have become the Next Big Thing.
-
Thursday 18th August 2016 12:01 GMT Ian Michael Gumby
Re: Eh?
VT-100? LOL...
Everything old is new.
When you have networks that are now fast enough to support distributed memory, and CPUs that are both powerful enough and have enough memory to retain state?
Yeah, old ideas are now being tested.
It's a good thing... for those of us who've been in this game for a long time (30+ years) but are still too young to retire... dust off all of those old texts and ACM/IEEE Symposium notes... :-)
-
Thursday 18th August 2016 17:41 GMT Destroy All Monsters
Re: Eh?
> dust off all of those old texts and ACM/IEEE Symposium notes
Seriously, don't (unless you want a historical overview trip). Start with the leading-edge textbooks and papers. The vocabulary, the maths, the approaches, and the practical knowledge about what works and what doesn't have all changed.
-
Thursday 18th August 2016 19:30 GMT Dead Parrot
Re: Eh?
Well, I'd agree that a lot has changed. But next time you're killing a few hours waiting for a Windows update, ask yourself if it has all changed for the better, or if there might have been something useful in those old notes.
Mine's the one with a copy of Harel's 'Algorithmics' in the pocket.
-
Thursday 18th August 2016 00:04 GMT David 132
Re: An AI Hyper-chip eh?
> humanity will be safe.
Ah, the old joke is relevant again:
We are Pentium of Borg. You will be Approximated.
(Seriously, though - this is excellent work by the HPC team. Xeon Phi doesn't have the marketing dollars behind it that the consumer processors do - no dancing bunny people, blue men, or other gimmicks - but no matter, because customers in the segments which need Xeon Phi already know about it.)
-
-
This post has been deleted by its author
-
Thursday 18th August 2016 09:07 GMT Anonymous Coward
The problem is that shared-memory systems run out of steam due to memory contention.
DDR memory is already pathetically slow compared to CPU speeds, and a cache miss can stall the CPU for on the order of 100 cycles.
Basically, shared memory is good for around 8 CPUs; after that it becomes increasingly attractive to use a distributed-memory programming style to minimise resource contention, so that performance continues to scale with #cores.
Once one has done the hard work of partitioning the problem, the step to a different system architecture (interconnection of CPUs) is relatively simple.
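By way of illustration, here's a minimal sketch of that partitioned style (assuming C with POSIX threads, compiled with -pthread; the worker/slice names are purely illustrative, not anyone's production code). Each thread sums only its own private slice and writes back a single partial result, so there is no shared accumulator for the cores to fight over:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS 8
    #define N (1 << 22)

    /* Each worker owns a private slice: no shared counter, no lock,
       no cache-line ping-pong between cores. */
    struct slice { const double *data; size_t len; double partial; };

    static void *worker(void *arg)
    {
        struct slice *s = arg;
        double sum = 0.0;                 /* thread-private accumulator */
        for (size_t i = 0; i < s->len; i++)
            sum += s->data[i];
        s->partial = sum;                 /* one write back, at the end */
        return NULL;
    }

    int main(void)
    {
        double *data = malloc(N * sizeof *data);
        for (size_t i = 0; i < N; i++) data[i] = 1.0;

        pthread_t tid[NTHREADS];
        struct slice sl[NTHREADS];
        size_t chunk = N / NTHREADS;

        for (int t = 0; t < NTHREADS; t++) {
            sl[t].data = data + (size_t)t * chunk;
            sl[t].len  = (t == NTHREADS - 1) ? N - (size_t)t * chunk : chunk;
            sl[t].partial = 0.0;
            pthread_create(&tid[t], NULL, worker, &sl[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += sl[t].partial;       /* reduce the partials once */
        }
        printf("total = %.0f\n", total);
        free(data);
        return 0;
    }

The only shared-memory traffic left is the final reduction of NTHREADS partials, which is exactly the shape a distributed-memory version would take, so swapping the threads for an interconnect later is the easy part.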
Now, regarding Moore's-law scaling: IMO the pace has slowed, and each move to a new process node has become progressively more expensive.
And as noted above, for PC purposes there are still few applications that will take full advantage of 2 cores, let alone 4 or 8. So a lot of the Moore's-law benefits have been spent on other features, especially pulling the memory controller onto the die and increasing the amount of cache memory available (attempts to mitigate the RAM bottleneck). There has also been an increase in parallelism within the CPU - vector-processing enhancements like Intel's AVX2.
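To illustrate that last point, a rough sketch of what AVX2-style vectorisation buys you (assuming C with FMA support, compiled with -mavx2 -mfma; the function name dot_avx2 is mine, not from any library). One fused multiply-add instruction processes 8 floats per iteration instead of 1:

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Dot product using AVX2 + FMA: 8 float lanes per iteration. */
    float dot_avx2(const float *a, const float *b, size_t n)
    {
        __m256 acc = _mm256_setzero_ps();
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            acc = _mm256_fmadd_ps(va, vb, acc);  /* acc += va * vb, fused */
        }
        /* Horizontal sum of the 8 lanes. */
        float lanes[8];
        _mm256_storeu_ps(lanes, acc);
        float sum = 0.0f;
        for (int j = 0; j < 8; j++) sum += lanes[j];
        /* Scalar tail for n not divisible by 8. */
        for (; i < n; i++) sum += a[i] * b[i];
        return sum;
    }

    int main(void)
    {
        float a[10], b[10];
        for (int i = 0; i < 10; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        printf("dot = %.1f\n", dot_avx2(a, b, 10));  /* prints dot = 20.0 */
        return 0;
    }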
-
-
Thursday 18th August 2016 06:39 GMT Novex
Er, I have a genuine question...
If the RAM is stacked on top of the die, just how well are the CPU cores going to get cooled? Or is this package going to be running at very low frequencies?
(Paris, because she's the only icon with a question mark in it. Oh, and I feel stupid that I don't know the answer to this)
-
Thursday 18th August 2016 09:58 GMT Ken Hagan
Re: Er, I have a genuine question...
The RAM isn't an insulator. Even a 1mm thick layer of silicon isn't going to prevent the waste heat from the CPUs going straight through. Further, the multi-core nature of this beast means that the CPU heat is being generated fairly evenly over the whole die, so the thermal problem is probably easier than it was a decade or so ago, when the CPU die typically had hot-spots.
-
Thursday 18th August 2016 10:12 GMT Ken Hagan
Re: Er, I have a genuine question...
(Edit: in support of this, wikipedia reports that the thermal conductivity of silicon is 149 watts per metre-kelvin. I think this means you can pump 14.9 watts across a 1mm thick slice of silicon that is 1cm square with a temperature drop of only 1 kelvin. My estimate of 1mm thick for the RAM slice is probably generous. Presumably each layer is a *few* times thicker than the feature size, but the latter is measured in nanometres, so I think there are a few orders of magnitude to play with.)
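(For anyone who wants to check that arithmetic, a throwaway C sketch of Fourier's conduction law, Q = k*A*dT/d, plugging in the figures above; all values are the estimates from the comment, not measured numbers:

    #include <stdio.h>

    /* Fourier conduction: Q = k * A * dT / d
       k  = 149 W/(m*K)  thermal conductivity of silicon (Wikipedia figure)
       A  = 1 cm^2       slice area
       d  = 1 mm         assumed RAM-layer thickness
       dT = 1 K          temperature drop across the layer */
    int main(void)
    {
        double k  = 149.0;        /* W/(m*K) */
        double A  = 0.01 * 0.01;  /* 1 cm x 1 cm, in m^2 */
        double d  = 0.001;        /* 1 mm, in m */
        double dT = 1.0;          /* K */
        printf("Q = %.1f W\n", k * A * dT / d);  /* prints Q = 14.9 W */
        return 0;
    }

Which agrees with the 14.9 watts quoted above.)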
-
-
-
This post has been deleted by its author
-