Nvidia to Intel: 'Which HPC chip brain will win? Let the people decide'

Nvidia is of the opinion that in the tussle with Intel as to whether Nvidia's Tesla GPU accelerators or Chipzilla's Xeon Phi many-core CPU coprocessors will dominate the supercomputing market, the HPC community has already voted with its checkbooks. "This community continues to invest in GPU computing – GPU-based accelerated …

COMMENTS

This topic is closed for new posts.

Xeon? You mean the silicon space-heater?

The nice thing about GPUs is that they can run rings around any Xeon, even allowing for any conceivable improvement short of scrapping the chip's architecture and starting over... all without having to install an extra air conditioner.


Re: Xeon? You mean the silicon space-heater?

No intention of defending Intel, but Xeon Phi is an entirely different beast from the Xeon. It is largely a scrapping of that chip architecture -- previously it was called Larrabee.


Millions of x86 developers?

I seriously doubt that the existence of Visual Basic programmers for Windows is at all relevant in the HPC space.


some chi facts..

I'm going to some of the applications talks later, so I'll see what facts there are.

But talking with one of the Intel experts, I asked the question "will it run Linux?" as a sort of joke.

Pause. "Definitely soon."

Apparently the Chi has some sort of Larrabee-type arrangement: 60 cores with 4 threads each. The instructions are i686, which means you could probably boot some flavour of Linux... I don't know, but it would be interesting to see!

In biophysics (NAMD, GROMACS) it looks as if it is about 3x a Xeon. Not being a CPU bod, it would be nice to know the details. But as a scientist, faster is nice.

All in all, I went to the Larrabee talk at the previous SC, and this is a nicely timed introduction...

P.

Anonymous Coward

Re: some chi facts..

It's Phi, not Chi.

It boots Linux today -- that is how it operates. The card boots a Linux kernel and operates as a node within a (Xeon) host node. You can offload operations to it, OR treat it as a node and run code on it natively, OR some combination of the above, potentially with reverse offload back to the Xeon if that makes sense for the application.

FWIW, a current-generation Xeon is around 260 GFLOPS (2.7 GHz x 8 FLOPS/clock x 12 cores).

Xeon Phi is around 1 TFLOPS (a bit more with the 7 series), so the theoretical performance increase is up to 4x. Obviously this will vary depending on your code's suitability to multi-core Xeon vs many-core Phi and the difference in vector units.
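A quick back-of-the-envelope check of that arithmetic (the figures are the ones quoted above, not official spec-sheet numbers; `peak_gflops` is just an illustrative helper):

```python
def peak_gflops(clock_ghz, flops_per_clock, cores):
    """Theoretical peak = clock rate x FLOPS per clock per core x core count."""
    return clock_ghz * flops_per_clock * cores

xeon = peak_gflops(2.7, 8, 12)   # 2.7 GHz, 8 FLOPS/clock, 12 cores -> ~260 GFLOPS
phi = 1000.0                     # ~1 TFLOPS claimed for Xeon Phi

print(f"Xeon peak: {xeon:.1f} GFLOPS")
print(f"Theoretical speedup: {phi / xeon:.2f}x")
```

The ratio comes out just under 4x, matching the "up to 4x" claim; real speedups depend on how well the code vectorises and scales across the Phi's many cores.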


thanks:-)

Drat the screen keyboard... ;-) Nice summary, so we'll see how this FFT talk explains it all...

P.

