Inside Nvidia's GK110 monster GPU

At the tail end of the GPU Technology Conference in San Jose this week, graphics chip juggernaut and compute wannabe Nvidia divulged the salient characteristics of the high-end "Kepler2" GK110 GPU chips that are going to be the foundation of the two largest supercomputers in the world and that are no doubt going to make their …

COMMENTS

This topic is closed for new posts.

So.

Will it play Crysis?

Re: So.

Not on full settings, obviously, but should look okay.

Comes to something when the block diagram makes cores look like the transistors in early ICs...

*boggles*

Re: So.

No, it won't.

All the pics of the K20 Tesla boards so far show that they don't have any output adapters of any sort other than the PCI-E lanes.

Re: So.

The auto-landing technology for massive transport planes is not generally suited to the operating dynamics of your toothbrush, however demanding that particular toothbrush may be.

Re: So.

More to the point, will it run Linux?

So many cores!

I wonder how many neurons you could simulate with that.

Re: So many cores!

Barely one, depending on the level of simulation you require.

Ru

Re: So many cores!

Not quite the right sort of structure, perhaps? Neural nets are all heavily interconnected, whereas these things are massively parallel but there isn't much communication between the individual processing pipelines until the very end.

Re: So many cores!

@Ru

You are correct. I ported my neural net program to CUDA and found it was not the best match. Either the reads are in order but the writes are random, or the reverse (depending on which direction you decide to slice and dice the calculation and how the data structure is sorted in advance).
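
The mismatch can be sketched in plain Python: slicing the same fully-connected layer the two ways flips which side of the memory traffic is ordered. This is a toy illustration, not CUDA code, and the function names are made up for the example:

```python
# Toy fully-connected layer: y[i] = sum_j W[i][j] * x[j].

# "Slice by output": each simulated thread owns one output neuron.
# Its reads walk one row of W in order and it makes a single ordered
# write into y -- the GPU-friendly pattern on the write side.
def forward_by_output(W, x):
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# "Slice by input": each simulated thread owns one input and must
# scatter-add its contribution into every output. On a GPU those
# writes land all over y and would need atomics or a reduction pass.
def forward_by_input(W, x):
    y = [0.0] * len(W)
    for j, x_j in enumerate(x):
        for i in range(len(W)):
            y[i] += W[i][j] * x_j   # scattered writes into y
    return y

W = [[1.0, 2.0], [3.0, 4.0]]
x = [5.0, 6.0]
# Same answer either way; only the access pattern differs.
assert forward_by_output(W, x) == forward_by_input(W, x) == [17.0, 39.0]
```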

Re: So many cores!

I've had some impressive speedups using Encog's OpenCL support for neural networks: http://www.heatonresearch.com/wiki/OpenCL. Mileage will vary depending on what you do, but it's worth a look if you haven't tried it.

300 watts per square inch.

All that's required is a nice alumina substrate, nimonic alloy connections, a heatsink the size of Basingstoke and it's happy days.
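
Back-of-the-envelope, that figure checks out. A quick sketch, assuming a roughly 550 mm² die (widely reported for GK110) and a roughly 235 W board TDP; both numbers are assumptions, not figures from the article:

```python
# Sanity-check the "300 watts per square inch" claim.
# Assumed inputs (not from the article): ~550 mm^2 die, ~235 W TDP.
MM2_PER_IN2 = 25.4 ** 2                 # 645.16 mm^2 in a square inch
die_area_in2 = 550 / MM2_PER_IN2        # roughly 0.85 in^2
watts_per_in2 = 235 / die_area_in2      # ~276 W per square inch
```

Same ballpark as the 300 W/in² quip, and either way it is a lot of heat through a very small patch of silicon.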

Now I understand

why OC enthusiasts have been training with liquid nitrogen for years. Those who have money to spare might want to consider investing in Linde AG (disclaimer: this is not an offer to sell or a solicitation of an offer to buy any securities. I am not in any way connected to the aforementioned company)...

Henri

Anonymous Coward

it's gonna be expensive, but think

on the money you'll save on heating the office

Fermi, Tesla, and Maxwell would all want one

Just one?

Physicists (and not just physicists) always want more compute power than currently available.

Re: Fermi, Tesla, and Maxwell would all want one

"Physicists (and not just physicists) always want more compute power than currently available."

Strongly agree!

There's an advert just over there as I read this ->

suggesting ARM+GPU=dream super

All I can say is this would make a good iHandwarmer for the fair weather golfer!

But-

Will it get me to my Friday beer quicker?

Anonymous Coward

People said the Cell was hard to program

Remember all the people who complained that the Cell was hard to program, because the SPUs were not the same as the PPC cores and needed the programmer to explicitly manage moving data from main memory to the SPU memory?

Now this - is this any better (other than the fact that there are more than 8 cores)?

Will nVidia allow this chip to be sold in anything other than board level assemblies? That was the problem (from my perspective) with the Cell - IBM didn't want to sell the chip alone unless you were buying hundreds of thousands of them, so you had to buy a board from one of the board vendors like Mercury Computing. If nVidia won't let companies create their own boards (or package this into more useful form factors than 6U Compact PCI) then it will have similar issues.

Ru

Re: People said the Cell was hard to program

Cell was marketed as a general-purpose CPU, though. These things are not. Cell needed a whole new rack of skills and tools that didn't really exist before its release; the new Kepler stuff builds upon existing tools and skillsets. As far as I can tell, your existing CUDA and shader programs can be ported across to the new hardware just fine, and will work that little bit better without you ever needing to know about the new features.

It isn't quite Apples and Oranges, but it isn't far off.

Anonymous Coward

Declaration of CPU Independence

Can schedule own work.

Can persist data.

Can talk to outside world.

...but did Nvidia ever fix the self-destructing chips ?

It'd be an expensive mistake to buy one otherwise !

MrT

Class Action...

...case in the USA had ripples out into other countries as well. They were made to replace the known-duff ones (like the thousands supplied to the likes of Dell and HP), and although this didn't fix the unknown-duff ones, i.e. the ones they didn't deliberately sell on knowing they would probably fail, it caught a lot. Had my GS7900 in an old laptop swapped out at 4.5 years old last year, and got one with double the RAM in its place.

But, yeah: better check the quality of the glue - at 300W, they'll probably self-solder without having to resort to sticking the graphics card in the oven for 10 minutes or so!
