GPU-CPU hybrids promise the moon*

I've been busy all day talking, listening, and maybe even learning a thing or two at the 2010 GPU Tech Conference. The speaker from the NOAA session (topic of another blog) put the move toward GPUs into perspective near the end of his talk today with two key points. His first point was that big HPC advances tend to come at …

COMMENTS

This topic is closed for new posts.
  1. Ian Bush

    Thanks for the memory

    Well, for the moment I'll ignore the excellent point that hardware costs are far from the end of the story, and simply ask: does this table refer to a Fermi-based system with the same total memory footprint as Jaguar? After all, if you ain't got the memory, you can't solve the problem.
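
    (For scale, a rough back-of-envelope using public hardware specs rather than anything from the article: a Fermi-class Tesla card tops out at about 6 GB, so the 2,300 GPUs in the table further down would carry roughly 2,300 × 6 GB ≈ 13.8 TB, against the ~300 TB of aggregate memory in Jaguar.)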

    1. Ken Hagan Gold badge

      ...and the bandwidth

      GPUs are hopeless for anything that isn't embarrassingly parallel and purely computational. Even web-serving doesn't fit into that category. And if your problem really is embarrassingly parallel, purely computational *and worth spending several million on*, then you'd be cheaper and faster hiring an electronics engineer to glue together a modest number of FPGAs.

      GPUs are games, not game changers.
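
      For anyone who hasn't met the term: "embarrassingly parallel" means every output element can be computed independently, with no communication between threads. A minimal CUDA sketch of such a workload (a SAXPY; the kernel and variable names are illustrative, not from any code under discussion here):

        // Illustrative CUDA sketch of an embarrassingly parallel workload:
        // each thread owns exactly one element and never reads another
        // thread's data. (Hypothetical example, not production code.)
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];   // fully independent per element
        }

        int main()
        {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);
            float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

            float *dx, *dy;
            cudaMalloc(&dx, bytes);
            cudaMalloc(&dy, bytes);
            cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
            cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

            printf("y[0] = %f\n", hy[0]);   // expect 4.000000
            cudaFree(dx); cudaFree(dy); free(hx); free(hy);
            return 0;
        }

      Web serving, by contrast, is dominated by branching, I/O and pointer-chasing, which is exactly the mismatch the comment describes.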

  2. Anonymous Coward

    ...and Chipzilla?

    If we are to believe Intel, come 2012 the multicore Knights Corner processor, combining elements of both x64 and GPU, should be on the market. It should have the best of both worlds, using its integrated GPU effectively without developers having to learn to code specifically for it.

    Or is this too good to be true?

    1. Michael H.F. Wilkinson Silver badge

      This is too good to be (completely) true

      Some problems will transfer easily to GPUtastic (my preference) code, but in many cases hand coding and UNDERSTANDING the algorithm at hand remain necessary. The highly data-driven image analysis software we develop falls heavily into the latter category.
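
      To make the distinction concrete (a hypothetical sketch, not the software mentioned above): a per-pixel operation such as thresholding maps directly onto a GPU, one thread per pixel, while data-driven algorithms like connected-component labelling or flood fill resist this, because the set of pixels to visit only emerges as the algorithm runs.

        // Per-pixel thresholding: trivially parallel, one thread per pixel.
        // Host-side setup follows the same pattern as the SAXPY sketch
        // earlier in the thread. (Illustrative kernel, hypothetical names.)
        __global__ void threshold(const unsigned char *in, unsigned char *out,
                                  int npixels, unsigned char level)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < npixels)
                out[i] = (in[i] >= level) ? 255 : 0;   // no neighbours consulted
        }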

  3. Asgard

    Hang on, let's at least normalise that table...

    Metric        CPU system (Jaguar)   GPU alternative
    Performance   2.3 PFlop/s           2.3 PFlop/s
    Cores         250,000 CPU cores     2,300 Fermi GPUs
    Size          284 cabinets          23 cabinets
    Power draw    7-10 MW               1.15 MW
    Cost          $50-$100 million      $11.5-$23 million

    That's better (the marketing spin was getting to me). It shows the cost difference isn't so great.

    Anyway, I'm all for GPUs (and any other solution), as Intel CPU performance over the past few years seems to be running out of steam, so to speak; it's really not moving forward as fast as I had hoped. Even the available amounts of RAM per motherboard are not getting much bigger; in fact, many motherboards are going over to 2 memory slots, not more :( ... It's all starting to feel like Intel CPUs are stuck with small incremental improvements. We do need a new way forward for all sizes of computers if Intel can't provide one.

  4. Dyip Blog.OCF

    more than worth it

    I do agree that only applications which are ‘embarrassingly parallel’ will run well on GPUs. However, in contrast to what many people say, I don’t necessarily think the answer here is simply to ‘re-write the software’ or make it ‘GPU-riffic’ or ‘GPU-tastic’. I don’t believe it is that easy. If these codes were written in the 70s and 80s, it’s feasible that the developers will have changed roles, moved companies, or are simply no longer in a position to help. I would argue that it requires a team of people encompassing both computational science skills and, importantly, domain knowledge of the subject the code is to be used for, such as physics or chemistry. The two parts must work together; a collaborative approach is more likely to succeed. That may sound like hard work and could put some IT managers off, but based on the stats in this article, it would be more than worth it.
