Of NVIDIA and hybrid computing

At SC09, it was hard to travel the floor without running into hybrid computing in some way, shape, or form. In fact, you couldn’t swing a cat without hitting someone or something related to using accelerators to maximize performance. (I know this because I actually did bring in a cat to test this theory… an actual live cat… well …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    WTF?

    <insert technology here> will save the world

    This year (and last?) it's GPUs.

    Wasn't it FPGAs for a couple of years before that?

    And before that it was Beowulf etc.

    Did/will any of these "next big thing" technologies have any real impact, or does the HPC world have too much tried, tested, and proven code which won't work without serious modification on these hip, trendy (but possibly impractical) new technologies?

    1. danolds

      Yep, lots of hype, but....

      ...GPUs just make too much economic sense. They crunch more numbers at a much lower cost than anything else. A couple of years ago, I might well have agreed with you that GPUs are over-hyped solutions that might not have any impact. But now, mainly because of the formation of a GPU ecosystem (programming languages, etc.) and support from other vendors, I think that GPUs are becoming mainstream and that we'll see much more of them in the coming months.

      Hell, I'm even looking to join the crowd. For my Reg duties, I recorded a bunch of these video interviews - like the one with NVIDIA. I have a fairly big system in my office (two-socket Barcelona, 8 GB memory, 10K RPM disk, with 2x NVIDIA video cards to drive multiple monitors). Even with this amount of brawn, I was still seeing longer-than-anticipated render times for these videos in Premiere Elements. I realized that my vid cards are CUDA-enabled and decided to investigate the possibility of using them as render engines. Long story short, I found that it's not quite there yet - I'd need to upgrade my video editing SW (hundreds of dollars more) and do some technical work on the innards of my o/s... but if I did that, I could cut my render times by a factor of 10. While this is a bridge too far for me right now, I'm thinking that this capability will become embedded in new app and o/s versions in the future...
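
      For the curious, the sort of work I'd be handing off looks something like the sketch below - a made-up, minimal CUDA kernel that brightens one frame's worth of pixels on the card. The names are hypothetical and this has nothing to do with Premiere's actual render path, but it shows the data-parallel pattern that makes GPU offload attractive:

        // Minimal, hypothetical sketch of GPU offload with CUDA - not Premiere's
        // real render path, just the per-pixel, data-parallel shape of the work.
        #include <cuda_runtime.h>
        #include <stdio.h>
        #include <stdlib.h>

        // Brighten every pixel of an 8-bit greyscale frame, one thread per pixel.
        __global__ void brighten(unsigned char *frame, int n, int delta)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                int v = frame[i] + delta;
                frame[i] = (v > 255) ? 255 : (unsigned char)v;
            }
        }

        int main(void)
        {
            const int n = 1920 * 1080;                 // one HD frame's worth of pixels
            unsigned char *h_frame = (unsigned char *)malloc(n);
            for (int i = 0; i < n; ++i) h_frame[i] = (unsigned char)(i % 200);

            unsigned char *d_frame;
            cudaMalloc((void **)&d_frame, n);
            cudaMemcpy(d_frame, h_frame, n, cudaMemcpyHostToDevice);

            // Launch enough 256-thread blocks to cover every pixel.
            int threads = 256;
            int blocks = (n + threads - 1) / threads;
            brighten<<<blocks, threads>>>(d_frame, n, 40);
            cudaMemcpy(h_frame, d_frame, n, cudaMemcpyDeviceToHost);

            printf("first pixel after brighten: %d\n", h_frame[0]);
            cudaFree(d_frame);
            free(h_frame);
            return 0;
        }

      The CPU still does all the setup and the copying; the GPU just chews through the embarrassingly parallel part, which is exactly where the render time goes.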

  2. quartzie

    decent gpu from intel?

    That would be a first...

    Larrabee may look interesting on paper and indeed, nvidia sticking their nose right into the HPC business may prod intel to actually do something about their laughable gpu line, but I won't believe it till I see it.

    The crapware that most of today's reference-platform notebooks run on is a prime example of intel's utter incompetence when it comes to gpus. Hail the day when an intel graphics chip gives a two-year-old midrange Nvidia at least some decent competition.

    @AC: hybrid computing so far seems to be a cheap way of getting around the silicon limitations we are facing today. FPGAs are still out there, but GPUs are now ready and accessible to a very large audience. There is also the advantage of pushing multicore programming education onto developers who were frustrated by the lack of growth in single-core processing power.

    AFAIK, GPUs won't save the world either, but hybrid computing seems an interesting and effective alternative for specific tasks, at least until someone comes up with something better.

  3. Robert Hill

    @AC 14:18

    All of the technologies you mentioned have actually had success...that doesn't mean that there isn't room for improvement.

    And GPUs solve the one major _hardware_ issue that FPGAs couldn't - they ride on the CONSUMER-driven cost/performance curve, as their hardware is funded by the investment in graphics cards, which are hugely important in the mainstream PC world and hugely profitable. As such, they have a chance economically that other technologies can't quite match. Now it is down to the software, and nVidia has done a heck of a job pushing CUDA, with packaged training, CBT, white papers, consultancies, etc. being thrown at the developer community. C and Fortran compilers are really the icing on the cake. It will turn out to be a very interesting time for hybrids - in the next 24 months we will know if they have succeeded or failed.

  4. ThomH

    @AC

    Though the IT world is probably one of the worst at hyping, for sustained periods, technologies that nobody subsequently cares about, I think shifting functionality to the GPU is a real trend: GPUs are exceedingly good at the sort of media tasks consumers care about, most machines already have one, and the industry is taking steps towards commoditising the relevant programming interfaces.

    Obviously not so relevant on the desktop, but look at Apple: they've authored OpenCL and given it to Khronos, while being the dominant player in the sort of media devices that would really benefit from this type of technology. Though Apple don't actually seem to be fans of NVidia, so maybe that's not relevant to this particular fluff piece.

  5. Shaun Hunter
    Stop

    Discount Larrabee, not AMD

    At least AMD has a product. With Intel still licensing the video cores on their chipsets from PowerVR, it doesn't seem they're even near a release of any kind.

    @AC: Beowulf-style clusters are the norm these days. You should maybe learn what they are.

    I think you're right about the legacy code issue. Otherwise FORTRAN would have died years ago and wouldn't be a major feature of Tesla's marketing campaign.

  6. Anon

    Lashes for the author!

    Wrong sort of cat.

    1. danolds

      Wrong cat?

      Wrong sort of cat? What should I have used? But you may have a point...I did seem to draw some disapproving looks from some of the folks at the show while I did my testing. I thought it might have been due to my using a freebie Microsoft bag to hold my gear, but maybe it was because of the cat....

