Nvidia previews next-gen Fermi GPUs

Graphics chip maker and soon-to-be big-time HPC player Nvidia raised the curtain a little higher on its next generation of graphics co-processors at the SC09 supercomputing trade show in Portland, Oregon, this week, and it is arguable that the GPU co-processors aimed at personal supers and massive clusters alike were the star of …

COMMENTS

  1. vincent himpe

    Tesla, Fermi

    I want a Feynman!

  2. Anonymous Coward
    Thumb Up

    ECC :)))))

    This is one area where consumer-level hardware has been sold short, all for a single bit and a few checks on the memory controller. Or is it that memory is made as ECC (with the extra bit) and, if it fails, gets branded as consumer memory (tinhat off :).

    Anyhow, it's a welcome addition to something that wanted to be taken seriously, and to have gone so far without it is somewhat of an oversight on many levels.

    Now how about boosting Java with all this raw power? Even consumers would see the appeal and not look back. The only downside is that it would lead a lot of people to skip upgrading their computer/CPU and just upgrade graphics cards instead, which might upset Intel and AMD :). It would also be nice to have Java truly sandboxed, running inside a GPU; that would be appealing.

  3. Robert Hill
    Pint

    I...

    have a serious hard-on now... this stuff is sweet.

    nVidia have just raised the bar - and shown that they are very, very serious about this HPC stuff. Hmmm, HPC-specialist hardware backed by mass-market demand curves, investment and profitability... excuse me, I have to adjust myself.

    Soon, we will know the question to "42"....

  4. Anonymous Coward
    Boffin

    @paul gray re: ECC

    My understanding is that ECC has long been an intended part of NVIDIA's GP-GPU roadmap. CUDA was first integrated into chips that were primarily intended for gamers, and in a gaming environment memory corruption maybe shows up as an off-color pixel, certainly not the end of the world for anyone. ECC and improved support for double-precision floating point were both necessary extensions for NVIDIA to fully move into the general computing space, but I think they were feeling things out for a few generations of chips, getting feedback on their CUDA framework, etc., before committing to developing chips dedicated entirely to scientific computing. Sounds like they are ready to dive in now; I hope that they get some traction in the market.

  5. Anonymous Coward
    Boffin

    Flat Earth and your InfiniBand AND OEM Ethernet chip makers.

    "Ideally, said Keane, you want the data to move direction from InfiniBand to the PCI-Express bus and on out to the GPU memory, where the data processing actually takes place. The CPU is relegated to a traffic cop, and only gets data in its memory when the application requires it to do some processing. This capability is not available yet, and Keane didn't say when to expect it, either.

    Finally, Nvidia this week released the beta of the CUDA Toolkit 3.0, which exploits the Fermi GPU's features. ®

    "

    Forget InfiniBand - where are the friggin' Ethernet OEMs and their cheap 10Mbit/s chips and routers for the masses?

    Where's the real innovation of 'message passing' multitasking drivers that can take a generic bonded Windows driver and overlay message passing on a simple virtual tunnel to any virtually bonded channel? Hell, where's the mass-produced better-than-1Gbit Ethernet and the related 'bonding' Windows drivers? These are all simple things that could and would speed up the data throughput paths and finally get the data where it's needed as fast as can be expected.

    Then there are the other simple choices: you can start to look at optimising your core generic driver and app code to use any and all SIMD capabilities, not just one or the other. Didn't you people learn anything from the Amiga's multitasking, message-passing, microscopic driver sets, open expansion code bases, and maximising of co-processor throughput 25 years ago? Smaller, multitasking, message-passing kernels and driver code are better, faster, and most of all good for your profit margins en masse...

    Question: where exactly are the OEMs with their mass-produced, cheap, better-than-antiquated-1Gbit Ethernet chips and related kit, where are the generic, optimised, many-ports-on-the-fly 'bonded' Windows Ethernet drivers TODAY, and why have these companies not seen fit to innovate and sell these 10Mbit/s chipsets to the masses of SOHO and related users?
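
    To picture the data path the quoted passage says is still missing: today, anything coming off the InfiniBand (or Ethernet) stack lands in host memory first, and the CPU stages it across PCI-Express into GPU memory before a kernel can touch it. Below is a minimal CUDA sketch of that staged path; the kernel, the buffer size and the recv_from_network() stand-in are illustrative, not any real networking API.

        // Staged transfer: network -> host memory -> PCI-Express -> GPU memory.
        // The cudaMemcpy through the pinned host buffer is the hop that a direct
        // InfiniBand-to-GPU path, as described in the quote, would remove.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void process(float *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                data[i] *= 2.0f;                 // stand-in for the real number crunching
        }

        // Pretend receive: in reality the InfiniBand/Ethernet stack fills this buffer.
        static void recv_from_network(float *host_buf, int n)
        {
            for (int i = 0; i < n; ++i)
                host_buf[i] = (float)i;
        }

        int main(void)
        {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            float *host_buf, *dev_buf;
            cudaMallocHost((void **)&host_buf, bytes);   // pinned host staging buffer
            cudaMalloc((void **)&dev_buf, bytes);        // GPU memory

            recv_from_network(host_buf, n);              // step 1: data arrives in host RAM

            // step 2: the CPU pushes it over PCI-Express into GPU memory
            cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice);

            process<<<(n + 255) / 256, 256>>>(dev_buf, n);   // step 3: crunch on the GPU
            cudaDeviceSynchronize();

            cudaMemcpy(host_buf, dev_buf, bytes, cudaMemcpyDeviceToHost);
            printf("first result: %f\n", host_buf[0]);

            cudaFreeHost(host_buf);
            cudaFree(dev_buf);
            return 0;
        }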

  6. Bob H
    Badgers

    @Anonymous Coward on Sunday@03:40

    Erm, do you mean Gbit/sec?

    Not sure what 10Mbit/sec would be good for now?

  7. Anonymous Coward
    Coffee/keyboard

    yeah, 10 Gbit plus

    Yeah, 10 Gbit per second plus, sorry...

    So where are these and other speeds? 1 Gbit/s has been available for a very long time; we should have far faster wired Ethernet by now. Hell, there are faster wireless options coming online - 2 gig, 4 gig and 5 gigabit - today or very soon now, putting generic 1 gig Ethernet in the shade. Shame.

  8. Andy 70

    I really, really hope this works out.

    It'd be nice to see something real instead of press releases of blustering CEOs waving around something their kids knocked up in a CDT lesson.

    If this doesn't work, I'm worried it'll be game over, and then AMD will have the market sewn up.

    Oooh, sole provider to a niche market - watch those prices hike!

    I, and possibly my credit card, await.

  9. Inachu

    I am happy and sad about this news.

    They did not discuss cost for the end user.

    Will this video card cost us $800? Or more?

    I have a feeling it will cost $5,000.

This topic is closed for new posts.
