Nvidia: An unintended exascale-super innovator

Jen-Hsun Huang, one of the cofounders of graphics-chip maker Nvidia, never intended to be a player in the supercomputing racket. But his company is now at the forefront of the CPU-GPU hybrid computing revolution that is taking the HPC arena by storm as supercomputing centers try to cram as much math into as small a power budget …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    I can hear Talking Heads in the background.

    re: "Cray has singed up to to support the standard, too, which makes sense because it is selling Opteron-Fermi hybrids."

    "signed up to support"?

    although with all of the fire references...singed might be close to the truth.

    I'm going to have to dig up my quake cd when I get home.

  2. Yet Another Anonymous coward Silver badge

    OpenACC

    Translation - we like CUDA, write for CUDA and you have to keep buying from us, please ignore OpenCL. But we have to look like we are open, standards-compliant and non-proprietary to get government contracts, so we will simply propose our own.

    Ironic, because if you use the non-proprietary, open standard OpenGL on PCs you are pretty much stuck with buying Nvidia, because nobody else writes drivers worth a BEEEP

    1. Alan Dougherty
      Alert

      And as a gamer, I won't buy anything other than Nvidia.. simply because I can expect driver updates on zero-day release for major titles, and, well, just superb regular updates in-between..

      Not to mention that the hardware is as good as (*note, not necessarily better than) anything ATI can push out.. in fact, with the current DX and hardware tessellation, ATI can't even approach NV's performance... if only the vendors would stop overclocking the stock cards past their basic bench, then maybe the likes of the BF3 forums wouldn't be swamped with n00bs whining that they crash every other connect.

      1. TeeCee Gold badge
        WTF?

        "...superb regular updates..."

        Odd. I went back to ATI about four years ago on the back of an interminable series of godawfully buggy driver updates from Nvidia.

        Have they finally shipped something that doesn't go titsup at the drop of a hat?

        1. L.B.
          Thumb Up

          I'm with you TeeCee

          Ever since AMD bought ATI, the drivers have been quite excellent, and better and less buggy than any I ever had on my old NV cards.

          They even started producing Linux drivers for those that care about that.

      2. Ammaross Danan
        Coat

        @Alan

        "...simply because I can expect driver updates on zero day release for major titiles, and, well just superb regular updates in-between"

        There's a problem with that. AMD releases gfx driver updates monthly (consistently), with beta builds available (and well announced) for zero-day games. Nvidia releases new drivers less frequently, about once every 2-3 months. Not sure on zero-day, but I'm sure they do "beta" builds for new games too. I've just been in the Radeon camp for a while.

  3. Anonymous Coward

    "But the governments of the world want exascale computers by 2018 to 2010.

    Oops."

    Oops indeed.

  4. Andrew Garrard

    Call me a pedant, but...

    "along came Quake, the first OpenGL application"

    Er. No. I couldn't actually tell you what the first OpenGL application was (probably a demo included in someone's implementation of the OpenGL 1.0 specification, which dates to 1992), but I'm damn sure that something was written before Quake (1996), if only because there were a load of SGI IRIS GL applications to port.

    It's true that Quake was the first application that popularized OpenGL support in consumer devices, though.

  5. Pperson

    Not really the programming

    > The problem with GPU coprocessors is that they are not as easy to program

    I have to disagree with that. Having coded a few different kinds of high-performance algorithms on GPUs, the actual programming is initially odd but you get used to it pretty quickly. To me, the problem is more fundamental: the pattern of memory access (coalesced) that is needed does not work for all (most?) things you want to make massively parallel - miss coalescence and your GPU spends most of its time waiting for memory. And unfortunately the 'L2' cache (shared mem - ~16KB) is too tiny to make a difference for pretty much anything but matrix ops or other highly localisable problems.
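
    A minimal CUDA sketch of the coalescence point, purely illustrative (the kernel and buffer names are made up, not anything from the article or the post): adjacent threads reading adjacent elements keep a warp's loads inside a few contiguous memory transactions, while a strided pattern scatters them and leaves the SM waiting on DRAM.

    // Illustrative only: compares a coalesced and a strided access pattern.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Coalesced: thread i reads element i, so a warp's 32 loads fall into a
    // handful of contiguous memory transactions.
    __global__ void scale_coalesced(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = 2.0f * in[i];
    }

    // Uncoalesced: thread i reads element (i * stride) % n, so neighbouring
    // threads hit addresses far apart and each load needs its own transaction.
    __global__ void scale_strided(const float *in, float *out, int n, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = 2.0f * in[((long long)i * stride) % n];
    }

    int main()
    {
        const int n = 1 << 22;               // 4M floats
        float *in = nullptr, *out = nullptr;
        cudaMalloc(&in, n * sizeof(float));
        cudaMalloc(&out, n * sizeof(float));
        cudaMemset(in, 0, n * sizeof(float));

        dim3 block(256), grid((n + block.x - 1) / block.x);
        scale_coalesced<<<grid, block>>>(in, out, n);    // memory-friendly
        scale_strided<<<grid, block>>>(in, out, n, 32);  // memory-bound crawl
        cudaDeviceSynchronize();

        printf("last CUDA error: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(in);
        cudaFree(out);
        return 0;
    }

    Profiling the two kernels (nvprof or Nsight) would typically show the strided version taking several times longer for the same arithmetic, which is the "waiting for memory" effect described above.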
