Your 90-second guide to new stuff Nvidia teased today: Volta V100 chips, a GPU cloud, and more

Today at Nvidia’s GPU Technology Conference in San Jose, California, CEO Jensen Huang paraded a bunch of forthcoming gear – all aimed at expanding the graphics chip giant’s reach in AI. Or in other words, stealing a march on Intel's machine learning efforts: the x86 goliath is desperately bent on stopping Nvidia and others …

  1. Anonymous Coward
    Anonymous Coward

    When there's a gold rush on, sell shovels

    Rent shovels to prospectors who don't have $150,000

  2. kcblo

It sounds to me that the advent of the exaflop supercomputer is sooner than expected.

  3. Denarius
    Trollface

    stopping driver accelerating at green light ?

    Either I missed something or someone has been hanging around Darwin or Canberra where blocking traffic is a mandatory daily performance. A few traffic lights refuse to acknowledge motorcycles also.

Anyway, why a new GPU for self-driving cars? Most of them on Barton already self-drive by just following the ruts with a brick on the accelerator, from my bitter experience.

  4. Solarflare

    "Nvidia is teasing a new GPU Cloud service that will enter public beta in the third quarter of this year. Part of this is a software stack that runs on PCs, workstations and servers, and assigns workloads to local GPUs..."

    Is this just me being a bit of a conspiracy theorist, or does that sound like it will become essentially a bot network (that you likely have to opt out of) which uses your 'idle' cycles and bandwidth, as part of the privilege of buying an nVidia gpu?

    1. Dave 126 Silver badge

It's been common practice for CGI rendering workloads (which are well suited to being distributed across GPU/CPU resources) for a few years now - you install client software on machines on your local network to use their CPUs and GPUs to get the job done quicker.

      For example, Keyshot is a real time ray-tracing program. Input a 3D model and assign materials and lighting, and the output is a photorealistic image:

      KeyShot Network Rendering allows you to take advantage of your network’s computer resources for rendering images, animations, and KeyShotVR’s. After the simple installation process, any user with KeyShot can send a “job” to be rendered on the network. The jobs are organized into a queue that all users can view. Jobs can also be sent from the internal KeyShot queue to network rendering.

      - https://www.keyshot.com/features/network-rendering/

I didn't read the article as meaning that the nVidia cloud will use *your* compute resources, a la Seti@home or Folding@home :)

  5. leon clarke

    120 teraflops using INT8

    Er, doesn't the F in teraflops stand for 'floating point'? Or has everyone been talking about flops for so long they've slowly forgotten what the term means? (Distinguishing so clearly between integer and floating point performance makes less sense now than in the '90s)

    1. Anonymous Coward
      Anonymous Coward

      Re: 120 teraflops using INT8

I guess they could mean 8-bit floats. Not sure how much use they would be, although Wikipedia says the following on "minifloats":

      "In computing, minifloats are floating point values represented with very few bits. Predictably, they are not well suited for general purpose numerical calculations. They are used for special purposes most often in computer graphics where iterations are small and precision has aesthetic effects."

    2. diodesign (Written by Reg staff) Silver badge

      Re: 120 teraflops using INT8

      Yeah, it was a long day and brain wasn't fully firing. Nvidia quoted 120 "Tensor" TFLOPS (see my comment below), which we took to be marketing spiel for INT8. Duh, INT8 is integer so TFLOPS makes no sense. I've taken out the stat because Nv doesn't, TTBOMK, define exactly what a "Tensor" TFLOPS is.

      Edit: See article update.

      C.

      1. leon clarke

        Re: 120 teraflops using INT8

        Thank you. My faith in flops is restored!

  6. Alan Johnson

    INT8 performance in FLOPS

Is it just me, or should this be MIPS or perhaps MMAC/s rather than FLOPS? I kept trying to figure out if INT8 was a new acronym for something other than 8-bit integers, but no, that is what it is.

    1. diodesign (Written by Reg staff) Silver badge

      Re: INT8 performance in FLOPS

      It's actually 120 "Tensor" TFLOPS which we took to mean INT8, but Nvidia claims it is not - so we've taken it out. Last time we asked, Nv wouldn't define what a "Tensor" TFLOPS is, so we've axed that stat and stuck with industry standard metrics (64FP and 32FP).

      We've asked Nv to clarify what a "Tensor" TFLOPS is. If they give us a clear explanation, we'll update the story.

      Edit: See article update.

      C.

  7. tirk
    Coat

    Where's Elon Musk?

    The article mentions Tesla and self driving cars after all?

  8. Anonymous Coward
    Anonymous Coward

    "and represents users as floating robot torsos"

    That's great for those whose focus is that part of the anatomy. What about bottom-men? Or women.

    1. quxinot

      Re: "and represents users as floating robot torsos"

      Sorry, that isn't this part of the internet.

      ...usually.
