When there's a gold rush on, sell shovels
Rent shovels to prospectors who don't have $150,000
Today at Nvidia’s GPU Technology Conference in San Jose, California, CEO Jensen Huang paraded a bunch of forthcoming gear – all aimed at expanding the graphics chip giant’s reach in AI. Or in other words, stealing a march on Intel's machine learning efforts: the x86 goliath is desperately bent on stopping Nvidia and others …
Either I missed something, or someone has been hanging around Darwin or Canberra, where blocking traffic is a mandatory daily performance. A few traffic lights also refuse to acknowledge motorcycles.
Anyway, why a new GPU for self-driving cars? In my bitter experience, most of them on Barton already self-drive by just following the ruts with a brick on the accelerator.
"Nvidia is teasing a new GPU Cloud service that will enter public beta in the third quarter of this year. Part of this is a software stack that runs on PCs, workstations and servers, and assigns workloads to local GPUs..."
Is this just me being a bit of a conspiracy theorist, or does that sound like it will essentially become a bot network (that you likely have to opt out of) which uses your 'idle' cycles and bandwidth as part of the privilege of buying an Nvidia GPU?
It's been common practice for CGI rendering workloads (which are well suited to being distributed across GPU/CPU resources) for a few years now - you install client software on machines on your local network to use their CPUs and GPUs to get the job done quicker.
For example, Keyshot is a real time ray-tracing program. Input a 3D model and assign materials and lighting, and the output is a photorealistic image:
KeyShot Network Rendering allows you to take advantage of your network’s computer resources for rendering images, animations, and KeyShotVR’s. After the simple installation process, any user with KeyShot can send a “job” to be rendered on the network. The jobs are organized into a queue that all users can view. Jobs can also be sent from the internal KeyShot queue to network rendering.
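KeyShot's actual network protocol is proprietary, but the underlying pattern - a shared job queue drained by worker machines - can be sketched in a few lines. All the names below are invented for illustration; this just shows the queue-and-workers idea, not KeyShot's implementation:

```python
import queue
import threading

def render_node(jobs: queue.Queue, results: list) -> None:
    """Simulated render node: pulls jobs off the shared queue until told to stop."""
    while True:
        job = jobs.get()
        if job is None:                     # sentinel: shut this worker down
            break
        results.append(f"rendered {job}")   # stand-in for the real GPU render work
        jobs.task_done()

jobs: queue.Queue = queue.Queue()
results: list = []

# Three threads standing in for "machines on the local network"
workers = [threading.Thread(target=render_node, args=(jobs, results))
           for _ in range(3)]
for w in workers:
    w.start()

# Any user can submit a "job"; everyone drains the same queue
for frame in ["frame_001", "frame_002", "frame_003", "frame_004"]:
    jobs.put(frame)

jobs.join()              # wait for every queued frame to finish
for _ in workers:
    jobs.put(None)       # one sentinel per worker
for w in workers:
    w.join()

print(sorted(results))
```

The key design point is that submitters and workers only share the queue - adding another machine just means starting another worker against it.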
I didn't read the article as meaning that the Nvidia cloud will use *your* compute resources, a la Seti@home or Folding@home :)
Er, doesn't the F in teraflops stand for 'floating point'? Or has everyone been talking about flops for so long they've slowly forgotten what the term means? (Distinguishing so clearly between integer and floating point performance makes less sense now than in the '90s)
I guess they could mean 8-bit floats. Not sure how much use they would be, although Wikipedia says the following about "minifloats":
"In computing, minifloats are floating point values represented with very few bits. Predictably, they are not well suited for general purpose numerical calculations. They are used for special purposes most often in computer graphics where iterations are small and precision has aesthetic effects."
Yeah, it was a long day and my brain wasn't fully firing. Nvidia quoted 120 "Tensor" TFLOPS (see my comment below), which we took to be marketing spiel for INT8. Duh, INT8 is integer, so TFLOPS makes no sense. I've taken the stat out because Nvidia doesn't, TTBOMK, define exactly what a "Tensor" TFLOPS is.
Edit: See article update.
It's actually 120 "Tensor" TFLOPS, which we took to mean INT8, but Nvidia claims it is not - so we've taken it out. Last time we asked, Nvidia wouldn't define what a "Tensor" TFLOPS is, so we've axed that stat and stuck with industry-standard metrics (FP64 and FP32).
We've asked Nv to clarify what a "Tensor" TFLOPS is. If they give us a clear explanation, we'll update the story.
Edit: See article update.
Biting the hand that feeds IT © 1998–2019