Nvidia wants to give you between $0.5m and $5m (£0.36m to £3.6m), plus partner with you on marketing, development, distribution, and more. All you need to do is come up with great software that exploits the processing power of an Nvidia graphics processing unit (GPU). On Tuesday, Nvidia announced its new GPU Ventures Program and …
A Sign of Weakness
Not that this is a bad move by Nvidia, but it leads me to conclude that neither CUDA, nor OpenCL, nor GPUs are the answer to the parallel programming crisis. If monetary incentives are needed to motivate people to write applications for a processor, my bet is that something is wrong with that processor. What I mean is that something about GPUs prevents them from being an ideal solution to general-purpose parallel programming. That something is obvious: they are not universal, and they are a pain in the arse to program.
The powers that be at Nvidia realize that they are in trouble. It is obvious that GPUs are not the answer, and traditional CPUs are worse. We need a new programming paradigm and a new processor architecture to support that paradigm. Nvidia should continue to milk as much money as possible from its current technology, but it should invest its R&D money in something else. One thing is certain: Nvidia cannot say that nobody warned them.
How to Solve the Parallel Programming Crisis:
A computer with a soul = bad computer
Since when have computers been meant to be unstable and incorporeal, temperamental and prone to superstition?
I sincerely hope I can buy my next computer from someone who defines their business model on things that are well defined and observable rather than the First Church of Glide.
...sorry, thought it solved all the world's computing needs... or so some would make you believe.
Parallel programming crisis-schmisis
Parallelism exists in several forms - the most important being MIMD and SIMD.
MIMD can be a right bastard of a wanky one-eyed panicking Scottish prime minister in a financial meltdown to deal with -- however you do get greater control over how your parallel software beast scales.
SIMD is much, much easier to deal with; it's almost serial programming. But in general it does not scale as well - unless there are some clever compiler tricks, or, as I suspect, inefficiency through overcomplication becomes an issue with your MIMD program.
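The SIMD/MIMD split above can be sketched in plain Python (a hedged illustration of the two models, not GPU code): a NumPy vectorised operation applies one instruction across a whole array (SIMD-style, "almost serial programming"), while a thread pool running different functions on different data is closer to MIMD, with the coordination left to the programmer.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(1_000_000, dtype=np.float64)

# SIMD-style: one operation, many data elements.
# NumPy applies the multiply across the whole array in a single vectorised call.
simd_result = data * 2.0

# MIMD-style: multiple independent instruction streams, each doing
# different work on different data; the programmer coordinates them.
def task_sum(chunk):
    return chunk.sum()

def task_max(chunk):
    return chunk.max()

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(task_sum, data[:500_000])
    f2 = pool.submit(task_max, data[500_000:])
    mimd_results = (f1.result(), f2.result())
```

Note how the SIMD-style line reads like serial code, while the MIMD-style version needs task decomposition and result collection even for this trivial workload - which is the commenter's point about effort versus gain.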
So where is this all going?
- Firstly, SIMD will eventually be the way to go for parallel programming. MIMD often requires considerable effort for too little gain. As SIMD compiler optimisation improves, the performance tradeoff will shrink.
- Secondly, CUDA and the like work perfectly well in this paradigm.
- Thirdly, sorry El Reg, but OpenCL isn't the standard for number crunchers - GPU number crunchers only, perhaps. Most number crunchers are computational scientists, and many of them use, or use applications that use, libraries such as BLAS and LAPACK. MATLAB, PETSc, and so on - they all use them.
As far as I know, Nvidia are the only ones going after this third, low-hanging fruit; once they've got a full, double-precision implementation within CUBLAS, a lot of people beyond university computer science departments will be very interested.
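To illustrate the BLAS point: most scientific code never targets the GPU directly; it calls a standard routine such as DGEMM (double-precision general matrix multiply), and whichever BLAS library is linked in does the work. A minimal sketch via SciPy's BLAS bindings (assuming SciPy is installed; the call is the same whatever implementation sits underneath):

```python
import numpy as np
from scipy.linalg.blas import dgemm  # double-precision GEMM from the linked BLAS

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# C = alpha * A @ B. The caller doesn't care whether the BLAS underneath
# is the reference implementation, OpenBLAS, MKL, or a GPU-backed one:
# ship a drop-in double-precision BLAS and existing code benefits unchanged.
c = dgemm(alpha=1.0, a=a, b=b)
```

This is why a full double-precision CUBLAS matters: libraries and applications built on the BLAS interface could pick up GPU acceleration without being rewritten in CUDA.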
The dollar's fairly dropped. $0.5m = 36p ?