Intel doesn't seem to be in a hurry to get its own line of Knights coprocessors for HPC applications into the field, and maybe it doesn't have to be. To be sure, Nvidia is stealing most of the oxygen in the conversation about coprocessors for accelerating supercomputer applications with its Tesla family of GPU accelerators. Most …
Less Power Scotty!
No one seems to be mentioning power consumption, perhaps as if 22nm will solve that problem. There's a lot of work ahead if power projections haven't even been talked about.
With all their extensive CPU experience and immense financial clout, you'd have thought Intel would have come up with a half-decent GPU by now. However, they seem to be stuck in the game of pushing a god-awful instruction set and basic architecture as fast as possible. Not really surprising given the pervasiveness of this architecture, but disappointing nonetheless.
Pushing such a horrible and inefficient instruction set as x86 at the HPC crowd shouldn't seem to be much of a starter, even with the touted "ease" of writing code for it. I'm sure it'll take off in part due to this, but when you're in the business of trying to eke every instruction you can out of a chipset, employing coders who can code for it — compared to stock x86 coders who can just about do something other than click a mouse button — doesn't seem the most efficient way to tackle the problem, and even in the short term the cost savings won't be great. Sure, x86 co-pros will make some ground, and as costs go down the usual mainstream trade-off of coder cost vs hardware performance will take its toll, but for the real high-end systems still nothing will beat properly coded systems. But then how many (G/Co)CPU cycles do you really want to spend shifting a very limited set of registers around when they should be doing something useful instead?
All this, of course, when Intel produces some of the best code optimisation and compiler tools around...
So it will be...
Cool in a Crysis
yep, leaving now...
A few points that may show my ignorance...
Doesn't this sound an awful lot like the Cell used in PS3s? (And IBM blades, but shush.)
Is Nvidia's dominance in the HPC market down to CUDA being around for longer than OpenCL? (Given AMD/ATI's current price/performance advantage.) I suppose AMD's intent is that everyone starts calling them APUs or whatever.
As for x86 still being around, that was down to the customers (Itanic being a prime example).
Unless Chipzilla is going to buy ARM and go RISC in a big way... but given the money they're throwing at Atom and the like, that is unlikely. (And, as already mentioned on the Reg, Intel doesn't want to licence chips.)