The more things stay the same, the more things are likely to change, and clear evidence of that could be seen today at the announcement of the latest Top500 Supercomputers league tables at the International Supercomputer Conference in Dresden. The tables, compiled every six months, show the fastest-performing systems installed …
What about the memory
All the really big HPC systems on the Top500 have only a couple of cores per node. Beyond that, memory bandwidth saturates. For HPC-type applications, most of the parallelism comes from large node counts. Unfortunately, using multiple nodes is much harder than multi-threading.
In my opinion, very large core counts will only work for niche applications unless we start to see some innovation in memory system design, but the memory manufacturers seem interested only in making larger chips of the same old basic types rather than investing in significantly new technologies. Whatever happened to Rambus?
Now That You've Got All Those Cores....
.. beware geeks bearing parallelizing compilers. They're still generally a poor substitute for talented software engineers who know parallel algorithms and can cut good code.
Vendors will nevertheless swear your developers won't have to expand their body of knowledge (much). Just like "if you know C, you know C++." Only more so.