The more things stay the same, the more things are likely to change, and clear evidence of that could be seen today at the announcement of the latest Top500 Supercomputers league tables at the International Supercomputer Conference in Dresden. The tables, compiled every six months, show the fastest-performing systems installed …
What about the memory?
All the really big HPC systems in the Top500 have only a couple of cores per node. Beyond that, memory bandwidth saturates. For HPC-type applications, most of the parallelism comes from large node counts. Unfortunately, using multiple nodes is much harder than multi-threading.
In my opinion, very large core counts will only work for niche applications unless we start to see some innovation in memory system design, but the memory manufacturers seem interested only in making larger chips of the same old basic types rather than investing in significantly new technologies. Whatever happened to Rambus?
Now That You've Got All Those Cores...
.. beware geeks bearing parallelizing compilers. They're still generally a poor substitute for talented software engineers who know parallel algorithms and can cut good code.
Vendors will nevertheless swear your developers won't have to expand their body of knowledge (much). Just like "if you know C, you know C++." Only more so.