The more things stay the same, the more things are likely to change, and clear evidence of that could be seen today at the announcement of the latest Top500 Supercomputers league tables at the International Supercomputer Conference in Dresden. The tables, compiled every six months, show the fastest-performing systems installed …
What about the memory?
All the really big HPC systems on the Top500 have only a couple of cores per node. Beyond that, memory bandwidth saturates. For HPC-type applications, most of the parallelism comes from large node counts. Unfortunately, using multiple nodes is much harder than multi-threading.
In my opinion, very large core counts will only work for niche applications unless we start to see some innovation in memory system design, but the memory manufacturers seem interested only in making larger chips of the same old basic types rather than investing in significantly new technologies. Whatever happened to Rambus?
Now That You've Got All Those Cores...
... beware geeks bearing parallelizing compilers. They're still generally a poor substitute for talented software engineers who know parallel algorithms and can cut good code.
Vendors will nevertheless swear your developers won't have to expand their body of knowledge (much). Just like "if you know C, you know C++." Only more so.