* Posts by Ben "HPC" Smith

2 posts • joined 5 Jul 2011

King K super: does it refute hybrid HPC model?

Ben "HPC" Smith

GPU computing seems to be the only path for next generation HPC

Most of the issues described in the HPC Wire article do not hold water. If I hadn’t rewritten my code multiple times already, and if I worried about every “what if”, I would still be running my programs on my old (very old) Commodore 64…. Hybrid computing (discrete GPUs or integrated GPUs) will happen if we want to continue to increase our HPC capability. Right now it is the best way to build supercomputers in terms of cost, performance and efficiency.
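To make the hybrid model concrete, here is a minimal CUDA sketch (my own illustration, not from the article) of the division of labour it implies: the CPU sets up the problem and orchestrates, while the GPU does the data-parallel heavy lifting, here a SAXPY kernel (y = a*x + y):

// Minimal hybrid CPU+GPU sketch: the host (CPU) prepares data and the
// device (GPU) runs the data-parallel SAXPY kernel y = a*x + y.
// Illustration only -- error handling trimmed for brevity.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // one element per GPU thread
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side setup (this part stays on the CPU).
    float *x = (float*)malloc(bytes), *y = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Copy to the GPU, launch the kernel, copy the result back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}

The code rewrites I mention above are mostly this kind of restructuring: expressing the hot loops as kernels so that thousands of GPU threads can chew on them at once.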

The new K Computer needs to demonstrate not only performance, but also cost/performance and the ability to compete beyond the top 10 supercomputers. If it does not go beyond the top 10, it is yet another one-shot, a second “Earth Simulator” kind of system.

Ben “HPC” Smith (http://hpc-opinion.blogspot.com)


Top500 founders talk big

Ben "HPC" Smith

My take on the Top500

In the June 2011 list (the 37th TOP500 List), for the first time all of the top 10 supercomputers demonstrated more than a petaflop of computing performance (more than a quadrillion calculations per second…). Here is my view on the top 10 systems (the truth, if you can handle the truth….).

Geography: the USA with 5 systems, Europe (France) with 1 system, Japan with 2 systems and China with 2 systems. Interestingly enough, in the top 5, one system is from the US and 4 are from Asia. Is Asia taking the lead in HPC? Or will we see an increase in funding to enhance HPC development in the US?

Vendors: Fujitsu, NUDT, Cray (3 systems), Dawning, HP, SGI, Bull and IBM. Complete diversity.

CPUs: 5 systems use Intel, 3 systems use AMD, 1 system uses Sparc and 1 system uses Power. 80% use commodity x86 solutions. The new appearance of Sparc in the top 10 is due to the new system made by Fujitsu. Some folks see that as a second Earth Simulator: a nice demonstration of capability, with not much spread beyond that.

Accelerators: 5 systems use accelerators (GPGPU, Cell) to achieve the desired performance. This is an interesting trend, driven by the compute density and economic efficiency of the GPGPUs. My prediction is that more GPGPUs will be used (that is a safe bet…. ;-) ), and we will definitely find them integrated into off-the-shelf CPUs in the not-too-distant future.

Interconnect: 5 systems use Mellanox InfiniBand, 3 systems use Cray’s proprietary interconnect, 1 system uses NUDT’s proprietary interconnect and 1 system uses Fujitsu’s proprietary Tofu interconnect. Looking at previous lists, InfiniBand as a standard has gained momentum in the top 10 systems: from 3 systems in the top 10, to 4 systems (last list), to 5 systems in the current list. A win for standards-based solutions is a win for all of us. Most high-performance computing systems are based on standard solutions (there are many more than 500 HPC systems in the world, you know….), so development around standard solutions at the high end of HPC brings better capabilities and feature sets to the rest of the HPC arena.

For more info, you are welcome to check my blog - HPC-Opinion - http://hpc-opinion.blogspot.com/.

Ben

