
Back to article: High performance for the masses

High-Performance Computing (HPC) has traditionally been seen as the domain of the über-specialist. It’s as close as the IT industry will ever get to “2 Fast 2 Furious” – gangs of highly technical experts pushing their custom-built computers to the limit, aiming to win that ultimate prize, a place in the world …

COMMENTS


May I be the first to say...

...Hi Matt!

I await your long rant about how this survey in no way reflects what's really happening in the industry. (Lines up well with everything I've seen and heard, though.)


System Z/S390/S360 CPUs

It is unfortunate that this article mixes up two application fields that IMO should be separate:

1.) Mainframes: high-performance transaction and batch processing, which is traditionally the domain of the mainframe with its System z (earlier known as S/390 or S/360) instruction set. In terms of SPECmarks, I would guess these CPUs are in line with current x86 for single-threaded performance, but IBM is too shy to publish SPECmarks. Banks, insurance companies, airlines and big corporations use these machines. Cobol and Java are the favourite languages.

The biggest machines have dozens of processors, which are interconnected in an SMP fashion.

The article fails to mention the S/390 CPU architecture at all.

2.) Specialised high-performance computing for scientific, engineering and other "number-crunching" applications. That, by my definition, is "HPC". S/390 never excelled in this field, as S/390 machines are designed for maximum I/O and transaction performance.

Cray vector machines like the X-MP or the NEC SX-3 were the major players. Users were mostly physicists or engineers designing weapons, engines, cars and planes, or simulating the weather. Automatically vectorising Fortran was the dominant programming language.

With massively parallel clusters of cheap commodity boxes this has changed dramatically, and many more people, including the finance sector, now use these machines. They use Fortran, C++ and MPI to pass messages quickly between CPUs, typically over a high-speed interconnect rather than shared memory (even though there are NUMA machines that look like an SMP to the programmer). Actually pretty much like the unsuccessful Transputer.
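
For anyone unfamiliar with that programming model, here is a minimal sketch of the MPI message-passing style in C++, assuming a standard MPI installation; the partial-sum computation, rank numbering and variable names are purely illustrative, not taken from any particular HPC code:

// Illustrative sketch: each rank computes a local partial result and
// explicit messages (not shared memory) carry the data back to rank 0.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes in total?

    // Each rank works on its own slice of the problem (dummy value here).
    double partial = static_cast<double>(rank + 1);

    if (rank != 0) {
        // Worker ranks send their partial result to rank 0 as a message.
        MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        // Rank 0 receives one message from every other rank and sums them.
        double total = partial;
        for (int src = 1; src < size; ++src) {
            double incoming = 0.0;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += incoming;
        }
        std::printf("total = %f\n", total);
    }

    MPI_Finalize();
    return 0;
}

Compiled with mpicxx and launched across nodes with mpirun, every byte exchanged travels over the interconnect, which is exactly the distinction from an SMP's shared memory that the comment above draws.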

This article could have done much more to explain these differences (mainframe vs HPC) and to elaborate on the types of machines and their application fields.

Tux, as most current HPC systems run on Linux. And IBM pushes it for its mainframes.


Nice article, shame about the graphs

3D pie charts? Seriously, it's as though Tufte never existed.

http://www.edwardtufte.com/tufte/books_vdqi


HPC and business analytics

I'm not sure the article is mixing them up at all. In context, I assume they are asking about HPC setups for business analytics -- which is, when broken down, pulling a bunch of junk out of a database and doing number crunching on it. This would make for quite the interesting race -- an HPC cluster could be built that is plenty fast, but the 390 would probably clean the cluster's clock at actual database operations. It seems probable to me that the best performance would come from an unholy meld of an off-the-shelf HPC system pulling data off a System z.

I do agree that a short description of mainframe, SMP, cluster and vector-processor systems would have been instructive.

This topic is closed for new posts.