Top 500 supers - world yawns at petaflops

The annual International Supercomputing Conference kicked off this morning in Hamburg, Germany, with the announcement of the 33rd edition of the Top 500 supercomputer rankings. While petaflops-scale machines are far from normal, they soon will be. Not surprisingly, HPC vendors and academics are gearing up to try to push …

COMMENTS

This topic is closed for new posts.
  1. Fractured Cell
    Coat

    But...

    The most important issue here today:

    Can /ANY/ of them run Crysis on full settings?

    I thought not.

    (yeah, I'll just get my coat, ok, ok!)

  2. Pete 2 Silver badge

    But how much useful work do they do?

    Mine's bigger than yours!

    No doubt the sys-admins, managers and owning companies of all these exotically named 'pooters have a lovely time measuring each other up. With kudos and bragging rights being distributed accordingly.

    The thing is, I can't help wondering how many of these giga, mega, tera flops (or even just "normal", non-floating point operations) actually go towards the numerical results these puppies presumably spend their days churning out. How many are consumed by the O/S, just keeping track of who's running which thread this precise microsecond, or which processor should handle the next interrupt from which piece of hardware? Or simply unwinding stacks from all the subroutine / procedure / method calls (depending on exactly which version of object-orientated FORTRAN-4 your application was coded in) that your highly elegant but operationally inefficient massively parallel visualisation uses, just so "the boss" can see a pretty screen to impress all his/her other bosses at the next measuring-up day.

    We all know that software gets slower quicker than hardware gets faster. So are these huge numbers of processors, the arcane architectures needed to squeeze utility from them, the operating systems they run and the highly optimising but too-hard-for-humans-to-understand compilers really producing the goods? After all, just how fast do you need to execute a gettimeofday() call?

  3. Steve Loughran

    Storage

    What about storage capacity? Pure CPU speed is relevant for some problems, but petabytes of storage are starting to matter too...

  4. Toastan Buttar
    Happy

    What do people actually do with these machines ?

    I'm intrigued to know what sort of projects require such extreme number-crunching. What did researchers in these fields do previously? Does increased computational capacity increase the resolution and/or accuracy of predictions?

    And most importantly, will we ever get a decent weather forecast?

  5. Anonymous Coward
    Heart

    @Steve Loughran

    The Top 500 list normally also doubles as a BlueArc client list. Their hardware-accelerated NAS is used by a lot of the HPC boys who want massively scalable and really fast storage.

  6. northern monkey

    @Toastan Buttar

    Well, I know the FZJ, LLNL and Argonne machines are used (though not primarily) for lattice QCD calculations. It's pretty annoying being here in the UK, since the STFC funding debacle has left us with an ageing (and crap) computer which we can't even use because we don't have enough money to pay Scottish Power!!

    Increased computational capacity obviously increases the range and accuracy of what we can do - and academic institutions had computers long before lattice QCD had been thought of (and a fair bit before QCD itself had been thought of!)

  7. Annihilator
    Heart

    "global thermonuclear war"

    Greetings Professor Falken

    Hello Joshua

    How about a nice game of chess?

    Forgotten how much I love that

  8. Tim Spence
    Thumb Down

    Zzzzzz

    Supercomputers used to be cool and interesting, but now it's simply about how much money you have, and therefore how many processors you can afford to plug in. There's no technical innovation in that.

    It strikes me that someone might have a couple of supercomputers sitting next to each other, connect them together, and call the result a completely new twice-the-power supercomputer. No, it's just two supercomputers talking to each other.

  9. Boris the Cockroach Silver badge
    Thumb Up

    RE: Toastan Buttar

    "And most importantly, will we ever get a decent weather forecast?"

    Actually, what you can do as the speeds increase is subdivide the atmosphere into smaller and smaller cubes, each with its own wind speed, air temperature and humidity, and then model the atmosphere in greater and greater detail in order to produce a more accurate forecast.

    Then a butterfly flaps its wings.....

    PS Modelling thermonuclear explosions seems far more fun, though.
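
    To make the "smaller and smaller cubes" idea concrete, here is a deliberately crude Python/NumPy sketch of an atmosphere chopped into boxes, each carrying a few state variables, with a trivial diffusion-style update. The grid size, variables and update rule are invented purely for illustration and come from no real forecast model; the point is only that refining the boxes multiplies the work.

        import numpy as np

        # Toy "atmosphere in boxes": each cell carries temperature, humidity
        # and a wind vector. Illustrative only - not a real forecast model.
        N = 64                            # cells per edge; real models use far more
        temp = np.full((N, N, N), 288.0)  # kelvin
        humid = np.full((N, N, N), 0.5)   # relative humidity
        wind = np.zeros((N, N, N, 3))     # m/s, (u, v, w)

        def smooth(field):
            """Crude diffusion step: average each interior cell with its 6 neighbours."""
            out = field.copy()
            out[1:-1, 1:-1, 1:-1] = (
                field[:-2, 1:-1, 1:-1] + field[2:, 1:-1, 1:-1] +
                field[1:-1, :-2, 1:-1] + field[1:-1, 2:, 1:-1] +
                field[1:-1, 1:-1, :-2] + field[1:-1, 1:-1, 2:]
            ) / 6.0
            return out

        for step in range(10):    # halving the cell size means 8x the cells
            temp = smooth(temp)   # (and usually a smaller time step on top)
            humid = smooth(humid)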

  10. Anonymous Coward
    Coat

    @But how much useful work do they do?

    A lot. Most of the code that runs on Blue Genes is highly optimised (indeed it's sometimes a condition of getting cycles!), to the level that chunks of it are written in assembler.

    Plus IBM's C compiler for the Blue Gene works an absolute treat.

  11. James Pickett

    Forecast

    Where's the new Met Office computer in all this? It uses enough power (1.2MW, allegedly) and taxpayers' money...

  12. frank ly

    @northern monkey and Boris

    "Increased computational capacity obviously increases the range and accuracy of what we can do.."

    "...model the atmosphere in greater and greater detail in order to produce a more accurate... "

    Strictly speaking, you could do all this on my laptop, but it would need more memory and it would take thousands of years to do it. It would also be ridiculously expensive to support such an effort.

    The point about supercomputers is that they give you these high resolution and large dataset results in a usefully short time. Because of this, many computing activities that have obvious 'use and value' can now be performed.

  13. Annihilator

    Speed is a by-product

    The goal of these supercomputers is generally to solve problems that lend themselves to a massively parallel approach. A more accurate way to view a supercomputer's performance is to see a petaflop-capable rig as what it really is: roughly a thousand teraflop rigs operating in harmony.

    The harmony is the challenge. Tim Spence, you're almost right. If the two supercomputers can synchronise and talk to each other sufficiently quickly, then you can consider them the same one.

    The scenarios? As Boris points out, subdividing the atmosphere is one option. Convex hull algorithms are another; nuclear reactions another still. It all works by sub-dividing the problem into the smallest parts possible. You're trying to emulate the "real world" computer - which is how it can feasibly be thought of. The challenge is taking the outputs of each component part and feeding them into the others without lag. Doing a calculation on one sub-part of the problem iteratively is nearly pointless - you need to calculate all the parts simultaneously.
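
    A sketch of the "keep the parts in harmony" point: split one problem across notional workers, and after every local step copy the edge values between neighbours before anyone moves on. Real codes do this with MPI halo exchanges over the interconnect; the single-process Python below only mimics the pattern, and all the numbers are made up.

        import numpy as np

        # One big 1-D field, notionally owned by 8 workers in equal chunks.
        field = np.sin(np.linspace(0.0, 6.28, 1024))
        chunks = np.array_split(field, 8)

        def local_step(chunk, left_ghost, right_ghost):
            """Smooth one chunk, using boundary values borrowed from its neighbours."""
            padded = np.concatenate(([left_ghost], chunk, [right_ghost]))
            return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

        for it in range(100):
            # "Halo exchange": every worker learns its neighbours' current edge
            # values first. On a real machine this is the network traffic that
            # has to keep up, or the whole ensemble stalls.
            ghosts = [(chunks[i - 1][-1] if i > 0 else chunks[0][0],
                       chunks[i + 1][0] if i < len(chunks) - 1 else chunks[-1][-1])
                      for i in range(len(chunks))]
            chunks = [local_step(c, lg, rg) for c, (lg, rg) in zip(chunks, ghosts)]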

  14. John Ryland
    Gates Halo

    Private ownership

    Does anyone have any idea about supercomputers owned by an individual? Would any of them even register on that list? Home supercomputing, anyone?

  15. Toastan Buttar
    Boffin

    Lattice QCD

    That looks like a fascinating field of study. Must investigate it more deeply. Thanks, Northern Monkey.

  16. Anonymous Coward
    Anonymous Coward

    @But how much useful work do they do?

    "The thing is, I can't help wondering how many of these giga, mega, tera flops (or even just "normal", non-floating point operations) actually go towards contributing to the numerical results these puppies presumably spend their days churning out. How many of these are consumed by the O/S, just keeping track of who's running which thread this precise microsecond, or which processor should handle the next interrupt from which piece of hardware."

    A cluster (using commodity hardware and software, at least) running an application designed for parallelism delegates high-level tasks from a master node to slave nodes as appropriate; because the slaves run their own local OS, they manage low-level stuff like interrupts themselves. For example, a master node might send high-level messages instructing 100 nodes to execute a process against 100 different data sets, then leave them to it until they return a response. Of course, it depends on the application design ;)
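
    A minimal master/worker sketch of the delegation pattern described above, using Python's multiprocessing pool as a stand-in for a real cluster scheduler; crunch() and the data-set count are invented for illustration.

        from multiprocessing import Pool

        def crunch(dataset_id):
            """Stand-in for the work each node does on its own data set,
            under its own local OS, without bothering the master."""
            return dataset_id, sum(i * i for i in range(10_000))

        if __name__ == "__main__":
            # The "master" hands out 100 data sets and simply waits for results;
            # interrupts, scheduling and the like are each worker's own problem.
            with Pool(processes=8) as pool:
                results = pool.map(crunch, range(100))
            print(len(results), "results collected by the master")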

  17. Dale 3

    Is 500 enough?

    Why the top 500? If it isn't fast enough to be in the top 10, nobody cares. In fact, anything beyond the top 3 is probably not all that interesting any more.

  18. The Vociferous Time Waster
    Heart

    @James Pickett

    Sure that's not one point twenty one gigawatts?

  19. This post has been deleted by its author

  20. Chris Simpson

    F@H

    What about Folding@home? Granted, it's not one single computer, but it's 5+ petaflops.

  21. Anonymous Coward
    Boffin

    LINPACK?

    Cough cough, splutter, eyes bulging out of your sockets in Beano / Dandy style - that's a name I've not heard for a while! I thought we were now all brewing with a heady mixture of BLAS, LAPACK, and PETSc.

    More is always better, to those bang into their number crunchery. We computational fluid dynamicists need all the poke we can get.

  22. Anonymous Coward
    Heart

    Suck it, HPaq!

    "In fact, it was called Compaq when it was near the top of the list, so that doesn't really count."

    Gee, that just happens to be the same time they stabbed Alpha in the back. That must be a coincidence, right?

    Suck it, Itanium Boy!

  23. Don Mitchell

    Computer vs. Computers

    These are awesome machines, but the distinction between a "supercomputer" and a "server farm" is somewhat blurry these days. These new machines are mostly big arrays of PCs hanging off a high-speed interconnect. It is also the case that only very particular numerical problems are easily parallelized in a way that machines like this can crunch on without slamming into Amdahl's Law like a bug against a windshield.

    In many ways, it is more impressive that programs like Oracle's DB, MS SQL Server or the Red Dog kernel can achieve parallel acceleration across 128 cores than it is that you can parallelize giant matrix calculations. It's a pity that so much of that computer science is still trade secret.
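
    For reference, the Amdahl's Law being alluded to is just this bound: if a fraction p of the work parallelises, N processors can never buy a speedup beyond 1 / ((1 - p) + p/N). A one-liner to play with - the 95% figure below is made up for illustration:

        def amdahl_speedup(p, n):
            """Upper bound on speedup with parallel fraction p on n processors."""
            return 1.0 / ((1.0 - p) + p / n)

        # Even with 95% of the work parallelisable, 1,024 cores top out near 20x.
        print(round(amdahl_speedup(0.95, 1024), 1))   # -> 19.6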

  24. Mike Powers
    Coat

    Simulations used for nuclear testing

    Nuclear testing is still going on, in fact; it's just done at "sub-critical" levels, so it doesn't actually go bang. It just makes a whole crapload of radiation and heat; this output is recorded and fed into these tremendous superbrains, which run simulations to work out exactly how all the particles were bouncing around. They then turn around and use THOSE results to work out what a truly critical reaction would have done (that is, exactly how earth-shattering would the kaboom be?) This is how nuclear testing is done in a world where nuclear testing has been banned by treaty.

    *****

    Kind of ironic that the two most powerful computers in the world, both in Germany, have "Ju" (JEW) at the start of their names. (Okay, I know, I know! Hey folks, you were great, I'll be here all week tip your servers try the veal! Mine's the one with the SECURITY CLEARANCE REVOKED papers in the pocket.)

  25. Joe Futrelle
    Boffin

    speaking of weather forecasting

    http://weather.ou.edu/faculty_details.php?FacID=28

    "the detection of hazardous local weather using dynamically adaptive radars" means that when coarse-scale storms are detected, data gets fetched from higher-resolution radar closer to the ground (e.g., mounting doppler radar sensors on cell phone towers) which can increase forecast resolution down to about a city block at least six hours in advance--but only with fast enough computers. As a poster upthread said, we still need to build out denser sensor networks before we can get the data that these forecasting techniques require.

    I talked to a computational fluid dynamics person and he said that CFD researchers were essentially left with nothing to do in the 1990s because supercomputers had become fast enough to solve most of their problems for non-turbulent flows. But computers still aren't fast enough to do useful turbulence simulation, so many of the researchers left CFD to work in molecular dynamics. If computers become fast enough to solve real turbulence problems, it will generate a lot of new and useful results, as well as providing insight into one of the few remaining unsolved problems in classical physics.

  26. Anonymous Coward
    Welcome

    @Pete 2

    "We all know that software gets slower quicker than hardware gets faster."

    Yeah, that's why video games now look worse than Pac-Man, and Photoshop is slower than Deluxe Paint ][ at loading a 20MB image.

    Oh, and I, for one, welcome the new icons.

  27. Charles Manning

    @ Don Mitchel

    Server farm != super computer.

    Server farms distribute the load after which each server is reasonably independent.

    A super computer still tries to act and work as a single machine, crunching on a single problem.

    I expect many supercomputers are involved in crunching climate models. If you're modelling the atmosphere as (10km)^3 parcels of air in 1-day increments, then changing to (1km)^3 parcels at 1-hour increments requires 24,000x more calculations to crunch the same basic model, and allows more precise results. Some of these models crunch for months at a time.

    GIGO, of course: unless you precisely understand what all the variables are, you'll just get bullshit results to 20 decimal places.
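
    For anyone wondering where the 24,000x comes from, it is just the grid and time-step refinement multiplied out, assuming cost scales linearly with parcel count and step count:

        # (10 km)^3 parcels -> (1 km)^3 parcels: 10x finer in each of 3 dimensions.
        cell_factor = (10 / 1) ** 3      # 1,000x more parcels
        step_factor = 24 / 1             # 1-day steps -> 1-hour steps: 24x more steps
        print(cell_factor * step_factor) # -> 24000.0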

  28. Nick 26

    Re: F@H

    Chris Simpson wrote:

    "What about Folding@Home, Granted not one single computer but 5+ Petaflop"

    What Folding@home does is millions of slightly different small problems.

    What a supercomputer does is one very large problem.

    The difference is tolerance to latency.

    The answer to one F@H problem is independent of the other problems, so it can be task-farmed. If you're dealing with one big simulation, then you need communication between all the CPUs to be as fast as possible, otherwise you'll be waiting an eternity for your answer.

    Let's say you want to do some molecular dynamics on a piece of material the size of a grain of salt. Just holding the coordinates and velocities of all the atoms would require about 10 petabytes of memory. The drive for larger simulations, smaller approximations and finer resolutions will continue to feed these machines although Amdahl's law and other software engineering problems are raising their heads.
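
    As a rough sanity check of the memory figure, the bare minimum state per atom in a molecular dynamics run is three coordinates plus three velocities. The sketch below assumes double precision and nothing else stored per atom; real codes keep considerably more (forces, types, neighbour lists), so this is a lower bound rather than an exact sum.

        BYTES_PER_DOUBLE = 8
        doubles_per_atom = 3 + 3                              # position + velocity
        bytes_per_atom = doubles_per_atom * BYTES_PER_DOUBLE  # 48 bytes, bare minimum

        ten_petabytes = 10 * 10**15
        print(f"{ten_petabytes / bytes_per_atom:.2e} atoms' worth of state fits in 10 PB")
        # -> roughly 2e14 atoms at 48 bytes each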

  29. Alan Parsons
    Joke

    Need a mathematician...

    ... or at least a better one than I am... How many bits do I need my tunnel encrypted by so that the soon-to-be-overlord-exaflop machine can't read my traffic?

  30. Bob H
    Black Helicopters

    @Alan Parsons

    About 128 bits. Why? Because the overlords don't give a toss about your data.
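
    For what it's worth, the arithmetic backs Bob up even if the overlords did care: brute-forcing a 128-bit key at a wildly generous one exaflop's worth of key trials per second still takes vastly longer than the age of the universe. A back-of-envelope sketch, with the trial rate purely assumed:

        keys = 2 ** 128                  # possible 128-bit keys
        trials_per_second = 10 ** 18     # assume one key trial per "flop" at 1 exaflop/s
        seconds_per_year = 3600 * 24 * 365

        years = keys / (2 * trials_per_second * seconds_per_year)  # expect success halfway through
        print(f"about {years:.0e} years on average")               # ~5e12 years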
