Top500: Supercomputing sea change incomplete, unpredictable

If you were thinking that coprocessors were going to take over the Top500 supercomputer rankings in one fell swoop, or even three or four, and knock CPU-only systems down to the bottom of the barrel – well, not so fast. While GPU and x86 coprocessors are certainly the main computation engines on some of the largest systems …

COMMENTS

This topic is closed for new posts.
  1. HighTech4US

    Titan has passed rigorous acceptance testing

    Reg quote: The entry of Tianhe-2 at the top has pushed down other systems in the June rankings, of course. The "Titan" XK7 ceepie-geepie at Oak Ridge National Laboratory, which has not been formally accepted by the lab yet,

    Yes it has: OAK RIDGE, Tenn., June 12, 2013 — Oak Ridge National Laboratory's Titan supercomputer has completed rigorous acceptance testing to ensure the functionality, performance and stability of the machine, one of the world's most powerful supercomputing systems for open science.

    http://ornl.gov/info/press_releases/get_press_release.cfm?ReleaseNumber=mr20130612-00

  2. Peter Gathercole Silver badge

    Did you see the number of cores on Tianhe-2

    It says over 3 million, and that it draws 17MW of power.

    I guess what this says is that if you throw in enough hardware, even with the law of diminishing returns, you can have the #1 supercomputer.
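
    A quick back-of-the-envelope check, assuming the 33.86 petaflops Linpack figure reported for Tianhe-2 alongside the numbers above:

        # Rough efficiency sketch from the figures quoted above; the exact
        # power and flops numbers for Tianhe-2 vary a little by source.
        rmax_gflops = 33.86e6   # Linpack Rmax: 33.86 petaflops, in GFLOPS
        power_watts = 17.0e6    # the 17MW quoted above
        cores = 3_120_000       # the "over 3 million" cores, as widely reported

        print(f"{rmax_gflops / power_watts:.2f} GFLOPS per watt")  # ~1.99
        print(f"{rmax_gflops / cores:.1f} GFLOPS per core")        # ~10.9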

  3. Tom 7

    I hope the Parallella lives up to its promise

    of 18 GFLOPS per watt.
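
    For scale, a minimal sketch of what 18 GFLOPS per watt would mean at Tianhe-2's Linpack number, taking the 33.86 petaflops and 17MW figures quoted elsewhere in this thread at face value:

        # Hypothetical: power needed to match Tianhe-2's 33.86 PFLOPS
        # Linpack run at the Parallella's claimed 18 GFLOPS per watt.
        rmax_gflops = 33.86e6            # 33.86 petaflops, in GFLOPS
        claimed_gflops_per_watt = 18.0   # the Parallella promise

        megawatts = rmax_gflops / claimed_gflops_per_watt / 1e6
        print(f"{megawatts:.2f} MW")     # about 1.9 MW, against ~17 MW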

  4. zooooooom

    BG/Q - 3D ???

  5. zooooooom
    FAIL

    Real serious stupidity

    The Top500 has always been an erroneous metric of anything useful, but the latest lists have really taken the biscuit. The overwhelming majority of consumed cycles can't exploit either Phi or GPU with anywhere near the efficiency needed to justify their inclusion in a system, and yet here we have the largest systems built out of them. They really are in the territory where people should be building small dev clusters, with the odd single-application shop that's already completely ported building large dedicated machines. This batch will be decommissioned before the software catches up.

    Oh well, we now know that the Chinese can piss higher - I wonder what money changed hands for the Phis. :)

    1. Destroy All Monsters Silver badge

      Re: Real serious stupidity

      [citation needed]

  6. M Gale

    So I wonder..

    What are the SETI@home or Rosetta clusters capable of in terms of raw flops?

    1. Destroy All Monsters Silver badge
      Boffin

      Re: So I wonder..

      The entire BOINC network averages about 9.5 petaFLOPS as of March 19, 2013.

      Those 33.86 actual petaFLOPS from the Milky Way sound pretty aggro.
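
      For a rough sense of scale, taking both of those figures at face value:

          # BOINC's averaged throughput versus Tianhe-2's Linpack Rmax.
          boinc_pflops = 9.5        # BOINC network average, March 19, 2013
          tianhe2_pflops = 33.86    # Tianhe-2 (Milky Way-2) Linpack run

          print(f"{tianhe2_pflops / boinc_pflops:.1f}x")  # about 3.6x in one machine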

  7. StephaneFr

    It's a cycle

    "when memory, fabric interconnect, coprocessors, and likely central processors will all be crunched down to single chip packages, we won't even be talking about coprocessors any more."

    History shows that there is a cycle: we design systems that plug into the machine. Then, after some years, we are able to add that functionality inside the CPU, so we move it there.

    Wait another ten years and again we have systems more powerful than the ones available in the CPU on some additional board... everybody starts using them, we make transistors smaller or dies bigger or whatever, and the CPU absorbs the functionality again.

    You want floating point? Buy the card and plug it into your PDP-11. Oh well, no, add a co-processor, it's in a chip now. Oh wait, let's move the FPU into the CPU. Okay, it's there, but only one FPU? Need many? Buy this external card. Ah wait, AMD have some APUs now. Not powerful enough? Buy a card then, etc. etc.
