Supercomputing speed growth hits 'historical low' in new TOP500 list

There be merriment in the Middle Kingdom: the Tianhe-2 supercomputer at China's National University of Defense Technology is the most powerful datacruncher on the planet (that we know about) for the third time in a row. The TOP500 list, published every six months, noted that the Chinese system hit 33.86 petaflops on the …

COMMENTS

This topic is closed for new posts.
  1. PleebSmash

    All is normal. Big 100-petafloppers will appear in 1-3 years. China could replace all the Knights Corner coprocessors in Tianhe-2 with Knights Landing and achieve (maybe) 100 petaflops in 2015 (a back-of-envelope sketch at the end of this post puts rough numbers on that).

    As for the aggregate PFLOPS and #500 position stagnation, is it really the end of the world?

    http://www.hpcwire.com/2014/06/23/breaking-detailed-results-top-500-fastest-supercomputers-list/

    "When examined as a whole, we're falling off except at the highest end...but what does this mean for end user applications? Is high end computing getting smarter in terms of efficiency and software to where, for real-world applications, FLOPS no longer matter so much? Many argue that's the case...and some will await the new HPCG benchmark and forgo Linpack altogether in favor of a more practical benchmark. That hasn't had an impact yet on this summer's list but over time it will be interesting to watch... Of course, keep in mind that a tapering off of GPU or other accelerated systems doesn't exactly mean that there is an overall slowdown. This is one segment of the HPC arena-there are many, many machines from academia and enterprise, that do not choose to run the HPL benchmark. Even if there are 20% of these machines missing from the list, the effect on that list would be felt in such a graphic. We asked Addison Snell of Intersect360 Research about the accelerator graphic above and he echoed this, noting that 'Change in share in the Top 500 doesn't necessarily reflect market trends.'"

    Look out for:

    OpenPower

    ARM64

    Knights Landing
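
    As a sanity check on that Knights Landing claim, here is a rough back-of-envelope in Python. The card count and the Tianhe-2 peak/Linpack figures are the published ones; the ~3 TFLOPS double-precision per Knights Landing part and the assumption that Linpack efficiency stays at Tianhe-2's current ratio are guesses, not anything Intel or NUDT have confirmed.

    # Back-of-envelope for "swap Knights Corner for Knights Landing -> ~100 PF".
    cards = 48_000                 # Xeon Phi (Knights Corner) coprocessors in Tianhe-2
    knc_peak_tf = 1.0              # approx. DP peak per Knights Corner card, TFLOPS
    knl_peak_tf = 3.0              # assumed DP peak per Knights Landing part, TFLOPS
    cpu_peak_pf = 54.9 - cards * knc_peak_tf / 1000   # host Xeons' share of the 54.9 PF peak
    linpack_eff = 33.86 / 54.9     # Tianhe-2's current Rmax/Rpeak ratio

    new_peak_pf = cpu_peak_pf + cards * knl_peak_tf / 1000
    print(f"estimated peak:    {new_peak_pf:6.1f} PF")                 # ~150.9 PF
    print(f"estimated Linpack: {new_peak_pf * linpack_eff:6.1f} PF")   # ~93.1 PF

    That lands in the same ballpark as the 100 petaflops suggested above, provided the interconnect and software scale with it.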

  2. HCL

    HPC Business Head.

    The market seems to be, and will need to keep, moving towards application availability and scaling rather than topping TF/PF charts. In the real world, apart from some one-off applications, there are not many practical research benefits accruing from accelerator-based systems. And if we discount the reduction in footprint, accelerator-based systems are not really a cost saving either. Wider application porting, scaling and availability is the crucial step that needs to be crossed before the market really makes the next big jump.

  3. mhoneywell

    Old

    Wasn't this announced last week?

  4. Shane 4

    Holds nose shut and speaks aloud: DO.. YOU.. WANT.. TO.. PLAY.. A..... GAME?

  5. Sceptic Tank Silver badge
    Terminator

    Does not compute.

    "HP has the top spot with 36 per cent of the list, compared to 356 per cent for IBM."

    Are IBM using those Pentium chips with the FDIV bug for their stupercomputers?

  6. jzlondon

    This article says more about the increasing irrelevancy of the Top 500 list and the difficulty in pinning down what actually constitutes a single computer these days than it does about any flattening of the trend in our compute capabilities.

    The need for massive compute capacity is greater than ever, but is now being filled by distributed, network based resources. You won't see Google, Amazon, Facebook, Microsoft or Apple on that list*, but I wouldn't be surprised if those companies could all comfortably top it.

    * Not quite true: one of the "supercomputers" on the list is actually an Amazon EC2 instance cluster, another is hosted on Microsoft Windows Azure.

    1. phil dude
      Thumb Up

      physical reality is tightly coupled..

      and sometimes(!) you need your computer to be as well.

      The top 500 is as relevant as it has ever been - what is the largest engineered system that can run application X for Y hours to produce scaled set Z?

      On a side note the "cloud industry" has been pretty convincingly marketing that you can solve problems in a loosely coupled way, so long as you buy enough cloud time. I expect few reading this to get the jibe, but if you do, pipe up ;-)

      Since LINPACK implicitly tests the CPU, memory, all communications, the power supplies and even the installation's HVAC (yes, they can fail), it has become a pretty good test of all-round machine stability - and as a bonus, it's objective. Change something, run it again... (a rough sketch of the kind of solve it times is below this post).

      We would all love to have our favourite applications be the test on these machines, but that would only decrease the applicability, whereas linear algebra is phenomenally widespread....

      P.
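
      For a sense of what that objective test actually times: HPL solves a dense system Ax = b by LU factorisation and then checks the residual. A rough single-node stand-in, using numpy/scipy rather than a tuned BLAS/MPI stack and sized far smaller than a real run, looks something like this:

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      n = 4096                        # toy size; a real run fills most of the machine's RAM
      rng = np.random.default_rng(0)
      A = rng.standard_normal((n, n))
      b = rng.standard_normal(n)

      lu, piv = lu_factor(A)          # the O(2/3 * n^3) factorisation HPL times
      x = lu_solve((lu, piv), b)

      residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
      print(f"scaled residual {residual:.2e}, nominal flops {(2 / 3) * n ** 3:.2e}")

      The real benchmark distributes that factorisation over every node for hours, which is exactly why it ends up exercising the interconnect, power and cooling as well as the FPUs.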

      1. jzlondon

        Re: physical reality is tightly coupled..

        I'm not saying that there's no requirement for these machines, just that the bulk of the growth in demand is elsewhere. Increasingly the workloads can either be run effectively on decoupled architecture, or can be run less effectively but much more cheaply on it. It's also interesting, in light of your comment about physical coupling, that two of the machines on the list are virtual and hosted on commercial cloud services. What does that say about the internal consistency of the TOP500 list?

        1. jzlondon

          Re: physical reality is tightly coupled..

          As an aside, "I expect few reading this to get the jibe".

          Not cool. Just sayin'.

          Besides, this is the Register and many of the readers and commenters are professionals or academics in the field.

  7. Nigel 11

    Marginal utility? Software breakthrough needed?

    The price of the hardware components has continued to fall, so why has nobody decided to build a bigger one, and why a lack of enthusiasm for upgrades?

    I'd guess that the problem is that we're close to the limits of what can be done in parallel with the types of hardware we have got. Single-node speed has hit the physics limits; large node counts run into interconnect bandwidth limits. Energy consumed scales with the number of nodes, useful work output does not. The marginal value of an x% upgrade diminishes as the size of the supercomputer increases (the toy scaling model below this post puts rough numbers on it). What's needed is either a hardware breakthrough on the interconnect front (much more bandwidth), or a software breakthrough that can automatically generate more efficient parallel codes than a human programmer can (if that's possible).

    Nature's answer to the problem of using vast numbers of low-power processors (a brain) is interconnection much closer to a fractal dimension of three than anything we can do today.
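
    To put rough numbers on the diminishing returns, here is a toy strong-scaling model in Python. The Amdahl-style formula and the assumed non-parallelisable fraction (serial work plus communication overhead) are illustrative choices, not measurements from any machine on the list.

    # Toy strong-scaling model: speedup(p) = 1 / (s + (1 - s) / p),
    # where s is the fraction of work that does not parallelise.
    def speedup(p: int, s: float = 0.002) -> float:
        return 1.0 / (s + (1.0 - s) / p)

    for p in (1_000, 10_000, 100_000):
        sp = speedup(p)
        print(f"{p:>7} nodes: speedup {sp:8.0f}x, parallel efficiency {sp / p:6.1%}")

    With that (arbitrary) 0.2% serial fraction, efficiency falls from roughly 33% at 1,000 nodes to about 0.5% at 100,000: energy and cost grow linearly with node count while useful output barely moves, which is the marginal-value squeeze described above.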

    1. tflopper

      Re: Marginal utility? Software breakthrough needed?

      Nigel11 has it right:

      "Energy consumed scales with the number of nodes, useful work output does not. "

      Until a more energy-efficient means of computation is developed, we have just about hit the realistic limit of what can be done with existing technology without building a nuclear generator next to the supercomputer.

