Start learning parallel programming and make these supercomputers sing, Prez Obama orders

President Obama has signed an executive order that will pump US government money into American supercomputers just like before but in a more coherent fashion. The order, signed Wednesday, kickstarts a new "National Strategic Computing Initiative" [PDF] that will, through a single vision and investment strategy, try to keep …

  1. Ole Juul
    Coat

    continued failure to accurately predict the weather

    They can't even predict if Adobe Flash is going to work on any given day.

  2. Bernd Felsche
    Facepalm

    Hang them by their lab. coats.

    "Current HPC … systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, and it also requires making sure the new code gets the same results as the old code."

    Could be that the people who built those machines and software environments are relative nincompoops. Or it could be that they're trying to maximise their "value add" by portraying something that is fundamentally simple as something complicated that only a guild of highly-paid specialists can undertake.

    Dr Christopher Essex produced a nice video ("Believing in Six Impossible Things Before Breakfast, and Climate Models") on the reproducibility of results and the tendency for uncontrolled models to introduce their own artifacts, leading the Gospel-Out believers to often become enchanted by the "results".

    1. Peter Gathercole Silver badge

      Re: Hang them by their lab. coats.

      Unfortunately, there is relatively little commonality between HPC systems from different vendors. As with most large problems, it's the interconnect between the individual system images in the clusters that matters most, and vendors quite jealously guard their specific implementations to maximise the value of their investment.

      Unlike general purpose servers, there are a lot of tricks that go into HPC servers to make them as fast as they can be. Besides the interconnect, there are different ways of packaging multiple processors in as small a power and space footprint as possible, and once you start putting so many processors into single system images, especially if they are heterogeneous processors (think hybrid or CPU/GPU processors), the way that the memory is laid out and accessed becomes very important. All of this can affect the way that the code has to be written, even though there are relatively efficient abstraction layers such as MPI, OpenMP and MPICH.
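
      Purely as an illustration of what those abstraction layers buy you (and what they don't), here's a minimal OpenMP sketch in C. Everything in it is invented for the example rather than taken from any real HPC code, but it makes the point: the directive hides the threading entirely, while the memory placement (the serial initialisation loop first-touches every page from one thread, which matters on a NUMA machine) is exactly the kind of machine-specific detail that still has to be tuned by hand.

          /* Minimal illustrative sketch, not production HPC code.
             Compile with something like: cc -O2 -fopenmp sum.c -o sum */
          #include <omp.h>
          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              const long n = 50 * 1000 * 1000;
              double *a = malloc(n * sizeof *a);
              double sum = 0.0;

              if (a == NULL)
                  return 1;

              /* Serial initialisation: on a NUMA box every page is
                 first-touched by one thread, precisely the sort of
                 memory-layout detail that gets tuned per machine. */
              for (long i = 0; i < n; i++)
                  a[i] = (double)i;

              /* The abstraction: one directive, no explicit threads,
                 no explicit messages. */
              #pragma omp parallel for reduction(+:sum) schedule(static)
              for (long i = 0; i < n; i++)
                  sum += a[i];

              printf("sum = %.0f using up to %d threads\n",
                     sum, omp_get_max_threads());
              free(a);
              return 0;
          }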

      This means that, in order to get the absolute maximum utilisation, there is a long period of tuning when porting code from one to another. For example, the installation of the Cray XC40s that are currently replacing the IBM P7 775s at UKMO is a project running for over a year, from purchasing decision to final switchover, and much of that is taken up with the porting and resultant checking of the models between the systems.

      I suppose that normal commercial systems vs. HPC systems is a bit like the difference between a Ford Transit and a Formula 1 car. You definitely want to invest in making the F1 car as fast as possible.

      Any programme that results in a consistent, efficient programming model that abstracts the system specifics to allow increased portability of code would be welcomed by pretty much everybody in the field.

      1. Bernd Felsche

        Re: Hang them by their lab. coats.

        "I suppose that normal commercial systems vs. HPC systems is a bit like the difference between a Ford Transit and a Formula 1 car"

        Yes indeed. The F1 cannot be built by strapping a bunch of Transit engines together; nor can you take the ubiquitous "white van man" and plant him/her into an F1 car and expect anything like a good result.

        Or can we? We do have drivers moving "seamlessly" from a Dacia Sandero into a Bugatti Veyron. It's the Engineering of the latter that facilitates the transition. It's a vastly more sophisticated car yet is as simple to drive as an ordinary hatchback; even at much higher speeds — though not necessarily the top 10% of the performance envelope.

        To carry the theme further: we have a road traffic environment with mostly much more powerful, faster cars than ever before, and drivers are becoming less skilled and less interested; yet the roads are far safer, cars are more efficient and (relatively) seldom need to be caressed by a magic monk in blue overalls. That "miracle" is the result of Engineering over the past 60 years.

        In my initial posting, I focused on the inability/unwillingness of the industry to provide a platform in which those who are not au fait with the particular intricacies of a specific machine can still extract maximum performance. That's like Bugatti selling the Veyron and insisting that it be driven by Bugatti's own chauffeurs.

        I don't make that comparison out of ignorance. I make it to highlight the shortfall in real Engineering in I.T. Sure, they make the lights blink faster and the bits bang harder; but most don't seem to have tried to fill the need for machines that are truly useful to the people who can make productive use of them. The people who are the interface to the real world.

        It's absurd to expect application "programmers" to write code differently for HPC than for a "conventional" environment. It's a distraction from the problems that they *should* be solving, it is not a good use of their time and, more likely than not, it will be done in a half-@rsed way, requiring it to be redone over and over again and yielding results that are essentially irreproducible.

        So the maximum petaflops of the HPC become irrelevant as they were all expended "quickly" producing a result that is useless.

        The call by government to make the necessary Engineering happen is bound to be unproductive if not counter-productive. Commercial users of HPC must demand what they need from the manufacturers; and remain severely unhappy if they don't get it delivered. That is the only lever that the free market can pull. If users need to form groups to produce common demands, then that wouldn't be the first time in history.

        Let the boffins/buffoons at the manufacturers work out how to squeeze performance out of their own hardware because they are the ones who have intimate knowledge of its foibles. Make them provide a uniform application software environment that is portable and produces consistent results in application software.

        1. Peter Gathercole Silver badge

          Re: Hang them by their lab. coats.

          If you think that all that is required to create an HPC system is to bolt many utility systems together, then you really don't understand the problem. There is a diminishing return as you spread some workloads across more and more cores, even though this is what they are forced to do nowadays because single-core performance has been pushed pretty much to its economic limits.

          I was following the porting of the UM weather model onto the Power 7 processor in p7 775 systems, and I can say without any hesitation that there was a turning point where adding more processors made the model run slower, and the drop in performance was very rapid.
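
          To put a number on that turning point, here's a toy model in C. The serial fraction, parallel fraction and per-process overhead below are invented for illustration and are not UM or P7 775 measurements; it's just Amdahl's law with a crude communication term bolted on, and it reproduces the same shape: speedup climbs, peaks, and then adding processes actively slows the run down.

              /* Toy scaling model, illustrative only.
                 time(p) = serial + parallel/p + comm*p
                 The comm*p term stands in for interconnect and
                 synchronisation overhead that grows with process count. */
              #include <stdio.h>

              int main(void)
              {
                  const double serial   = 0.02;   /* assumed serial fraction   */
                  const double parallel = 0.98;   /* assumed parallel fraction */
                  const double comm     = 0.0005; /* assumed per-process cost  */

                  for (int p = 1; p <= 4096; p *= 2) {
                      double t = serial + parallel / p + comm * p;
                      printf("%5d processes: speedup %6.1f\n", p, 1.0 / t);
                  }
                  return 0;
              }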

          Understanding why this is the case can be a challenge, and one that cannot be generalised or codified such that it can be addressed by current software development tools. There may come a time when this is possible, but the cost of doing it has not been justified and may never be worth the man-effort, so we may have to hope that initiatives such as cognitive computing are able to deliver.

          There's been a problem with computing for the last five decades or so. The rate of performance increase has been such that software engineering has never needed to keep up. In fact, the creation of software has been allowed to become spectacularly lazy on the assumption that the machines will just be fast enough to cope with inefficient software. This can be seen in the stupid memory footprint and strikingly poor performance of many of the desktop tools in use today.

          The only places where the efficient running of code has been important are embedded controllers and ... HPC. So maybe, rather than piling on even more software engineering, software houses should go back to simpler engineering techniques, more like HPC than vice versa.

          Your extension of the car analogy is interesting. It's very strange that it needed a serious push from regulation before much of this increase in engineering effort was justified. And with the increased engineering, the cost of each visit from the magic monks in blue overalls is getting higher, such that it won't be long before it is cheaper to scrap a vehicle than to repair it relatively early in its life. But in spite of this increased engineering, there's still justification for Bugatti Veyrons and F1 cars, and the skilled drivers to get the absolute best out of them. Just as there will be a requirement for real HPC systems, with manually tweaked code.

          BTW, there is a whole sector of commodity HPC systems, bought in fixed configurations, with canned application development tool-sets the like of which, ironically, are used by F1 teams! They're just not the headline systems that are in the news, the top 10% as you put it.

  3. Frederic Bloggs

    Weather? Certainly useful but...

    I didn't see any mention of breaking encrypted data in your "useful" list. This may have been an oversight.

    1. Primus Secundus Tertius

      Re: Weather? Certainly useful but...

      How does natural language work? Is it really brute force and ignorance? Will we ever get beyond the current 90% accuracy of language translations, dictation machines, style checkers, etc?

      1. Michael Wojcik Silver badge

        Re: Weather? Certainly useful but...

        There's little evidence that problems in natural language processing can be solved simply by throwing faster hardware at them. We're already at the point of diminishing returns from the throw-more-data-at-it approach beloved by Google, and we can build much deeper Deep Learning systems using conventional hardware - no need for HPC, much less exascale HPC.

        The problems of NLP are actually difficult. They're not just an issue of combinatorial explosion.

  4. Primus Secundus Tertius

    Find a coder

    It is difficult enough to find good programmers for the classic von Neumann sequential architecture, let alone designers of parallel algorithms.

    The old analogue computers were essentially parallel machines.

  5. phil dude
    Thumb Up

    Can't Wait....

    With Exaflops, we might be about to do an ab initio calculation of a whole protein....(my target would be Rhodopsin studying the cis-retinal conversion process on photon activation...)

    Just as with all science, the cost of funding this infrastructure is insignificant compared to the cost of *NOT* funding improvements.

    P.

  6. John Brown (no body) Silver badge
    Thumb Up

    Summit, Sierra, Titan and Sequoia

    Not exactly innovative naming. Must be engineers rather than marketeers.
