
A 16 Petaflop Cray: The key to fantastic summer barbecues

Peter Gathercole Silver badge

Re: 2 million lines of FORTRAN code @Bronek

The problem is twofold.

Firstly, the Unified Model needs quite large sets of data per cell. Currently, the systems are sized at ~2GB per core, and at any one time each core is calculating one cell of the grid. This is down to the way the information is arranged, and although current GPUs can address large(ish) amounts of memory, they cannot provide enough memory per core for a "few thousand compute units on a single card". Until GPUs have the same level of access to main memory that the CPUs and DMA communication devices have, this will always be a block.
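
Back-of-envelope arithmetic makes the gap obvious. The GPU figures below are purely illustrative assumptions (a 16GB card with ~3000 compute units), not any vendor's actual spec:

```python
# Why ~2 GB of model state per active core is hard to match on a GPU.
CPU_MEM_PER_CORE_GB = 2.0       # the sizing quoted for the CPU system

gpu_mem_gb = 16                 # assumed card memory (illustrative)
gpu_compute_units = 3000        # "a few thousand compute units"

mem_per_unit_mb = gpu_mem_gb * 1024 / gpu_compute_units
print(f"{mem_per_unit_mb:.1f} MB per compute unit")        # ~5.5 MB

shortfall = CPU_MEM_PER_CORE_GB * 1024 / mem_per_unit_mb
print(f"~{shortfall:.0f}x short of the 2 GB/core sizing")  # ~375x
```

So even on those generous assumptions, each compute unit would see a few megabytes where the model wants gigabytes.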

Secondly, all of the time steps are lock-stepped together, and at the end of each time step, results from each cell are transferred to all of the surrounding cells in three dimensions (the so-called 'halo'). As I understand it, the halo is being expanded so that it covers not just the immediately neighbouring cells, but the next 'shell' out as well. This makes weather modelling more of a communication problem than a computational one, and one of the deciding factors in the choice of architecture was not how much compute power there was, but how much bandwidth the interconnect had.
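
To make the halo idea concrete, here is a toy, serial sketch of lock-stepped time steps with a width-2 halo on a 1-D periodic grid. All the names and numbers are mine; the real Unified Model is 3-D and distributed over MPI, but the communicate-then-compute rhythm is the same:

```python
HALO = 2                         # the "next shell out" as well
NCELLS, NWORKERS, STEPS = 16, 4, 3
chunk = NCELLS // NWORKERS       # cells owned by each worker

field = [float(i) for i in range(NCELLS)]

def halo_copy(field, w):
    """Worker w's own cells plus HALO ghost cells each side (periodic)."""
    lo = w * chunk - HALO
    return [field[(lo + i) % NCELLS] for i in range(chunk + 2 * HALO)]

for _ in range(STEPS):
    # Communication phase: every worker gathers its halo in lock-step.
    local = [halo_copy(field, w) for w in range(NWORKERS)]
    # Computation phase: a 5-point average reads cells up to 2 away,
    # which is exactly why the halo must be 2 cells deep.
    for w in range(NWORKERS):
        for i in range(chunk):
            field[w * chunk + i] = sum(local[w][i + d] for d in range(5)) / 5
```

Note that every worker must finish its exchange before any can compute, and the whole cycle repeats every step, which is what turns it into an interconnect-bandwidth problem.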

To do this work on a system using GPUs for some of the computational load would require significantly more memory than can conveniently be addressed in current GPU models, and because each generation of hybrid machine has a different GPU-to-main-memory model, getting the data into and out of the GPUs is not generic and currently has to be written specifically for each machine. There are also no standardised tools to assist.
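
A rough sketch of why that staging matters: if the device has its own memory, every time step pays two copies of the working set across the host-device link. The figures below are illustrative assumptions, not measurements from any real machine:

```python
# Hypothetical cost of copying state in and out of a discrete device
# on every lock-stepped time step.
STEPS = 100
working_set_gb = 2.0        # the ~2 GB/core state mentioned above
link_gb_per_s = 16.0        # assumed host<->device link bandwidth

copied_gb = STEPS * 2 * working_set_gb    # one copy in, one copy out
transfer_s = copied_gb / link_gb_per_s
print(f"{copied_gb:.0f} GB moved, {transfer_s:.0f} s spent just copying")
```

And because each hybrid generation exposes that link differently, the code doing those copies has to be rewritten per machine.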

Personally, I feel that the current GPU hybrid machines are a dead end for HPC, just as the DSP-assisted systems were 30 years ago (nothing is new any more). What we will see instead is more and more different types of instruction unit added to each core, making what we see as GPUs today just another type of instruction unit inside the core (think AltiVec crossed with Intel MIC, if you like).

