shout out for storage?
How about a little love for Spectrum Scale, which is the file system behind Summit and Sierra? It takes a lot of work to keep these blisteringly fast systems fed with data and to keep up with the pace of their writes.
At the apex of the newly revised list of the world's Top500 fastest publicly known supercomputers is IBM’s aptly named Summit. The list is ranked by High Performance Linpack (HPL) benchmark scores, and Big Blue’s big beast – built for the US government's Oak Ridge National Laboratory – cruised in at 122.3 HPL petaflops. It has …
I work at a place with a supercomputer. Most of the storage isn't there to keep data per se. In TL;DR terms, the HPC gets data from its sources, does a whole bunch of modelling and crunching, then sends the results to the people who want it (I don't want to go into too much detail of who or how, for confidentiality reasons).
So you essentially get a whole load of stuff in, it gets crunched in RAM and CPU (it's a many-node system, not just one fat CPU!), and it spits the results out; the original data in essentially gets discarded. Anything that needs to be archived goes to a traditional compute/storage type system, which we all know and may not love. The HPC has no need for this once the crunching has been done, as a new model needs to be run almost continuously and the old one is no longer relevant.
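That in-crunch-out-discard lifecycle can be sketched roughly like this (a toy illustration only: the function names, the archiving rule, and the "sum" crunch step are all stand-ins, not the actual system):

```python
shipped = []   # stands in for the downstream consumers of results
archive = []   # stands in for the traditional storage tier

def crunch(batch):
    # Stand-in for the heavy modelling step; the real thing runs
    # in RAM and CPU across many nodes.
    return sum(batch)

def run_model(batch):
    result = crunch(batch)
    shipped.append(result)      # results go out to whoever wants them
    if result > 100:            # illustrative "needs archiving" rule
        archive.append(result)
    # nothing else is kept: the original input data is discarded

run_model([1, 2, 3])
run_model([50, 60, 70])
```

The point of the shape is that only results ever touch long-term storage; the input exists just long enough to be crunched.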
This is our particular use case anyway, others may have huge storage arrays for their use cases as you mention.
Thank you for that clarification. I figured the data would be gone pretty quickly, relatively speaking. I was just wondering how it was done. My guess would be that it went to some fast storage and was then sent down to something like TAPE.
TAPE? Well, it is stable and cheap. I suppose it could go on some massive and portable RAID, but if the truck got in an accident delivering it ... oops. Better run that job again. I'm thinking of modeling something like a neutron star merger. Wouldn't that produce results that would take a long time to analyse?
Then again, the answer to everything is 42 so ... that should fit on a flash drive.
In our supercomputing system, we have megabytes of SRAM (Static RAM) for each processing core, ranging from 4, 16 and 64 megabytes up to as much as 4 gigabytes PER CORE! Since we developed a GaAs (Gallium Arsenide) on Sapphire Substrate 128-bit-wide Super-CPU using our custom etching systems, we can make our chips ANY SIZE, with dies as large as 200 mm across. The line traces are much larger than the 45 nm or smaller CMOS process sizes of most CPU/GPU processors, but what we lose in density, we make up for in speed, since GaAs IS MUCH FASTER! It does, however, require much wider circuit traces and higher operating power levels!
The majority of our system is created from a massive array of Vector Processors (also called Array Processors) which use two techniques, SIMD (Single Instruction, Multiple Data) and MIMD (Multiple Instruction, Multiple Data), to process an array of 128-bit signed/unsigned integer or 128-bit fixed-point/floating-point values IN PARALLEL, synchronized to a master clock signal which is right now set at 20 GHz (YES! You read that correctly! 20 gigahertz!)
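For anyone who hasn't met SIMD before: one instruction is applied across many data lanes at once. NumPy's vectorised operations behave the same way in miniature; the eight-lane width and the values here are purely illustrative, nothing to do with the system described:

```python
import numpy as np

# One "instruction" (an add) applied across every lane at once, as a
# SIMD vector unit would do in hardware.
a = np.arange(8, dtype=np.int64)    # lane values 0..7
b = np.full(8, 10, dtype=np.int64)  # the same constant in each lane
result = a + b                      # one vector add, eight results
```

The win is that the loop over elements disappears into the hardware (or, in NumPy's case, into compiled C), rather than being executed one scalar at a time.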
Since each chip has a large array of 65536 by 65536 microcores, with EACH CORE having an 11x11 128-bit-value convolution filter sub-processor and standard integer/real-number add, subtract, multiply, divide, root, powers, etc., we can process ENORMOUS amounts of data in real time. Since all cores have 128 LOCAL 128-bit-wide registers (i.e. simple storage locations located right beside the microcore), retrieval of inputs and outputs is speedy. Using a custom IPv6-like packet infrastructure, we can also send data over fibre-optic connections which are embedded directly onto each chip, using UV (ultraviolet) wavelengths for speedy networking into the many-terabits-plus range.
The entire system is immersed in a giant pool of non-conductive cooling fluid which has a closed-loop condenser and heat-exchange system attached to a company swimming pool, so we get nice sub-tropical water temperatures for swimming AND we get to cool our massive supercomputer very effectively!
On a technical basis, we have TESTED our supercomputer system, using various benchmarks, at a SUSTAINED 119 ExaFLOPS of 128-bit floating-point operations, which BLOWS AWAY EVERY OTHER Top500 super COMBINED! At 119 EFLOPS against Summit's 122.3 PFLOPS, our system is roughly 970 times MORE POWERFUL than Summit! Our DragonSlayer system is by far THE NUMBER ONE SUPERCOMPUTER IN THE WORLD - PERIOD!!!
We are located in Vancouver, British Columbia, Canada and this system runs a MASSIVE electro-chemical simulation of human neural tissue for the purposes of creating a human-level (IQ-100) functional whole brain emulation. Evidently that part seems to be working rather well......
Unfortunately not the type that dispenses a little tipple.
Anyway - Blue bars! C'mon, at least you could fold those racks into a hexagon shape with a low shelf for sitting on and posing for a cool selfie; maybe add some dry-ice cooling that billows out, some blue backlighting for it all, some random growly bass effects with the odd "Mwahh ha ha ha", oh, and some people in white coats with clipboards to walk around it (guess GTS has outsourced them to Outer Mongolia).
Geeze, take a lesson from Cray; no one seems to know how to design a supercomputer anymore.