Japan is plotting its return to global supercomputing dominance, with its science ministry seeking funds to design the successor to its K supercomputer, to be completed by 2020. According to The Asahi Shimbun, the new project aims to create a supercomputer with 100 times the processing capacity of the Fujitsu-Riken Research Institute- …
Will it have panties over the input/output device?
Or will they just pixelate it?
Wonder if they'll still use SPARC. It's possible, I suppose, out of some sense of national pride.
Still, it's easy to see that they've had their ass kicked by smaller x86-64 systems along with clusters of GPUs; but it may be that they can pull out a much faster design and impress the world. At least it would be more interesting and unique.
I would think that it would depend upon the workloads they planned on executing on it. Some workloads do not respond well to CPU-GPU clusters, and a pure CPU cluster would suit them. But for sheer benchmarking performance, it is hard to see how a CPU-only cluster can compete with a CPU-GPU stack at a given cost point. My bet is that they go with a CPU-GPU hybrid, and the real question is how hard that is to do with SPARC chips, given that most of that work has been done with x86s.
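The cost-point argument can be made concrete with a toy model. Every number below is made up purely for illustration (not real hardware pricing or real peak rates): the point is only that if a hybrid node delivers disproportionately more peak flops per dollar, a fixed budget buys more aggregate peak from hybrids.

```python
# Toy model: total peak flops at a fixed budget, CPU-only vs CPU-GPU nodes.
# ALL figures here are hypothetical illustrative numbers, not real hardware specs.
budget = 1_000_000_000                 # hypothetical $1bn hardware budget

cpu_cost, cpu_flops = 5_000, 200e9     # hypothetical CPU-only node: $5k, 200 GFLOPS peak
hyb_cost, hyb_flops = 8_000, 2_000e9   # hypothetical hybrid node: $8k, 2 TFLOPS peak

cpu_only_total = (budget / cpu_cost) * cpu_flops
hybrid_total = (budget / hyb_cost) * hyb_flops

print(hybrid_total / cpu_only_total)   # hybrid wins on raw peak at this cost point
```

Of course, this only measures peak benchmark throughput; for workloads that map poorly onto GPUs, the effective flops of the hybrid collapse and the comparison flips, which is exactly the workload-dependence point above.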
Still, it is fascinating to watch a country invest heavily in infrastructure that it knows it needs to be competitive...
Well, the Fujitsu SPARCs aren't too shabby for certain calculations. If they're only intending to do plain floating-point calculations on numbers that fit into a 64-bit register, though (no integer or hugely long string work), then yeah, just stick with Intel or AMD.
However, there are a few big reasons why, when designing a massive system like this, you may take the performance hit and go for SPARC. If I recall correctly (do say if I'm wrong or out of date):
- SPARC should use less power for the same FLOPS output.
- The memory controller is not on the CPU, so it is easier to have lots of memory accessible at once, and to move stuff around the system without blocking up the CPU.
- Better standard interconnects for shoving lots of data around.
- It also handles threads differently to x86, which can be useful in some cases, but I can't remember the details now.
Remember that SPARC stands for Scalable Processor Architecture; it used to be easier to engineer big systems with it, though it is clear you can do this with x86 now.
Anybody know if it is still worth using SPARC for systems like this?
"had their ass kicked by smaller x86-64 systems along with clusters of GPUs"
Looking at the Nov 2012 Green500 list, it looks like it was the BlueGene/Q boxes that did the ass-kicking of the K computer.
All your Bitcoin are belong to us!
Maybe they should do an ASIC variant.
So roughly 5.7bn Raspberry Pis, assuming standard clock and no use of the onboard GPU. All hooked up over some cheap 100BASE-T, the performance would be dire, but the racks would probably be one of the finest Lego constructions ever!
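The 5.7bn figure checks out as back-of-envelope arithmetic, assuming the commonly quoted rough estimate of ~0.175 GFLOPS for the original Pi's ARM11 CPU (an assumption here, and it ignores the onboard GPU, as the comment does):

```python
# Back-of-envelope: how many original Raspberry Pi boards for 100x the K computer?
target_flops = 100 * 10e15   # 100x K's ~10 petaflops = 1 exaflop
pi_flops = 0.175e9           # ~0.175 GFLOPS per Pi (rough community estimate)

boards = target_flops / pi_flops
print(f"{boards:.2e}")       # ~5.7e9, i.e. roughly 5.7 billion boards
```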
Might be a little saner to use those Adapteva chips.
Are you sure? Several million cores, maybe.
K had 705,000 cores, hit 10 petaflops, and was commissioned in 2011. Several years of chip development later (the new monster is due in 2020), I can't see it needing ~1,400 times that number of cores for just 100 times the floppage(TM).
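The arithmetic backs this up: even with zero per-core improvement since 2011, hitting 100x the flops at K's own per-core rate would only take 100x the cores, and any per-core gains shrink that further.

```python
# Sanity check: cores needed for 100x K's performance at K's 2011 per-core rate.
k_cores = 705_000
k_flops = 10e15                    # K: ~10 petaflops
per_core = k_flops / k_cores       # ~14.2 GFLOPS per core

target = 100 * k_flops             # 1 exaflop
cores_needed = target / per_core   # assuming no per-core improvement at all

print(cores_needed / k_cores)      # 100x the cores, a hard upper bound here
```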