Japan needs a little good news these days, and it comes from the International Supercomputing Conference 2011 in Hamburg, Germany, as the K supercomputer, a massively parallel Sparc-based cluster built by Fujitsu, has taken the lead in the number-crunching race as gauged by the June 2011 edition of the Top 500 supercomputer …
And the ones not on the list?
I wonder where GCHQ's systems fit in.
It's a 486 running Windows 3.1.
It's interesting, but what on earth is the US doing with all those systems? OK, the NSA will need quite a few to analyse all those phone conversations and trawl through all that internet traffic, but they have a hell of a lot...
@"but what on earth is the US doing with all those systems"
Here's a clue (the icon).
The land of the free is becoming far from free. The US authorities have everyone so whipped up into such fear that most people can't see the growing tyranny appearing all around them.
The irony is that one of the so-called "Founding Fathers of the United States" said it best: "Anyone who trades liberty for security deserves neither liberty nor security" - Benjamin Franklin
re: last para
Most of the US's top supercomputers are owned by the Dept. of Energy (DoE), which happens to be tasked with monitoring our nuclear stockpile. I guess a lot of the oomph goes into simulating nuclear decay and the effects of bombardment on electronics, so we know when various bombs need to be decommissioned. In their spare cycles, they carry out various complex simulations for understanding the effects of different sorts of war events and modeling new kinds of weapons.
Wikipedia probably has good (and more detailed) entries on the primary missions of each of our big supers.
It's the nukes.
The thing that convinces the government to fund them has historically been nuclear weapons research. We don't get to test bombs anymore, so wouldn't it be nice if we could simulate every conceivable aspect of them? That's pretty much what they want and, really, it's the only sensible thing to do if you've got a nuclear arsenal and aren't allowed to pop the things off now and then. It's no coincidence that the biggest ones usually end up getting built at places like LANL, although universities are starting to acquire them too.
However, once they're built, there's usually a lot of spare capacity, and that goes into everything from biophysical simulations to designing antennas. A great deal of American scientific work, even totally innocent stuff like cancer research and figuring out ways to clean up toxic things, benefits from the defense budget, and this is one of the ways. Any computer this powerful will have people lining up to use it, and many of them aren't even weapons engineers.
Why why why
Why oh why Linpack? What a waste of time, but good for people who do nothing productive but like to measure electronic dicks.
At least Paris prefers flesh n blood
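For anyone wondering what Linpack actually measures: it times the solution of a dense linear system Ax = b by Gaussian elimination, then converts the standard operation count (2/3·n³ + 2·n² flops) into FLOP/s. Here's a toy pure-Python sketch of the idea; the problem size and style are my own illustration, nothing like the real HPL code that the Top 500 runs:

```python
import random
import time

def lu_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    which is essentially the operation the Linpack benchmark times
    (here in slow pure Python, purely for illustration)."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    x = b[:]
    for k in range(n):
        # partial pivoting: swap in the row with the largest pivot
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # back substitution
    for k in range(n - 1, -1, -1):
        x[k] = (x[k] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

n = 120
random.seed(1)
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]

t0 = time.perf_counter()
x = lu_solve(A, b)
elapsed = time.perf_counter() - t0

# the standard HPL flop count for solving a dense n x n system
flops = (2 / 3) * n**3 + 2 * n**2
print(f"{flops / elapsed / 1e6:.1f} MFLOP/s (toy, single core)")
```

The real benchmark does the same thing with a highly tuned, distributed LU factorisation at problem sizes chosen to fill the whole machine's memory; that's why the number is so good at measuring peak floating-point muscle and so bad at predicting anything latency- or I/O-bound.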
... can run Crysis at 32 megapixels and 60fps *for everyone that bought the game at the same time*. Cheers.
Finally, someone can run Crysis... er no, wait, does it run on x86 too?
Do they sell those with 10MW power plants too?
Interconnects and efficiency
How does the efficiency of a machine like the Tianhe-1A scale with the number of nodes used? I know it's a proprietary interconnect, but from what is known, could the efficiency per node be inferred? E.g. does the efficiency go up when using only a portion of the nodes, as when the system is used for multiple jobs across multiple partitions rather than single monolithic jobs?
[PS I think Roadrunner said "meep, meep"]
@"but what on earth is the US doing with all those systems"
No: NSA/DoD systems don't show up here (hush hush). DoE has two parts: NNSA (Roadrunner, Cielo) oversees the nuclear stockpile and does some related research; the Office of Science (Jaguar, Hopper) does energy-related research (what it says on the tin). For instance, you can see some of the projects for Jaguar below.
Also, most of the US machines (by count) aren't government, so they are probably doing drug discovery or trading stocks.
Like everything in the top 10, it's running Linux
Linux is the amazingly scalable OS running on that beast!
It's not a single instance of the OS, you know. It's 17,000 (or whatever they said the number of nodes was) instances of Linux, so it scales to ~32 cores. Big deal.
Do you have some links on your claim? I would like to read more, please.
Links, to what? Cluster computing is a group/cluster of individual machines linked together with an interconnect (a network). Interconnects are either:
Ethernet-based, for capacity computing: think particle physics work, or finance, e.g. QM; or
Capability-based, where you require very fast, low-latency message passing between distributed codes, e.g. CFD or computational chemistry codes. Low-latency interconnects are: Quadrics, SCI, Myrinet and Infiniband, plus a few proprietary ones from Cray, IBM and this Fujitsu.
All compute nodes are standard Linux boxes running a fairly standard Linux OS, although you might have added specialist maths libraries. Hence why the scalable Linux statement was silly.
If you still want links, look up "High performance computing".
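The capacity/capability split above comes down to message-passing latency: codes like CFD exchange small messages constantly, so round-trip time on the interconnect dominates. A classic way to measure it is a "ping-pong" test, which MPI users run between nodes; here's a toy sketch of the same idea between two threads (my own illustration, not real MPI code):

```python
import queue
import threading
import time

def pong(inbox, outbox, n_msgs):
    # Echo each message straight back, like the partner in an MPI ping-pong.
    for _ in range(n_msgs):
        outbox.put(inbox.get())

def ping_pong_latency(n_msgs=2000, payload=b"x" * 64):
    """Estimate one-way message latency from n_msgs round trips.
    Real HPC codes do this between nodes over the interconnect;
    the round-trip time is what separates Infiniband-class fabrics
    (a few microseconds) from plain Ethernet."""
    to_pong, to_ping = queue.Queue(), queue.Queue()
    t = threading.Thread(target=pong, args=(to_pong, to_ping, n_msgs))
    t.start()
    t0 = time.perf_counter()
    for _ in range(n_msgs):
        to_pong.put(payload)   # "ping"
        to_ping.get()          # wait for the "pong"
    elapsed = time.perf_counter() - t0
    t.join()
    return elapsed / n_msgs / 2  # one-way latency estimate, in seconds

lat = ping_pong_latency()
print(f"estimated one-way latency: {lat * 1e6:.1f} microseconds")
```

On a real cluster you'd run the equivalent MPI benchmark between two nodes and watch the latency (and bandwidth, for larger payloads) decide which of the two categories the interconnect falls into.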
Link to the Linux source
Link? The Top 500 website. Click on every single computer in the top (and about 98% of the entire Top 500 list) and you'll see:
Be it a single OS or thousands of Linuxes, they still need to communicate very effectively to grab 98% of the entire Top 500 list.
It's not like someone could replace all these Linuxes with, say, the ATARI 512's TOS, and still grab 98% of the entire Top 500 list.
"Its not a single instance of the OS you know"
I've not seen any information about the detailed architecture. Is it a ccNUMA type? In that case the number of OS instances could be far fewer than 17,000.
Yes, but Todd Rundgren is correct.
These supercomputers are basically a large cluster on a fast switch. You just add a new PC to the network and, voila, you have increased performance. So it has nothing to do with scalability in the sense of one large SMP computer, such as an IBM POWER 795 with as many as 32 POWER7 CPUs, or an Oracle Solaris M9000 server with as many as 64 CPUs.
When we talk about one single large SMP computer, Linux is never run on them, because Linux scales badly vertically. Linux scales to ~32 cores or so on one large server.
Linux scales excellently in a large cluster with lots of PCs (good at horizontal scaling), but extremely badly on one single large server (vertical scaling). Linux's merit is on a large cluster; Google runs a large cluster of Linux servers. There exists no Linux server with as many as 32 CPUs. But there exist large supercomputers which are basically a cluster, for instance the SGI Altix with 1024 cores - which is just a bunch of blade PCs on a fast network.
"which is just a bunch of blade PCs on a fast network."
Strange then that it runs just one copy of Linux per 2048 cores
Sure it runs 2048 cores, just as the SGI Altix server does. But it's just a bunch of PCs on a switch.
Let me ask you: have you thought about this?
IBM's biggest Unix server, the P795, has 32 CPUs
IBM's biggest mainframe, the z196, has 24 CPUs
Oracle's biggest Unix server, the M9000, has 64 CPUs
HP's biggest Unix server (Integrity?) has 64 CPUs (I think)
And they fiercely fight for benchmarks. IBM was so proud of their P595 TPC-C benchmark world record. Why can't IBM simply put 64 CPUs in the P795? Why did IBM have to rewrite the old and mature enterprise AIX that had run on big Unix servers for decades when the P795 was to be released? The P795 has 256 cores, and that was too much for AIX to handle. The earlier P595 Unix server had 128 cores, which was manageable by AIX. Why doesn't IBM put in 64 CPUs? Or even 128? Are there some difficulties when you don't do clusters?
Why does Linux stutter on SMP servers with 32-48 cores?
The SGI has a single system image running on 2048 cores, not 2048 copies of Linux.
Why should this be used in commercial chips?
The oomph of the SPARC64 VIIIfx comes from a custom-designed, HPC-oriented vector instruction set called HPC-ACE. The scalar components are similar to slightly tweaked, cache-starved, low-clocked versions of the existing (slow) SPARC64 VII core. An 8-core SPARC64 VII at 2GHz with a smaller cache doesn't exactly sound like the Holy Grail of commercial computing, so I can't see why they would be "fools", as you said, not to commercialize it.
@"K super consumed 9.89 megawatts"
I have to say that the "K supercomputer at Riken" looks like an impressively big room. Although I pity the poor engineers who have to descend into those dark server tunnels between the racks. It would be best to tie a rope around their middles before they go in, so that if they pass out from the server heat they can be dragged back out again before they cook! ;)
"Near Kobe, Japan"
Must be a Beefy system then.
10 megawatts? Pah...! The NSA's latest will pull 60 MW
In fact, they have two...
It will still be dwarfed by Charity Engine.
Try a few multicore Raspberry Pi's
and a few USB hubs, and you could probably do that on a car battery in a couple of years...
Do the poor Japanese have permission to turn it on more than once a year? As a result of the recent disasters they are experiencing power cutbacks. Offices are running aircon at 80 degrees (at a very hot/humid time of year), and factories are changing their working week to flatten out demand too. Turn this beastie on and half the nation's lights will go out.