If you were thinking that coprocessors were going to take over the Top500 supercomputer rankings in one fell swoop, or even three or four, and knock CPU-only systems down to the bottom of the barrel – well, not so fast. While GPU and x86 coprocessors are certainly the main computation engines on some of the largest systems that …
Titan has passed rigorous acceptance testing
Reg quote: The entry of Tianhe-2 at the top has pushed down other systems in the June rankings, of course. The "Titan" XK7 ceepie-geepie at Oak Ridge National Laboratory, which has not been formally accepted by the lab yet,
Yes it has: OAK RIDGE, Tenn., June 12, 2013 — Oak Ridge National Laboratory's Titan supercomputer has completed rigorous acceptance testing to ensure the functionality, performance and stability of the machine, one of the world's most powerful supercomputing systems for open science.
Did you see the number of cores on Tianhe-2?
It says over 3 million, and draws 17MW of power.
I guess what this says is that if you throw in enough hardware, even with the law of diminishing returns, you can have the #1 supercomputer.
I hope the Parallella lives up to its promise
of 18GFLOPS per watt.
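For comparison, a back-of-the-envelope sketch using the figures quoted in this thread (33.86 petaFLOPS and roughly 17MW for Tianhe-2, and Parallella's claimed 18 GFLOPS/W); the function name and numbers here are illustrative, not measured:

```python
# Rough efficiency comparison from figures quoted in this thread:
# Tianhe-2 at 33.86 PFLOPS drawing ~17 MW, vs. Parallella's
# claimed (not measured) 18 GFLOPS per watt.

def gflops_per_watt(flops: float, watts: float) -> float:
    """Convert raw FLOPS and power draw into GFLOPS per watt."""
    return flops / watts / 1e9

tianhe2 = gflops_per_watt(33.86e15, 17e6)  # works out to ~2 GFLOPS/W
parallella_claim = 18.0                    # vendor claim from the comment above

print(f"Tianhe-2:   {tianhe2:.2f} GFLOPS/W")
print(f"Parallella: {parallella_claim:.2f} GFLOPS/W (claimed)")
```

On those numbers, the claimed Parallella efficiency would be roughly nine times that of Tianhe-2, which is the point of the comparison.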
BG/Q - 3D ???
Real serious stupidity
The Top500 has always been an erroneous metric of anything, but the latest sets have really taken the biscuit. The overwhelming majority of consumed cycles can't exploit either Phi or GPU with anywhere near the efficiency needed to justify their inclusion in a system, and yet here we have the largest systems built out of them. They really are in the territory where people should be building small dev clusters, with the odd single-application shop that's already completely ported building large dedicated machines. This batch will be decommissioned before the software catches up.
Oh well, we now know that the Chinese can piss higher - I wonder what money changed hands for the Phis. :)
Re: Real serious stupidity
So I wonder..
What are the seti@home or rosetta clusters capable of in terms of raw flops?
Re: So I wonder..
Those 33.86 actual petaFLOPS from the Milky Way sound pretty aggro.
It's a cycle
<<when memory, fabric interconnect, coprocessors, and likely central processors will all be crunched down to single chip packages, we won't even be talking about coprocessors any more.>>
History shows that there is a cycle: we design some system that plugs into the machine. Then, after some years, we are able to add that functionality inside the CPU, so we move it there.
Wait another ten years and again we have more powerful systems (than the one available in the CPU) on some additional board... everybody starts using them, we make transistors smaller or dies bigger or whatever, and the CPU absorbs this functionality.
You want floating point? Buy the card and plug it into your PDP-11. Oh well, no, add a co-processor, it's in a chip now. Oh wait, let's move the FPU into the CPU. Okay, it's there, but only one FPU? Need many? Buy this external card. Ah wait, AMD have some APUs now. Not powerful enough? Buy a card then, etc. etc.