Just think of the computing power gain available when, come 2012, Intel's 50-core+ "Knights Corner" co-processors become available. Looks like Moore's law still has legs.....
Putting a petaflops into a single server rack isn't as difficult as the Defense Advanced Research Projects Agency had thought. Back in March, DARPA — the research arm of the US military that brought you the Internet — put out a call to all nerds in the Ubiquitous High Performance Computing program to come up with an …
If SGI keeps messing around, they will have to file for bankruptcy again, this time for good... Dawning Inc in China has started making supercomputers which are not only cheaper and faster but burn half the electricity of anything SGI can offer. El Reg, please note: speaking about a single-precision supercomputer is like speaking about a Lilliputian giant; it's a contradiction in terms.
So the above-mentioned Dawning Inc is beating SGI right now by offering NVidia Fermi GPUs attached to nice x86 blades. Fermi is 3 to 10 times faster in double precision than ATI's GPUs; indeed, ATI's GPU architecture is unsuitable for general-purpose computing to begin with. The funny part is that Dawning also offers a standard InfiniBand interconnect; apparently SGI got a clue from them.
So while SGI is trying to figure out how to politically please its x86 masters, it may disappear altogether... as so many others have. In the meantime, cheap, small and economical supercomputers are already available in China and Japan.
No real mention of the power budget for this thing.
It's *very* smart of them to have multiple supplier paths to achieve a shippable solution.
At least one of those manufacturers *should* be able to deliver on their roadmap. I hope they all will. This seems like a good incentive.
As for the management software to make *effective* use of the hardware that's another story.
Moore's law *may* have some life left in it. Amdahl's law seems like it's a *lot* tougher to crack.
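To see why Amdahl's law is the tougher nut: it caps total speedup by the serial fraction of the work, no matter how many cores you throw at it. A quick back-of-the-envelope sketch in Python (the 95%-parallel figure is just an illustrative assumption, not from the article):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n_cores)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# Even if 95% of the work parallelises perfectly, a million cores
# can never deliver more than a 20x speedup over a single core:
print(round(amdahl_speedup(0.95, 1_000_000), 1))  # -> 20.0
```

So shrinking transistors keeps giving us cores, but the serial 5% quietly puts a ceiling on the whole machine.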
If DARPA are right and 1 petaflops is what is needed to deliver *human*-level processing power, this could be the start of *real* AI research.
5 decades and a couple of billion dollars in.
It is one thing to make a computer with a theoretical performance of 1 petaflops and another to make actual use of that computing power. My (limited) experience with GPU programming shows me that we are not there yet. The peak numbers are impressive, but in practice that performance is hard to achieve.
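The gap between peak and achieved is easy to estimate with the roofline model: achievable throughput is the lesser of the compute peak and what the memory system can feed. A minimal sketch, with entirely made-up hardware numbers for illustration:

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline model: a kernel runs at min(compute peak,
    memory bandwidth x arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical GPU: 1000 GFLOP/s peak, 150 GB/s memory bandwidth.
# A vector add does 1 flop per 12 bytes moved (2 reads + 1 write of 4-byte floats),
# so it is hopelessly memory-bound:
print(attainable_gflops(1000, 150, 1 / 12.0))  # -> 12.5 GFLOP/s, ~1% of peak
```

Unless an application keeps its arithmetic intensity high (lots of flops per byte fetched), the glossy peak number on the spec sheet is simply unreachable.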
The biggest problem with parallel computers so far is, on the one hand, how to write effective software for them and, on the other, the time required to do it. So if we take a current-day super-duper 1000-core machine and it takes three years to properly write management s/w and tune compilers and applications in order to fully exploit its performance potential, then... I guess we have missed our target.
So SGI trying to mix and match such a number of technologies, without saying what the tools will be or how mature, is problematic to say the least. Maybe not for them, if they are paid with DARPA money, but for everyone else trying to make actual use of the machines.
On the other hand, I believe that at some point we'll have to revisit the hardware-accelerator approach in order to get a certain performance level at a certain power budget. FPGAs are moving fast, maybe faster than was believed, and I believe we can see a clear alternative here. Maybe an FPGA cannot provide the peak performance numbers quoted for GPUs, but it can provide more acceleration, as it can pack more integrated functionality (e.g. better data handling, moving, processing). With hardware description languages and dev environments able to produce hardware from typical high-level software descriptions, things are looking better.