PR Opportunity
Just reconfigure it to be good at the LINPACK tests.
Which is the most powerful computer in the world? Easy: the Tianhe-1A at Tianjin in China. Just ask the chaps at the TOP500 supercomputer list – or President Barack Obama. "Just recently, China became home of ... the world’s fastest computer," said the prez in his latest State of the Union speech. That makes the high …
...not news?
I couldn't comment authoritatively on why it doesn't do well in LINPACK, but it may be that the parallelizable components of the test scale well to the many thousands of cores in the top supercomputers, as opposed to the 200-odd in this machine.
If they'd reconfigured their machine to make a (proportionally) bigger dent in LINPACK with the same number of cores as the Chinese super machine, this would be interesting. But there's no substance to this hot air; it's effectively just a whinge that the benchmark doesn't suit the architecture...
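To put a number on that whinge (my own toy figures, nothing to do with either machine's real benchmark runs): Amdahl's law says a highly parallel workload rewards sheer core count, so 200-odd cores simply can't make the dent that a hundred thousand can:

```python
# Amdahl's-law sketch (illustrative numbers only, not anyone's benchmark):
# speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n cores.
def speedup(p, n):
    """Ideal speedup of a workload with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# LINPACK is famously parallel-friendly; assume p = 0.999 for illustration.
p = 0.999
print(f"200 cores:     {speedup(p, 200):8.1f}x")
print(f"100,000 cores: {speedup(p, 100_000):8.1f}x")
```

With those made-up numbers the big machine wins by a factor of six purely on core count, whatever the per-core cleverness.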
.. my testis better than your testis.
Which brings us back to the basics of the exercise: dick waving.
I happened to notice that the yanks are really fond of having "the first/best $IMPLEMENT in the world". Cue Philadelphia being "the first city in the world to have free running water" (yeah right, ancient Rome might have something to say about that) and having "the world's most beautiful murals" (don't even know where to start with this one), Boston being "the home of the first computer" (ha-ha, interesting claim), etc etc...
Pick any tourist-oriented leaflet in any city in the US: they're all full of such wild (and very amusing) claims. It almost looks like "the world" means "anywhere I can get within a two-hour drive" for some.
I'm not too impressed even by the Chinese machine. Anyone can build a bigger computer by wiring ever more small computers together.
The meaningful measure of computer power isn't throughput, but latency. That's the real bottleneck to solving any given problem quickly, and one that can't always be cured by throwing more hardware at it.
So, if someone could make a processor that runs at 10 GHz, and returns the result of a double-precision floating-point divide one cycle after being given the numbers to be divided, I'd be impressed. Even if it's hideously expensive and draws a ruinous amount of power.
Just one of those, handling the serial bottlenecks in a program while a massive pile of ARM or Atom processors chews through the easy, parallel parts, could get you your answers significantly quicker.
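A back-of-the-envelope sketch of that idea, with all the numbers invented for illustration: split the work into a latency-bound serial chunk and a perfectly parallel chunk, and see what one 5x-faster serial unit buys you:

```python
# Toy model (invented figures): work = serial part + parallel part.
def runtime(serial, parallel, n_cores, serial_speedup=1.0):
    """Serial part runs on one (possibly faster) unit; the parallel
    part spreads perfectly across n_cores cheap cores."""
    return serial / serial_speedup + parallel / n_cores

base = runtime(10, 90, n_cores=1000)                     # 10 + 0.09 = 10.09
fast = runtime(10, 90, n_cores=1000, serial_speedup=5)   #  2 + 0.09 =  2.09
print(f"{base:.2f} -> {fast:.2f}  ({base / fast:.1f}x faster)")
```

Past a certain core count the serial bit dominates, so one expensive low-latency unit helps more than another thousand Atoms.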
FPGA-based reconfigurable computing is as old as the hills. I remember an article in Byte Magazine circa 1990 about how you could make machines go zoom with PAL components.
Yes, it works; the problem is the tooling that maps your application onto the (modern) FPGA array. Not so easy.
I suspect the reason this machine is faster at some problems is that if your problem is genome-related, what you really need is a 2-bit integer machine, not a 64-bit floating-point monster (which is what LINPACK wants). The floating-point capability would be completely wasted on such a problem. That is the kind of thing the FPGA is good at. Most supercomputer designs focus on floating point, since that seems to be what most work requires.
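For illustration (my own toy code, nothing to do with any real genomics pipeline): DNA bases need only 2 bits each, so four of them pack into a byte with pure integer bit-twiddling and not a float in sight:

```python
# Toy 2-bit DNA packing: A=00, C=01, G=10, T=11 (my own arbitrary encoding).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into one integer, 2 bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | CODE[base]
    return value

def unpack(value, length):
    """Recover the DNA string from a packed integer."""
    bases = "ACGT"
    return "".join(bases[(value >> (2 * i)) & 0b11]
                   for i in reversed(range(length)))

packed = pack("GATTACA")   # 7 bases fit in just 14 bits
print(bin(packed), unpack(packed, 7))
```

Comparing two packed sequences is then a handful of XORs and shifts, which is precisely the sort of narrow integer datapath an FPGA eats for breakfast and a 64-bit FPU wastes silicon on.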
The CHREC kids aren't bragging: Reconfigurable processors really *are* wildly better than von Neumann architectures at some classes of computation. Floating point isn't one of those, but integer arithmetic (and every algorithm that can be based on it) sure is.
As an earlier commenter pointed out, the pain lies in the "reconfigurable" part - mapping the algorithm to the device. Many have tried to develop tools that take in an algorithm expressed in a high-level language (usually C) and spit out config files for the target device, but none have really succeeded. It still takes some human cleverness to puzzle out a good FPGA implementation.
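As a hand-waving software stand-in for the sort of pure-integer kernel that maps nicely onto FPGA fabric: a branch-free population count built entirely from shifts, masks and adds - exactly the operations an FPGA implements as raw logic rather than as instructions fetched one by one:

```python
# Classic branch-free popcount for a 32-bit word (standard bit-twiddling
# trick, shown here in Python purely as illustration).
def popcount32(x):
    """Count set bits using only shifts, masks and adds."""
    x = x - ((x >> 1) & 0x55555555)                    # pairs of bits
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)     # nibbles
    x = (x + (x >> 4)) & 0x0F0F0F0F                    # bytes
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24       # sum the bytes

print(popcount32(0xFFFFFFFF), popcount32(0b1011))  # 32 3
```

On an FPGA the whole thing collapses into a small adder tree with single-cycle latency; on a CPU it's a sequence of dependent instructions. That gap is where the reconfigurable crowd earns its speedups - once a human has done the mapping.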
An Alien for a surprisingly still-alien technology, despite having been around forever.