Minor quibble
The GPU in the iPad 2 was rated at about 19 GFLOPS (see http://www.anandtech.com/show/6426/ipad-4-gpu-performance-analyzed-powervr-sgx-554mp4-under-the-hood), which puts it roughly in line with desktop cost per MFLOPS.
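A quick back-of-envelope check of that claim. The 19 GFLOPS figure is from the linked article; every price and the desktop throughput below are assumptions for illustration, not quotes:

    # Cost per MFLOPS; all prices here are assumptions.
    ipad2_gflops = 19           # rated GPU throughput from the AnandTech link
    ipad2_price_usd = 499       # assumed price of a base iPad 2

    desktop_price_usd = 1000    # assumed whole-desktop price
    desktop_gflops = 50         # assumed CPU throughput of a contemporary desktop

    def usd_per_mflops(price_usd, gflops):
        """Dollars per MFLOPS: 1 GFLOPS = 1000 MFLOPS."""
        return price_usd / (gflops * 1000)

    print(f"iPad 2:  ${usd_per_mflops(ipad2_price_usd, ipad2_gflops):.4f}/MFLOPS")
    print(f"Desktop: ${usd_per_mflops(desktop_price_usd, desktop_gflops):.4f}/MFLOPS")

With those assumed numbers the two land at roughly $0.026 and $0.020 per MFLOPS, which is the "about in line" being claimed.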
If you need lots of compute power, you’re either already using GPU accelerators or taking a close look at them. And if you’re serious about tapping into the full processing power of the GPU, perhaps this gear will fit the bill. Dubbed the ioMillennia, it’s a 3U, PCIe Gen3 switch from One Stop Systems that can handle sixteen …
Bitcoins mined per unit time divided by total cost per unit time (in Bitcoins, natch) to run the system. The capital costs can be amortized over an arbitrary period, say one Moore period (18 months or a year, feel free to argue). It should come out as a dimensionless ratio.
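A minimal sketch of that ratio, with every input assumed purely for illustration:

    # Dimensionless mining-efficiency ratio: BTC earned per day / BTC spent per day.
    # Every number below is an assumption, not real data.
    btc_mined_per_day = 0.5         # assumed mining output of the system
    capital_cost_btc = 150.0        # assumed purchase price, converted to BTC
    moore_period_days = 18 * 30     # amortize capital over one "Moore period"
    power_cost_btc_per_day = 0.05   # assumed electricity cost, converted to BTC

    cost_btc_per_day = capital_cost_btc / moore_period_days + power_cost_btc_per_day
    efficiency = btc_mined_per_day / cost_btc_per_day  # dimensionless; >1 means profitable

    print(f"efficiency = {efficiency:.2f}")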
Downside: it may change over time, since the absolute cost to mine Bitcoins is itself a function of time in the long run.
Also, power costs vary with time and place.
Hmmm... forget about it.
This box takes another step towards commodity HPC. A lot of businesses could afford to put a box like this under their analysts' desks, giving them serious compute firepower. The question, of course, is what do you do with that? At the moment you need advanced programmers to make use of this kind of hardware, at least if you're doing a task custom to your business, as opposed to, say, running Folding@home.
You can imagine a kind of high-end spreadsheet that puts all this compute firepower in an easy-to-use package. When someone invents that, all these data analysts will really be able to use the HPC boxes. This would greatly benefit all kinds of financial analysis. And can you take it further? Retail sales data? Industrial sensor data?
Yes, but the problem is getting that retail sales data or industrial sensor data into and out of the GPUs.
You still have to have a very capable system that can read the data from disk and display the results.
Just throwing lots of FLOPS at a problem isn't the total solution.
To analyse large amounts of data you need to get them into that box quickly, and that's not going to be easy. You'd need lots of very local RAM.
So it's very questionable whether you can do "Big Data" on such a box. Simulations are a better fit, but even they will eventually be bandwidth-starved.
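A rough sketch of that bandwidth-starvation argument, assuming a PCIe Gen3 x16 link (~15.75 GB/s usable) and a ~4 TFLOPS card; both figures are assumptions:

    # How much reuse each streamed value needs before the GPU goes idle.
    pcie_bytes_per_s = 15.75e9    # assumed usable PCIe Gen3 x16 bandwidth
    gpu_flops = 4.0e12            # assumed per-card single-precision throughput
    bytes_per_float = 4

    floats_in_per_s = pcie_bytes_per_s / bytes_per_float
    reuse_needed = gpu_flops / floats_in_per_s
    print(f"Each float coming over the link must feed ~{reuse_needed:.0f} operations "
          "to keep one card busy.")
    # ~1000 FLOPs of reuse per streamed float: plausible for dense simulation
    # kernels, hopeless for scan-once "Big Data" queries.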
BTW, there already is a simple-to-use interface for processing large amounts of data. It's called SQL, and it was designed so that anyone with half a brain can learn it and create astonishing analyses.
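For instance, a toy version of that kind of analysis, run through Python's built-in sqlite3 on a made-up sales table:

    # Revenue per store in one declarative statement, no GPU required.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (store TEXT, product TEXT, qty INTEGER, price REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
        ("north", "widget", 10, 2.5),
        ("north", "gadget",  3, 9.0),
        ("south", "widget",  7, 2.5),
    ])

    for store, revenue in con.execute(
            "SELECT store, SUM(qty * price) AS revenue "
            "FROM sales GROUP BY store ORDER BY revenue DESC"):
        print(store, revenue)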
This would top the June 2004 Top500 list. Basically, 9 years ago this would have been the fastest machine on earth. Not just the fastest production machine, the fastest machine, full stop. The top one at that time was the Earth Simulator in Japan, at just under 41 TFLOPS and burning 3.2 MW of power.
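The back-of-envelope behind that claim, assuming the box carries 16 accelerator cards at roughly 4 TFLOPS each (the per-card figure is an assumption):

    # Would a fully loaded box have topped the June 2004 Top500?
    cards = 16
    tflops_per_card = 4.0           # assumed peak for a then-current accelerator card
    box_tflops = cards * tflops_per_card

    earth_simulator_tflops = 40.96  # Earth Simulator peak, June 2004 Top500 #1
    print(f"box: ~{box_tflops:.0f} TFLOPS vs "
          f"Earth Simulator: {earth_simulator_tflops:.1f} TFLOPS")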
The comparison should be like-for-like on workloads. FLOPS is an interesting base for comparison, but it is just that: a base. The power cost of the whole system should be factored in. And if you need Peta-FLOPS of computing power, it might become a real head-scratcher as to how you can do that with commodity hardware today.
Long term, actually owning any of this hardware is going to be too expensive for the calculations that "always manage to outgrow the available hardware", but getting a price for, say, 1000 Peta-FLOPS for 100 days may soon become a reasonable possibility. Isn't this where Google is aiming to be? Could be mucho millions in it from the scientific community if they, or anyone else, can deliver.
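A rough sense of the scale involved, with the per-box throughput and the rental rate both assumed for illustration:

    # How big is "1000 Peta-FLOPS for 100 days"? All inputs are assumptions.
    target_pflops = 1000
    days = 100
    box_tflops = 64                  # one 16-card box at an assumed ~4 TFLOPS/card
    boxes_needed = target_pflops * 1000 / box_tflops  # PFLOPS -> TFLOPS

    rate_usd_per_box_day = 200       # assumed all-in rental rate per box per day
    total_usd = boxes_needed * days * rate_usd_per_box_day
    print(f"{boxes_needed:,.0f} boxes for {days} days ≈ ${total_usd:,.0f}")

With those assumptions it comes out around 15,625 boxes and several hundred million dollars, so "mucho millions" is about right.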
I thought one of the issues with a "generic" way of doing things is I/O bottlenecking: even if the very hot and fast cards can interface with the system well, a lot of bandwidth is needed to get the data in and out of the system.
However, the fact that things are so powerful now is very interesting, and slightly unnerving. That said, no matter how underpowered a Cray-2 is, I want one because they look really COOL!
now it's a 3U system, so you could potentially cram 14 into one rack... one very hot, power-hungry rack...
and maybe 5 racks per row, 5 rows in a reasonable office datacenter... hmmm
oh, and the power station next door to make it work... (back-of-envelope below)
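The back-of-envelope for that riff; the per-box wattage is an assumption:

    # Rack math from the comment above, plus an assumed power draw.
    u_per_box, rack_u = 3, 42
    boxes_per_rack = rack_u // u_per_box   # 14 boxes in a 42U rack
    racks = 5 * 5                          # 5 racks per row, 5 rows
    watts_per_box = 5000                   # assumed: 16 loaded GPUs plus host and switch

    total_mw = boxes_per_rack * racks * watts_per_box / 1e6
    print(f"{boxes_per_rack * racks} boxes drawing roughly {total_mw:.2f} MW")

At ~1.75 MW for 350 boxes, the power station next door isn't entirely a joke.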
you could probably reuse the heat generated to heat the building/water/small community.
"you could probably reuse the heat generated to heat the building/water/small community."
Back to my childhood: Liverpool University used to heat a swimming pool with heat from a mainframe (one of those with big cabinets with tape spools in) in the late '60s/early '70s. Happy times.