IBM's Cell attack will gain some added muscle next month thanks to a new blade server. The system will run on a refreshed version of the Cell chip that includes better support for mathematical calculations and memory. As a result, the Cell-based blades could tempt a larger set of customers. The QS22 blade serves as a follow-on …
Less processing power than my GeForce
I get ~500 GFLOPS out of my Nvidia GeForce graphics card, with faster (GDDR3) memory. How much do they want for a Cell blade, I wonder? A lot more than my graphics card, I suspect, and I'm a generation behind: the 9800GX2 will do over 1 TFLOP. Of course you need an algorithm that can be parallelised to get that, but using the CUDA SDK it's pretty easy to write code. It's not Occam on a T800 hypercube, but it is much faster, if not quite as much fun.
Apples and Oranges
Nvidia GPUs (ATI's too, AFAICT) are very good for highly parallel algorithms, as befits their SPMD nature. But a GPU cannot handle MPMD problems as well as a Cell can. The Cell is inherently an MPMD processor: the SPEs can each run a different kernel, and inter-SPE communication is fully supported.
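The SPMD/MPMD distinction can be sketched in ordinary Python (a toy illustration only, not Cell or CUDA code): SPMD runs one kernel over many data items, while MPMD runs different kernels side by side.

```python
from multiprocessing import Pool

def square(x):
    # a single "kernel" applied to every data element
    return x * x

def cube(x):
    # a second, different "kernel"
    return x ** 3

if __name__ == "__main__":
    with Pool(2) as pool:
        # SPMD (GPU-style): the SAME kernel over different data elements
        spmd = pool.map(square, [1, 2, 3, 4])
        # MPMD (Cell-style): DIFFERENT kernels running concurrently,
        # as each SPE can run its own program
        a = pool.apply_async(square, (3,))
        b = pool.apply_async(cube, (3,))
        mpmd = (a.get(), b.get())
    print(spmd, mpmd)  # [1, 4, 9, 16] (9, 27)
```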
Also, within the Cell you have more than adequate bandwidth between the PPU and the SPUs, while on a GPU the PCIe bus limits bandwidth between the CPU and the GPU; and while "bandwidth within the GPU" can apply, exploiting it is very tricky.
You might profitably compare a PC with 8 GPU cores to a Cell with its 8 SPUs. In that case one may find ways around the difference between the Cell's internal bus bandwidth and PCIe bandwidth. But that's still apples and oranges because of the different nature of a GPU core and an SPE.
I've programmed both. If Cell machines were not so stupidly expensive (8800GT = $200 CDN, PS3 = $400 CDN, real Cell computer = $10K+ CDN), I cannot see how GPUs could compete. I do not include Tesla because it is really just a GPU with a lot more RAM - same PCIe bus.
You have to consider the nature of the problem when comparing processors. A Cell's raw throughput is lower than a GPU's, but GPUs are rather restrictive about the algorithms they can be used with.
I really wish there were such a thing as a standard tower-form-factor PC built around a Cell (or a pair). Maybe ditching Rambus will help this happen? That all the Cell computers other than the PS3 come in the "enterprise blade or 1U rack form factor, with attendant high price" category is unfortunate; it keeps the cost up. Perhaps it is time for another attempt on the x86 market by PPC?
LOL. But I'll keep looking for it nevertheless.
How about a dual processor Cell PC with 4 x16 PCIe slots stuffed with 8800's or 3870's? Beginning to see the difference btwn the two kinds of processors yet? ;-)
And to be fair, I cannot see why a really massive PCIe card could not be made with N > 8 GPUs on it and a really decent inter-GPU bus. It would require its own idiosyncratic programming API, but neither GPUs nor Cells are programmed like general-purpose CPUs anyway. Nvidia might hate it, but IBM's ALF and DaCS would probably apply and make life easier for the people needing to (learn how to) program such a card.
Well, that's my $0.02 anyway.
I agree with all of E's points. Plus, let's not forget that your GeForce is eating up power by the kWh, whereas the Cell is pretty tame in its power consumption - and therefore also in its thermal footprint. That is a major concern in data centres.
"IBM can support 16 times as much memory in its systems (up to 32GB) thanks to a shift away from Rambus to DDR2"
WTF? I'm no great fan of Rambus, but it has no inherent 2GB limit - it must surely be an implementation issue or something...
Need volume sales to get the price down.
If someone made a physics copro board with a cell we could get loads quite cheap....
financial services type performing large calculations?
Real financial software steers clear of floating point representation for money values because pennies in the pound can't be represented exactly. Besides, the burden of the actual arithmetic is small alongside the processing of business rules.
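The point about pennies can be seen directly in Python (a minimal illustration): 0.10 and 0.01 have no exact binary floating-point representation, whereas decimal arithmetic keeps them exact.

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly:
print(0.10 + 0.20)                  # 0.30000000000000004
print(0.10 + 0.20 == 0.30)          # False

# Decimal arithmetic keeps pennies in the pound exact:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```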
>> Real financial software steers clear of floating point representation for
>> money values because pennies in the pound can't be represented exactly
"Real financial software" is meant to refer to the large mathematical models for derivatives pricing (stochastic calculus, curve fitting, monte carlo simulations etc) so uses extensive floating point but all based on doubles whereas the original Cell only provided floats.
Penny accuracy only counts in final P+L-type calculations (think "balance my cheque book"), which are hardly going to consume my grid of thousands of compute nodes - which is handy, as it's busy calculating risk for all my positions.
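The floats-versus-doubles point can be demonstrated by round-tripping a value through IEEE-754 binary32, which is what single precision gives you (a minimal sketch using the standard struct module; the value is just an example):

```python
import struct

def to_single(x):
    # round-trip through IEEE-754 binary32 (single precision)
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1234.56789
print(x)             # Python floats are binary64, i.e. doubles
print(to_single(x))  # only ~7 significant digits survive single precision
```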
"...pennies in the pound can't be represented exactly"
They can in BCD floats :-) (IIRC, some current IBM CPUs support them)
Or are you thinking of LSD? Ah, the pleasure that came of re-discovering the hack that allows mixed-radix calculations in a single register.
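Assuming "LSD" here means pre-decimal £sd (pounds, shillings, pence - 12d to the shilling, 20s to the pound), the mixed-radix arithmetic can be sketched like this (a plain version, not the single-register hack):

```python
def lsd_to_pence(pounds, shillings, pence):
    # £sd is mixed radix: 20 shillings to the pound, 12 pence to the shilling
    return (pounds * 20 + shillings) * 12 + pence

def pence_to_lsd(total_pence):
    shillings, pence = divmod(total_pence, 12)
    pounds, shillings = divmod(shillings, 20)
    return pounds, shillings, pence

# £1 19s 11d plus one penny carries all the way up to £2 0s 0d
print(pence_to_lsd(lsd_to_pence(1, 19, 11) + 1))  # (2, 0, 0)
```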
What is LSD? Other than the obvious answer? Can you give some references? Google tells me about drugs...