Dell has beaten a 56-node Isilon system's file-serving benchmark performance with just eight nodes - and flash file access punch. The benchmark is the SPECsfs2008 NFS file-serving one and Dell's Compellent system achieved 494,000 IOPS with 8 nodes, each having 12 fast SLC flash drives and 120 slower MLC flash drives; an all- …
Comparing an all-SSD config to a disk-based one is a little off.
How would an 8-node all-SSD config of Isilon S400s go, I wonder?
"That system was equipped with 1,288 x 300GB SAS 10K disk drives and 56 x 200GB SSDs"
Re: It did.
Still not comparing like-with-like: Compellent had 144 SSDs vs 56 for the Isilon. And two years ago, per-drive IOPS on flash was lower than it is today, AFAIR.
Which is the problem with a lot of these SPEC benchmarks. They tend not to be updated frequently as it takes significant resources to run them, and vendors often compare their latest generation product to one several years old.
It's about the bottom line...
...in terms of price/performance/consumption (of space, energy, cooling etc.) Rest of it is interesting but differences in architectures certainly will not invalidate the comparison.
How does a benchmark on a system from 2+ years ago, a few models older and a few generations of OneFS older, compare to a system running an unreleased OS on an apples-to-oranges disk subsystem?
And how relevant or realistic is this benchmark at all? It's bragging rights, but no one builds a system to match this benchmark workload.
I much prefer $ per IOPS, not node count or disk config or max IOPS in a synthetic benchmark.
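To make the point concrete, here is a minimal sketch of the price/performance metric the commenter prefers. The prices below are made-up placeholders (SPECsfs2008 disclosures don't include cost), and the second IOPS figure is invented for illustration; only the 494,000 figure comes from the article.

```python
# Sketch of a $/IOPS comparison. Prices here are PLACEHOLDERS, not real
# disclosed figures -- SPECsfs2008 reports performance but not cost.
def cost_per_iops(list_price_usd, spec_iops):
    """Return dollars per SPECsfs2008 op/s for a configuration."""
    return list_price_usd / spec_iops

configs = {
    "8-node all-flash (494k IOPS)": cost_per_iops(2_000_000, 494_000),  # assumed price
    "56-node disk-heavy (hypothetical 460k IOPS)": cost_per_iops(3_500_000, 460_000),  # assumed
}

# Rank configurations by cost efficiency, cheapest $/IOPS first
for name, dollars in sorted(configs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${dollars:.2f} per IOPS")
```

The ranking can flip relative to a raw-IOPS comparison, which is exactly why node count or max IOPS alone can mislead.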
don't forget Exanet
The NFS stuff there is Exanet (FluidFS)
The architecture is totally different, Exanet vs Isilon, so comparing them isn't really right; they are built for different purposes (Exanet tech not being anywhere remotely as scale-out as Isilon).
The disclosure results are confusing. Though maybe "4 appliances" actually means "4 pairs of controllers", i.e. 8 controllers total (in which case the disclosures would make sense).
I find it very unfortunate Dell Exanet still hasn't broken through the 24GB RAM/controller setup, especially in this age where memory is so cheap and plentiful. The Exanet nodes I had back in 2008 had 24GB; here we are almost six years later and that number has not changed. I can only assume this limitation is because there are still critical portions of the software that are 32-bit.
Though nice to see Exanet still alive and kicking regardless. It holds a nice spot in my dark heart. I'd certainly consider using it if I could use it with a 3PAR. Much preferred Exanet to HP's Ibrix/X9000.
As usual, too bad SPECsfs has no cost disclosures. Show me some Compellent love on SPC-1.
Re: don't forget Exanet
You can see the last 8-node Exanet test here from 2008 (note 24GB/controller there too)
What would happen if . . .
"Dell's result makes El Reg's storage desk ask itself: what would happen if Dell brought out an even larger, more powerful Compellent array?"
Come on El Reg. The Compellent array isn't the scaling factor here. A more powerful array won't help the 4-node limit or the 1PB max namespace supported by FluidFS. And BTW, it was two SC8000s used in the benchmark.
I just don't get the point of this Dell benchmark. If max performance for a SPECsfs-like use and a few 10s of TBs of capacity were all I needed, I could take a couple of beefy servers, add a nice complement of I/O, RAM and flash, turn on RAID 10 as in the benchmark, and dispense with that mess of servers, FC SAN and SC8000s altogether.