IBM has blasted the SPC-1 benchmark into the stratosphere with a flash-equipped POWER6 server. The Power 595 server, using 48 cores and 84 STEC flash drives, achieved 300,993.85 SPC-1 IOPS (pdf). The total system cost was $3.2m, or $10.77 per SPC-1 IOPS. IBM held a previous record with its SAN Volume Controller (SVC …
13K per drive
The .pdf says 84 drives at $13,235 each. Then it says 35% discount, so $8603/drive.
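A quick sanity check of that discount arithmetic in Python, using the list price and discount cited from the .pdf:

```python
# Net per-drive price after the 35% discount cited in the report.
list_price = 13_235            # USD list price per STEC drive
discount = 0.35
net_price = list_price * (1 - discount)
print(f"${net_price:,.2f}")    # → $8,602.75, i.e. roughly the $8,603/drive above
```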
That is absolutely insane. Who is going to pay even 1/2 of that?
This is the crown jewel example of benchmarks gone wild/wrong.
-Paris, because I might pay $8.6K per pop.
Re: Who is going to pay even 1/2 of that?
Q: Who is going to pay even 1/2 of that?
A: Quite a few people with large SAP installations who have the p595 anyway. More than one would expect at a first glance. FAR more.
Math error ?
84 x 70 GB does not equal 58 TB; try 5.88 TB.
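Spelling out the corrected capacity arithmetic:

```python
# Raw capacity of the tested drive set (advertised capacity per drive).
drives = 84
gb_per_drive = 70
total_gb = drives * gb_per_drive
print(total_gb, "GB =", total_gb / 1000, "TB")  # → 5880 GB = 5.88 TB
```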
Do the math...STEC SPC-1 numbers suck compared to HDD
Whew...just unbelievable how the SSD spin-machine continues to hide the hideously ugly economic realities of Flash based SSD.
1) STEC Dollars/IOP is 200%-500% higher than other industry-leading HDD based systems (try starting with 3-Par and work through the list).
2) Cost/GB is $1,125; that's 50 to 100 times higher than the HDD systems in SPC-1.
Wake-up call...SSD works when IOPS are CHEAPER than HDD -- that's how we justify the high cost of SSD CAPACITY vs. HDD. Who is going to buy STEC now that we've seen conclusive evidence that BOTH iops AND GBytes are many times more expensive than HDD?
3) STEC Cost/IOP is 15x higher than DRAM-based SSD, and DRAM based SSD only costs 10% more per GByte than STEC.
Wake-up call...who is going to buy volatile DRAM cached Flash SSD for $1,100/GByte and $11/IOP when they can get mirrored and battery-backed persistent DRAM SSD for $1,200/GByte and $0.60/IOP?
Also of note, the STEC drives are advertised to do 45,000 (4k) IOPS, and in the SPC-1 test these drives only do 3,500 at 8K -- even though the volatile DRAM write caches were enabled on the STEC units. Now...for many years HDDs have generally delivered MORE than their advertised IOPS/disk on SPC-1, while STEC delivers only a tiny fraction of advertised IOPS/disk. What's up with that?
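Taking the headline figures in this thread at face value (the ~$3.24m tested price is an approximation from the article, not an audited number), the ratios work out as:

```python
# Back-of-the-envelope check of the ratios quoted in this thread.
# The ~$3.24m tested-system price is an approximation from the article.
total_price = 3_241_000        # USD, tested-configuration price (approx.)
spc1_iops = 300_993.85         # reported SPC-1 IOPS
drives = 84                    # STEC SSDs in the tested config

price_per_iops = total_price / spc1_iops
iops_per_drive = spc1_iops / drives
print(f"${price_per_iops:.2f}/IOPS")        # → $10.77/IOPS
print(f"{iops_per_drive:,.0f} IOPS/drive")  # → 3,583 IOPS/drive vs 45,000 advertised
```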
Finally, contrary to the myth propagated by Flash SSD makers (and repeated here in this article), the vast majority of SPC-1 results, including 3-Par, are NOT short stroked. A quick check of the benchmark results is all it takes to dispel that nonsense.
SPC-1 Price/IOPS rankings -- STEC in 17th place behind HDD arrays
Here it is...
Looks like Microsoft UK Researchers have had it exactly right all along about the nonsense economics of Flash SSD -- they absolutely nailed it.
Uh, this is not a storage array
... it's a locally attached set of drives. It actually looks like 14 such arrays, each with 6 SSDs. So while the SPC has allowed DAS before, it has always been confined to a single enclosure, not 14.
Also note that the report is classified as 'Submitted for Review' and not 'Accepted'.
So they get a result of 301k, vs the TMS value of 291k. The real kicker is that the latency of the TMS at 100% load is 0.86ms vs 4.75 for the IBM system. The IBM's latency is closer to a traditional array like the HDS 2500 than it is to the ram/ssd/flash based systems like the TMS RamSAN 400
And oh, there is also the small matter of cost... the TMS cost $200k vs $3m for the IBM system. Nobody in their right mind would go with the IBM solution. It would be cheaper to get a pair of the TMS systems and stripe them. Bank the extra $2.6M.
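The cost comparison in round numbers, using the prices as quoted in this comment:

```python
# IBM system vs. a striped pair of TMS RamSan-400s, prices as quoted above.
ibm_price = 3_000_000     # ~$3m for the IBM submission
tms_price = 200_000       # ~$200k per TMS system
pair = 2 * tms_price
print(f"Banked: ${ibm_price - pair:,}")  # → Banked: $2,600,000
```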
Well I don't think you understood what this benchmark is all about, and that is not meant in a negative way :). I must admit I didn't get it first either, I was like "what the f***", but after having a look at the FDR I got it.
This is not a storage benchmark. Basically the SSDs are kind of irrelevant.
This is a benchmark that shows what the Virtual IO Servers, which you use to virtualize disks inside a Power server, can push through in I/Os per second.
So if you for example compare this benchmark to the IBM DS5020 benchmark (I use IBM<->IBM so as not to bring any platform religion into this), then the difference is that in this benchmark, what corresponds to the DS5020 control unit, the whole cabling, the switches, and the HBAs on the servers that run the workload are all implemented in virtualization software inside the Power 595.
And seen in that light it's a pretty good result: server-virtualized storage can beat dedicated hardware, and do it with a 5 ms response time.
The actual stack is physical SAS -> virtual server SCSI adapter -> virtual client SCSI adapter.
It could most likely be done faster with 'virtual SAN' NPIV.
I would very much have liked to see the CPU utilization and RAM usage of the VIO server under this benchmark run, as that is what is really interesting.
And as for the price... well hey, the IBM dorks priced the whole machine, load generator and all, into the price. So basically the price listed here is the storage plus a high-end server that would run your whole DB workload. Not just the storage part, as it is in all the other submissions.
Chris Mellor's question and Golden STEC-Eggs
Mr. Mellor opined, and asked:
"Another impressive SVC IOPS feat was the QuickSilver project with one million IOPS achieved by using 40 Fusion-io ioDrive SSDs. Why hasn't IBM used such a configuration in the SPC-1 benchmark?"
Simple...IBM's and EMC's resale profits on the STEC SSDs are roughly 5x higher than on Fusion-io's.
Going with Fusion-io would be like killing the Goose that Laid the Golden STEC-Egg.
How Golden? Well...80 GB of SLC Flash costs $320.00 (need to 2x this to cover write amplification), and the controller BOM cost (STEC or Fusion-io) is in the $200 range. Total BOM cost for either is under $1,000. Meanwhile, the 80 GB Fusion-io drives in HP's TPC-H cost $3,000, while the 69 GB STEC in IBM's SPC-1 costs $13,500.
Simple math: with STEC, there is more than $10,000 of incremental profit margin to split with IBM and EMC.
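The BOM arithmetic, sketched out; all the dollar figures here are the commenter's estimates, not audited costs:

```python
# Rough BOM vs. selling price, using the estimates in the comment above.
flash_cost = 320 * 2     # 80 GB SLC, doubled to cover write amplification
controller = 200         # controller BOM estimate (STEC or Fusion-io)
bom = flash_cost + controller
stec_price = 13_500      # STEC drive price in IBM's SPC-1 submission

print(f"BOM ≈ ${bom}, margin ≈ ${stec_price - bom:,}")  # → BOM ≈ $840, margin ≈ $12,660
```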
STEC's price/performance ratios are now shown (in multiple audited benchmarks) to be absolute crap compared to spinning rust. Therefore, STEC's only workable value proposition is its ability to flood the coffers of storage vendors with SSD hype-cycle cash.
In this respect, Fusion-io just can't compete.
STEC Press Release; investors fooled by IOPS malarkey?
Splashed all over the financial news sites last week was a STEC Press Release that says:
"The integration in IBM's Power 595 system, which deploys six STEC ZeusIOPS Solid State Drives (SSDs) within each expansion drawer, achieves an unprecedented 300,993.85 SPC-1 IOPS".
Ok...investors read this and see six STEC SSDs doing 301K IOPS -- 50,000 IOPS per SSD. This aligns quite well (and just oh-so-conveniently) with the numbers STEC states in its SEC filings: 80,000 IOPS (read) and 40,000 IOPS (write).
Naturally, even sophisticated investors conclude that STEC's performance claims vs. HDD are now validated. Therefore STEC SSDs really ARE worth $13,000 each, because $13K/50K IOPS equals a measly 26 pennies per IOP. Since even the best cost/IOP HDDs ring in at somewhere around $1/IOP, STEC's entire market premise is confirmed. STEC investors keep their STEC shares, and probably buy more.
-- Problem is, it required EIGHTY-FOUR SSDs in the test, not six, to get 300K IOPS.
-- Problem is, each STEC SSD only does 3,580 IOPS, about 1/14th of the claimed IOPS.
-- Problem is, STEC cost per IOPS is $13.5K/3.58K ≈ $3.77/IOPS -- not better than HDD,
but many times WORSE than HDD.
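The press-release reading vs. the tested reality, in Python, using the figures quoted in this thread:

```python
# Press-release reading vs. what the full report actually shows (thread figures).
spc1_iops = 300_993.85
price_per_drive = 13_500       # per-drive price cited above

naive_per_ssd = spc1_iops / 6      # "six SSDs per drawer" reading: ~50,166 IOPS/SSD
actual_per_ssd = spc1_iops / 84    # all 84 drives in the tested config: ~3,583 IOPS/SSD

print(f"${price_per_drive / naive_per_ssd:.2f}/IOPS")   # → $0.27/IOPS (looks great)
print(f"${price_per_drive / actual_per_ssd:.2f}/IOPS")  # → $3.77/IOPS (worse than HDD)
```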
Of course, if a STEC investor saw THESE numbers, they might conclude that they should dump STEC now.
Funny thing: STEC's cofounders, the CEO and COO, dumped a quarter-billion dollars' worth of their STEC shares in August.