Fujitsu's Eternus disk array has topped the SPC-2 benchmark charts, with an IBM SVC-Storwize V7000 combination in second place. The SPC-2 storage benchmark measures how an array performs doing large sequential I/Os and looks at three workloads: large file processing; large database queries; and video-on-demand. The benchmarks …
You missed one: The IBM DS8870 high-end disk system has a published SPC-2 benchmark result of 15,423.66 MBPS.
David Sacks, IBM
Why doesn't Pure Storage publish an SPC benchmark?
Unfortunately, we're not allowed to publish SPC benchmarks. The SPC benchmark doesn't allow results from storage arrays that feature deduplication, and in our product, the Pure Storage FlashArray, deduplication and compression can't be turned off (a good thing! We dedupe and compress at such high performance that there's no reason to disable them). Given that deduplication is rapidly becoming a standard base feature of modern all-flash storage devices, SPC risks becoming a benchmark of the passing disk era. In fairness, the SPC folks are reportedly working on this, so we look forward to running SPC if and when it supports deduplication.
Unwittingly, SPC is becoming one of the best flash marketing organizations. It is showcasing the ridiculous cost and lengths one has to go to in order to deliver performance with disk (in the case of the recent benchmark you highlight, an array that costs $1.27M, works out to $5.60/GB raw and $17.85/GB usable, and delivers only around 70 usable TB across 3 full cabinets!).
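The cost and capacity figures quoted above can be cross-checked with a little arithmetic (a minimal sketch; it assumes decimal terabytes, i.e. 1 TB = 1,000 GB, and uses only the numbers from the comment):

```python
# Sanity-check the quoted figures for the benchmarked disk array:
# $1.27M total price, $5.60/GB raw, $17.85/GB usable.
PRICE_USD = 1_270_000
USD_PER_GB_RAW = 5.60
USD_PER_GB_USABLE = 17.85

raw_tb = PRICE_USD / USD_PER_GB_RAW / 1000       # implied raw capacity in TB
usable_tb = PRICE_USD / USD_PER_GB_USABLE / 1000 # implied usable capacity in TB

print(f"raw capacity:    {raw_tb:.0f} TB")
print(f"usable capacity: {usable_tb:.0f} TB")
```

Dividing price by price-per-usable-GB gives roughly 71 TB usable, consistent with the "around 70 usable TBs" claim, against roughly 227 TB raw.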
Most storage architects in this day and age have realized if you are buying storage for performance, all-flash storage is the obvious choice. Pure Storage has focused on making all-flash storage not only fast, but affordable for a majority of data center workloads (databases, VMs, VDI).
Storage Efficiency and SPC Benchmarks
Pure Storage is right - dedupe and compression are requirements of modern storage systems. In fact, we think data efficiency will define the competitive landscape in storage over the next 10 years. Given that, benchmarks that don't incorporate efficiency technologies are useless. It is like running tests without RAID or management overhead - and who would do that today? As an OEM supplier of both dedupe and compression technologies, we take a broader view of data efficiency than Pure does. In our work with storage manufacturers, software suppliers and online service providers, we are helping partners with in-line, post-process, volume-specific and other workload-specific tunings. As these features sweep through the storage industry, performance benchmarks will have to account for these advancements. We look forward to the day when our OEM customers run SPC benchmarks with dedupe and compression enabled on their hardware and show just how fast, resource-efficient, scalable and effective they are!