Another day, another load of benchmarketing, this time from HDS

I had hoped we’d moved beyond the SPC-1 benchmarketing but it appears not. If you read HDS veep Hu Yoshida’s blog, apparently the VSP G1000 is "the clear leader in storage performance against the leading all flash storage arrays." Yet when you look at the list of AFAs Hu uses for that comparison, there are so many missing that …

  1. Dave Hilling

    The rules for SPC-1 actually prevent many of them from even trying. For example, if I remember right, you can't use deduplication or compression, and for some units, such as Pure, you can't turn it off.

    1. Archaon

      "The rules for SPC-1 actually prevent many of them from even trying. For example, if I remember right, you can't use deduplication or compression, and for some units, such as Pure, you can't turn it off."

      That is correct. I believe the concern is that SPC-1 doesn't use real data, so data-reduction algorithms like those could completely bork the results for those arrays.
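
      A tiny illustration of why the data pattern matters for data-reduction results (a generic sketch using Python's zlib, not anything SPC-specific; the byte patterns are made up for illustration):

      ```python
      import os
      import zlib

      # Synthetic benchmark data is often highly regular; real data usually isn't.
      # Data-reduction figures measured on the former say little about the latter.
      synthetic = b"\x00" * 1_048_576    # 1 MiB of zeros: trivially reducible
      realistic = os.urandom(1_048_576)  # 1 MiB of random bytes: barely reducible

      for name, blob in (("synthetic", synthetic), ("realistic", realistic)):
          ratio = len(blob) / len(zlib.compress(blob))
          print(f"{name}: compression ratio ~{ratio:.1f}:1")
      ```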

  2. Nate Amsden

    So what's the alternative?

    Having a level playing field is a good thing, unless someone can come up with a better test than SPC-1. It sure as hell beats the 100% read tests so many vendors like to tout.

    It's not realistic to expect people to bring in a dozen platforms to test with their own apps (even if they can; a big reason I am a 3PAR customer today is that NetApp outright refused me an evaluation in 2006, so I went with the smaller vendor, and I'm happy with the results).

    When my (current) company moved out of a public cloud provider three years ago, we were looking at storage (of course I have a 3PAR background) and considered 3PAR and NetApp at the time. We had *NO WAY* to test ANYTHING. We had no data centers, no servers, nothing (everything was being built new). Fortunately we made a good choice; we didn't realize our workload was 90%+ write until after we transferred over (something I'm very confident the NetApp that was spec'd wouldn't have been able to handle).

    I spoke to NetApp (as an example; I don't talk to EMC out of principle, same for Cisco) as recently as a bit over three years ago, and again they reiterated their policy of not giving out any eval systems (the guy said it was technically possible but it was *really* hard for them to do).

    Last time I met with HDS was in late 2008 and they were touting IOPS numbers for their (at the time) new AMS 2000-series systems. They were touting nearly 1M IOPS... then they admitted that was cache I/O only (after I called them on it; based on the people I have worked for/with over the years, most of them would not have realized this and called them on it).

    So unless someone can come up with a better test, SPC-1 is the best thing I see all around, from a disclosure and level-playing-field standpoint, by a wide margin (it beats the pants off SPECsfs for NFS anyway).

    I welcome someone coming up with a better test than SPC-1, if there is one (and there are results for it) please share.

  3. Archaon
    Mushroom

    Magic

    "I wish we could move away from benchmarketing, magic quadrants and the “woo” that surrounds the storage market. I suspect we won’t any time soon, though."

    I assume you're referring to marketing about benchmarks, rather than benchmarks themselves. If that's the case, that's fine, as benchmarks are absolutely necessary. Even if the SPC benchmarks aren't perfect, they're a lot better than trusting vendors who would say you can get 5 bajillion IOPS out of a 2-bay NAS box if they could get away with it. *

    That said, the Magic Quadrant can go shoot itself in the face while falling into the sun covered in petrol.

    * 100% read and all cached in server RAM, naturally.

  4. Anonymous Coward
    Anonymous Coward

    SPC-1: Size Matters

    The active working set in an SPC-1 benchmark is approximately 6.75% of the configured ASU capacity, which in this case is 30.9TB: 30.9TB x 0.0675 ≈ 2.09TB.

    The HDS system has 2TB configured as system cache (32 x 64GB) plus 2TB (8 x 256GB) of cache flash memory.

    So the entire active working set fits in cache, and the workload is effectively running in memory.
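
    A rough back-of-the-envelope check of that arithmetic, as a minimal sketch (the 6.75% working-set figure and the cache sizes are the ones quoted above; the variable names are just illustrative):

    ```python
    # Working-set vs cache arithmetic from the comment above:
    # 6.75% active working set of a 30.9TB ASU, against 32 x 64GB of DRAM
    # cache plus 8 x 256GB of cache flash memory on the tested configuration.
    WORKING_SET_FRACTION = 0.0675   # per the comment above
    ASU_CAPACITY_TB = 30.9          # configured ASU capacity

    working_set_tb = ASU_CAPACITY_TB * WORKING_SET_FRACTION   # ~2.09 TB

    dram_cache_tb = 32 * 64 / 1024    # 32 x 64GB = 2 TB system cache
    flash_cache_tb = 8 * 256 / 1024   # 8 x 256GB = 2 TB cache flash memory
    total_cache_tb = dram_cache_tb + flash_cache_tb

    print(f"Active working set: {working_set_tb:.2f} TB")
    print(f"Total cache:        {total_cache_tb:.2f} TB")
    print("Working set fits in cache:", working_set_tb <= total_cache_tb)
    ```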

  5. unredeemed

    SPC is a relatively okay benchmark to cite, granted it may not have much real-world applicability for most customers, since they won't be buying a config anywhere near what was benchmarked.

    A better example of a joke benchmark is probably anything from DCIG, or how Symantec uses a special tool called Gen_Data to show dedupe numbers, or how DataDomain metrics are measured with a synthetic data tool as well.

    Looking through the FUD is half the battle!

  6. Mr Atoz
    Childcatcher

    But EMC does do SPECsfs

    "No Pure, no Solidfire, no Violin and obviously no EMC (“obviously” because they don’t play the SPC game)."

    EMC is all over the SPECsfs tests with their silly and unrealistic Isilon lab queens, so how is SPC so different? Aside from the obvious block vs file distinction, that is.

    1. Anonymous Coward
      Anonymous Coward

      Re: But EMC does do SPECsfs

      The problem for EMC with the SPC-1 test is that they need to provide a price for the tested configuration, which in turn means it's real easy to call out a lab queen configuration. Not so for SPECsfs where they can push any unrealistic config to get the result with no disclosure on the actual cost required to achieve that result.

  7. storman

    Avoid benchmarks and evaluate with YOUR workloads

    Very good comments about the problems with “standard” benchmarks like SPC and SPEC. Vendors can play way too many games and half of them simply choose not to play at all. This is why products like Load DynamiX exist. It’s a storage performance validation system that enables you to accurately model your real production applications in a test lab to evaluate any flash or hybrid storage product. The key is that the Load DynamiX load generator emulates YOUR application workloads, not some benchmark that has nothing to do with your production environment.
