All-flash array bake-off: Load DynamiX finds six AFAs go into four

A Load DynamiX six-vendor all-flash array bake-off seems to have involved a short list of just four vendors. Two appear to have gone missing. Load DynamiX has software to enable a storage workload to be modelled and run using potential suppliers’ arrays to test out which ones are best suited to the job. The …

  1. Anonymous Coward
    WTF?

    Okaaaayyyy...

    ... and this is useful how?

  2. Marc 25

    NEWS FLASH

    Bank in big bang for buck flash shocker!

    Translates to:

    Someone ran a proof of concept and then chose the cheapest box anyway. Software vendor brags about how their software helped them make this silly decision.

  3. GrumpyOF

    Can anyone explain----

    the detail in the table:

    specifically, what does K IOPS/Dollars mean? If it said K IOPS/Million Dollars then I would understand--

    Vendor A (as an example) 144/5.9 = 24.4 rounded down = 24 and vendors B and C are 29 and 18 respectively. So how does Vendor D end up at 32?

    Similarly Relative Throughput / K Dollar for Vendor D should be 4.88 not 3.8.

    However if the cost price is not 1 000 000 but rather 1 280 000 then the numbers make sense.

    A further comment is that if the SLA requirements are specified clearly and require 100 K IOPS at a latency of not greater than, say, 2 milliseconds, then the choice comes down to A or marginally B.

    Or just change the SLA requirement to suit the cheapest solution anyway
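    GrumpyOF's arithmetic can be checked directly. A minimal sketch, using only the figures quoted in the comment (Vendor A's 144 K IOPS at $5.9M, and Vendor D's relative-throughput figure) rather than the original chart:

    ```python
    # Checking GrumpyOF's arithmetic. All figures come from the comment,
    # not from the original Load DynamiX chart.

    # Vendor A: 144 K IOPS for $5.9M
    k_iops_per_million = 144 / 5.9
    print(round(k_iops_per_million, 1))  # 24.4 -> "24" when rounded down

    # Vendor D: a relative throughput of 4.88 per $M only matches the
    # chart's published 3.8 if the price was ~$1.28M rather than $1M
    print(round(4.88 / 1.28, 2))  # 3.81, i.e. the chart's ~3.8
    ```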

    1. Anonymous Coward
      Pint

      Re: Can anyone explain----

      Thank you.

  4. Anonymous Coward

    Just so much wrong with this whole 'case study'.

    Not doing yourself any favors, Load DynamiX.

  5. thegreatsatan

    worthless

    Seriously, if people are struggling to comprehend the metrics you are measuring, maybe you should use ones that people do understand, or that the industry as a whole recognises.

  6. LR110

    A comment from Load DynamiX

    Thanks Chris and GrumpyOF for pointing out some mistakes. Mea culpa. The fact that the customer eliminated two vendors before the final testing phase was not clear. The chart was not properly labelled and there was a data entry error in one of the cells - Vendor D should have been priced at ~$1.3M. Also, that one column should have been labelled “relative” IOPS per $ or per million dollars, as you correctly point out. Data is truly in the eye of the beholder.

    As full disclosure, we have purposely modified the numbers from the actual customer analysis to protect the confidentiality of the data. We at Load DynamiX take vendor independence very seriously, which is why we don’t name vendors or provide the exact numbers direct from a customer. This brings me to the value of this type of storage performance testing…

    This is just one example of dozens of exercises which we have performed in the last 12 months helping customers to reveal significant differences between their shortlist of vendors using their workloads to drive the testing process. Each time we do it, we see products respond quite differently depending on the workload characteristics as you’d expect.

    Load DynamiX worked with this customer to accurately simulate their production Oracle environment so that they could truly understand how potential AFA products would perform for their unique application workloads. Even we were surprised by some of the Oracle workload performance results, which only serves to highlight the importance of modelling your workloads and not just following vendor rule-of-thumb sizing guidelines. The results will vary, as performance is very much a function of the workload being run. In this case, as an example, they saw over a 5X difference in average response times. This is a huge variation and the customer used the data to make the trade-offs that affect millions of dollars. In this particular situation, and in many others, they did not choose the cheapest solution. The analysis in the chart shown in this case study presents the data in the way this particular customer wanted to make their decision: the optimal price/performance trade-off.

  7. Boyan

    There are some things very wrong with this paper:

    1) For $5.9M you get an AFA that only does 144K IOPS yet a massive 16.12 GB/s of throughput. This is a very strange config, and the throughput is totally out of balance with the IOPS. Nothing is mentioned about the network needed to push this huge traffic. What is it, is it included in the price, is it not, etc.;

    2) The price is a total rip-off. Even the Kick-Start package from our company (disclaimer, I work there, obviously) makes north of 200K IOPS and starts at about $20k:

    https://storpool.com/get-started-build-high-performance-cloud

    3) The paper doesn't say anything about the capacity of the systems. And storage systems are usually priced per GB/TB, so we're comparing apples to...nothing;

    4) The calculation of the column "K IOPS / Dollars" is not correct. The correct one is: 144K IOPS / $5,900,000 = 0.000024. Even if what they really meant was "IOPS / K Dollars" then the number is 24.4, but it's insanely high. Again, for the Kick-Start package above, the "IOPS / K Dollars" is 10,000 (!).
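    The unit confusion in point 4 is easy to demonstrate. A quick sketch, using the figures quoted in the comments (including the commenter's own, unverified, StorPool claim):

    ```python
    # The same 144K-IOPS-for-$5.9M system expressed in each unit
    # mentioned in the comment.
    iops, dollars = 144_000, 5_900_000

    k_iops_per_dollar = (iops / 1000) / dollars  # "K IOPS / Dollars"
    iops_per_k_dollar = iops / (dollars / 1000)  # "IOPS / K Dollars"

    print(round(k_iops_per_dollar, 6))  # ~0.000024
    print(round(iops_per_k_dollar, 1))  # 24.4

    # The commenter's own comparison point: ~200K IOPS starting at ~$20k
    print(200_000 / (20_000 / 1000))    # 10000.0 IOPS per K dollars
    ```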

    Cheers,

    B
