HDS blogger names HDS flash array as latency winner

Flash array response times climb as load increases, with good ones able to delay the response time uptick for longer. HDS blogger Hu Yoshida showed a chart produced from SPC-1 benchmark data showing how different flash arrays behave in response-time terms as the SPC-1 percentage benchmark task load increases from 10 per cent …

  1. Anonymous Coward
    Anonymous Coward

    The hockey sticks on many of those systems are because the results are taken from systems tested with all spinning media. At that time the goal wasn't the lowest latency but the highest IOps on those systems. Also, with SPC-1 the vendor is in charge of the 100% load point, so they can artificially limit the test to stop before they reach the hockey stick moment, which people didn't do at the time.

    But well done HDS.

    1. Nate Amsden

      They aren't completely in charge of it. They could pick an artificially lower level if they wanted, but there is some upper limit (I forget exactly what; it's been a couple of years since I looked at it) such that if ANY of the response times are above something like 30ms, the results at that level are not accepted.

      1. DLow

        Hi! Daniel from NetApp here.

        Actually the (big/sharp) hockey stick does say a few things, mainly that the system will not gracefully come “out of performance”; it will dump it on you. But that discussion is for another day.

        IOPS is cool, but latency is what’s important. The goal has always been low/lowest latency, or as much IOPS as possible at X amount of latency, if you like. It was never a max IOPS race anyhow.

        HDS look very good here and they should get some cred. I won’t dig into it; maybe someone else will do the detailed work.

        In regards to controlling the 100% load results: you can. Sort of. But it’s a bit more complicated than that. The easiest way of saying it is that you, as the testing vendor, set a limit. A latency limit.

        For HDD that limit is usually 20ms (30ms is the SPC limit, as noted), as that is what most transaction applications consider the max latency before getting angry, hence you want to have some headroom.

        For SSD that limit should be 1ms, as SSD and Flash were introduced to drop latency far, far below what HDD can deliver, and 1ms is also what all vendors of SSD/Flash systems have as a starting point.

        So, setting 1ms as the latency ceiling, you get X number of IOPS out of the system being tested. The system might be able to deliver 10x the IOPS, but not at 1ms or less.

        It’s not avoiding or cheating the hockey stick IMHO, it’s showing what a system can do up to a certain latency point. Simple as that.
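        To illustrate the point, here is a minimal sketch of picking a reported load point under a latency ceiling. The (IOPS, latency) points are made-up illustration data with a typical hockey-stick shape, not real SPC-1 results:

        ```python
        def max_iops_under_ceiling(curve, ceiling_ms):
            """Return the highest IOPS whose measured latency stays at or
            below the ceiling, or None if every point exceeds it."""
            ok = [iops for iops, lat in curve if lat <= ceiling_ms]
            return max(ok) if ok else None

        # Hypothetical measured points: latency creeps up, then spikes (the "knee").
        curve = [
            (100_000, 0.4),
            (200_000, 0.6),
            (300_000, 0.9),
            (400_000, 2.5),   # past the knee
            (500_000, 12.0),
        ]

        print(max_iops_under_ceiling(curve, 1.0))   # SSD-style 1ms ceiling -> 300000
        print(max_iops_under_ceiling(curve, 20.0))  # HDD-style 20ms ceiling -> 500000
        ```

        Same system, same data: the 1ms ceiling reports far fewer IOPS than the 20ms one, which is exactly the "X IOPS at Y latency" trade-off being described.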

        I know this is a simplified view and explanation, and this could turn into a long discussion which I sadly don’t have time for (I have flash to sell ;) ), so I’ll end it here.

        As for other types of tests, the only other one I know of that has a real-life connection is the “VMware mixed workload test” that ESG did. I have not seen them publish that for a while, though.

        I thought that was a pretty good test: a VMware platform simulating Oracle, Exchange, webserver and backup/table scans/indexing etc. at the same time, to show how a system would cope with multiple workloads while staying (well) below 20ms for all apps.

        Cheers,

        Daniel

  2. Anonymous Coward
    Anonymous Coward

    It would also be extremely interesting to complement the SPC benchmark with the age of the arrays. The G1000 is brand new, and I suspect that a lot of the systems that perform worse are also a lot older.

  3. CheesyTheClown

    How does it compare to...

    Windows Scale Out File Server in the same price category with Storage QoS enabled?

    How does it hold up against an OpenStack Swift solution?

    I guess if you're stuck using VMware, it's probably a good solution... maybe.

    These guys are so fixed on comparing the slow to the slow.

    1. Anonymous Coward
      Anonymous Coward

      Re: How does it compare to...

      Who knows? Why don't you show us some SPC-1 results for those, and then we can all judge :-)

      Thought not!

  4. This post has been deleted by its author

  5. Anonymous Coward
    Anonymous Coward

    I'm not sure you got the point I was making. What you're thinking of is the maximum IOps at a given latency cut-off, which is capped.

    However, the tester can choose to limit the workload and so sacrifice their maximum IOps number in favor of showing a lower latency number.

    This isn't as important for all-SSD systems, but it has been used in the past by NetApp, among others, with disk and hybrid systems to avoid the hockey stick in favor of a low latency number.

    If you can't beat them on IOps you can always brag about lower latency :-)

  6. FJ-DX

    The HDS blogger doesn’t include .......

    .... Fujitsu's latest ETERNUS DX200 S3 and DX600 S3 benchmark results on the chart.

    Regarding response times:

    best SPC-1 result to date:

    ETERNUS DX600 S3 : all ASU, 100 % load - response time 0.61 milliseconds

    runner up:

    ETERNUS DX200 S3 : all ASU, 100 % load - response time 0.63 milliseconds

    So - who's the latency winner?

    1. Anonymous Coward
      Anonymous Coward

      Re: The HDS blogger doesn’t include .......

      Re Fujitsu...

      Granted, but you're playing the latency game described above.

      At the 100% load point the Fujitsu systems were more than 1,600,000 IOps lower than HDS's result at the same load point, and if you look at their latency at the same IOps (200K-320K IOps) the HDS has very similar latency. However, the HDS goes on to deliver many thousands more IOps than Fujitsu :-)

      1. FJ-DX

        Re: The HDS blogger doesn’t include .......

        Well, I didn't start playing the "latency game" - mind, the G1000 is an enterprise/high-end system. The DX200 is an entry-level system, the DX600 is a midrange system, and of course they don't play in the same league.

        So if you compare apples to apples, you'll rarely find any entry/midrange hybrid system that tops our SPC-1 performance numbers - both IOPS and latency.

        Hermann

        1. Archaon
          WTF?

          Re: The HDS blogger doesn’t include .......

          Well, I didn't start playing the "latency game" - mind, the G1000 is an enterprise/high-end system. The DX200 is an entry-level system, the DX600 is a midrange system, and of course they don't play in the same league.

          On paper the HDS result is impressive because it manages low latencies at such high IOPS. Naturally it will eventually crap out like any array but HDS are showing off how far ahead of everyone else that point is.

          I don't dispute the performance of the Fujitsu arrays in their class, but I'm not sure why you brought them up. The DX200 and DX600 are no more relevant in comparison to a G1000 than latency figures from an I/O accelerator card would be.

  7. Anonymous Coward
    Anonymous Coward

    FlashSystem

    IBM FlashSystem leads here with around 100µs latency - no wonder they don't list it.

    1. Anonymous Coward
      Anonymous Coward

      Re: FlashSystem

      The IBM result you mention is an SPC-1/E benchmark, not a true SPC-1 result, so it wouldn't be on the same chart anyway. It's got great latency, but it's not even in the same class in terms of IOps, and if you want features then you need to bolt SVC onto the front end and incur additional latency.
