DataCore scores fastest ever SPC-1 response times. Yep, a benchmark

DataCore has recorded the fastest ever SPC-1 microsecond response times, three times faster than all-flash EMC VNX. Together with DDN, it demonstrates that parallel IO processing using multiple cores is set to become the new norm. El Reg was contacted by an industry source to provide contextual background to the …

  1. Lusty

    Is there a link to the SPC-1 result, including the spec? You also didn't mention that in SDS, IOPS are useless until both copies of the data are written (to a minimum of 2 nodes) to make the data as safe (consistent) as in a multi-controller SAN.

    1. Ian Michael Gumby
      Boffin

      @Lusty

      Actually, it's 3 copies.

      If the values don't match (are inconsistent), which one do you trust?

      If you write at least 3 copies, the odds are greater that 2 of the 3 will match.
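      (A minimal sketch of the arbitration logic being argued over here and in the replies below -- the function names and the choice of SHA-256 are my own illustration, not any vendor's code: first discard replicas whose checksum fails, then take a majority vote among the intact survivors, which is where a third copy earns its keep.)

```python
import hashlib
from collections import Counter

def replica(data: bytes):
    """Package a payload with its checksum, as a stored copy would be."""
    return (data, hashlib.sha256(data).hexdigest())

def pick_replica(replicas):
    """Choose a trustworthy payload from (payload, stored_checksum) pairs."""
    # 1. Drop any copy whose stored checksum no longer matches its payload.
    valid = [data for data, chk in replicas
             if hashlib.sha256(data).hexdigest() == chk]
    if not valid:
        return None  # every copy is corrupt: unrecoverable
    # 2. Among intact copies, trust the majority value. Two intact but
    #    divergent copies would tie; a third copy breaks the tie.
    value, _count = Counter(valid).most_common(1)[0]
    return value

good = replica(b"block-A")
torn = (b"block-B", hashlib.sha256(b"block-A").hexdigest())  # checksum mismatch
print(pick_replica([good, good, torn]))  # the corrupt copy is rejected
```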

      1. TheVogon

        Re: @Lusty

        "If the values don't match (consistent) which one do you trust?"

        The one with the valid checksum.

      2. Lusty

        Re: @Lusty

        No, you're confusing a copy in a write-consistent state with RAID. If you have one controller and it fails, you've lost the data your app thought was written. With two controllers you don't acknowledge the write until both have written to non-volatile storage, so you're safe. Both controllers use RAID to keep their data usable, which is different from consistent.

        If you have three nodes you still only "need" two copies to cover controller failure, but you must never confirm a write unless you can guarantee one controller can read it back. In a one-controller system such as this, a controller failure means game over, and that means not enterprise-ready for anything important. It's great for VDI and the like, because we don't give a hoot if a VDI instance disappears, since they are stateless when done well.
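        (A sketch of that acknowledgement rule -- the class and method names here are hypothetical, not DataCore's or any real array's API: the host sees success only after every controller reports the write has reached non-volatile storage, and a dead controller fails the write rather than letting it be falsely acked.)

```python
from concurrent.futures import ThreadPoolExecutor

class Controller:
    def __init__(self, name):
        self.name = name
        self.nvram = {}   # stand-in for a battery/flash-backed write cache
        self.alive = True

    def persist(self, lba, data):
        if not self.alive:
            raise IOError(f"{self.name} is down")
        self.nvram[lba] = data
        return True

def write(controllers, lba, data):
    """Acknowledge only once EVERY controller holds the write in NV storage."""
    with ThreadPoolExecutor() as pool:
        # A failed persist raises here instead of producing a false ack.
        results = list(pool.map(lambda c: c.persist(lba, data), controllers))
    return all(results)  # only now is it safe to ack the host

a, b = Controller("ctrl-a"), Controller("ctrl-b")
print(write([a, b], 42, b"payload"))  # True: both copies are safe
```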

    2. Anonymous Coward
      Anonymous Coward

      Lusty -- you missed this...

      Guessing you haven't seen this.

      http://www.storageperformance.org/benchmark_results_files/SPC-1/DataCore/A00178_DataCore_SANsymphony_10.0_DN-HA-HC/a00178_DataCore_SANsymphony-10.0_DN-HA-HC_SPC-1_full-disclosure-report.pdf

      While you're getting caught up -- you might notice that in the 2-node HA configuration, Lenovo and DataCore seem to have demonstrated super-linear scaling at even lower response times than the single node result.

      Hardware-defined Storage? That's so 1990's...

  2. I Am Spartacus
    FAIL

    Nice of them ....

    .... to publish the wifi name and password.

  3. dikrek
    Boffin

    Second article about this so soon?

    Hi all, Dimitris from NetApp here.

    Interesting that there's another article about this.

    Plenty of comments in the other reg article:

    http://forums.theregister.co.uk/forum/1/2016/01/07/datacores_benchmark_price_performance/

    Indeed, latency is crucial, but Datacore's benchmark isn't nicely comparable with the rest of the results for 2 big reasons:

    1. There's no controller HA, just drive mirroring. Controller HA is where a LOT of the performance is lost in normal arrays.

    2. The amount of RAM is huge vs the "hot" data in the benchmark. SPC-1 has about 7% "hot" data. If the RAM is large enough to comfortably encompass a lot of the benchmark hot data, then latencies can indeed look stellar, but hitting the actual media more can be more realistic.
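    (Back-of-envelope illustration of the RAM point -- the latency figures below are invented for the example, not measured from any array: as the hit rate on a large cache approaches 100%, average latency collapses toward RAM speed regardless of what the backing media can do.)

```python
# Hypothetical service times: RAM cache hit vs. flash media access.
ram_latency_us = 5
flash_latency_us = 200

def avg_latency(hit_rate):
    """Weighted average latency for a given cache hit rate (0.0 to 1.0)."""
    return hit_rate * ram_latency_us + (1 - hit_rate) * flash_latency_us

for hr in (0.50, 0.90, 0.99):
    print(f"hit rate {hr:.0%}: {avg_latency(hr):.1f} us average")
```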

    Thx

    D

    1. Lusty

      Re: Second article about this so soon?

      Point 1 was what I was referring to above. The latency can be zero for all I care; if a controller fails and causes data corruption as a result, the storage is worthless to almost everyone. The round-trip time for their disk mirroring to another box will change results drastically if it wasn't used here. No idea whether SPC-1 requires the data to be safe, but it definitely should.

      I don't work for a vendor; I work for a VAR, and we sell both types of solution. That said, we rarely sell SDS/converged in proper data centre solutions outside of VDI.

      1. Anonymous Coward
        Anonymous Coward

        Re: Second article about this so soon?

        Agreed. Add a second node for HA and you've at the very least doubled both latency and price for the same IOPS figure. Then add in the mirroring for OS drives, spares and infrastructure to support those nodes, and it's starting to look a bit less of a science project. Now why didn't they just do that in the first place? As for the expert analysis, it's sadly lacking here.

  4. John Smith 19 Gold badge
    Unhappy

    "multiple cores in a multi-core processor to handle IOs "

    Sorry but it's taken how long to get this technology working?

  5. Androgynous Cow Herd

    Without Benchmarking

    There can be no benchmarketing

  6. Anonymous Coward
    Anonymous Coward

    NetApp comments -- nothin but FUD, dispelled here...

    Dimitris from NetApp opines, thusly:

    "Indeed, latency is crucial, but Datacore's benchmark isn't nicely comparable with the rest of the results for 2 big reasons:"

    "1. There's no controller HA, just drive mirroring. Controller HA is where a LOT of the performance is lost in normal arrays."

    Comments: very many SPC-1 results are published without HA. In the SPC-1 lexicon non-HA is known as "Protected-1" and HA is called "Protected-2". You can see dozens of these non-HA results by simply clicking here...

    https://www.google.com/?gws_rd=ssl#q=%22was+protected+1%22+site:storageperformance.org&filter=0

    ...and now you know!

    Among the dozens of disclosures you've now found for non-HA results is the current #1 -- Huawei. Did you just fail to notice that? Hard to believe...

    "2. The amount of RAM is huge vs the "hot" data in the benchmark. SPC-1 has about 7% "hot" data. If the RAM is large enough to comfortably encompass a lot of the benchmark hot data, then latencies can indeed look stellar, but hitting the actual media more can be more realistic."

    Clearly Dimitri, you know very little about SPC-1. Where on earth did you get your 7% number? SPC-1 is 100% hot data -- it's derived from TPC-C, remember? Moreover, all writes in SPC-1 must be destaged to physical media in real time; the SPC-1 benchmark code and any SPC-1 audit confirm this to be the case.

    Perhaps, Dimitri, you should begin by reviewing NetApp's own SPC-1 benchmarks to understand these things before you say anything else that makes NetApp look dumb.

    Finally -- to completely dispel 100% of your comments and the false premises you use to spread your FUD, all one has to do is look here at DataCore's HA result:

    http://www.storageperformance.org/benchmark_results_files/SPC-1/DataCore/A00178_DataCore_SANsymphony_10.0_DN-HA-HC/a00178_DataCore_SANsymphony-10.0_DN-HA-HC_SPC-1_full-disclosure-report.pdf

    Note superlinear scaling (1.2 million IOPS on 2-nodes in HA) and response times even lower than the earlier non-HA result.

    I could see your comments as merely uninformed, except that NetApp is one of the original SPC members and as such you should know this stuff. Why don't you?

    1. dikrek
      Boffin

      Re: NetApp comments -- nothin but FUD, dispelled here...

      <sigh>

      Go to the spec here: http://www.storageperformance.org/specs/SPC-1_SPC-1E_v1.14.pdf

      Page 32.

      Do a bit of math regarding relative capacities vs the intensity multiplier.

      See, it's neither uniform nor 100% hot. Different parts of the dataset are accessed in different ways, and certain parts will get more I/O than others.
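      (To make the skew concrete -- the multipliers below are invented for illustration, not the spec's actual page-32 table: divide each ASU's share of the I/O by its share of the capacity, and anything other than 1.0 means the access pattern is non-uniform.)

```python
# Illustrative only: made-up per-ASU figures in the spirit of the SPC-1
# spec's intensity-multiplier table (see page 32 of the spec for the
# real per-stream values).
asus = {
    "ASU-1 (data store)": {"capacity_share": 0.45, "intensity": 0.60},
    "ASU-2 (user store)": {"capacity_share": 0.45, "intensity": 0.12},
    "ASU-3 (log)":        {"capacity_share": 0.10, "intensity": 0.28},
}

for name, a in asus.items():
    # I/O density relative to a perfectly uniform workload (1.0 = uniform)
    density = a["intensity"] / a["capacity_share"]
    print(f"{name}: {density:.2f}x uniform access density")
```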

      I've been dealing with SPC-1 for years.

      Not with NetApp any more but facts are facts.

      Thx

      D

      1. Clockspeedy

        When you're hot, you're hot.... (was Re: NetApp comments -- nothin but FUD, dispelled here...)

        ...and when you're not, you're not.

        Dimitri, if adding 2+2 and coming up with 3.14159269...was all there was to math, you'd be a genius.

        Every real-world application has relative IO intensities associated with the application's IO stream requirements. SPC-1 models this with a pretty fair degree of accuracy. Your thinking resembles the ancient logic of IOmeter -- which we all know is pretty much useless.

        SPC-1 models these relative IO stream intensities accurately, more than any other available benchmark. If you can suggest a better benchmark for customers to use, please do.

        Your employer (on the other hand) seems to think SPC-1 is pretty good. I understand Nimble's Mr. Daniel is a big fan of SPC-1...have you talked to him?

        "Naturally, I’d love to see IT architects simplify their purchasing problems by requiring “SPC-IOPS”, not merely “IOPS” in their requests for proposals (RFPs)."

        https://www.nimblestorage.com/blog/making-sense-of-storage-performance-benchmarks/
