HGST says its NVMe flash card will manage 750,000 IOPS

How does damn near three quarters of a million random read IOPS grab you? HGST is at last shipping its NVMe Ultrastar SN100 PCIe flash card, first announced seven months ago. The speeds and feeds look impressive, with HGST saying it’s the “industry’s highest-performing NVMe compliant SSD series.” The specs are: 1.6TB and 3. …

  1. Lusty

    Very cool

    Trouble is, you'd need something that produces that much random IO either within the server or across four 10GbE links to take advantage of this properly (rough sums below), so it seems a bit niche. No more niche than a lot of current high-end hardware, mind you; servers seem to be big enough these days for most normal purposes. I guess animation and CGI maybe would use this? Very busy non-distributed databases? I'd love to hear what everyone else can think of that needs 750k IOPS in a single system.

    The evil that is VDI will of course use every IO it can get its grubby mitts on, but I shan't count that because those IOs should be in the ruddy end point ;)
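
    A quick sanity check on the link math (a sketch in Python; the 4KiB block size is an assumption, real workloads vary):

        # Network needed to feed ~750k random read IOPS from outside the box.
        iops = 750_000
        block = 4096                        # assumed bytes per read
        gbits = iops * block * 8 / 1e9      # ~24.6 Gbit/s
        print(f"{gbits:.1f} Gbit/s = {gbits / 10:.1f}x 10GbE at line rate")
        # ~2.5 links at line rate, so three or four 10GbE ports once
        # protocol overhead and headroom are accounted for.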

    1. Bronek Kozicki

      Re: Very cool

      SLOG device and L2ARC cache on a busy ZFS server ...

      1. -v(o.o)v-

        Re: Very cool

        Not really. SLOG is very small in size and is better served by RAM-based products, not NAND flash. L2ARC requires some system RAM for its structures, and effectively using an L2ARC this large would need tons of it (rough numbers below). Not a cost-effective product for ZFS caching IMO.
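
        To put rough numbers on the L2ARC overhead (a sketch; the per-record header size is an assumption and varies by ZFS version):

            # RAM consumed by in-core headers for records cached on L2ARC.
            l2arc_bytes = 1.6e12                  # hypothetical: the whole 1.6TB card as cache
            header_bytes = 70                     # assumed RAM per cached record
            for recsize in (8 * 1024, 128 * 1024):
                gb = (l2arc_bytes / recsize) * header_bytes / 1e9
                print(f"recordsize {recsize // 1024}K: ~{gb:.1f} GB RAM just for headers")
            # Small records make the header overhead balloon into double-digit GB.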

    2. Anonymous Coward
      Anonymous Coward

      Re: Very cool

      Modeling, simulation, analysis of reasonably sized data-sets, might even be entertaining to watch in "Big Data" if all the crunchers have them, then your limitation would be on ingesting the partitioned data.

    3. An0n C0w4rd

      Re: Very cool

      Unless my calculations are out:

      743,000 x 4K read ops/sec = 2,972,000 KB/sec = a shave under 3 GBytes/sec

      160,000 x 4K write ops/sec = 640,000 KB/sec = 625 MBytes/sec write

      Without pondering PCIe bus saturation (it's only using four lanes of PCIe, so there should still be capacity, in theory), I've definitely seen applications that could chew through those throughputs, or at least make a pretty sizeable dent in them. Netflix Open Connect comes to mind as one of the more obvious examples.
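
      The same sums in Python, with the PCIe headroom made explicit (the PCIe 3.0 x4 figure is an assumption based on the four lanes above):

          # The calculations above, plus a rough PCIe 3.0 x4 ceiling.
          read_gib = 743_000 * 4 / 1024 / 1024     # ~2.83 GiB/s random read
          write_mib = 160_000 * 4 / 1024           # 625 MiB/s random write
          pcie3_x4 = 4 * 8 * (128 / 130) / 8       # ~3.94 GB/s per direction after encoding
          print(f"read ~{read_gib:.2f} GiB/s, write {write_mib:.0f} MiB/s, bus ~{pcie3_x4:.2f} GB/s")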

      Plus, it's not just the IOPS you need to consider; it's the latency. Even if you can't hit the IOPS, reducing your application's latency by 5x or more could justify the cost in situations where the read or write of a piece of data is a blocking action for something else, e.g. a database. If you have to hit the DB 20x to do one action, you've just sped that action up tremendously.
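
      To make the blocking point concrete (the latencies here are made up, just to show the shape of it):

          # An action that issues 20 dependent reads, one after another.
          hits = 20                       # DB hits per action, from the example above
          before_us = 500                 # assumed per-read latency on the old device
          after_us = before_us / 5        # the 5x reduction mentioned above
          print(f"{hits * before_us / 1000:.0f} ms -> {hits * after_us / 1000:.0f} ms per action")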

      1. gs4avs

        Re: Very cool

        Regarding latency -- the total latency seen by the app must be the sum of the latency through the stack and the flash device latency. My understanding is that NVMe helps the former and not the latter, right? If I'm not mistaken, random read latency at the flash device level is on the order of 100-150 us. What is the stack/driver latency added on top of that, and how much of it does NVMe remove?

        Also, this HGST drive claims write latencies of 20 us. Is this based on completing a write into a holding buffer in DRAM or something similar, or does it represent a write into the flash itself?
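
        One way to frame it, with assumed figures (the stack overheads below are guesses for illustration, not measurements):

            # Total latency seen by the app = stack/driver overhead + device latency.
            flash_read_us = 120             # assumed NAND random read, per the 100-150us above
            legacy_stack_us = 25            # assumed SCSI/AHCI software overhead
            nvme_stack_us = 5               # assumed NVMe software overhead
            print(f"read: {flash_read_us + legacy_stack_us} us -> {flash_read_us + nvme_stack_us} us")
            # Against a 20us write spec the stack share dominates, which fits the
            # theory that writes complete into a DRAM buffer rather than the flash.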

    4. Ian Michael Gumby
      Thumb Up

      @Lusty, ...Re: Very cool

      There are use cases that can take advantage even if you don't have the network bandwidth.

      Look at Spark, or Solr/Lucene, where you need a fast local disk for spill (a sketch below).

      If you virtualize your server, even more ops get eaten up.

      So it's a good thing.
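
      For the Spark spill case, a hypothetical snippet (the app name and mount point are made up, and cluster managers can override spark.local.dir):

          # Point Spark's shuffle/spill scratch space at an NVMe mount.
          from pyspark import SparkConf, SparkContext

          conf = (SparkConf()
                  .setAppName("spill-on-nvme")                     # hypothetical app name
                  .set("spark.local.dir", "/mnt/nvme/spark-tmp"))  # assumed NVMe mount
          sc = SparkContext(conf=conf)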
