Memory and storage boundary changes

Latency is always the storage access bête noire. No one likes to wait, least of all VMs hungry for data access in multi-threaded, multi-core, multi-socket, virtualized servers. Processors aren't getting that much faster as Moore's Law runs out of steam, so attention is turning to fixing IO delays as a way of getting our …

  1. James Wheeler
    Thumb Up

    It's all about I/O

    Nice analysis of a subject that gets too little attention. Whether the new technologies live up to the promise or not, everyone who cares about performance should study the table of relative speeds that scales an L1 cache access to one second.
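    The one-second scaling that table uses is easy to sketch. The nanosecond figures below are rough, order-of-magnitude assumptions for illustration, not the article's exact numbers:

```python
# Illustrative sketch: scale common access latencies so that an
# L1 cache hit (~1 ns) takes one second on a human timescale.
# All figures are assumed, round-number values.
L1_NS = 1.0  # baseline: L1 cache hit, ~1 nanosecond

latencies_ns = {
    "L1 cache": 1.0,
    "DRAM": 100.0,
    "NVMe SSD read": 25_000.0,    # ~25 us
    "SATA SSD read": 100_000.0,   # ~100 us
    "HDD seek": 10_000_000.0,     # ~10 ms
}

def human(seconds):
    """Render a duration in the largest sensible unit."""
    for unit, size in (("days", 86400), ("hours", 3600),
                       ("minutes", 60), ("seconds", 1)):
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} seconds"

for name, ns in latencies_ns.items():
    scaled = ns / L1_NS  # nanoseconds become seconds at this scale
    print(f"{name:14s} -> {human(scaled)}")
```

    On that scale a DRAM access takes well over a minute and a disk seek stretches into months, which is the whole point of the comparison.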

  2. JeffyPoooh Silver badge
    Pint

    Some missing data points

    Mercury delay lines.

    Punched cards.

    Punched cards (spilled on floor).

    Line Printer output being typed-in again.

    1. Lusty

      Re: Some missing data points

      Technically, mercury delay lines had zero latency because at the time programmers optimised their code to execute as the data arrived at the end, and optimised the tube to be the required length. The term "optimised code" has been abused ever since...

  3. gnufrontier

    HPT

    We do need to shave off as many milliseconds as possible so we can get those arbitrage opportunities down to the 1/1000 of a cent level.

    There is always overhead. Flying time may be 45 minutes but the overhead is 2 hours minimum even if one lives next door to the airport.

  4. Lusty

    300ms?

    If your SAN has 300ms latency something is very very wrong. Are you suggesting this SAN is being accessed in London from New York over an Internet VPN or something?

    1. DougS Silver badge

      Re: 300ms?

      I thought the same thing. In my experience the delay of accessing over a SAN is pretty much invisible when accessing data on a hard drive (the 'delay' is actually negative for writes because of the cache).

      It only starts to matter with SSDs, but part of that is because no one had to care about latency to a SAN before because when you are waiting 20 ms on a hard drive who cares if the SAN switches are adding another millisecond on top of that?
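      That reasoning is just a ratio. A rough sketch, using an assumed ~1 ms of fabric overhead and illustrative device latencies (not measured figures):

```python
# Rough sketch of the point above: ~1 ms of SAN fabric overhead is
# noise next to a spinning disk but dominates an NVMe SSD.
# Device latencies are illustrative assumptions.
fabric_ms = 1.0  # assumed SAN switch/fabric overhead per access

for device, device_ms in [("HDD", 20.0), ("NVMe SSD", 0.1)]:
    total = device_ms + fabric_ms
    share = fabric_ms / total * 100  # fabric's share of total latency
    print(f"{device}: fabric is {share:.0f}% of {total:.1f} ms total")
```

      Against a 20 ms seek the fabric is about 5% of the total; against a 0.1 ms flash read it is over 90%, so the switches suddenly matter.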

  5. markom68

    I believe that some numbers are wrong...

    Sorry, but I believe that at least those numbers are wrong:

    1. (0.2 microsecs vs 30/100 microsecs for read/write access to NVMe SSD.) - 30/100 microsecs equals 0.3 microsecs, and that is pretty much the same as 0.2 microsecs...?

    2. 300 msec SAN array access? - if that is true, I do not know how we can have average latency on a hybrid SAN array lower than 2 msec (from the host perspective)?

    Regards,

    Marko

  6. Anonymous Coward
    Anonymous Coward

    Great chart

    Even though the comparison might not be 100% accurate, I love the units of measure. Even my mum could understand why I want my NVMe. (Tip o' the hat to Dire Straits.)

  7. cbyrne@flywheeldata.com

    I'm familiar with NVDIMM-F and NVDIMM-N, but what is NVDIMM-B?

    1. MityDK

      I'm pretty sure it's meant to say NVDIMM-N, but its latency is in the 10s of nanoseconds. There are some problems with the chart, but overall it's useful as a relative measure.

  8. chris 17 Bronze badge

    Will change the whole concept of RAM plus storage when storage is as fast as RAM. Need more RAM? Just buy a bigger disk.

    1. Storage_Person

      @chris17

      You're missing the important difference between memory and storage, regardless of the underlying technologies. Storage is persistent whereas memory is not, which makes them very different in terms of use cases. There is a lot of work going on at the moment around how to handle persistent memory, and it's important because of what happens when processes die or just don't clean up after themselves properly. Treating storage as RAM will result in you running out of storage rather quickly.

      (Memory continues to be significantly faster than storage for both read and write access, mind, so there will continue to be a reason to have both outside of that which I outline above).

  9. emv

    Some of the numbers are wrong, but I love the theme and charts. Great comparison of technologies. 3DXP is right in the NVDIMM space.
