IBM FlashSystem chief architect Andy Walls is a two-pools kind of guy

The latest FlashSystem from IBM uses InfiniBand to hook up the 900 flash box to the SA9000 and A9000R servers running the SVC software. Where does that leave NVMe over Fabrics? FlashSystem chief architect, IBM Fellow and CTO Andy Walls talked flash technology with The Register this week and said NVMe over Fabrics was …

  1. chrismevans

    The Fibre Channel argument is a false one. Implementing NVMeoF is only possible on the latest releases of Fibre Channel (16/32 from memory), so customers will have to replace technology somewhere. I doubt end users are fully 16 or 32Gb/s today. So if they have to rip and replace, it may be the perfect time to look at harmonising networks and using Ethernet.

    What's more likely is FC gets retained because the storage teams don't want the network team messing about with network topologies they don't understand. Plus, isolation is sometimes no bad thing, if you have the scale to justify it.

    1. returnofthemus

      The Fibre Channel argument is a false one.


      "We should stress this is not a commitment by IBM to produce an NVMe over Fabrics-accessed FlashSystem."

      Let's not get ahead of ourselves, haven't we already witnessed DSSD crash and burn? ;-)

      PS Somewhat reminiscent of the time when the two Chrises got together with a Storagebod on a podcast predicting the 'end was nigh' for IBM storage, LOL!

    2. Anonymous Coward
      Anonymous Coward

      Gen 5 (16 Gbps) FC isn't new - it's actually quite *mature/common/standard (*delete as appropriate)

      My recollection is that Brocade announced their Gen 5 (16 Gbps) FC switches in May 2011 - so whilst technically it is one of their "latest" releases, the launch was 6 1/2 years ago. I'd strongly suggest it's NOT bleeding edge, nor beyond most customers.

      Absolutely I'd agree that not all Gen 4 kit has been retired yet, but based on what I see in the field, Gen 5 is by far the most common switch type (with a very small smattering of Gen 6). On that basis, most people I see could go NVMeoF and not change their switches.

      Small side note though - being ready to go NVMeoF is one thing - when will it actually be ready e2e? It isn't yet, so going FC now, knowing you get NVMeoF later, seems like a smart (and safe) bet.

      On your isolation/not wanting the network boys screwing stuff up, I don't disagree with your points at all!

      1. returnofthemus

        Re: Gen 5 (16 Gbps) FC isn't new - it's actually quite *mature

        Don't discount the other big player in this space, who has also had 16G FC support for a while, though at one time they did appear to be pushing an all-Ethernet agenda.

        However, it looks like market resistance brought them back into the fold, to the point where they now have a 48-port x 32Gb FC line card for their MDS chassis.

        Much like the death of the Mainframe, the death of FC has been greatly exaggerated.

    3. Anonymous Coward
      Anonymous Coward

      There is a huge difference in the scope of replacement. For FC to go NVMe you only need to upgrade the host adapters where needed. For NVMeoF over Ethernet you need to replace FC switches with DCB-capable Ethernet switches, as well as compatible host adapters.

  2. Anonymous Coward
    Anonymous Coward

    I'd add that such a scenario would cause latency issues due to traffic confliction. I've certainly seen that all too many times where someone is trying to cheap out and dragging performance down, sometimes to the point of unusability. I don't think that will be a huge issue here but, speaking as a washed-up engineer, don't fuck this up. Not at that price point.

    1. returnofthemus

      latency issues due to traffic confliction?

      Certainly an industry term I'm not familiar with, is that why Ethernet carries all the traffic???

  3. Anonymous Coward
    Anonymous Coward

    IBM's suggested A9000 solution sounds complicated

    I think IBM is right about FC being the most acceptable network for NVMe over Fabrics.

    I expect NVMe SSDs will become the standard for All-Flash Arrays over the next few years, NVMe over FC will become just another protocol option on All-Flash Arrays, and NVMe will be an option on the server FC HBA driver. At that point, FC-NVMe will become as much of a no-brainer as an FC speed transition.

    I could see a point where an "Enable NVMe" check box on a server HBA utility is available, the All-Flash Array GUI "Provision Volume" wizard has a "Traditional Volume/NVMe Volume" pull-down, and that is all that is required. A few years after that, FC-NVMe will be the norm, and traditional FC will be relegated to a legacy compatibility mode.

    The idea of separate cables looping around front-end A9000 controllers to back-end arrays, and having to provision in two different places on the storage system seems complicated. I think IBM will end up needing a more streamlined solution.

    1. Anonymous Coward
      Anonymous Coward

      Re: IBM's suggested A9000 solution sounds complicated

      I read this as two systems strapped together for vanity's sake, both with differing access semantics. Good luck with auto-tiering in that situation. But I do agree FC is the path of least resistance (at least for now), and 16Gb is becoming pretty ubiquitous, especially where all-flash is being deployed.

  4. russtystorage

    FC-NVMe is one of several viable choices

    On the topic of access to next generation storage, I completely agree with Andy. FC is well established, works very well and can support NVMe access. This isn't to say that NVMe-oF using Ethernet access won't also have a place, but rather that customers will have a choice. If they are invested in Ethernet, they can use that infrastructure for their storage access; if invested in IB (there are a few), that will be a choice; and certainly FC-NVMe will have a place. I completely agree that for many large enterprises that have invested in FC access to storage, there is no reason to rip and replace; FC will support NVMe as well as other choices, thank you very much.

  5. Anonymous Coward
    Anonymous Coward

    XIV code, not SVC Code

    FYI, A9000 has its lineage in XIV code, not SVC code.

    The back-end of A9000 being InfiniBand really does not affect whether the front end is SCSI over FC or NVMe over FC, just as a more monolithic array might use PCI vs PCIe vs a PCIe fabric extension. The A9000 host interfaces are on the Grid Controllers, which talk iSCSI over 10 Gig Ethernet and SCSI over 16 Gig FC.

    The SVC lineage product is V9000.

    1. returnofthemus

      Re: XIV code, not SVC Code

      Which begs the question, what the hell is the SA9000???

  6. returnofthemus

    On a Final Note....

    A great summation of why IBM's FlashSystems have no need to incorporate NVMe at present.
