Cisco goes 32 gigging with Fibre Channel and NVMe

Networking titan Cisco is adding both NVMe over Fibre Channel and 32Gbit/s Fibre Channel speed to its MDS Director and UCS C-Series server products. Fibre Channel speeds are doubling from 16Gbit/s to 32Gbit/s, what Brocade calls Gen 6 Fibre Channel. That company launched its 32Gbit/s Director product last July – a Director …

  1. CheesyTheClown

    Ugh!

    Let's all say this together

    Fibrechannel doesn't scale!

    MDS is an amazing product and I have used them many times in the past. But let's be honest, it doesn't scale. All-flash systems from NetApp, for example, have a maximum of 64 FC ports per HA pair (which is so antiquated it's not worth picking on here), and that puts the total bandwidth of the system at about 8Tb/sec. Of course, HA pairs mean you have to design for the total failure of a single controller, which cuts that in half. Then consider that half of that bandwidth is upstream and the other half is downstream: half is for connecting drives to the system, the other half is for delivering bandwidth to the servers. So we're down to 16 reliable links per cluster. There also has to be synchronization between the two controllers in an HA pair, so let's cut that in half again if we don't want contention related to array coherency.
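
    As a back-of-the-envelope sketch of that chain of halvings (a hypothetical calculation using the figures quoted above, not vendor numbers):

        # Rough sketch of the halving argument; all figures are the commenter's assumptions.
        ports_per_ha_pair = 64      # quoted FC port limit per all-flash HA pair
        port_speed_gbps = 32        # Gen 6 Fibre Channel

        surviving_ports = ports_per_ha_pair // 2    # design for one controller failing
        host_facing = surviving_ports // 2          # half toward drives, half toward servers
        print(host_facing, "reliable host-facing links,",
              host_facing * port_speed_gbps, "Gb/s before coherency overhead")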

    An NVMe drive consumes about 20Gb/sec of bandwidth. So, that's a maximum capacity of 25 online drives in the array. Of course there can be many more drives, but you will never reach the bandwidth of more than 25 drives. Using scale-out it is possible to scale wider, but FC doesn't do scale-out and MPIO will crash and burn if you try. iSCSI can, though.
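
    Continuing the same hypothetical arithmetic, the 25-drive figure falls out of dividing the remaining fabric bandwidth by the assumed per-drive throughput:

        # Still the commenter's assumed numbers, not measurements.
        host_facing_links = 16
        link_speed_gbps = 32
        nvme_drive_gbps = 20        # assumed bandwidth one NVMe drive can sustain

        fabric_gbps = host_facing_links * link_speed_gbps   # 512 Gb/s
        print("fabric saturates at about", fabric_gbps // nvme_drive_gbps, "busy drives")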

    Now consider general performance. FC controllers are REALLY expensive. Dual-ported SAS drives are ridiculously expensive. To scale out performance in a cluster of HA pairs would require millions in controllers and drives. And then, because of how limited you are on controllers (whether by cost or by hard limits), the processing required for SAN operations would be insane. See, the best controllers from the best companies are still limited by processing for operations like hashing, deduplication, compression, etc... Let's assume you're using a single state-of-the-art FPGA from Intel or Xilinx. The internal memory performance and/or crossbar performance will bottleneck the system further, and using multiple chips will actually slow it down, since it would consume all the SerDes controllers just for chip interconnect at 1/50th (or worse) of the speed of the internal macro ring-bus interconnects. If you do this in software instead, even the fastest CPUs couldn't hold a candle to the performance needed for processing a terabit of block data per second. Just the block lookup database alone would kill Intel's best modern CPUs.
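
    To put a hedged number on the software case: if a single core can hash somewhere in the region of 2-10 GB/s (an assumed range spanning fast non-cryptographic hashes down to hardware-accelerated SHA-256), a terabit per second of block data already needs dozens of cores before deduplication lookups or compression even start:

        # Back-of-the-envelope only; per-core hash rates below are assumptions, not benchmarks.
        target_gb_per_s = 1000 / 8                  # 1 Tb/s of block data = 125 GB/s

        assumed_per_core_gb_s = {
            "fast non-cryptographic hash": 10.0,    # assumed
            "accelerated SHA-256": 2.0,             # assumed
        }
        for name, rate in assumed_per_core_gb_s.items():
            print(f"{name}: ~{target_gb_per_s / rate:.0f} cores just for hashing")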

    FC is wonderful and it's easy. Using tools like the Cisco MDS even makes it a true pleasure to work with. But as soon as you need performance, FC is a dog with fleas.

    Does it really matter? Yes. When you can buy a 44-real-core, 88-vCPU blade with 1TB of RAM on weekly deals from server vendors, a rack with 16 blades will devastate any SAN and SAN fabric, making the blades completely wasted investments. Blades need local storage with 48-128 internal PCIe lanes dedicated to storage to be cost effective today. That means the average blade should have a minimum of 6x M.2 PCIe NVMe internally. (NVMe IS NOT A NETWORK!!!!!!) Then, for mass storage, additional SATA SSDs internally make sense. A blade should have AT LEAST 320Gb/sec of storage and RDMA bandwidth, and 960Gb/sec is more reasonable. As for mass storage, using an old crappy SAN is perfectly OK for cold storage.
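
    A sketch of the PCIe lane arithmetic behind those figures, assuming roughly 8Gb/s of usable bandwidth per PCIe 3.0 lane and x4 NVMe devices (both assumptions, and generation-dependent):

        # Hypothetical lane maths; ~8 Gb/s usable per PCIe 3.0 lane is an assumption.
        usable_gbps_per_lane = 8
        lanes_per_nvme = 4          # typical M.2 NVMe device

        for lanes in (48, 128):     # the range of dedicated storage lanes suggested above
            print(f"{lanes} lanes: ~{lanes * usable_gbps_per_lane} Gb/s, "
                  f"room for {lanes // lanes_per_nvme} x4 NVMe devices")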

    Almost all poor data center performance today is because of the SAN. 32Gb FC will drag these problems out for five more years. Even with vHBAs offloading VM storage, the computational cost of FC is absolutely stupid expensive.

    Let's add one final point which is that FC and SAN are the definition of stupid regarding container storage.

    FC had its day and I loved it. Hell I made a fortune off of it. I dumped it because it is just a really really bad idea in 2017.

    If you absolutely must have SAN consider using iSCSI instead. It is theoretically far more scalable than FC because iSCSI uses TCP with sequence counters instead of "reliable paths" to deliver packets. By doing iSCSI over Multicast (which works shockingly well) real scale out can be achieved. Add storage replication over RDMA and you'll really rock it!

    1. G Olson

      Re: Ugh!

      Not everyone needs massive scale with Faster Than Light storage. In small, focused compute environments, SAN and FC continue to provide value, and with NVMe they will continue to do so. iSCSI?! No thanks, I don't want to negotiate with network admins and InfoSecurity madness.

    2. irrision

      Re: Ugh!

      Did you really just complain about performance in the range of 8Tb/s per storage array being a bottleneck? Most shops aren't running HPC or other workloads that need single-array scalability at even a fraction of that number, and those that are aren't running a single array, or for that matter are running something more custom rather than an off-the-shelf array. NVMe over FC is a clear path to the eventual adoption of NVMe-based fabrics, but there has to be a middle ground for the transition and this is the most logical way to do it.

      FC will live on for a long time among enterprise customers, especially with the availability of NVMe over FC, because FC works, their engineers understand it, and the switches are extremely reliable and require virtually no maintenance once they're turned up. Not everyone is reaching for hilarious performance numbers in their environment; most are perfectly satisfied with low per-I/O latency for their mostly random-I/O VM and database workloads and aren't looking to push petabytes of data around.

    3. The Average Joe

      Re: Ugh!

      Well, when FC16 came out, VMware could only run it at 8 gig; so much for spending the cash on HBAs and switches when the stupid OS would not support it.

      The only reason this is a good thing is that the FC16 and FC8 gear will be on fire sale and us bargain hunters can pick it up for pennies on the dollar when it is just three years old. LOL, sometimes less than that!

  2. chris coreline

    The cost with NVMe over FC is parallelism. RoCE does it a bit better, and native PCIe is the best. (Not sure if this will equate to any real performance differential, line rate notwithstanding, in any but the edgiest of edge cases.)

    The benefit of NVMe over FC is that it's basically free (if your fabric is on the steep end of the tech curve).

    I still think that, long term, RoCE will win, but this should help FC stay relevant for a good while yet. Either way, whoever wins, SCSI loses. Goodbye, old friend, enjoy your new life shuffling tapes around.

  3. The Average Joe

    Only for those with BIG pockets

    yep, big money.

    The rest of us are using 10Gig Ethernet, data center Ethernet, and multiple sessions on LACP with MLAG switches. I am sure there are businesses that need this, but for the rest of us it is more about the software stack and licensing: the migration off Oracle and MS SQL to open-source products. The open-source stuff is so good that if you're not in the top 1% you're wasting your $ on some of this commercial stuff. Good staff and good hardware make it so that the developers are now the bottleneck.

    We are now in the phase of supporting Android and iOS as the major platforms, so all the legacy Microsoft products are not even considered anymore. Some have moved on and others are stuck in the past. LOL
