Fibre Channel Industry Association extends roadmap to 128G bps

It's all about speed: the industry association behind Fibre Channel has laid out its acceleration timeline, with 32G bps and 128G bps now nailed to the calendar. The Fibre Channel Industry Association has set down its Gen 6 standard, and its timeline puts shipment of kit following the spec at 2016. The basic channel in the …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    What is the point of defining channel bonding?

    That's always been possible with FC, using ISLs and multiple FC cards along with OS support (e.g. PowerPath, VxVM, etc.)

    Seems more like they're worried about 100G FCoE stealing some marketing thunder at some point, so they want to point and say "we're even faster than 100G!"

    1. Anonymous Coward

      Re: What is the point of defining channel bonding?

      400Gbit Ethernet will probably be out by 2016. If you want the fastest, then FCoE is the way to go, both now and in the future...

      http://www.ethernetalliance.org/wp-content/uploads/2013/05/EthernetAlliance_400GWhyNow_techbrief_FINAL.pdf

      1. Lusty

        Re: What is the point of defining channel bonding?

        As long as you're defining speed only in terms of throughput, then yes. Until arrays deliver more throughput, though, you're far better off going for latency, and that currently means Fibre Channel without the Ethernet gunking it up. Considering the back-end disk loops on most modern SANs are 6Gbps SAS, you'd need an awful lot of loops to require these kinds of bandwidths. Unless we go back to FC loops, of course...
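
        A rough back-of-the-envelope sketch of that loop count (a hypothetical Python snippet; the 6Gbps-per-loop figure is from above, protocol overhead ignored):

          # How many fully busy 6Gbps SAS loops it takes to feed one front-end FC link.
          # Encoding and protocol overhead are ignored; purely illustrative.
          sas_loop_gbps = 6
          for fc_gbps in (16, 32, 128):
              loops_needed = fc_gbps / sas_loop_gbps
              print(f"{fc_gbps}G FC front end needs ~{loops_needed:.1f} busy 6G SAS loops")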

        1. Anonymous Coward

          Re: What is the point of defining channel bonding?

          "you're far better off going for latency "

          FCoE has latency in the order of a few tens of microseconds. The array you are pulling data from will have latency measured in milliseconds. Hence the FCoE latency is insignificant.
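
          As a rough illustration of that proportion (the figures below are assumptions in line with the claim above, not measurements):

            # Fabric latency as a share of total I/O latency; illustrative figures only.
            fcoe_fabric_us = 30      # "a few tens of microseconds"
            array_us = 1000          # ~1 ms array response
            share = fcoe_fabric_us / (fcoe_fabric_us + array_us)
            print(f"fabric share of total latency: {share:.1%}")   # roughly 3%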

          "Considering the back end disk loops on most modern SAN is 6Gbps SAS"

          And how fast is the cache RAM?

          1. Anonymous Coward

            Re: What is the point of defining channel bonding?

            First you claim arrays have latency measured in milliseconds, which is only true if you're reading and it isn't served in cache, then you point out cache as a reason why throughput matters. Can't have it both ways.

            There are a lot more situations where latency in a SAN lies in your performance critical path than situations where throughput lies in your performance critical path. More importantly, throughput can always be increased in the rarer situations where it is critical by adding more FC links. There is no similar simple solution for addressing latency.

            Granted, there are some niche cases like HPC or big data where you need massive throughput from a SAN, so use FCoE for that if FC just isn't fast enough. But I don't see the point of FCoE otherwise, and certainly not because it offers more throughput, in the more normal case where the throughput current FC provides, and its roadmap promises, is more than adequate.

            FCoE is nice for small shops that don't have a dedicated storage team, because it is easier to install and manage with existing team members and skill sets than an FC SAN is.

            1. Anonymous Coward

              Re: What is the point of defining channel bonding?

              "First you claim arrays have latency measured in milliseconds, which is only true if you're reading and it isn't served in cache, then you point out cache as a reason why throughput matters. Can't have it both ways."

              No, it's true even if the data is in cache. The response time is still in the millisecond range to get to the data, making the FCoE latency insignificant. For instance, the fastest possible response from a NetApp flash cache benchmark test was 0.8ms... Once the data has been located, the limit is the transfer speed...
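
              A quick sketch of that split, taking the 0.8ms figure above and an assumed 1MB read at raw line rate:

                # Total time for a 1 MB read: array response plus time on the wire.
                # 0.8 ms is the figure quoted above; link speeds are raw bit rates.
                array_response_ms = 0.8
                read_mbit = 8.0                      # 1 MB = 8 Mbit
                for link_gbps in (16, 40):
                    wire_ms = read_mbit / link_gbps  # Mbit / (Gbit/s) = ms
                    print(f"{link_gbps}G link: {array_response_ms + wire_ms:.2f} ms total "
                          f"({wire_ms:.2f} ms of that on the wire)")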

              "throughput can always be increased in the rarer situations where it is critical by adding more FC links"

              At a greater cost, with more complex cabling/infrastructure, and having to run a parallel set of infrastructure for no real advantage in most situations...

              FCoE / unified fabric is cheaper and less complex - ultimately delivering a lower TCO.

          2. Lusty

            Re: What is the point of defining channel bonding?

            At that speed your cache will only last a second or so before disk becomes the bottleneck, so at present Violin-type setups are the only ones where this issue is not present.
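
            Roughly, assuming an illustrative 32GB read cache drained flat out at 128Gbps:

              # How long a cache can feed a 128G front end before disk has to keep up.
              # The cache size is an illustrative assumption.
              cache_gbytes = 32
              front_end_gbps = 128
              drain_seconds = cache_gbytes * 8 / front_end_gbps
              print(f"{cache_gbytes} GB of cache lasts about {drain_seconds:.0f} s at {front_end_gbps} Gbit/s")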

            As for the latency, you're using vendor marketing figures rather than testing. Again though, solutions like Violin make these differences painfully obvious: where the SAN does give near-zero latency, suddenly your FCoE latency looks immense compared to native FC.

            1. Anonymous Coward

              Re: What is the point of defining channel bonding?

              "At that speed your cache will only last a second or so before disk is a bottleneck"

              That depends on how much cache you use, and how much disk bandwidth you have.

              If like us you have several disk arrays you can pull data from, it is much harder to max out 40Gbps server FCoE connections than 16Gbps FC ones - and much cheaper and less complex to not run both...

              "As for the latency, you're using vendor marketing figures rather than testing. "

              No, I'm speaking from experience - latency of course varies depending on network design, but in normal circumstances FCoE latency is relatively insignificant versus the storage array latency.

              Here is a good presentation of why FCoE is the future of high end storage infrastructure: http://www.slideshare.net/emcacademics/converged-data-center-fcoe-iscsi-and-the-future-of-storage-networking-emc-world-2012

  2. Voland's right hand

    Is it me being thick or this makes no sense

    The bandwidth of QPI and PCIe3 is 256GBit. HT3 is slightly higher than that, but only slightly. The growth there has slowed down quite a bit nowadays. It is now crawling up by a few per cent on average (QPI7 to QPI8, etc). No more quantum leaps in that area.

    Looking at these numbers there is no way for a present or near-future compute system to consume it and do anything useful with it. 128GBit is overkill. End of the day, you use storage to do something useful with it, not just to pass it from left pocket to right pocket. Being able to work on 128GBit worth of packets can be useful - move them from one interface to another and tweak a few headers. Voila, here is your NAT or firewall. 128GBit to storage? Not so much.
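
    For comparison, taking the 256GBit host-bus figure above at face value:

      # Share of a host's interconnect a single FC port would occupy,
      # using the ~256 Gbit/s QPI / PCIe3 x16 figure quoted above.
      host_bus_gbps = 256
      for fc_gbps in (16, 32, 128):
          print(f"{fc_gbps}G FC port = {fc_gbps / host_bus_gbps:.0%} of the host bus")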

    1. Steven Jones

      Re: Is it me being thick or this makes no sense

      I've worked in data centres where 128Gbit FC could have been fully used with ease. Not for an individual server (although we had some using well over 32Gbit of throughput), but for connection to storage arrays, tape libraries, VM farms and for interconnects, essentially the faster the better.

      Yes, arrays and switching can, of course, generate the required throughput using multiple interfaces, but as anybody who's worked on networking can tell you, dealing with the configuration management, pathing, subtle bottlenecks and sheer cabling complexity is a major pain. It is much, much easier to have a few very large paths into your major switching and storage backbones than it is to deal with hundreds of connections. Of course, nobody should expect your average little server to require this (and you probably wouldn't use FC anyway), but there are plenty of places where it is extremely useful. Of course, the ability of storage, and to a lesser extent, switch manufacturers to fully exploit this capacity is a different matter. I saw plenty of storage arrays where the amount of front-end throughput theoretically possible bore no resemblance to actual capability.
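
      As a simple sketch of the cabling argument, assuming a dual-fabric backbone and an illustrative 512Gbit aggregate requirement:

        # Cables needed to deliver a given aggregate bandwidth to a storage backbone,
        # doubled for dual-fabric redundancy; the target figure is illustrative.
        target_gbps = 512
        for link_gbps in (16, 32, 128):
            cables = -(-target_gbps // link_gbps) * 2   # ceiling division, then x2 fabrics
            print(f"{link_gbps}G links: {cables} cables for {target_gbps} Gbit/s aggregate")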

    2. Graham 24

      Re: Is it me being thick or this makes no sense

      I remember an article in BYTE magazine from the early '90s, talking about the new 25MHz and 33MHz 486 processors that had just come out. The author said that while the 33MHz parts would be good for servers, they would never be installed in workstations since no-one could possibly need that much processing power.

      (Yes, I realise this could be considered a variant on the apocryphal "640K is enough for anyone" quote.)

      It seems to me that every single technology prediction along the lines of "it's nice, but there's no need for something that fast" has been proven false just a few years after the technology was introduced, and I can see no reason why that won't be true for this too.

      128Gb DSL-equivalent to the home in a few years? I wouldn't bet against it...

    3. Anonymous Coward

      Re: Is it me being thick or this makes no sense

      But PCIe 4 supports 16GT/s per lane (twice the bandwidth of PCIe 3) and is expected to be available next year...
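
      Roughly, per direction for an x16 slot with 128b/130b encoding:

        # Raw per-direction bandwidth of a PCIe x16 slot (128b/130b encoding).
        lanes = 16
        for gen, gt_s in (("PCIe 3.0", 8), ("PCIe 4.0", 16)):
            gbps = gt_s * lanes * 128 / 130
            print(f"{gen} x16: ~{gbps:.0f} Gbit/s per direction")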

  3. Anonymous Coward

    costs?

    "Vendors are already adding their own announcements to the FCIA's, with Brocade saying it will ship products to the 2016 timetable"

    Well, 2016. It's 2014 now, and I can get

    - 1x HP 5900AF-48XGT-4QSFP+ (JG336A) with 48x 10GBase-T, 4x 40GbE QSFP+ for 5821,-€ (without VAT, fan, power supply, carepack)

    or

    - 1x HP 5930-32QSFP+ SWITCH (JG726A) with 32x 40GbE QSFP+ for 13456,-€ (without VAT, fan, power supply, carepack)

    NOW.

    - What will a 24x/32x/48x/64x 128G bps FC switch cost me in 2016?
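
    For reference, a quick sketch of what the Ethernet kit quoted above works out to per Gbit of port capacity (list prices, excluding VAT and options):

      # Rough cost per Gbit/s of port capacity for the switches quoted above
      # (list prices in EUR, excluding VAT, fans, PSUs, support).
      switches = {
          "HP 5900AF-48XGT-4QSFP+ (JG336A)": (5821,  48 * 10 + 4 * 40),
          "HP 5930-32QSFP+ (JG726A)":        (13456, 32 * 40),
      }
      for name, (eur, port_gbits) in switches.items():
          print(f"{name}: ~{eur / port_gbits:.2f} EUR per Gbit/s of ports")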

  4. Hapkido

    Ahh yes, the classic: "I can't use FCoE, it has too much latency"

    Which is why we all use InfiniBand (being approx. half the latency of FC devices) - not....
