Brocade-funded study says Fibre Channel faster than FCoE

A study funded by Brocade has found that Fibre Channel provides faster access to all-flash arrays than Fibre Channel over Ethernet. FCoE links all-flash arrays to the servers that access them over Ethernet rather than over a Fibre Channel fabric. Testers from the Evaluator Group, who carried out the study on behalf of Brocade, hooked …

COMMENTS

This topic is closed for new posts.
  1. Otto is a bear.

    Stating the bleeding obvious

    Well duh..... If you wrap one protocol in another it'll be slower than running it on its own. It's more a case of how much faster the native protocol is than the wrapped one. The key being that Fibre Channel fabrics cost an arm and a leg, as you say. So is that 20% extra worth it?
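    Putting rough numbers on the wrapping cost itself: the sketch below is a back-of-envelope estimate of FCoE framing overhead for one full-size FC frame. The header and trailer sizes are commonly quoted approximate figures, assumed here purely for illustration; they are not taken from the study.

```python
# Back-of-envelope FCoE encapsulation overhead for one full-size FC frame.
# Header/trailer sizes are approximate, commonly quoted figures used only
# for illustration -- not numbers from the Evaluator Group study.

FC_PAYLOAD   = 2112  # maximum FC data field (bytes)
FC_HEADER    = 24    # FC frame header
FC_CRC       = 4     # FC CRC
ETH_HEADER   = 18    # Ethernet header incl. 802.1Q VLAN tag
FCOE_HEADER  = 14    # FCoE header (version field plus encoded SOF)
FCOE_TRAILER = 4     # encoded EOF plus reserved bytes
ETH_FCS      = 4     # Ethernet frame check sequence

fc_frame   = FC_HEADER + FC_PAYLOAD + FC_CRC
fcoe_frame = ETH_HEADER + FCOE_HEADER + fc_frame + FCOE_TRAILER + ETH_FCS

overhead = fcoe_frame - FC_PAYLOAD
print(f"FCoE frame on the wire: {fcoe_frame} bytes")           # ~2180 bytes
print(f"Overhead vs. payload:   {overhead / FC_PAYLOAD:.1%}")  # ~3%
```

    On these assumed figures the per-frame wrapping cost is only a few percent, so the practical question is, as the comment says, how the link rates and behaviour under load compare.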

    1. Aitor 1

      Re: Stating the bleeding obvious

      The thing is, we are also comparing 2x8 FC to 1x16 FC. So yes, 1x16 is better.

      If I had the money I would go to FC, but I still think it can't beat having the SSDs INSIDE the server, performance-wise.
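      To put rough numbers on the 2x8 vs. 1x16 comparison, here is a minimal sketch using the usual nominal per-direction data rates for FC links (roughly 800 MB/s for 8GFC and 1600 MB/s for 16GFC; the change in line encoding is why 16GFC doubles usable throughput). These are textbook figures assumed for illustration, not measurements from the study.

```python
# Nominal per-direction Fibre Channel data rates (textbook figures, assumed
# here for illustration; 8GFC uses 8b/10b encoding, 16GFC uses 64b/66b).

RATE_MB_S = {"8GFC": 800, "16GFC": 1600}  # MB/s per link, per direction

two_by_8  = 2 * RATE_MB_S["8GFC"]    # aggregate across two 8GFC links
one_by_16 = 1 * RATE_MB_S["16GFC"]   # a single 16GFC link

print(f"2 x 8GFC aggregate: {two_by_8} MB/s")
print(f"1 x 16GFC link:     {one_by_16} MB/s")
# The aggregates match, but an individual I/O stream normally rides one
# path at a time, so a 16GFC link gives any single flow twice the headroom.
```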

    2. seven of five

      Re: Stating the bleeding obvious

      otoh, FCoE-capable hardware is not exactly free, either.

  2. Ian Michael Gumby

    How fast is fast enough?

    How much value do you put on simplifying your wiring?

    Case in point... if 10GbE is fast enough (meaning you're not saturating your link and the latency is not noticeable), does the simplicity of having one set of networking cables outweigh the underlying speed?

  3. ucs_dave

    Well duh

    If you read the study, they basically bottlenecked the FCoE test. They didn't publish the logical UCS configuration, but I'm going to assume (based on the bias) that they only gave the UCS server a single vHBA - which would have limited it to 10Gb tops, despite all of the other cabling, etc. Depending on how they did the workload, the entire test would have been bottlenecked at either the 10Gb link at the server, or one of the two 8Gb links from the Fabric Interconnect to the Brocade FC switch. Despite Brocade having FCoE capability, and the stated purpose of testing FC vs. FCoE, why would they not run 10Gb FCoE northbound from the Fabric Interconnect? Oh, right, because the study was formed not to actually test anything, but to support a preconceived marketing goal.
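    Assuming the topology described above (one 10Gb vHBA on the blade, two 8Gb FC uplinks from the Fabric Interconnect to the Brocade switch, then 16Gb FC to the array), here is a quick sketch of where such a path would top out. The link rates are nominal figures and the topology itself is an assumption, since the report doesn't publish the logical UCS configuration.

```python
# Rough bottleneck estimate for the FCoE test path as described above.
# Both the topology and the nominal per-direction rates are assumptions
# (the report does not publish the logical UCS configuration).

PATH_MB_S = {
    "blade vHBA (1 x 10GbE FCoE)": 1150,  # ~10 Gb/s minus FCoE framing
    "FI -> Brocade (2 x 8GFC)":    1600,  # 2 x ~800 MB/s if traffic spreads evenly
    "Brocade -> array (16GFC)":    1600,  # ~1600 MB/s
}

bottleneck = min(PATH_MB_S, key=PATH_MB_S.get)
for segment, rate in PATH_MB_S.items():
    print(f"{segment}: ~{rate} MB/s")
print(f"Path tops out at '{bottleneck}' (~{PATH_MB_S[bottleneck]} MB/s), "
      "versus ~1600 MB/s for a native 16GFC attach.")
```

    If a single flow hashes onto just one of the 8GFC uplinks, that segment drops to roughly 800 MB/s and becomes the limit instead, which is exactly the kind of bottleneck the comment above describes.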

    1. Nate Amsden

      Re: Well duh

      FCoE has been all marketing for the past five years, and across the board it has thus far failed in the marketplace. UCS deployments may use it, but that's because they are hamstrung by the limited connectivity options in those solutions.

  4. Anonymous Coward

    "It still costs an arm and a leg", so I´ll have stay with ISCSI and my high latency ethernet switches...

    1x HP 5900AF-48XGT-4QSFP+ (JG336A) with 48x 10GBase-T and 4x 40GbE QSFP+ ports: €5,821 (excluding VAT, fans, power supplies, and care pack)

    1x HP 5930-32QSFP+ switch (JG726A) with 32x 40GbE QSFP+ ports: €13,456 (excluding VAT, fans, power supplies, and care pack)

  5. russtystorage

    Real world testing

    I was heavily involved in setting up and running this test. It was designed to test real-world configurations that we see Fortune 2000 companies (our clients) evaluating and deploying. The evaluation criteria were to determine whether there were differences in performance, cabling, or power and cooling between FC and FCoE.

    We found that 16 Gb FC provided better performance under load than twice as many 10 Gb FCoE connections did. The target was 100% solid-state storage with a 16 Gb FC attach. The target was not FCoE, since very few companies are contemplating FCoE targets. As a result, a bridge from FCoE to FC was required, which again is quite common in actual deployments.

    We were somewhat surprised to find that power and cooling came in roughly 50% lower for FC, which also used fewer cables while providing higher performance than FCoE.

    1. tonybourke

      Re: Real world testing

      There was a lot wrong with the test. Probably the most dramatic problem was either that they used software FCoE initiators (despite the installed VIC card being capable of hardware FCoE), thereby sabotaging the FCoE results, or that the authors were so unfamiliar with the very basics of UCS that they didn't realise they'd actually set up a hardware initiator and merely thought they'd set up a software one (the document claims the FCoE is software-initiated). Either way, it suggests they had no idea what they were doing with Cisco UCS, and it makes me wonder what else they got wrong. It's hard to tell, since they only provided a single, mostly useless screenshot (not totally useless: I could tell which version of UCS they used, which they didn't state in the document).

      The cabling and power comparison was laughable, given it was a test scenario and nothing like what would be done in a production environment. They counted the number of cables that provided FC against the number of cables that provided both FCoE and Ethernet. Two blades is not why you buy a blade system; you buy it for 6, 10, 20, 40, or 80 blades. Then you can compare cable, power, and port counts for FC, Ethernet, and/or FCoE. If you're only deploying two blades in any chassis system, you're either waiting to grow into more blades, or you're fine with wasting power and cooling anyway.
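      To illustrate that scaling point, here is a hypothetical per-blade cable count as a chassis fills up. The per-chassis uplink counts are invented purely to show how the amortisation works; they are not the counts used in the study.

```python
# Hypothetical cable-count comparison showing why per-blade numbers only
# become meaningful as chassis fill up. All counts are invented for
# illustration; they are not the counts used in the study.

BLADES_PER_CHASSIS        = 8   # assumed chassis size
UPLINKS_CONVERGED_CHASSIS = 8   # assumed converged (FCoE + Ethernet) uplinks per chassis
UPLINKS_SPLIT_CHASSIS     = 12  # assumed separate Ethernet + FC cables per chassis

for blades in (2, 8, 16, 40, 80):
    chassis = -(-blades // BLADES_PER_CHASSIS)  # ceiling division
    converged = chassis * UPLINKS_CONVERGED_CHASSIS / blades
    split     = chassis * UPLINKS_SPLIT_CHASSIS / blades
    print(f"{blades:3d} blades: {converged:4.1f} converged cables/blade vs "
          f"{split:4.1f} split cables/blade")
```

      At two blades the per-blade count is dominated by fixed per-chassis uplinks either way, which is why counting cables in a two-blade test says very little about a full deployment.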

      If they had used UCS 2.1 (the screenshot indicates 2.0), they could have plugged the storage array directly into the fabric interconnects, bypassing the Brocade switch. If the storage array was a NetApp (they never mentioned which vendor they used), they could have done direct FCoE connectivity.

      The overall configuration was not given, so it's impossible to tell if they got something wrong on the UCS side.

      And I loved the part where they whine about setting up UCS. Of course setting up an unfamiliar technology is going to take longer.

      Overall, it was kind of a trainwreck.

    2. tonybourke

      Re: Real world testing

      Also, why did you use a fake screenshot of UCS to show its "configuration"? The screenshot in the report is of the emulator, not a live UCS system. The SN is "1". That means emulator.

  6. russtystorage

    Beyond performance

    There are several issues being evaluated here: performance, management, and power and cooling. If we set the performance questions aside for now, the other issues are still quite relevant.

    One of the main premises behind FCoE is that it is less expensive to operate and simpler to configure. If an FCoE environment requires 50% more cables and consumes 50% more power and cooling, then the question becomes: is it really less expensive?

    For applications that are not highly performance-critical, the cabling, power, and cooling aspects are still very relevant. For performance-sensitive applications, FCoE did have more latency under high load. These facts are hard to argue with.
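    As a rough sanity check on that question, the sketch below turns a 50% power-and-cooling delta into an annual running cost. The baseline wattage, electricity price, and PUE are placeholder assumptions chosen only for illustration; they are not figures from the study.

```python
# Back-of-envelope running-cost difference for a 50% power-and-cooling delta.
# Baseline wattage, electricity price, and PUE are placeholder assumptions
# chosen only for illustration; they are not figures from the study.

FC_FABRIC_WATTS   = 400                     # assumed steady draw for the FC fabric
FCOE_FABRIC_WATTS = FC_FABRIC_WATTS * 1.5   # the claimed ~50% higher draw
PRICE_PER_KWH     = 0.15                    # assumed electricity price, $/kWh
PUE               = 1.6                     # assumed cooling/overhead multiplier

HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    """Yearly electricity cost including cooling overhead (PUE)."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH * PUE

delta = annual_cost(FCOE_FABRIC_WATTS) - annual_cost(FC_FABRIC_WATTS)
print(f"Extra running cost per year at these assumptions: ${delta:,.0f}")
```

    At these placeholder numbers the yearly electricity delta per fabric is modest; the cost argument really turns on scale and on pricing the extra cables and switch ports as well.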

    1. avvid

      Re: Beyond performance

      Just read an article that calls the validity of this test into question. (http://datacenteroverlords.com/2014/02/05/fcoe-versus-fc-farce/)

      Being a purchased (Brocade-funded) test, it's most certainly biased, although only complete transparency will show whether the test is also false.

      1. Anonymous Coward

        "Just read an article that calls the validity of this test in question. (http://datacenteroverlords.com/2014/02/05/fcoe-versus-fc-farce/)"

        "I’m not a Cisco fanboy, but I am a Cisco UCS fanboy, so I took great interest in the report. (I also work for a Cisco Learning Partner as an instructor and courseware developer)"

        Ahh so no hint of bias there then......
