Is FCoE faster than Fibre Channel? Who knows? Just run your own tests

The Cisco Fibre Channel over Ethernet (FCoE) supporters' club has taken grave exception to what it views as a deeply flawed Evaluator Group study, funded by Fibre Channel (FC) enthusiasts Brocade, which showed FC was faster than FCoE. FCoE supporters say the study shows no such thing and lambasted both the Evaluator Group and …

COMMENTS

This topic is closed for new posts.
  1. Aitor 1

    I agree

    And I must say that when I saw the config, I also thought it wasn't fair.

    1. Anonymous Coward

      Re: I agree

      So let's see a 'fairer' test then, rather than trying to waft the smell of steaming turds across the only benchmark we currently have...

    2. Anonymous Coward

      Re: I agree

      Here is a good presentation of why FCoE is the future of high end storage infrastructure: http://www.slideshare.net/emcacademics/converged-data-center-fcoe-iscsi-and-the-future-of-storage-networking-emc-world-2012

  2. kbb

    "what hope do the non-technical and the uninitiated have to find out the truth?"

    Should the non-technical and uninitiated be concerned? Shouldn't they be asking the technical and initiated as to which is the best solution to get?

    1. John Riddoch

      Re: "what hope do the non-technical and the uninitiated have to find out the truth?"

      "Shouldn't they be asking the technical and initiated as to which is the best solution to get?" - yes, but how many PHBs out there trust their techies over a shiny brochure and pushy sales droids who take them for a free lunch? There are too many non-techie managers who sign off on a purchase without checking with their IT department who are then expected to make whatever's been bought work.

  3. danXtrate

    From a deployment point of view, I'd say the best way to go is FC. It's simple, reliable and fast. FCoE adds complexity as you'd have to manage both Ethernet and FC from the same switches, which sounds great but is a real pain. I think FCoE is flawed right now as you cannot do proper zoning on the FC side without going through proper FC switches.

    1. Anonymous Coward

      No, the best way from the deployment point of view is FCoE - it removes complexity - you get a far simpler converged infrastructure, with no need for any FC components other than the endpoint devices. No need to manage two sets of switches either, and zoning is often simpler due to the VSAN model.

      1. danXtrate

        As far as I know you still have to do zoning inside a VSAN, so where is the plus there? Yes, you could make things work without zoning, but that would really be a management nightmare as the architecture gets more complex.

        In my opinion FCoE is less complex from a marketing point of view, not from the actual implementation guy's point of view.

      2. tonyhurson

        Are native FCoE networks currently deployable?

        But are native FCoE networks broadly deployable today? My limited information is that they are not: most deployed FCoE segments switch to native FC at the top of the rack.

        Native FCoE networks need to be multipath scalable. Perhaps that explains the crossover to native FC at the top of the rack.

        1. Anonymous Coward

          Re: Are native FCoE networks currently deployable?

          "But are native FCoE networks broadly deployable today?"

          They have been running in enterprises for years now... Well proven and tested - and with significantly lower cost, complexity, deployment and management overhead compared to adding native FC.

          "My limited information is that they are not: most deployed FCoE segments switch to native FC at the top of the rack."

          No, most FCoE is over Ethernet at every point between the disk array FCoE front end port and the server unified fabric card. This is where the big cost savings and reduced complexity come from. No more FC switches. None. Zero.

          1. Matt Bryant Silver badge

            Re: AC Re: Are native FCoE networks currently deployable?

            ".....This is where the big cost savings and reduced complexity come from. No more FC switches. None. Zero." Male bovine manure! So, all that FC traffic just magically disappears onto what is usually an already crowded LAN without an hiccup? Never! What actually happens is you have to lay in extra cabling for the additional traffic now saturating your LAN, which means extra ports on the switch (kerching for CISCO) each with additonal port licences (kerching), more monitoring (kerching) and - inevitably - more CISCO switches (because you're locked in with UCS) - KERCHING, KERCHING, KERCHING!!!! IME, anyone trying to sell FCoE, especially CISCO FCoE, as a moneysaver is talking out of their recturm.

            1. Anonymous Coward

              Re: AC Are native FCoE networks currently deployable?

              "Never! What actually happens is you have to lay in extra cabling for the additional traffic now saturating your LAN"

              No, you just make better and more effective use of your connections to your core - usually 1Gbit at the low end, or 10Gbit or 40Gbit in the enterprise - and you partition that bandwidth as desired at the unified fabric adaptors. There is normally never a need to use extra ports. And even if you did, it's still way cheaper than a parallel FC infrastructure, especially when you consider the less complex environment, the faster deployment and the lower TCO.

            2. danXtrate

              Re: AC Are native FCoE networks currently deployable?

              Nice point there, Matt! I honestly think that Anonymous Coward is either getting his info from Cisco FUD or is just trolling at this point. I can hardly imagine a large enterprise using a storage connection technology that you can't even use for basic disaster recovery without insanely expensive gear. Native FCoE (and when I say native I mean going from an FCoE initiator to an FCoE target without any bridges) can be done, but the switches able to handle this kind of traffic are extremely expensive as they need to include a Fibre Channel Forwarder in order to deliver the FC part of FCoE to the target. I've done it, it works. The major problem with FCoE is that it supports a small number of hops (three, I believe), so you cannot scale up a native FCoE solution properly. You'd have to add FC switches, thus negating any theoretical advantage. I was a big fan of FCoE when it became economically viable, but all that vanished when I was faced with the first installations at my customers.

              1. Anonymous Coward

                Re: AC Are native FCoE networks currently deployable?

                "can hardly imagine a large enterprise using a storage connection technology that you cant even use for basic disaster recovery without insanely expensive gear."

                Most large enterprises can afford, say, Cisco Nexus 7000 switches... And even if they don't, there is nothing to stop you using, say, FCIP for the disaster recovery part...

                1. danXtrate

                  Re: AC Are native FCoE networks currently deployable?

                  What FCoE is supposed to be is hassle-free, easily manageable networking. At least this is how its major backers push it onto unsuspecting targets.

                  Management is basically the same as in usual architectures, the single difference being that you do both Ethernet and FC management from the same console. No REAL advantage here.

                  But let's talk about the disadvantages:

                  FCoE switches are more expensive than normal 10Gbit switches and 8Gig FC switches combined.

                  The standard is evolving as we speak and maybe the new FC-BB-6 standard will solve the cost problem.

                  Ethernet changes are more frequent than SAN changes and this leaves the door open for administration SNAFUs affecting both the communication and data access of the infrastructure.

                  And in the end, if you take a step back, you begin to wonder why Cisco is so eager to push FCoE when the entire industry is shifting to Software Defined Networking, which works just fine with cheap, dumb Ethernet and FC switches.

                  P.S. Don't even get me started on Cisco's half-arsed, "good enough" implementation of their blade server systems. Had they not offered the products basically for free while suckling profits from their networking business, they would be a mere speck of dust on the server radar.

                  1. Anonymous Coward

                    Re: AC Are native FCoE networks currently deployable?

                    "FCoE switches are more expensive than normal 10Gbit switches and 8Gig FC switches combined."

                    Actually there is a significant saving when you allow for the reduced power, support, cabling and port costs of a converged infrastructure...

  4. Anonymous Coward

    As Cisco peddle FlexPods, I'm sure the collusion between vendors and media has more than a few homely roots.

    Any tech company flogs their wares floating on a wave of FUD and irrelevant benchmarks and comparisons. Perhaps Brocade are better sailors than Cisco, who knows?

  5. Anonymous Coward

    Let's keep it in context

    Cisco is far from innocent on the FCoE vs FC FUD front.

  6. CheesyTheClown

    FCoE and FiberChannel are both disgusting

    As a long time operating system developer, protocol developer and most recently networking guy (pays a lot better and you don't have to think), I have to finally call bullshit on the whole iSCSI, FC, FCoE battle.

    It's amazing how the networking world has forced us into buying overpriced junk to compensate for underlying issues which are caused by using a block communication protocol from the 1970s. We're buying all these fancy schmancy systems for transmitting and receiving SCSI over faster media and forcing network MTUs to be increased, forcing single pathing, forcing insanely short latencies, all because SCSI is a piss-poor network protocol and should be abandoned.

    This isn't 1978, it's 2014 and we should stop focussing on fixing this crap and instead design a block protocol which works awesome over normal networks and even better over reliable Ethernet.

    A block protocol needs only five basic functions:

    Open

    Close

    Seek & Read block(s)

    Seek & Write block(s)

    IO control.

    In addition, it should be possible to queue reads and queue writes. Blocks shouldn't be fixed-size and shouldn't assume they need to map to physical hardware block sizes. Algorithms implemented by Doug Lea and optimized by others such as Lars Thomas Hansen are ideally suited for scalable block allocation and LUT virtualization.

    As a massive bonus, to scale wider, the protocol should have a high-level block device zoning system as well as an enumeration system.
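
    As a rough sketch of what that five-operation interface might look like (Python; the names and signatures are purely illustrative assumptions of this sketch, not the commenter's actual design):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BlockRequest:
        """One queued read or write against an open block target (hypothetical type)."""
        offset: int                   # byte offset; blocks need not be fixed-size
        length: int                   # bytes to read, or len(data) for a write
        data: Optional[bytes] = None  # payload for writes, None for reads

    class BlockDevice:
        """Sketch of the five operations listed above: open, close, read, write, IO control."""

        def open(self, target: str, flags: int = 0) -> int:
            """Open a named target (found via enumeration/zoning) and return a handle."""
            raise NotImplementedError

        def close(self, handle: int) -> None:
            """Release the handle, flushing any queued writes."""
            raise NotImplementedError

        def read(self, handle: int, requests: list[BlockRequest]) -> list[bytes]:
            """Seek & read: service a queue of variable-length reads."""
            raise NotImplementedError

        def write(self, handle: int, requests: list[BlockRequest]) -> int:
            """Seek & write: service a queue of variable-length writes; return bytes written."""
            raise NotImplementedError

        def ioctl(self, handle: int, op: int, payload: bytes = b"") -> bytes:
            """IO control: everything that isn't a read or a write."""
            raise NotImplementedError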

    Oddly, the amount of work that's gone into half-assed solutions for hacking the SCSI square peg into the modern storage round hole has been a disaster. We are NOT at the mercy of OS vendors to support alternative boot protocols. We only need to implement remote block device support in the virtualization environment and on a server.

    I have experimented with this using QEMU and VirtualBox and found it to be insanely simple to implement. My algorithms are not as well optimized as you would get from the Ph.D.s, but I was able to boot all operating systems with zoning, security, line encryption and more within less than a day of coding. In addition, I saw no reason to be forced into using "Big storage" from vendors like EMC and NetApp.
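
    For a flavour of how small the server side of such a remote block device can be, here is a toy sketch (Python over plain TCP, with a made-up 13-byte wire header; no zoning, security or line encryption - an illustration of the idea only, not the QEMU/VirtualBox code referred to above):

    import socket
    import struct

    # Made-up wire format: 1-byte opcode, 8-byte byte offset, 4-byte length, then payload for writes.
    HDR = struct.Struct("<BQI")
    OP_READ, OP_WRITE = 1, 2

    def serve(path, port=9123):
        """Expose a single file-backed block device to one client at a time."""
        with open(path, "r+b") as backing, socket.create_server(("", port)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    while True:
                        hdr = conn.recv(HDR.size, socket.MSG_WAITALL)
                        if len(hdr) < HDR.size:
                            break  # client disconnected
                        op, offset, length = HDR.unpack(hdr)
                        backing.seek(offset)
                        if op == OP_READ:
                            conn.sendall(backing.read(length))
                        elif op == OP_WRITE:
                            backing.write(conn.recv(length, socket.MSG_WAITALL))
                            backing.flush()
                            conn.sendall(b"\x00")  # one-byte ack

    if __name__ == "__main__":
        serve("/tmp/disk.img")  # hypothetical backing file

    A real version would obviously need the zoning, security and multipathing argued for above, but the basic read/write path is about this small.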

    There needs to be a networking group made up of people who understand networks, how networking people think, and also protocol design and block device technology, in order to replace SCSI, since SCSI is an ancient dog with fleas.

    1. Anonymous Coward

      Re: FCoE and FiberChannel are both disgusting

      "This isn't 1978, it's 2014 and we should stop focussing on fixing this crap and instead design a block protocol which works awesome over normal networks and even better over reliable Ethernet."

      Someone already did - just Bing "SMB v3"

      1. Trevor_Pott Gold badge

        just Bing "SMB v3"

        ...so you don't get access to any technical information, get frustrated, and have to Google it to learn anything useful?

        1. Anonymous Coward

          Re: just Bing "SMB v3"

          First page includes hits with the full spec and detailed tech information...

          Brief summary of major new features in SMB3 here:

          http://blogs.technet.com/b/server-cloud/archive/2011/09/20/storage-and-continuous-availability-enhancements-in-windows-server-8.aspx

          1. Trevor_Pott Gold badge

            Re: just Bing "SMB v3"

            Not for me. First page is a blog with nothing but marketing pap and no meat. http://blogs.technet.com/b/windowsserver/archive/2012/04/19/smb-2-2-is-now-smb-3-0.aspx

            Maybe you were using Google instead? I know it's hard to tell the difference, but here's the clue: Google's the one that works.

            1. Anonymous Coward

              Re: just Bing "SMB v3"

              First Bing page includes full details at link #3

              http://en.wikipedia.org/wiki/Server_Message_Block

              This link you refer to as 'marketing pap and no meat' is link #9 on the first page of Bing, but the #1 link given by Google.co.uk

              1. Trevor_Pott Gold badge

                Re: just Bing "SMB v3"

                I suspect that's because Google knows you all too well, AC, and has customized itself to deliver you results as crappy as the rest of the world gets with Bing. Once you're used to mielie pap, it's hard to adjust to steak.

                Now, if you'll excuse me, I have to go hunt a bunch of Windows error codes and look for patches on Microsoft.com. For that I'll need Google, because if there's one thing Bing can't do worth a bent damn, it's search Microsoft's own web properties.

                1. Anonymous Coward

                  Re: just Bing "SMB v3"

                  "I suspect that's because Google knows you all to well, AC"

                  I checked from a couple of PCs belonging to others and get similar results - so it seems that it is only YOU who gets the crappy results...

                  1. Trevor_Pott Gold badge

                    Re: just Bing "SMB v3"

                    But you don't seem to understand - I get great results by using the search engine that actually works: Google. Bing can barely find its own website, let alone a useful search result on any given topic.

                    Bing is the Pepsi of online search engines: nobody really likes it, but there's a weird subsection of the population that will claim vociferously that they do, just so that they can be different. If you want to suffer needlessly, go right ahead. No skin off my nose, pardner.

  7. M. B.

    The universal answer in IT...

    ..."it depends".

  8. Anonymous Coward

    My TLDR summary

    Both are pretty fast but the fastest one is probably going to be determined by how much you spend on expert consultants from the vendor/VAR.

  9. Nate Amsden

    don't rely on vendor studies for FCoE

    Just make it simpler.

    Don't use FCoE. It's been a market failure since day one. I remember sitting through, what was it, 5-6 years ago now, various presentations from the NetApp folks (and one or two from Brocade) talking about how great FCoE was. I never bought into it and I still don't. The added cost of a real FC network IMO is quite trivial in the grand scheme of things for the benefits that you get (greater stability (more mature, etc.), isolated network, etc.). It's pretty crazy even now the sheer number of firmware updates and driver fixes and stuff that are going out for various converged network adapters (and I'm sure the aggregators a la UCS, as well as HP FlexFabric and any others).

    Applications often do fine if there are network issues (the last round of issues I had was with a manufacturing flaw in a line of 10GbE NICs about two years ago - fortunately, since I had two cards in each server, it never caused an outage on any system when they failed); if the network goes down, no big deal, things recover when they come back.

    Storage, of course, is unforgiving: any little glitch and shit goes crazy. File systems get mounted in read-only mode, applications crash hard, operating systems crash hard, etc. The last major storage issue was with a shitty HP P2000 storage system (since replaced with 3PAR) which on a couple of occasions decided to stop accepting writes on both of its controllers until I manually rebooted them. Each time it was at least an hour of downtime to recover the various systems that relied on it. Fortunately it is a very small site.

    Keep it simple. If you really, really want to use storage over Ethernet, I suppose you could go the iSCSI route, and/or NFS, though that'd certainly be a lower tier of service in my book. I have a friend who has done nothing but QA for a major NIC manufacturer on iSCSI offload for the past decade and he has just a ton of horror stories that he's told me over the years. That, combined with the wide range of quality of various iSCSI implementations, has kept me from committing to it for anything critical. I still do use it, though, mainly for non-production purposes, to leverage SAN snapshots to bypass VMware's storage layer and export storage directly to the guests, to work around bullshit UUID storage mappings in vSphere since 4.0.

    Now if you're using UCS, I'm sorry: from what I've seen/read/heard those blade systems have very limited connectivity options, so you may be stuck with Ethernet-only storage. At least HP (and others, I assume) give you options to use whatever you want.

    When a good new VM server can cost well over $30k a pop with vSphere Enterprise Plus licensing (and a few hundred gigs of RAM), the cost associated with FC is totally worth it. I'm sad that QLogic is getting out of the FC switch business... though they seem to continue to sell their 8Gbps stuff, which I will use for as long as I can. I always found the Brocade stuff more complicated than it needed to be.

    1. Anonymous Coward

      Re: don't rely on vendor studies for FCoE

      "It's been a market failure since day one"

      Apparently the market disagrees. FCoE is selling well and is still accelerating. For instance Cisco have sold tens of thousands of UCS installs - the majority of which run FCoE.

      Anyway, regardless of the current position, this is why FCoE is the future in this space:

      http://img.deusm.com/datacenteracceleration/2013/09/267361/140414_150029.png

      1. Trevor_Pott Gold badge

        @AC

        When you're starting from "zero" it's not hard to make a graph look like you're "accelerating." It's also not hard - if you're Cisco - to make stupid amounts of money for something that isn't as good as the competition. Vendor inertia + pressure sales = victory. That has piss all to do with what's the most appropriate technology and everything to do with lies, damned lies and luncheons.

        Side note: please explain to me why I, as a business owner, am interested in having all of my data go through a single wire? If I am speed constrained there may be a case to be made for getting a faster network port, but there are a lot more factors to consider than just speed.

        Redundancy, for one. I am not fond of the idea that some putz could come along, unplug a single cable and annihilate my entire datacenter. For that matter, how does a pipe the size of the universe itself help me if the widgets on either end don't go that fast?

        Speed = eleventy billion only matters if you subscribe to the notion that life is better with great big hierarchical north-south networks with storage at the bottom, all centralized and bottlenecked. Newsflash: it's 2014. We go east-west now. Fabrics are the new black and Cisco is playing catchup.

        Decentralized and dynamic is good, for a number of reasons. Expandability, capital cost, and redundancy are the big ones. FCoE could be a part of this great new world, but at the moment Cisco's implementation isn't. I think we'll see a lot of traction yet from more numerous smaller, slower storage deployments that are connected more widely east-west than a great big fat pipe heading southward to a single point of storage failure.

    2. Trevor_Pott Gold badge

      @Nate Amsden

      Bang on analysis. Insightful, accurate...truly top quality. You should package that up and send it in to Drew, get it posted as an article and get paid for it, sir.

  10. Matt Bryant Silver badge

    I detect Elmers....

    ".....whereas two FCoE 10gigE came from the servers to a Cisco switch....." I suspect Metz is partially correct about the involvement of hp's Elmers there, but not in a way CISCO want to advertise. With UCS you have to go out of the chassis to a CISCO switch, and if you need to address a native FC device you usually have to go via another switch step. Of course, the hp C7000 chassis can take real SAN switch modules so you can "collapse your SAN switching layer into the chassis" (yes, that's the hp marketting term), or you can use FC pass-thru modules and effectively connect straight from the blade's FC mezz card right to a supported FC device. Warning - hp have reams of FUD on this and will bore you with it at the drop of a hat! Of course, CISCO complaining about an allegedly rigged benchmark test is one of those kettle-meet-pot moments.

  11. rch

    "Why not end-to-end FCoE?"

    When HDS presented its "Universal Storage" they did not mention FCoE with a single word. No one in the audience asked about it either. That is the status of FCoE after 7 years of Cisco hype.

    My experience is that Cisco UCS is bought for many reasons other than FCoE. One of them is that the network people, or more correctly the Cisco people, have had a say in the server purchase.

  12. Anonymous Coward

    " I have a friend who has done nothing but QA for a major NIC manufacturer on iSCSI offload for the past decade and he has just a ton of horror stories that he's told me over the years. That combined with the wide range of quality of various iSCSI implementations has kept me from committing to it for anything critical."

    So, what's the current state of iSCSI implementation now?

    We paid roughly €80,000 for two fully licensed HP/Brocade FC switches, incl. warranty, ~3-4 years ago, because everybody told us to do so and of course iSCSI was said to be evil/low-cost/unreliable. Well, both switches are almost fully equipped with fibre optics.

    We are currently planning for the next SAN and Ethernet is simply the "cheapest" option.

    - price for 48x 10GBase-T (RJ45) plus 4x 40GbE QSFP+: around €7,000 (or the same for a 48x 10GbE SFP+ model; 10GBase-SR optics are cheap nowadays)

    Let's take two of those, stack them, add two 10GbE NICs into each, use LACP and activate TRILL/802.1aq.
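
    (Back-of-envelope, with the figures above: two of those switches come to roughly €14,000 for the pair, against the roughly €80,000 we paid for the two FC switches.)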

    Why shouldn't iSCSI be used nowadays? Only because someone is telling me again that FC would be "superior"? The next consultant telling me something about latency will be kicked out of my office...

    1. Trevor_Pott Gold badge

      Okay, simple reason that iSCSI can be bad? When the pipes reach saturation iSCSI can turn into a pumpkin. It doesn't degrade gracefully. That's pretty much it in a nutshell right there. And even that depends entirely on your implementation.

      FC is both less likely to fully saturate any given link and it degrades better if the link does become saturated.

      That said, I have successfully used iSCSI over a 30Mbit WAN in emergency scenarios and had it work fine. I wouldn't do that except in an emergency, but iSCSI has come a heck of a long way.
