Fibre Channel over Ethernet is dead. Woah, contain yourselves

How many times have you heard one of these statements: Tape is dead! Mainframe is dead! The laptop is dead ... and so on. It then turns out not to be true. Most of the time it was just a way to say that a newer technology was seeing a strong level of adoption, so strong as to eclipse the older one in the eyes of the masses. …

  1. Luke 11

    Said no HP enterprise customer. Ever.

    So you've spent £110k on FlexFabric per blade enclosure for converged networking and now some buffoon thinks they can grab a few panic clicks by writing this drivel.

    1. Anonymous Coward

      Re: Said no HP enterprise customer. Ever.

      I'm not sure where you're getting £110K from, Luke 11.

      2x HP Virtual Connect FlexFabric 10/24 modules, fully licensed, are around $40k (just over £25k sterling) LIST price, before any discounting.

      Regardless of the price, the article itself has a lot of merit; at the end of the day it's about the right solution for the customer's unique requirements and the ability to offer them some choice. FCoE will suit a lot of customers but it's not for everyone and never will be.

      Disclaimer: I work for HP but you probably already guessed that.

      1. Jesper Frimann

        Re: Said no HP enterprise customer. Ever.

        Yup, I can only agree.

        When I used to work for IBM, I put together, with another infrastructure architect, the 'local' standard design for multi-tenant Flex infrastructure. By using CN adapters in the nodes (2x 4-port CN adapters in the x86 nodes and 2x 8-port CN adapters, 6 ports available, in the POWER nodes) we were actually able to cut the 'capacity as a service' price quite a lot, compared to the stupid design that, well... IBM Server Group wanted us to use.

        This was an environment that relied on a huge, cheap, shared SAN infrastructure for storage, so FC was a must.

        So using 2 CN adapters gave us full redundancy on an x240 node; you didn't have to step up to the more expensive x440 node to get that. It also gave us plenty of IO (8x 10 Gbit), so we upped the RAM in the nodes, giving us higher virtual machine density. It gave us simpler cabling, and for low-performance nodes we could actually do away with 2 switches in the chassis. And for the optional ToR switches, you didn't need 2 for network and 2 for FC; you just needed 2 converged switches where you could then break out your FC.

        This was actually quite a big saving while still meeting the design goals of the solution, which included some rather serious security considerations.

        And from what I know of HP blades, you could do pretty much a design along the same lines using HP products.

        So dead? I hope not.

        // Jesper

  2. Anonymous Coward

    Or Cisco UCS customer for that matter

    Or are they going to move to iSCSI now then? I mean, I know UCS has no market share in the US or Europe, so it's probably not that big an impact, right?

    1. Anonymous Coward

      Re: Or Cisco UCS customer for that matter

      We have half a dozen UCS chassis here; we weren't crazy enough to use FCoE though. All NetApp NFS over 20Gb uplinks.

      1. Brian 39

        Re: Or Cisco UCS customer for that matter

        Wish we'd have gone down that route too.

        Our newly installed EMC Isilon is (mostly) a pile of poop.

        We had a working NetApp PoC but the (then) CIO voted not to buy it, so.....

        Oh well.

        BOFH.

    2. Brian 39

      Re: Or Cisco UCS customer for that matter

      I thought UCS had no market either, and was rather ticked off when we purchased a 34 node Vblock from VCE (includes Cisco blades with FCoE) and a bunch of EMC storage.

      We discovered two of our local big school districts ($300 million operations), a local hospital organization (just built a $1.3 billion complex), several locally based multinationals, blah, blah, blah were also UCS owners/customers.

      Oh well, it'll at least allow us to finally retire a bunch of Dell and Compellent servers and storage we purchased 5 years ago to replace a bunch of old Sun gear that we finally got to retire last year.

      Perhaps someone writing for El Reg needs to get out of the office a bit more?

      Oh my! What a shock.

  3. Erik Smith

    What about Blade Servers?

    Disclaimer - I work for EMC and have spent the past 7 years working on FCoE.

    While I would agree that end-to-end (Server to Storage) FCoE never achieved widespread adoption, I disagree that FCoE is dead. Both HP blade server and Cisco UCS-B series customers use the protocol heavily and there are some compelling reasons for them to do so.

    I also disagree with the notions that:

    1) FCoE somehow legitimized IP Storage. They have been evolving independently of one another.

    2) Enterprise customers are moving away from FC (in general). I've explained my reasons here: http://brasstacksblog.typepad.com/brass-tacks/2015/05/fibre-channel-is-better-than-ethernet.html

    I also think DCB is merely a distraction for IP Storage. It adds a ton of complexity to the end-to-end configuration process and the value it provides is not significant enough to justify the extra complexity.

    Erik

    1. Voland's right hand

      Re: What about Blade Servers?

      +1

      IP-based storage (e.g. NFS) never had anything to do with FCoE. It was legitimate before, it is legitimate now, and it has a different use case.

      You use IP-based storage when you want fine-grained, file-based access, as well as large amounts of data shared read-write between multiple compute endpoints as files. It can be used in some cases where dedicated per-endpoint storage is needed and can even deliver higher cost efficiency. However, it requires a significantly more qualified sysadmin workforce when used this way. In the days when I still ran IT, you used to get < 5% of candidates having a basic understanding of how to use NFS on a Unix system and < 1% knowing advanced stuff like autofs. In any case, FCoE does not apply here. It has nothing to do with the requirements and it does not implement anything of what you would need to deliver this use case.

      FCoE (and FC proper for that matter), as well as other block storage use cases, have little or no sharing between endpoints, with VM images being a prime example. When you instantiate a VM image you do not have 20 systems in need of read-write access to it. Even if you use copy-on-write you still have a strict read-only master and a separate journal for each VM.

      The article, as written, makes conjectures based on the failure of protocols and solutions designed for use case A to do use case B, and vice versa. Surprise, surprise: a round peg failed to fit into a square hole. Of course it will not. But from the fact that it will not, you cannot declare the peg dead or the hole dead.

  4. Lee McEvoy

    FCoE was going to take over the world....

    ....it hasn't, and I've pretty much only seen it used as described by Enrico to reduce spend in chassis (not very often as ToR).

    That's not how it was being positioned - it was going to be the protocol that took over the FC world. It hasn't even taken over all the networking in chassis based compute (where there is a potential case to use it).

    Saying that people have bought it and therefore it can't be dead is missing the point - a lot of people bought Betamax (and some even bought HD-DVD). Did that mean those didn't turn out to be technical cul-de-sacs without a future?

  5. Steve Chalmers

    Looking at FCoE another way, it's been quite successful

    Here's the comment I made on the author's blog, where this was first published:

    Perhaps there's another way to look at FCoE, not as a product but as a catalyst for business change.

    FCoE caused every sales discussion, worldwide, for networked block storage (server, SAN, storage) from 2007 to 2013 to become a discussion where just bringing Fibre Channel, or just bringing Ethernet, wasn't good enough. This changed the sales dynamic, worldwide, to favor not just Cisco products but Cisco's direct sales force and resellers.

    FCoE also redirected the entire discretionary investment (and then some) of the Fibre Channel industry (server HBAs, SAN switches, disk arrays and other storage devices) for that same period. In some cases, companies which previously specialized in either Ethernet or Fibre Channel were combined by very disruptive M&A in order to have all the skills required to succeed building, selling, and servicing FCoE products.

    In the end, FCoE turned out to be a very cost-effective edge (last hop, access layer) for Fibre Channel networks. It was also the catalyst for my career shifting from Storage to Networking. In those two ways, FCoE was a big success!

    (speaking for self, not for employer, which happens to be HP)

  6. Archaon

    As far as UCS goes...

    ...the current chassis architecture wouldn't really allow for true FC down to the blade. Reverting would also mean stepping blade technology back five years, if not longer.

    Once the FCoE traffic hits the Nexus Fabric Interconnects (excluding the Nexus 10k, I believe) it can be broken out into FC anyway. So yes, that does play into the "top of rack only" argument, but is that really such a bad thing compared to the old way of having 4 modules in the chassis (2 Ethernet, 2 FC), 2 or more NICs/HBAs in each server, top-of-rack Ethernet and FC switches, and so on?

    To more-or-less echo the EMC and HP bods who have already posted: regardless of whether it's Cisco UCS or HP FlexFabric, having FCoE down to the chassis/blade doesn't seem like a bad thing to me. From my viewpoint, converging LAN and SAN at the server level is a good, viable, cost effective thing to do. And you can hardly call it a dead technology when every blade vendor on the planet has taken the same approach. We're also seeing the technology move from blade servers to rack servers (UCS C-series for a long time, HP FlexFabric switches and CNAs more recently).

    It might evolve from the dream of a complete "FCoE network" to a more confined remit of "converged server uplink technology" or some such, and while that role might be less glamorous it's clearly alive in that capacity. More to the point, in that use case it's really popular with customers, because the bottom line is that it reduces cost.

    So to summarise this dying protocol...

    1) It's used by multiple companies in flagship products (blades).

    2) It's being developed by multiple companies to bring it down to more mainstream products (rack servers).

    3) It's popular with customers.

    4) It's cost effective.

    5) It's evolving to find its place in the market.

    Yup. Clearly this is one protocol that's as dead as a doornail.

    1. Morten

      Re: As far as UCS goes...

      The lack of ratification of FC-BB-6, which would give us proper flow control all the way to NPIV targets, is a killer for FCoE in many environments...

  7. James 100

    It died?

    Was there some actual development to trigger this announcement? Has Cisco announced they're dropping FCoE support, or NetApp announced it won't be supported in the next ONTAP release, for example?

    Maybe it's a bad idea, and/or doomed, but the article seems terribly short on facts to support that. Maybe some actual sales or investment figures for regular FC and iSCSI versus FCoE?

  8. ajbergh

    FCoE = UCS

    I think what the author fails to realize is that FCoE is far more widely deployed than most people realize, and probably in environments where people don't even know they are using FCoE! Remember that every UCS B-Series deployment uses FCoE from the fabric interconnects down to the chassis blade servers, even if the uplink storage protocol is traditional FC! This should be included in the numbers people are throwing around for FCoE adoption. The fact that people are using FCoE without even knowing it shows you the power and ease of the protocol.

  9. nilfs2

    I want a lossless fibre connection to call my neighbour

    Why would you want to spend all the money needed to deploy an FCoE "solution" to connect a server to the storage controller that is right next to it? You don't even need a switch for that!

    1. Archaon

      Re: I want a lossless fibre connection to call my neighbour

      'Why would you want to spend all the money needed to deploy an FCoE "solution" to connect a server to the storage controller that is right next to it? You don't even need a switch for that!'

      You wouldn't. That said if you've only got one server then I hate to break it to you but you don't really need a storage array for that either; so I would think wasting money on FCoE is something of a moot point?

      For multiple servers, although direct-attach FC and iSCSI are possible with most vendors, you'll find that you run out of ports on the SAN very quickly without a switched fabric playing piggy in the middle.

      1. nilfs2

        Re: I want a lossless fibre connection to call my neighbour

        If you need to connect several servers to a storage controller, just use NFS or iSCSI; it's Ethernet as well, no dedicated hardware needed, a whole lot cheaper and a lot more practical than that FCoE behemoth.

  10. Creslin

    Who says UCS are using FCoE, all our templates are iSCSI vNICs

    A few mentions here of UCS making use of FCoE from the fabric interconnects; the FIs also support iSCSI, which opens up buying disk 90% cheaper if you want it.

    We now buy multiple cheaper NAS/(iSCSI SAN) units with NAS-grade disk; the prosumer/consumer small NAS market has done a great job proliferating these disks across the market, with features historically reserved for huge spindle arrays.

    UCS FIs have 24/32 10Gbit ports; sure, you can use them for FC, but why would you? UCS-hosted hypervisors can multipath back-to-back to many 10Gbit Eth ports on a storage unit, no switch even required, bought at typically 10% of the cost of sage brands such as EMC/Hitachi/NetApp (by the time you've not bought vendor lock-in disks).

    I recently bought two QNAP 2480U-RP units: used any SSDs for cache, paid a couple of hundred dollars a disk, 4x 10Gbit Intel ports, 100TB in each, for £16k (all approved, tested, does not break your warranty). Or buy two VNXe's, get slower throughput and less extensibility, and pay £160k, 10x the cost. Plus none of those niggles: SFF or LFF for SSD support/large disks, transceiver compatibility, expensive support, parts lock-in, feature licensing.

    Little argument to be had: with virtualisation it's simple to move the storage design to the hypervisor and get resilience and MTTF by literally having spare kit in the DC itself. Who needs EMC any more for less-than-PT storage? 40Gbit a head over iSCSI is fast enough for most, an entire spare unit, staff close to the kit again.

    1. Archaon

      Re: Who says UCS are using FCoE, all our templates are iSCSI vNICs

      A lot of customers have FC infrastructure already, and a lot of customers don't refresh servers and storage at the same time. It's not uncommon for us to have to spec out this year's new server solution to work with last year's new SAN solution and also make it future-proof for next year's LAN refresh. Outside of schools and small businesses it's very rare to see a complete refresh done all at once as a single project.

      If the infrastructure is 4Gb FC we'd normally rip it out, at which point it's fair game what protocol we put in (in agreement with the customer of course). But with that said, 8Gb FC has been around for so long now (7+ years?) that in most FC SAN refreshes we end up leaving the switching etc. in place, effectively just swapping the storage array.

      Frankly, it's a myth that iSCSI is cheaper than FC. Sure, 1Gb iSCSI is cheaper than FC, but when you look at it properly 10Gb iSCSI can actually be really bleeding expensive to implement. At small scale (i.e. before you start having to buy port licenses) FC is quite often cheaper. There's also the point that the overheads on Ethernet and iSCSI mean that 8Gb FC is faster than 10Gb iSCSI in terms of actual performance.
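      As a rough sanity check on that overhead argument, here is a back-of-envelope sketch (an editorial illustration, not Archaon's maths: the encodings and header sizes are standard figures, but the simple efficiency model, the MTU choices and the omission of host-side CPU/offload costs are all assumptions):

```python
# Back-of-envelope wire efficiency for 8Gb FC vs 10Gb iSCSI.
# Simplified model: full-size frames, IPv4/TCP with no options,
# iSCSI PDU headers amortised over large sequential I/O.

def fc_8g_payload_gbps():
    data_rate = 8.5 * 0.8               # 8.5 GBd line rate, 8b/10b encoding -> 6.8 Gbps of data bits
    payload = 2112                      # max FC frame payload (bytes)
    overhead = 24 + 4 + 4 + 4 + 24      # FC header, CRC, SOF, EOF, ~6 idle words between frames
    return data_rate * payload / (payload + overhead)

def iscsi_10g_payload_gbps(mtu=1500):
    data_rate = 10.0                    # 10GbE uses 64b/66b -> ~10 Gbps at the MAC layer
    ip_tcp = 20 + 20                    # IPv4 + TCP headers
    eth = 14 + 4 + 8 + 12               # MAC header, FCS, preamble, inter-frame gap
    return data_rate * (mtu - ip_tcp) / (mtu + eth)

print(f"8Gb FC     : ~{fc_8g_payload_gbps():.2f} Gbps of SCSI payload")
print(f"10Gb iSCSI : ~{iscsi_10g_payload_gbps(1500):.2f} Gbps of SCSI payload (1500 MTU)")
print(f"10Gb iSCSI : ~{iscsi_10g_payload_gbps(9000):.2f} Gbps of SCSI payload (jumbo frames)")

# The wire arithmetic alone favours 10GbE; the "8Gb FC is faster in practice"
# argument rests on lower host-side protocol/CPU overhead, which this does not model.
```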

      In terms of the NAS you're using, I don't object to your premise but I question that two of those units would cost £160k. I'd question whether playing EMC (not known for being the cheapest) against QNAP is a remotely fair comparison. I don't know EMC pricing but you'd be able to get a storage array like that for around £25k easily. If you played the vendors against each other and entertained quotes from some of the cheaper vendors (e.g. Dell and Lenovo) and weren't too fussy on what you ended up with you'd likely be able to get it for around or possibly even under £20k.

      Sure, that's still more expensive, but with most storage arrays you'll get things like dual controllers (rather than a single motherboard), which will likely offer more cache and potential bandwidth. You've potentially got less overhead as well (SAS drives rather than SATA, and native block rather than file) and, generally speaking, higher-grade hardware.

      Also, a word of advice: please be careful with your choice of drive. By using NAS drives (typically meant for machines with 8-12 disks, but that varies by vendor) in a larger unit you might hit issues.

      The drive firmware knows what it's being used for and the manufacturers can - and will - invalidate your warranty if it's been used in a larger system because you're using the hardware outside of what it was designed for; the cheap drives aren't designed to cope with the heat and vibration from running in a larger system.

      Using WD as an example, note how WD Red drives are advertised for NAS systems with 1-8 bays and WD Red Pro drives are advertised for NAS systems with 8-16 bays. Beyond that you're expected to use their Re/Re+/Se drives - but from your comments around cheap NAS grade disk I suspect you're using the cheapest you could find? And consequently you have broken your warranty - albeit not on the unit itself.

      A lot of people like to knock the Tier 1 vendors (HP, EMC, Dell, IBM etc.) on pricing, and on the face of it that's an easy argument to make. But a lot of the time the way to make a self-built solution cheaper is to use worse parts (WD Reds, for example). As long as you 'play the game' and don't get ripped off, you'll often find that a Tier 1 solution is cheaper in the long term.

      Good luck anyway!

  11. Anonymous Coward

    iSCSI dead? FC dead? FCoE dead? But Ethernet is alive!

    This story makes (wrong) statements that I consistently hear from some storage companies!

    Here's my 2c.

    The development of FCoE and lossless Ethernet (DCB) was decoupled on purpose!

    Comparing FCoE with any IP storage (iSCSI, NFS) is an apples-and-oranges comparison.

    FCoE is L2, stateless, non-routable and essentially Fibre Channel transported over lossless Ethernet; that's why all the known FC tools, like multipathing and FC zoning, apply to FCoE and classical FC alike.

    iSCSI is L3, stateful, routable, and requires, for example, different multipathing software than classical FC.
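    To make the layering difference concrete, here is a minimal sketch (an editorial illustration, not from the comment) of the protocol stacks carrying a SCSI operation in each case:

```python
# FCoE keeps the FC frame intact inside a (lossless) Ethernet frame: pure L2,
# non-routable, so FC zoning, the name server and the usual FC tooling still apply.
# iSCSI wraps SCSI in a PDU carried over TCP/IP: L3 and routable, with loss
# recovery handled by TCP rather than by the fabric.

FCOE_STACK = [
    "Ethernet header (lossless/DCB traffic class)",
    "FCoE encapsulation header",
    "FC frame (FC header + SCSI/FCP payload + CRC, unchanged)",
]

ISCSI_STACK = [
    "Ethernet header",
    "IP header   (routable hop by hop)",
    "TCP header  (stateful, retransmits on loss)",
    "iSCSI PDU   (SCSI command/data)",
]

def show(name, stack):
    print(name)
    for layer in stack:
        print("  |", layer)

show("FCoE", FCOE_STACK)
show("iSCSI", ISCSI_STACK)
```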

    "FCoE is dead", and then just below it the statement "...thanks to FCoE, Ethernet storage has grown tremendously...".

    IP storage is the proper term for iSCSI, SMB and NFS; they of course run on top of Ethernet.

    FCoE is very successful in the access layer of the network: the first hop from the server to an access switch, which then splits the unified traffic into classical Ethernet and Fibre Channel.

    Just as a reminder:

    SAN JOSE, Calif. – June 4, 2014 – Cisco has achieved the ranking of No. 1 provider of x86 blade servers in the Americas, measured by revenue market share, according to a report by IDC. According to the IDC Worldwide Quarterly Server Tracker, 2014 Q1, May 2014, Vendor Revenue Share, Cisco is also ranked as the top x86 blade server vendor in the United States and North America.

    All these UCS customers, if they need classical Fibre Channel, are using converged network adapters, running FCoE between the blades and the Fabric Interconnect switch.

    I agree that multihop FCoE, running FCoE end to end, is challenging; but sorry, the exact same thing applies to running iSCSI over DCB!

    I remember the times when similar stories circulated about iSCSI: fear from the big storage vendors that Cisco could kill their high-margin classical FC business. It is still alive.

    One could also claim that classical Fibre Channel is dead: with the exception of speed increases, absolutely no innovation, and speed adoption is slow, especially in the storage subsystem space.

    Who knows that 40G FCoE switches are available on the market (not only on marketing slides and roadmaps)? Compare this with current 16Gbps classical FC.

    Just a side remark: a lot of the standards development for FCoE (ANSI T11) and lossless Ethernet (IEEE) has been done by highly appreciated Italians in Silicon Valley.

    I would rather propose another article announcing the death of RAID X and the expensive storage arrays, highlighting the coming cloud storage built with cheap commodity servers and using TCP over Ethernet!

    1. Erik Smith

      Re: iSCSI dead ? FC dead ? FCoE dead ? But Ethernet is alive !

      Outside of your comment regarding multipathing I agree with you, especially with regard to the Italians who, for all intents and purposes, drove FCoE through the FC-BB-5 standards process. There were a few others who were heavily involved and made important contributions, but (IMHO) FC-BB-5 would still be debating things like "SPMA versus FPMA" if it weren't for two Italians in particular.

      With regard to multipathing, I do not agree that iSCSI and FC/FCoE require different multipathing software (e.g., PowerPath). The MP stuff operates above the transport layer in the storage stack.
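      A conceptual sketch of that point (an editorial illustration, not PowerPath's or any vendor's actual API): the multipathing layer balances I/O across block-device paths, and whether a given path arrived via FC, FCoE or iSCSI is just a property of the path, not a different multipathing stack.

```python
# Toy round-robin multipathing above the transport layer: the policy never
# needs to know which transport delivered the path.

from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    device: str      # block device exposed by the SCSI midlayer, e.g. "/dev/sdb"
    transport: str   # "fc", "fcoe" or "iscsi" -- informational only

class Multipath:
    def __init__(self, paths):
        self._next_path = cycle(paths)   # simple round-robin path selection

    def submit(self, io):
        path = next(self._next_path)
        return f"{io} -> {path.device} (via {path.transport})"

lun = Multipath([Path("/dev/sdb", "fc"), Path("/dev/sdc", "iscsi")])
print(lun.submit("READ 4K @ LBA 0"))
print(lun.submit("READ 4K @ LBA 8"))
```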

  12. Matt Bryant

    FC or FCoE, which is "better"? Sometimes not the issue.

    A few years back I was approached by my then Chief PHB to produce a design for our next-gen datacentre based on segregated Ethernet and FC. A week later he was back asking for the same design using FCoE, and then for one using iSCSI. His reasons were simple: he was playing the vendors off to see which would get him the biggest/best golf day! In the end, after three such outings, we ended up with a hybrid design that just about satisfied everyone.

  13. danXtrate

    FCoE is not positioned correctly

    In my opinion FCoE was pushed into the marketplace a bit too early, unpolished and positioned very badly.

    The early claims were that it simplifies management, will evolve along with the Ethernet standards, is very flexible, can be used end to end in a datacenter, and is cheaper than a 10Gig Ethernet + 8Gig FC solution.

    It got most of these things wrong:

    1. It does not have simple management; it just puts storage and Ethernet admins together. And those guys don't really mix.

    2. It can't be used as an end-to-end solution without an FCF, and that brings the price up a lot. The FCF is usually contained in a converged switch or sold separately, but I only know of one such product.

    3. It's not really cheaper. You can get Nexus levels of performance with Juniper + Brocade combos at three quarters of the price.

    4. You can't do proper FC zoning without actually using FC switches connected to the converged ones, and this rather defeats the purpose of simplicity and ease of management.

    5. Computing power for a two-socket server has increased to the point where a two-port converged adapter is no longer enough and you need a second one in order to run without bottlenecks. Spread some FlexFabric or VirtualFabric icing on top and you have a highly complex, dynamic and intensely administered cake. And the SAN is usually a serene place, with blue skies, where change rarely happens.

    I believe FCoE still has a place in the enterprise, but strictly as a way to get both FC and Ethernet to ToR or Blade Converged switches where you can split the traffic between FC and Ethernet. It can succeed if the industry comes up with cheap converged switches that do just that, while Ethernet and FC people can handle their networks separately.

  14. Anonymous Coward
    Anonymous Coward

    Cisco's view of FCoE

    I hope you have seen this (I'm not working for Cisco!!):

    http://blogs.cisco.com/datacenter/fcoe-is-aliveandkicking
