Brocade wipes floor with CNA competition

Brocade's network adapters cream competing products from Emulex and QLogic when used for Exchange and Oracle data sent over Ethernet links. The Brocade results (PDF), validated by IT Brand Pulse, show that servers using Brocade's converged network adapters (CNAs) will be able to carry out more Exchange email and Oracle …

COMMENTS

  1. Anonymous Coward

    But in the real world

    Brocade cards work great as long as you don't need to reboot your servers... at which point using the Brocade 1020 card with the Brocade twinax cable means waiting for your network guy to do a shut/noshut before the cards will renegotiate a link on the Nexus switch.

    The problem occurs in both Linux and Windows.

    Still waiting for an answer from Brocade. In the meantime the solution is either to move to optical cables and SFP+ transceivers with the Brocade, or to move to the QLogic QLE8142 adapter, which unsurprisingly works just fine with the Brocade twinax cables at lengths >= 2m.

    With regard to the speed tests, the article hits the nail on the head: you're comparing the Brocade card fully offloading iSCSI against the Windows iSCSI driver running in software on top of the other vendors' 10GbE CNAs. Use the offload features of the other cards and present results for a range of current platforms before your tests are even worth looking at.

    If you're playing in this space, get adapters from different vendors and prove them with the servers, cables, switches and OS platforms you're going to use before placing the big order. Marketing specs like this report may be correct, but they're never the complete picture. And JIT delivery has no room for hidden gotchas.

  2. Chris Mellor 1

    I don't believe it

    Sent to me as mail, and I thought it well worth adding as a comment:-

    ----------------------

    I don't believe it.

    Look, I have no dogs in this fight. I don't speak for my company, only myself. Still, as a 20-year veteran of the storage industry, I find the results incredible. In particular, the FCoE results are not credible at all.

    Please explain how it's possible to get 2,600 MB/sec out of two 10Gb ports. 2,600 * 8 = 20.8 Gb/sec. Throw in the 8b/10b overhead, and it's way beyond the 10Gb data rate.

    How do they send bits faster than the protocol allows? There's no data compression in FCoE, not yet anyway. It's more likely that they're completing cached reads without sending them over the wire.

    I gotta go fix my driver. Can't spend too much time on marketing crap.

    -------------------

    Chris.
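
    For anyone who wants to check the correspondent's sums, here's a quick back-of-envelope sketch (in Python) of the arithmetic he's doing. It takes the 2,600 MB/sec figure and the 8b/10b encoding he cites at face value; the reply below disputes both premises.

      # Back-of-envelope check of the figures quoted in the mail above.
      # Assumes the 2,600 MB/sec result and the 8b/10b encoding the
      # mailer cites; the reply below disputes both premises.

      throughput_MBps = 2_600                     # reported FCoE throughput
      payload_Gbps = throughput_MBps * 8 / 1000   # 20.8 Gb/sec of data

      # 8b/10b carries 8 data bits in every 10 line bits, so the wire
      # would need to run 25 per cent faster than the payload rate.
      wire_Gbps = payload_Gbps * 10 / 8           # 26.0 Gb/sec on the wire

      available_Gbps = 2 * 10                     # two 10Gb ports, one direction

      print(f"payload {payload_Gbps:.1f} Gb/s, wire {wire_Gbps:.1f} Gb/s, "
            f"available {available_Gbps} Gb/s one-way")
      # -> payload 20.8 Gb/s, wire 26.0 Gb/s, available 20 Gb/s one-way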

    1. sysadmin

      Your math is not quite correct

      I rather like this whitepaper from Brocade, as it isn't just showing how big their cock is but also showing what they can do with it (which is what it's all about as I tell my wife... enough of the smut).

      Not sure who wrote this little piece saying the math doesn't add up, but I've spotted three errors in it already that I'm sure a "20-year veteran of the storage industry" wouldn't make, unless he was just making mischief.

      Firstly, if the writer had actually read the whitepaper he would have discovered that the whole point of it was to show "real world" situations and block sizes, not the usual spurious 128-byte rubbish these vendors normally use in test reports. At 4K block sizes Brocade managed 2,600 MB/sec, whilst Emulex only managed 1,400 MB/sec and QLogic 1,700 MB/sec. At 128K block size, Brocade managed 3,672 MB/sec, with Emulex and QLogic coming in at 3,200 MB/sec and 2,500 MB/sec respectively. I learnt this by reading the report - it looks like Mr Storage Veteran's "not credible" claims about Brocade also apply to QLogic and Emulex. The reason these numbers are possible is, to quote Wikipedia, that "10 Gigabit Ethernet supports only full duplex links", and as this whitepaper is about the "real world" it is showing BOTH reads and writes (in fact the whitepaper says 60% reads, 50% random).

      Secondly, his figures would be incorrect anyway, as this is FCoE performance, not native 10Gb/E, so the maximum throughput would be 8Gb/sec full duplex, not 10Gb/sec.

      Thirdly, as anyone with 20 years in the storage industry knows, 10Gb/E (and FCoE, which runs on 10Gb/E) uses 64b/66b encoding, not 8b/10b encoding.
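
      To put numbers on those corrections, here is a minimal Python sketch of the full-duplex, 64b/66b arithmetic. It assumes the nominal 10Gb/E line rate per port; substitute 8 if you accept the second point above.

        # Usable payload with the corrections applied: traffic counted in
        # both directions (full duplex) and 64b/66b encoding, not 8b/10b.

        ports = 2
        line_rate_Gbps = 10   # nominal 10Gb/E; use 8 for the FCoE case above
        directions = 2        # full duplex: reads and writes together
        efficiency = 64 / 66  # 64b/66b: ~97% of the line rate is payload

        usable_Gbps = ports * line_rate_Gbps * directions * efficiency
        usable_MBps = usable_Gbps * 1000 / 8

        print(f"usable {usable_Gbps:.1f} Gb/s = {usable_MBps:,.0f} MB/sec")
        # -> usable 38.8 Gb/s = 4,848 MB/sec, comfortably above both the
        #    2,600 and 3,672 MB/sec figures once both directions count.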

      I don't know why we always have to have these "crazies" pooh-poohing reports. I guess it is the marketing department of one of Brocade's competitors, as he says "I gotta go fix my driver". I for one think it is a useful whitepaper and would welcome more, from all sides, detailing "real world" comparisons rather than spurious achievements like "I don't have a heat sink and you do".

This topic is closed for new posts.