Your math is not quite correct
I rather like this whitepaper from Brocade, as it isn't just showing how big their cock is but also showing what they can do with it (which is what it's all about as I tell my wife... enough of the smut).
Not sure who wrote this little piece saying the math doesn't add up, but I've already spotted 3 errors in it that I'm sure a "20 year veteran of the storage industry" wouldn't make, unless he was just making mischief.
Firstly, if the writer had actually read the whitepaper he would have discovered that its whole point was to show "real world" situations and block sizes, not the usual spurious 128-byte rubbish these vendors usually use in test reports. At 4K block sizes Brocade managed 2,600 MB/sec, whilst Emulex only managed 1,400 MB/sec and QLogic 1,700 MB/sec. At 128K block sizes Brocade managed 3,672 MB/sec, with Emulex and QLogic coming in at 3,200 MB/sec and 2,500 MB/sec respectively. I learnt this by reading the report - it looks like Mr Storage Veteran's claim that Brocade's numbers aren't "credible" would apply just as well to QLogic and Emulex. The reason the figures can exceed the one-way line rate is, to quote Wikipedia, "10 Gigabit Ethernet supports only full duplex links", and as this whitepaper is about the "real world" it is showing BOTH reads and writes at once (in fact, if you read the whitepaper, it says 60% reads, 50% random).
Secondly, your figures would be incorrect in any case, as this is FCoE performance, not native 10Gb/E, so the maximum throughput would be 8Gb/sec full duplex, not 10Gb/sec.
Thirdly, as anyone with 20 years in the storage industry knows, 10Gb/E (and FCoE, which runs on 10Gb/E) uses 64b/66b encoding, not 8b/10b encoding.
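To put some numbers on that encoding point, here's a quick back-of-the-envelope sketch (my own arithmetic, not from the whitepaper) of what 64b/66b and 8b/10b encoding each leave you on a 10Gb/E link, per direction and full duplex:

```python
# Back-of-the-envelope line-rate arithmetic for a 10Gb/E link.
# Raw encoding efficiency only - Ethernet/FCoE framing overhead
# would shave off a bit more in practice.

LINE_RATE_BPS = 10e9  # 10 Gigabit Ethernet signalling rate

def usable_mb_per_sec(line_rate_bps, payload_bits, coded_bits):
    """Usable throughput in MB/sec after line-encoding overhead."""
    return line_rate_bps * payload_bits / coded_bits / 8 / 1e6

# 10Gb/E uses 64b/66b encoding (~3% overhead)...
per_dir_64b66b = usable_mb_per_sec(LINE_RATE_BPS, 64, 66)
# ...whereas 8b/10b (as on 1Gb/E or classic Fibre Channel) burns 20%.
per_dir_8b10b = usable_mb_per_sec(LINE_RATE_BPS, 8, 10)

print(f"64b/66b: {per_dir_64b66b:.0f} MB/sec per direction, "
      f"{2 * per_dir_64b66b:.0f} MB/sec full duplex")
print(f"8b/10b:  {per_dir_8b10b:.0f} MB/sec per direction, "
      f"{2 * per_dir_8b10b:.0f} MB/sec full duplex")
```

So a single 10Gb/E link tops out around 1,212 MB/sec each way, roughly 2,424 MB/sec full duplex, by this arithmetic; I assume the whitepaper's bigger figures come from running both ports of a dual-port CNA, which would double those ceilings.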
I don’t know why we always have to have these “crazies” pooh-poohing reports. I guess it is the marketing department of one of Brocade’s competitors, since he says “I gotta go fix my driver”. I for one think it is a useful whitepaper and would welcome more, from all sides, detailing “real world” comparisons rather than spurious achievements like “I don’t have a heat sink and you do”.