Is Fibre Channel over Ethernet (FCoE) a panacea for the difficulties inherent in running separate storage and general networks? FCoE is not taking off and, indeed, there have been articles published that suggest it may die before it arrives. Let's look again at the idea of mixing general networking traffic and storage networking …
Ethernet has been capable of running different protocol types over the same physical cable/fibre since its inception.
I can remember that back in the 1980s we were running at least TCP/IP, DECnet, LAT and VMS cluster traffic over the same 10BASE-T (10Mbit/s) cabling.
So please forgive me for asking, what is new here?
And what about voice and data, which was the last "convergence" fashion?
Your article seems to studiously avoid any mention of voice and data convergence, which many companies have been doing for quite a long time now. We were always told that voice had to have top priority on the network, and now it seems that in fact storage traffic also has to have top priority.
What would be more interesting - to me, at least - is how to make app traffic, voice and storage all work together nicely on the same network.
Disk traffic trumps voice. Voice is still soft-realtime, but people won't notice a 5ms delay in their voice traffic. You darn well will notice sluggishness if someone adds 5ms extra delay to every disk seek.
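To put that 5ms in perspective, here's a back-of-the-envelope sketch. The seek time and the one-outstanding-request workload are illustrative assumptions, not measurements:

```python
# Rough illustration: the effect of 5 ms of extra network delay on a
# serialised (one-outstanding-request) disk workload.  The 5 ms seek
# time is an illustrative assumption, not a measurement.

def serial_iops(service_time_ms, extra_delay_ms=0.0):
    """IOPS for a workload that issues one request at a time."""
    return 1000.0 / (service_time_ms + extra_delay_ms)

base = serial_iops(5.0)          # 5 ms per seek  -> 200 IOPS
delayed = serial_iops(5.0, 5.0)  # +5 ms of delay -> 100 IOPS

print(f"baseline: {base:.0f} IOPS, with +5 ms delay: {delayed:.0f} IOPS")
```

Doubling per-request latency halves serialised throughput, which users feel as sluggishness; the same 5ms added to a voice stream sits comfortably inside typical jitter budgets.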
No good will come of this
Hence why I detest the idea of intermixing "disk" traffic across the normal data LAN. I've already had to endure countless shouting matches over who gets what priority, QoS-wise, endless "marking" mashups, and I've resisted the urge to throttle someone in a never-ending meeting because of the insistence on 'converging' all of our services onto one poor wire (or fibre).
Separate LANs, especially within a data centre, were created for a reason. Piling them all onto the same switching network, even with independent VLANs, is just asking for trouble as far as I'm concerned.
Now, where's my stone tablet and chisel? I've got an email to get out.
Even though we have iSCSI and FCoE today from some manufacturers, look at almost any best-practice guide and, at least for iSCSI and NFS, it will recommend running jumbo frames for storage whenever possible. Jumbo frames, I think, are in general still too risky to run on your "main" Ethernet ports because of compatibility issues.
So the point is that even with Ethernet, if you want the best performance you need separate NICs on each server, and separate cables, to run jumbo frames. You can use the same Ethernet switches, since switches can run jumbo and non-jumbo on the same ports; but with server NICs, I have not yet come across one (at least on Linux and VMware) where a physical port can run both jumbo and non-jumbo frames (non-jumbo as in entirely non-jumbo, not jumbo with the ability to re-transmit the fragments if they are too big).
Jumbo frames are certainly not a requirement; you can get by (and many do) just fine without them. But if you're running a serious operation with storage on Ethernet, I think most people will opt for jumbo whenever possible.
I wonder if we'll ever get to "super jumbo" frame sizes. Jumbo frames became popular back when GigE was starting out; now, with 10GigE commonplace and 40GigE coming, I think it would make sense to offer the option to boost the frame size even higher.
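A quick sketch of why bigger frames help: less of each frame is burned on headers, and fewer frames per second need processing. The header sizes below are the standard Ethernet/IPv4/TCP figures; the "super jumbo" size is a hypothetical, not a real standard:

```python
# Payload efficiency for standard vs jumbo vs a hypothetical
# "super jumbo" MTU.  Overheads: 14-byte Ethernet header + 4-byte FCS,
# plus 20-byte IPv4 and 20-byte TCP headers inside the MTU.
ETH_OVERHEAD = 14 + 4   # Ethernet header + frame check sequence
IP_TCP = 20 + 20        # IPv4 + TCP headers

def payload_efficiency(mtu):
    """Fraction of each on-wire frame that is application payload."""
    payload = mtu - IP_TCP
    return payload / (mtu + ETH_OVERHEAD)

for name, mtu in [("standard", 1500), ("jumbo", 9000),
                  ("'super jumbo' (hypothetical)", 64000)]:
    print(f"{name:30s} MTU {mtu:>6}: {payload_efficiency(mtu):.1%} payload")
```

Note the diminishing returns on raw efficiency (roughly 96% at 1500, over 99% at 9000); the stronger argument for ever-larger frames at 10/40GigE is the six-fold (or more) reduction in frames per second the host and switch have to process.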
A couple of minor points
I have a couple of minor points:
1. The statement, "...FC MTU is 1440 bytes..." should be 2240.
2. The statement "When an Ethernet frame is received on a Switch, it is placed into a memory location and waits for forwarding. For storage traffic this isn’t acceptable since the end-to-end delay should be as low as possible to ensure that storage throughput is fast. The Ethernet Switch is configured to immediately service the buffer that contains storage data as well as to forward the frame at the next interval." >> I’m not sure what is meant by this; FCoE frames are routinely stored in a buffer and forwarded when bandwidth is available. As Greg mentions later, ETS (802.1Qaz) ensures that each priority is allocated a certain amount of bandwidth, and in cases where the bandwidth is being fully utilised, or there is upstream congestion, FCoE frames will be stored in a buffer (sometimes for extended periods of time).
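The ETS behaviour described above can be pictured as a weighted scheduler draining per-priority queues. The sketch below is a toy model of the idea, not the 802.1Qaz algorithm itself, and the 60/40 FCoE/LAN split is an example assumption:

```python
from collections import deque

def ets_schedule(queues, weights, slots):
    """Toy ETS-style sharing: drain frames from per-priority queues
    in proportion to configured bandwidth weights (credit-based)."""
    sent = []
    credit = {p: 0 for p in queues}
    total = sum(weights.values())
    for _ in range(slots):
        # Top up each priority's credit by its bandwidth share.
        for p, w in weights.items():
            credit[p] += w
        # Serve the backlogged priority with the most accumulated credit.
        ready = [p for p in queues if queues[p]]
        if not ready:
            break
        p = max(ready, key=lambda q: credit[q])
        sent.append(queues[p].popleft())
        credit[p] -= total   # charge one slot's worth of service
    return sent

# Example: FCoE guaranteed 60%, LAN 40%; both queues saturated.
queues = {"fcoe": deque(f"fcoe-{i}" for i in range(10)),
          "lan":  deque(f"lan-{i}" for i in range(10))}
sent = ets_schedule(queues, {"fcoe": 60, "lan": 40}, slots=10)
print(sum(s.startswith("fcoe") for s in sent), "FCoE frames of", len(sent))
```

Under saturation each class gets roughly its configured share, and when a class has spare credit but its queue backs up (upstream congestion), frames simply wait in the buffer, which is exactly the point the comment makes.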