NetApp, Cisco, QLogic, Emulex and VMware have all proclaimed that FCoE (Fibre Channel over Ethernet) heaven is just around the corner. So is this the end for the iSCSI internet storage standard? Has Dell's EqualLogic purchase suddenly been devalued? The thinking goes like this: iSCSI sends SCSI commands wrapped in TCP/IP to a storage …
So if the problems with iSCSI are down to the drawbacks imposed by Ethernet (higher latency, plus packet loss causing retransmission-induced latency), and iSCSI works happily over 10GigE as well (why wouldn't it - it's all Ethernet, after all), then surely the question should be...
"Why would anyone bother with this newfangled FCoE when iSCSI is already doing this?"
Or am I missing something....
Back in the real world
iSCSI will probably evolve too, in order to resolve the packet loss and other problems associated with it, and data centres will probably use both technologies for different purposes. Even if FCoE were to completely replace iSCSI, large data centres won't migrate overnight.
You guys should know better than to write articles along the lines of 'new technology X is so much better than old technology Y that it will have completely replaced it in two years' time'. That never happens, because the only constant in technology is change, and what may be old today may become new tomorrow by learning new tricks. Also, sensible geeks know that more often than not the question is not whether technology A is better than technology B, but whether A can solve their particular problem better than B (and more cost-effectively).
Premature Speculation .... for Failed Stock starring Empty Promise
"So all iSCSI array vendors will need to rethink their marketing propositions and retool their product strategies.
There is no way out for iSCSI. It's at war and FCoE is the killing field."
It is simple for iSCSI programmes to do as you have suggested and shift the goal posts to suit their aims, thus always staying in the lead... with another harvesting opportunity.
It is not at war at all; it is a whole new paradigm, and to consider it as war will see one defeated very quickly, as one is left behind in such dulled, competitive mindsets.
Hrmm...doesn't add up
Let's see, first you say about iSCSI over Ethernet:
"The trade-off is that Ethernet has a longer latency than FC and can lose packets causing a retransmission, meaning even more latency."
then two paragraphs later you say about FC over Ethernet:
"FCoE meant you could enjoy FC speed - using 10gig Ethernet - and predictable latency with no packet loss."
So how does FCoE magically avoid the "longer latency" and "lose packets" problems that iSCSI has over the same Ethernet? Is it the magic of 10Gig? If so, then why not run iSCSI over 10Gig for the same benefits? Trying to compare apples and oranges to make FCoE look better, are we? FYI, a single 10Gig Ethernet card with TOE will ALWAYS be cheaper than a 10Gig CNA (with or without TOE). Benefit: iSCSI.
BTW, if your server farm ethernet drops packets EVER, please send your boss's address to me so I can send him my card. You obviously don't know how to design a proper ethernet network.
My thoughts exactly! The author spends most of his time disparaging iSCSI because of its Ethernet-based drawbacks, then goes on to say how FCoE will blow iSCSI out of the water (but only when using 10Gbps!).
This article is all hyperbole and inflammatory language:
"There is no way out for iSCSI. It's at war and FCoE is the killing field."
hang on, four paragraphs above we have:
"Let's wait 18 months to two years and see both 10gigE and CNA costs come down..."
So if we have to wait 18 months to 2 years for FCoE to be able to beat iSCSI, how the flip can it be the killing field?
Let's have more detailed information and less tabloid journalistic crap.
P.S. I don't actually use iSCSI (we have FC SANs) but I am not blind to its advantages...
I second that!
I don't get it - it's the same media; just a different protocol - what works for one should work for the other.
And doesn't having encapsulated SCSI commands make more sense anyway? With FCoE, presumably you have to convert the protocol into SCSI commands so that you can actually talk to the discs? Which is more work. No?
Will someone PLEASE explain what's going on!!!
Plenty of "if's" and "maybe's" here
Aren't you completely ignoring iSCSI on DCE 10gigE? That would seem to be the obvious move which would allow existing users of iSCSI to upgrade at relatively low cost and have most of the benefits of FCoE, but at lower cost.
Like Austin, am I missing something here - or has the author shot his wad over FCoE too soon?
You're not the only one befuddled by this article.
It's not like iSCSI is locked at 1Gb and only FCoE will be able to make use of 10GigE.
Was this some sneaky advertising by FC makers?
Most modern corporate LANs I've seen are struggling with high network use already, with bandwidth throttling technology being used to stop network hogs. I wouldn't dare go tell some network admins they're suddenly going to have to cope with all the SAN traffic being dumped on to the LAN. I have seen offices with separate LANs just for network-based backups because they can't cope with more traffic on the main LAN but haven't got FC cabling to all their cabinets. For them, iSCSI is looking more attractive rather than less. Network admins were told first 100Mbit, then 1Gb, would solve all their problems - it hasn't. With the arrival of virtualised desktops imminent, I know admins that are desperate to get 10GbE in and would certainly not welcome the addition of SAN-over-Ethernet traffic.
In the meantime, Cisco have taken a beating in the SAN switch market from Brocade, and thought FCoE would let them leverage their LAN installed base to return the favour. Whether Brocade can match them in the new consolidated SAN/LAN market is open to question. If Brocade does, then Cisco's installed LAN base suddenly becomes vulnerable, but if Brocade can't then Cisco will probably become the dominant force in consolidated SAN/LAN technology, with iSCSI as a nice little side market.
iSCSI has to deal with loss of packets and therefore retransmission. FCoE doesn't have this issue.
iSCSI also requires a DNS-type solution (iSNS - everyone always forgets this in their presentations) which has to be hosted on a server, usually dedicated - more expense - whereas FCoE uses the name server functionality embedded in the switches, be that the Cisco Nexus 5000-series or the new Brocade FCoE blade that will fit in the DCX Director switch.
iSCSI will only work effectively if you have a dedicated LAN for it to run over, so you have no contention from other Ethernet traffic - so why not just invest in a new FCoE LAN, or a SAN?
Whilst I hate to reference Wikipedia...
A quick look there might have helped the author of this article:
"Since classical Ethernet is lossy, unlike Fibre Channel, to create a loss-less Ethernet required modifications to Ethernet which are being driven through the standards."
So why doesn't that benefit iSCSI too?
More than a few things are wrong with this article. Firstly, 'an IP Storage Area Network' is called a NAS; secondly, you do NOT need a fibre switch to connect to fibre storage, and even if you use a real SAN, most have the switch built in nowadays.
The ONLY way you can get real equality of performance, fibre vs iSCSI/FCoE, would be to use a dedicated network switch with QoS turned WAY up and large IP packets turned on, which negates the cost savings of the IP solution. And the advantage would ONLY be realized using a 10GigE channel (ie ~7Gb IP > 4Gb fibre).
iSCSI with ZFS on Solaris
I used iSCSI to see how it could be used to do backups across a cheapo home GbE switch - it worked quite well.
I know nothing about iSCSI, but if it uses TCP/IP anywhere then that's where the high latency and packet loss come from.
The article has it slightly wrong: the latency from Ethernet is mainly to do with Ethernet being a broadcast protocol that has to deal with collisions on shared links. As anyone in their right mind is going to be using _switches_ for this, with full-duplex links, there are no collisions.
FCoE simply abandons the TCP/IP layer and just uses Ethernet frames and the FCoE protocol deals with packet loss.
Re: Jim Kirby
If you really think that your ethernet networks NEVER drop packets then I'll be one sending your employer my card because you obviously don't know how to test and monitor your network properly. Less than one dropped packet in a million is easy on smallish networks, but getting significantly better than that is effectively impossible, or at least not cost effective.
Leave the network design to the grown ups who understand the issues involved and don't assert that sheer impossibilities should be expected.
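For context, the "less than one in a million" figure is roughly what cabling bit-error rates alone would predict. A quick sanity check (the BER values here are illustrative assumptions, not measurements of anyone's network):

```python
# Rough expected frame-loss rate from bit errors alone (illustrative BERs).
FRAME_BITS = 1500 * 8  # a full-size Ethernet payload, in bits

for ber in (1e-10, 1e-12):
    # Probability that at least one bit in the frame is corrupted;
    # for small BERs this is approximately FRAME_BITS * ber.
    p_frame_loss = 1 - (1 - ber) ** FRAME_BITS
    print(f"BER {ber:g}: ~{p_frame_loss * 1e6:.3f} corrupted frames per million")
```

At a BER of 1e-10 that works out to roughly 1.2 corrupted (and hence dropped) frames per million, which is why "about one in a million" is achievable while literally zero is not.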
Latency. difference between iSCSI and FCoE
It's been a while since I worked on this stuff, but if I remember correctly the reason for the improved latency is that iSCSI utilises the TCP/IP stack which is built to handle a lossy link, whilst FCoE assumes a pretty much perfect link, communicating directly with the ethernet layer and relying on ECC to handle any rare errors passed up from the ethernet layer.
Basically, iSCSI shoehorns standard SCSI commands over the standard TCP/IP protocol, whereas FCoE just swaps out the standard FC OSI layers 1/2 for Ethernet, but uses the FC protocol stack to handle all the layers above that.
Or something like that!
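That layering difference is visible right at the EtherType field of the frame: with iSCSI the Ethernet payload is an IP packet, with FCoE it is an encapsulated FC frame. A minimal sketch (the MAC addresses are made-up placeholders; 0x0800 is the IPv4 EtherType, 0x8906 is the FCoE EtherType):

```python
import struct

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    """Build the 14-byte Ethernet II header that precedes either stack."""
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

DST = bytes.fromhex("0e0000000001")  # placeholder MAC addresses
SRC = bytes.fromhex("0e0000000002")

# iSCSI: the Ethernet payload is an IP packet (TCP segment carrying an iSCSI PDU)
iscsi_frame_start = ethernet_header(DST, SRC, 0x0800)  # EtherType: IPv4

# FCoE: the Ethernet payload is an FCoE-encapsulated FC frame - no IP, no TCP
fcoe_frame_start = ethernet_header(DST, SRC, 0x8906)   # EtherType: FCoE

assert len(iscsi_frame_start) == len(fcoe_frame_start) == 14
```

Everything after those 14 bytes differs: iSCSI continues with IP/TCP headers, FCoE goes straight to the FC frame, which is why FCoE cannot be routed while iSCSI can.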
Why no mention of infiniband? It is already running at 40Gb/s, with 10Gb/s and 20Gb/s common in the datacenter.
FC security features like zoning, LUN presentation, etc. happen at the media layer, and are useful enough that enterprise customers are unlikely to abandon them. FC or FCoE vs iSCSI is a non-issue in that realm.
The iSCSI security model *is* a simpler and cheaper alternative to FC at the media layer, and arguably at the administrative level, particularly for SMEs. The bogus argument that FC complexity "goes away" with FCoE actually supports iSCSI, if reduced complexity is desirable.
FCoE can't be FC unless the complexities are present, which they are in the special host Ethernet adapters and the hardware in Ethernet switches required to do FCoE (so much for reduced expenses), and the FC management issues are the same over copper or glass (so much for "no FC skills in the data centre").
So what FC complexity goes away due to FCoE, then? Not having to run fiber during a transition from iSCSI to FC? Er, no. Just run more Ethernet. Can a cost argument be made for that that has comparable weight to the relative merits of FC vs iSCSI management and security? Only for a few customers, seems to me.
The speed argument has some transient validity. The complexity issue is ... salesmanship?
errr iSCSI over 10 gig works just fine
We've been running iSCSI on 10 gig since late 2006, works great. We have a separate 10 gig access layer which is iSCSI only, we have no measurable packet loss, so I really don't get this whole DCE thing - the whole beauty of Ethernet is that it's simple and inexpensive. It seems like a play by the major vendors to introduce a load more complexity to the network - purely to prevent 10 gig Ethernet from becoming the commodity that gig E has become.
In 2 years...
hopefully the FCoE switches will be cheaper, right? But then again, your run-of-the-mill 10Gb Ethernet switches for iSCSI will always be cheaper than FCoE-capable switches. The FCoE magic is in that not-yet-ratified standard. Just move your dollars from those nice FC HBAs and FC switches over to those spanking new FCoE switches and NICs...
What you guys seem to be missing is that iSCSI is not intended to be direct replacement for high end storage, as its strength is in the low-end market. The article reads like the latest Ferrari will replace your Volvo because it runs on the same hi-octane fuel.
So let me get this straight - the FC crowd have been arguing that the problem with iSCSI is that it has all the drawbacks of ethernet (latency, packet loss). So their great idea for making FC better and more accessible is to make FC run over ethernet? Wow. That's really retarded.
More to the point, iSCSI doesn't run over Ethernet - it runs over TCP/IP (and anything that TCP/IP can run over, including dial-up if you're desperate enough). ATAoE runs over bare Ethernet (and is thus a little lighter, a little faster, and cannot be routed, since it works below the IP level). The article doesn't distinguish whether this is actually FCoE (like ATAoE), as the name says, or FCoIP (like iSCSI).
Either way, it sounds to me very much like the FC vendors are desperately trying to come up with a problem that their product solves - and failing to make their argument stand up to scrutiny.
Cut out the middleman
Any modifications to cut packet loss over Ethernet for FCoE's sake could be applied to iSCSI over Ethernet as well, negating that advantage. The real benefit would be in cutting out the layers in between: iSCSI runs over IP over Ethernet, FCoE runs directly on Ethernet (as, in fact, does ATAoE for similar reasons). That's a slightly simpler, more streamlined setup - but I doubt it will really make that much difference, particularly given how much effort has already been put into optimising TCP/IP implementations and mass-producing hardware offload engines.
The big drawback, though, will be the lack of IP's flexibility. Right now, I can set up two machine rooms on a campus (or on opposite sides of town, or indeed the planet), connected as plain old IP networks in different subnets. Nice and easy: servers and storage can talk to each other across the network using nice well-understood systems. Now try with Ethernet: do I really want to have to have both sites in the same Ethernet network, purely switched? Possible, but not ideal.
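To put rough numbers on the "cutting out the layers" point, here is the per-frame header overhead of each stack, using minimum header sizes (no IP or TCP options; the FCoE encapsulation header size is the commonly cited figure, so treat the totals as a ballpark rather than a benchmark):

```python
# Per-frame encapsulation overhead, in bytes (minimum header sizes).
ETH = 14         # Ethernet II header
IP = 20          # IPv4 header, no options
TCP = 20         # TCP header, no options
ISCSI_BHS = 48   # iSCSI basic header segment
FCOE = 14        # FCoE encapsulation header
FC = 24          # Fibre Channel frame header

iscsi_overhead = ETH + IP + TCP + ISCSI_BHS   # 102 bytes per frame
fcoe_overhead = ETH + FCOE + FC               # 52 bytes per frame

print(f"iSCSI stack: {iscsi_overhead} bytes of headers per frame")
print(f"FCoE  stack: {fcoe_overhead} bytes of headers per frame")
```

Against a jumbo frame's worth of payload, even the larger figure is a few percent, which supports the point above: the streamlining is real but modest once TCP/IP offload is in the picture.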
ISCSI and a copy of FreeNAS is all you need ;)
"There is no way out for iSCSI. It's at war and FCoE is the killing field. ®"
So it's your argument that iSCSI costs will outweigh the new kid on the block...
If you're a SOHO guy, just try beating iSCSI and a copy of FreeNAS on price - it's all you need ;)
Sure, iSCSI has its weaknesses - I don't like the way you can't share the same device and its partitions like you can with Samba - but it beats Samba on data throughput every time.
The biggest problem for the SOHO guy is the total lack of cheap 10Gig Ethernet cards for the average end/home user, given you can use FreeNAS, a bunch of disks in software RAID and a £4 RTL card for a quick and dirty iSCSI NAS that takes all your home/office video and data, NO problem.
The other major thing is the total lack of bonded Ethernet drivers for the Windows platform for these cheap cards, so no multi-1Gig bonding from your FreeNAS/FreeBSD box to the multiple 1Gig Ethernet cards sat there in your Windows XP machines.
And why is it that you can get these cheap 1Gig Ethernet cards and routers all over the place after all these years, but you can't buy anything better for the HOME/OFFICE in between this 1Gig home speed and the 10Gig industrial card speed? The RTL and their like OEMs really missed a trick there, didn't they - and all that retail profit.
Will we ever see 2/4/6/8 Gig Ethernet speeds at home/end-user prices, with matching 4/5/8-port routers to boot?
Sure, this doesn't concern you ISP vendor guys - you're happy spending other people's money on your industrial ISP kit - but what of us little guys and our need for faster home LANs and 4+ TB NAS boxes stashed away in the attic/spare room?
Thank you for the migraine
Chris exposes his shortcomings here:
"It's true that a 1Gbit iSCSI link is cheaper than a 10Gbit/s FCoE link. The iSCSI link needs a network interface card (NIC) - hopefully with a TCP/IP offload engine (TOE) on it. This saves the server processor from having to spend cycles formatting all the TCP/IP stuff, which is complex."
Almost all storage vendors - and VMware, in fact - have shown iSCSI software initiators perform close to, if not better than, iSCSI HBAs and TOEs, with little overhead. I think servers with over 16 cores and GBs of memory can handle some TCP processing.
Reality check? You might want to spec out a new x86 server. It's 2008, not 2001.
But what do I know? I guess HP was stupid to buy LeftHand, too?
Poor biased argument
The author is so naive or biased as to make such a judgment. He gives the impression that 10GigE is designed for FCoE. iSCSI is more natural to 10GigE than FCoE.
iSCSI can be routed. No special routers / switches required. iSCSI will also take advantage of TOE (TCP-Offload-Engine) and DCE (Data Center Ethernet) in 10GigE.
iSCSI is set to commoditize SAN storage. FC vendors do not want to lose control over their existing market base. FCoE is simply a way to retain their profit from existing proprietary product lines.
At the end of the day, customers want open standards, commodity hardware, no vendor lock-in and, most importantly, low cost. iSCSI is better positioned to meet these requirements than FCoE.
As a side effect of SAN vendors promoting Ethernet, the NAS market stands to benefit the most. NAS has always worked on Ethernet.
article is tragically flawed
FCoE still uses traditional FC zoning and LUN masking practices; it only uses the Ethernet layer - NO TCP/IP! It's lossless Ethernet, folks, not your grandfather's IP network.
IP and FCoE will use the same layer (10Gb Ethernet) running Converged Enhanced Ethernet (or Cisco's branded DCE).
You can't run FCoE on a 6509, or on your Linksys at home. You can, however, run iSCSI, which DOES use TCP/IP. You need a Cisco Nexus 5000 or (soon) 7000 to run FCoE.
just an impartial view
The article was biased, but not totally without grounds.
If you only need to encap/decap at Layer 2, then when you have trillions of frames there will be a substantial benefit compared to doing it at Layer 3.
Totally lossless Ethernet is a dream, but extremely high availability can be achieved if the Ethernet switch uses store-and-forward (needless to say, with minor increases in latency), which means it is nearly achievable.
Without the IP factor you won't have to worry about IPv6 or routing, but you do need Layer 2 knowledge and VLANs/trunking/STP etc...
iSCSI would perhaps become a tool for the lower-end market, whereas FCoE is for the mid market, where performance and budget are both crucial.
when I say "never" I mean "never".
Outside the natural (and man made) events that affect ALL networks, I really did mean "never" about my ethernet packet loss. And I know this *because* of the extensive monitoring I have in place. Testing is just obvious if you want to make the claims that I do.
I realize you know a lot of things and that makes you feel all grown up, but I've been building award-winning, lossless Ethernets for nearly 10 years and I can back it up. Why don't you come on over some time and I'll teach you a thing or two.
re: iSCSI Game Over
FCoE: The Data Center Bridging specification won't be ratified for a few years, even though Cisco and others are saying they are already shipping it. Do you remember the interoperability nightmare that the Fibre Channel industry put their customers through in the beginning? Do you remember when Cisco started shipping iSCSI products before the spec was ratified? How many of those products are customers still using today? Because FCoE isn't routable and is incompatible with the pervasive set of IP-based infrastructures and management tools out there, iSCSI will be around for the foreseeable future.
re: when I say "never" I mean "I'm making this up"
I note you're already backtracking. You admit there are some events that you obviously can't protect against. Even noise can't be protected against completely - you get dropped packets on a 1m shielded crossover cable between two quality NICs. And all those checksums on various layers on the networking stack are completely unnecessary and only there for numbskulls like me whose networks are constrained by the laws of physics? Bull.
All eventualities must be included in your "never" and since you have admitted that this is not the case you have also admitted that you are talking crap. I wouldn't trust anyone who adjusts his performance metrics to suit his own conclusions in the manner that you obviously are doing.
You have no idea what you're talking about
Yes, iSCSI runs over 1Gb connections, but if you knew anything about EqualLogic you'd know that just one of those arrays has 3 x 1Gb connections, and due to its grid-based architecture you add 3Gb of bandwidth to your EqualLogic SAN every time you add an array (think ESX, where you have multiple units acting as one logical compute resource). So, for example, if you had four EqualLogic units (all acting as one logical storage pool) you would have 12Gb bandwidth, 8GB cache, and 64 drives (using the 16-drive models; this obviously increases with the new 48-drive models). You can have up to a maximum of 12 arrays in one EqualLogic SAN - as the Americans say, "do the math".
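Taking the "do the math" invitation literally, the claimed linear scaling looks like this (per-array figures come from the comment itself: 3 x 1Gb ports and 16 drives per array, with 2GB of cache per array inferred from the 8GB-for-four-arrays claim):

```python
# Linear scale-out of an EqualLogic grid, using the per-array figures
# from the comment above (3 x 1Gb NICs, 2GB cache, 16 drives per array).
PORTS_GB, CACHE_GB, DRIVES = 3, 2, 16

for arrays in (1, 4, 12):  # 12 is the stated maximum per SAN
    print(f"{arrays:2d} arrays: {arrays * PORTS_GB:2d} Gb bandwidth, "
          f"{arrays * CACHE_GB:2d} GB cache, {arrays * DRIVES:3d} drives")
```

At the stated 12-array maximum, that works out to 36Gb of aggregate bandwidth across the pool - though, as with any scale-out design, the per-host throughput still depends on how many paths each initiator can use.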
On the subject of losing packets and having to retransmit: this only happens if you have poor-quality switches and a badly designed network. For starters you need a separate IP network for the IP SAN, enable flow control on those switches, and make sure they have a decent backplane (eg. Cisco 2960G or 3750G).
Oh, and by the way, I tested NetApp FAS, EMC CX, HP EVA, Compellent, and LeftHand - and EqualLogic outperformed every single one of them, and was by far the easiest to use. It's also worth looking at the Microsoft Exchange Solutions Review Program; again, this proves publicly that EqualLogic outperforms those "old dogs".
Simple fact is that FC is growing about 14% year-on-year, iSCSI is growing at 78%. There's a simple reason for this - it works.