We're moving to a world in which all storage network protocols run over Ethernet. That means both file and block. There's no dispute about running file access over Ethernet but the block world is squaring up for a three-way tussle. The three block protocols are iSCSI, Fibre Channel and AoE. iSCSI uses the TCP/IP stack to provide …
We've used AoE
We have been testing AoE on some small-scale deployments of late and haven't encountered any problems. It's relatively easy to add the required extras into an initrd for Linux and TFTP-boot diskless VMs and cluster nodes. I never managed to get the XP driver to work for aoeroot, but that wasn't a huge problem in the end.
Can't cross networks
I'm sure you know this already, but because AoE operates at the data link layer using MAC addresses, it cannot cross router boundaries. Therefore kiss your dedicated storage network goodbye.
Yes you can, but only when really necessary
Because AoE is layer 2 and therefore not routable, it is inherently secure - and why does this stop you having a dedicated storage network? In fact it enhances the dedicated aspect. If you need to connect to another AoE network, such as for site-to-site DR, then you can tunnel AoE between storage networks.
the original Coraid product was compelling
Coraid initially sold a demo kit that directly bridged an IDE disk onto an Ethernet switch. This idea was compelling because they were betting that 10-gigabit Ethernet would be better developed and commoditized earlier than Fiber Channel or any other physical transport, and they were right.
However, Coraid didn't productize the AoE bridge, which was perfect for things like ZFS and CLVM, maybe because they couldn't afford to create a bridge chip for SATA.
All of the current Coraid products use a regular computer as a front end. Putting a modern CPU in front of the physical storage makes AoE mostly pointless because TCP/IP and other networking overheads become trivially cheap.
Thanks for the technical articles.
Keep it up, Reg!
Anyone know how Linux NBD stacks up against these contenders?
AoE is ATA over Ethernet
And ATA stands for Advanced Technology Attachment (per Wikipedia), a command set for storage attachment similar in role to SCSI. You already know about Serial ATA (SATA) drives, no doubt.
Keep the Entry at each End
Keep the entry at each end, drop redundant stuff in the middle. That means Fiber Channel survives for the top-end, dedicated-hardware applications, while the new low-cost entrant, AoE, lets people start on a budget. But then the case may arise where there's a need for routing on the LAN/WAN, where both of those stumble, and that's where iSCSI fits the bill. Drat. I guess we'll keep all 3.
My view on AOE (CORAID)
For me AOE was definitely the way forward; the ease of integrating it into a Linux infrastructure was great. Performance was excellent, comparable to or better than iSCSI or FC alternatives. Cost was hugely reduced in comparison.
Then it all changed... CORAID suddenly turned NetApp *choke* and started requiring support contracts for their devices, as well as hiding pricing on their site ("click here to inquire" is always a bad sign in my view...). The company is obviously trying to compete in the high-end market, which I understand, but it turned its back on the people that kept it alive at the beginning.
Good luck CORAID, but for me Linux iSCSI is much more cost effective and provides excellent performance for my environment. CORAID are no longer the "Linux storage people" they once claimed to be.
Those are the breaks
I agree. CORAID has gone channel, which is why prices are now hidden, but it has such an incredible product range it could not keep going on an organic Linux community friendly growth path forever. Since going NetApp, as you put it, it has achieved exponential growth and actually been able to take NetApp accounts on directly. I appreciate your frustration but after all most of us are in business for the return and the real money sits with the likes of NetApp now, and to get access to it you need to look like a company the large organisations feel comfortable doing business with, and this is generally at the expense of the little guy.
It's here now, affordable now, it's interoperable across a huge range of platforms now, and it performs well enough for 95% of applications.
No, seriously - it's actually a great way to serve VMDKs.
iSCSI HBA's also available
If you're after better performance with iSCSI then you can also get iSCSI Host Bus Adapters, which will stop you burning CPU cycles where the client workload is heavy; they are also useful for boot-from-SAN environments if you really aim to centralise all storage in the SAN.
For services with a more general-purpose storage requirement a software initiator is fine, provided your NIC has a TCP/IP offload engine; otherwise the CPU will spend a lot of time packaging IP packets on a gigabit Ethernet connection.
iSCSI all the way
iSCSI across 10Gb Ethernet is performant, affordable, proven and routable.
If 10Gb is too expensive (though still cheaper than FC), 1Gb NICs with offload engines are already part of most server architectures and are not the financial burden they once were.
Are you in a hurry?
If you aren't, then iSCSI; otherwise FC.
You do understand that _anything_ over TCP/IP introduces several milliseconds of lag, don't you?
Net lag of the request: 4ms
Disk access time: 6ms
Net lag of the answer: 4ms
And thus you have dropped your performance to less than half that of a local disk. The actual speed of the cable is irrelevant at this point unless you want to present a data stream.
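For what it's worth, the arithmetic works out as claimed if you accept those lag figures; here's a quick sanity check using the commenter's own numbers (the 4ms network lags are their assumption, not a measurement):

```python
# Back-of-envelope check of the latency argument above.
disk_access_ms = 6.0     # local disk service time (commenter's figure)
net_request_ms = 4.0     # assumed one-way network lag for the request
net_reply_ms = 4.0       # assumed one-way network lag for the answer

local_iops = 1000.0 / disk_access_ms
remote_ms = net_request_ms + disk_access_ms + net_reply_ms
remote_iops = 1000.0 / remote_ms

print(f"local:  {local_iops:.0f} IOPS per outstanding request")
print(f"remote: {remote_iops:.0f} IOPS per outstanding request")
print(f"slowdown: {remote_ms / disk_access_ms:.2f}x")
```

With 0.1ms LAN round trips (as the reply below reports) the slowdown term all but vanishes, which is the real point of contention.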
Re: Are you in a hurry?
> You do understand that _anything_ over TCP/IP introduces several
> milliseconds of lag, don't you?
> Net lag of the request: 4ms
> Disk access time: 6ms
> Net lag of the answer: 4ms
Sounds to me like you need to sort out your network. 8ms lag times are what I'd expect from a fast WAN, rather than a LAN. My Gb LAN RTTs are in the 100us (0.1ms) range, and that's return, rather than one way.
FC all the way
Ethernet-based storage and iSCSI are still nowhere near FC in terms of performance. For high-end storage, cost is not the most important factor.
aoe looks nice but I went and bought an imac for a workstation. iscsi appears to be the only free option to do time machine back to my storage server. I think aoe needs a dedicated nic, so even if it were free, I'd have a problem - I'd have to buy a nice monitor for my hackintosh... ;)
They all hurt
With the trend being to managed storage, none of the protocols is a good fit. They all deal in disk blocks, but the communication is actually Client--Managed Storage Server--Disk. So the connections underperform and have poor failure modes. What is needed is a disk access protocol which is designed for computer-computer use and not an adaptation of a computer-disk protocol.
Now that we know AoE is ATA Over Ethernet, a primer
Now that we know AoE is ATA over Ethernet, isn't broadly supported, lacks enterprise features, isn't generally supported and generally isn't Enterprise Ready, let's talk about the other two. Let's have a primer for the gallery.
Fiber channel (FC) is the grandaddy of shared storage tech. It once was the fastest available transport method (back at 1Gbps or 2Gbps). Its reliable packet delivery offers a firm foundation on which to build your enterprise tech. People who build out FC care more about reliability than anything else, so they usually populate two HBAs, each of which provides a link to at least two FC switches - themselves connected in a mesh network, with each storage appliance (SAN) likewise at least dually connected, through dual SAN controllers to drawers all the way down to redundant connections to individual drives. The idea is that you can wipe out one entire path of controller, fiber, switch, fiber, controller, fiber - and still not lose your connection to an individual drive. The drives are then striped and/or mirrored for additional redundancy at the media level, and in some cases can even be striped or mirrored across SANs that are geographically separated for the ultimate in storage reliability. All of the connections (or nearly all - some vendors cheat) involve interconnections using Fiber Channel protocols that guarantee reliable end-to-end and in-order delivery of packets. Typically the connections between boxes are fiber-optic.
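A rough way to see what all that duplication buys: if each serial path is a chain of components and the two paths fail independently, the numbers multiply out like this (the 99.9% per-component availability is an illustrative assumption, not a vendor spec):

```python
# Rough availability model for the dual-path fabric described above.
hba = switch = controller = 0.999    # assumed availability per component

path = hba * switch * controller     # one serial path: all must work
both_paths_down = (1 - path) ** 2    # paths assumed to fail independently
system = 1 - both_paths_down

print(f"single path availability: {path:.6f}")
print(f"dual-path availability:   {system:.9f}")
```

Even with mediocre per-component numbers, squaring the path-failure probability is what takes you from "three nines" to "five nines" territory, which is why everything in an FC fabric comes in pairs.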
Because it's rarer than common server hard drives and networking, FC is expensive. It involves a large number of patented and licensed technologies. It has garnered a certain high-end storage following, and it's well deserved. Many of the storage technologies that will follow in this discussion arrive from the need to maximize the benefit of very expensive FC disk, or to work around the idea that bringing your storage offline "just isn't done". Price really isn't much of a consideration with FC folks. You might pay $3,000 for a 600GB 15K rpm 4Gbit FC Hard drive, for example if you were looking for the IBM 59Y5460. FC now can use interconnects in the market that are 8Gbit/second between servers and switches or switches and storage, and these links can be aggregated for arbitrary amounts of bandwidth.
FC is a stable market. It shows reliable year-over-year growth, but it's not sexy and it's not new. Most people who aren't already FC shops aren't looking to sign up for that drill.
iSCSI is a different technology, standardized through the IETF and pushed hard by Microsoft (one of the few standards efforts of theirs that I'm in favor of). It's possible to borrow some of the same ideas from FC and build in the same path redundancy as Fiber Channel, all the way to the drive, but this is a fairly recent development. iSCSI goes over Ethernet, which succeeded in the networking market despite the fact - perhaps because of the fact - that it DOES NOT guarantee either reliable end-to-end delivery or in-order packets. Ethernet now gives commonly available links that run 10Gbit/s, though once you allow for packet and protocol overhead the usable bandwidth comes out nearer 8Gbit/s, so against 8Gbit FC it's roughly a wash. The technology in iSCSI that allows for divergent paths to the disk is called Multipath I/O (MPIO). It requires special drivers in the OS, special configurations and recent versions. It requires validation testing, and frankly the technology is still a little bit fresh for environments where human life and safety are at stake. Like FC, connections between boxes can be aggregated for arbitrary bandwidth.
iSCSI builds on the fact that the aged SCSI bus protocol already required information to be organized into packets so it could pass over the parallel SCSI bus. On that bus there were multiple drives, and they needed to cooperate with the controller to avoid crosstalk. iSCSI essentially encapsulates the packets for transport over Ethernet, and adds a few features. We don't use the parallel SCSI bus any more, but its organization can still be useful. The client-side driver and the storage device software negotiate reliable end-to-end delivery and in-order execution of writes and reads across the unreliable Ethernet connection using their own intelligence. Using modern iSCSI-capable Host Bus Adapters rather than standard Ethernet adapters on the server side allows for boot-from-iSCSI-SAN as well as offloading of the processing overhead from the CPU. Enterprise iSCSI devices can still be expensive. A dual-port Serial Attached SCSI (SAS) 6Gbps (6G) drive at 600GB lists today for $809. That's not cheap, but it's better. You can also buy a cheaper and slower dual-port 6G SAS drive with a short warranty from that vendor that has 2TB for $949, which is coming somewhat closer to consumer SATA technologies and prices. They're cheaper because these drives are sold in vastly larger quantities as directly attached storage in millions of servers. The dual-port thing is an important part: it allows for backplanes that provide independent links to independent SAN controllers to complete the last leg of a redundant connection all the way from the server bus to the drive. Because it's more common and so leverages economies of scale, Ethernet switching generally costs less - but we're talking about 10Gbit Ethernet here, which is not as widely deployed as it might be, so this is still not a small-business use case yet. 10Gbit Ethernet can go over copper to the top-of-rack, or a few racks over - and save a lot of money doing it - but it's not an end-of-row solution.
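The encapsulation just described can be sketched in a few lines. This is a simplified construction of the 48-byte iSCSI Basic Header Segment for a SCSI READ(10) command, following the RFC 3720 layout but with the LUN field and sequence-number handling heavily simplified; it's illustrative, not a working initiator:

```python
import struct

def read10_cdb(lba, blocks):
    # SCSI READ(10): opcode 0x28, 4-byte LBA, 2-byte transfer length
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def iscsi_scsi_command(cdb, lun=0, task_tag=1, cmd_sn=1, exp_stat_sn=1,
                       expected_len=0):
    opcode = 0x01                 # SCSI Command PDU
    flags = 0x80 | 0x40 | 0x01    # Final + Read + SIMPLE task attribute
    bhs = struct.pack(">BB2xB3sQIIII",
                      opcode, flags,
                      0,                        # TotalAHSLength
                      b"\x00\x00\x00",          # DataSegmentLength (none)
                      lun << 48,                # LUN field, simplified
                      task_tag,
                      expected_len,             # ExpectedDataTransferLength
                      cmd_sn,
                      exp_stat_sn)
    return bhs + cdb.ljust(16, b"\x00")         # CDB occupies bytes 32-47

pdu = iscsi_scsi_command(read10_cdb(lba=2048, blocks=8),
                         expected_len=8 * 512)
print(len(pdu))    # 48
```

The end-to-end guarantees the paragraph mentions don't live in this header; they come from carrying these PDUs over a TCP connection plus the CmdSN/StatSN sequencing negotiated at login.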
If you need 10Gbps Ethernet for more than a few meters, you're going to buy the expensive fiber GBICs. Most folks are still exploring 10Gbps Ethernet, and frankly, if you can isolate this cost center as closely as possible to the servers, that's a good thing. We're not ready to go 10Gbit to the desktop yet. Because the abstractions used to maximize the return on investment for FC SANs are just software, the software has been ported to iSCSI SANs to provide the same advantages. This includes things like synchronous replication, asynchronous replication, clustering, virtual volumes, thin provisioning, snapshots, clones and so on (the SAN Features).
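Two of the SAN Features listed there are easy to illustrate in miniature. This toy sketch (purely illustrative; real arrays do this in firmware with reference counting and on-disk metadata) shows the core idea of thin provisioning - blocks consume space only when first written - and of copy-on-write snapshots:

```python
# Toy model of thin provisioning plus snapshots. Class and method
# names are invented for illustration.
class ThinVolume:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.blocks = {}          # only written blocks consume space

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba, b"\x00")   # unwritten reads as zeros

    def snapshot(self):
        snap = ThinVolume(self.size)
        snap.blocks = dict(self.blocks)        # freeze current contents
        return snap

vol = ThinVolume(size_blocks=1_000_000)
vol.write(42, b"A")
snap = vol.snapshot()
vol.write(42, b"B")                  # snapshot keeps the old contents
print(vol.read(42), snap.read(42))   # b'B' b'A'
print(len(vol.blocks), "of", vol.size, "blocks actually allocated")
```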
There are even companies that sell devices to insert into your FC + iSCSI network. They take whatever block storage you have, iSCSI or FC, ignore its special features, and provide all of the SAN Features on top, so you can keep using the underlying storage through your preference of iSCSI or FC while you migrate from one vendor or technology to another without committing the (gasp! Forbidden thought!) unforgivable sin of bringing your storage offline. This is called Storage Virtualization.
iSCSI is a growth market. It's growing multiple x per year. Some say 10x.
Convergence: And then we have Converged Enhanced Ethernet (CEE), Data Center Ethernet (DCE) or FCoE (Fiber Channel over Ethernet). These are all names for the same thing - at least, they became the same thing when the standard was announced a few months ago. The multiplicity of names came from vendors attempting to take ownership of the symbol space leading up to the standard. At the moment this is a technology that exists between the server and the first switch it encounters. Using this technology, which I'm going to call FCoE to keep it simple, the server has an FCoE Host Bus Adapter that provides reliable, in-order packet delivery to the switch. All of the HBAs I've seen operate at 10Gbps and have two ports for a net 20Gbps. Typically each port can be configured to be presented to the server's I/O bus as MULTIPLE separate hardware devices which are either Ethernet or Fiber Channel. With FCoE you can do it either way, or do it one way now and migrate to another way later. You get to choose. You can adjust the fraction of bandwidth allocated to storage vs networking based on your need. This is a powerful choice.
The trick with convergence is that it ends at the first switch. There's no switching or meshing standard for FCoE yet. The first switch MUST break out the virtual connections into Ethernet or Fiber Channel, and pass them to their respective networks from there. Instead of simplifying things at this point it just adds a third network. But this can be useful for some things, depending on your needs. The Cisco Nexus 5000 is typical of the device you would use as a top-of-rack switch for this sort of application, but it's one of several.
There are FCoE HBAs you can use that connect to this first switch using a copper connection. This can save a lot of money, as fiber GBICs (the module that converts the electrical signal to light and back again) cost a good deal of money and you need a pair of them to terminate both ends of a fiber connection. The copper cable includes integrated GBICs on each end and can be had for a couple hundred dollars. Fiber GBICs generally start at about $800 each, or $1,600 for each path. Remember, you need pairs of paths.
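Putting those module prices in one place (using the figures quoted above, with the copper cable assumed at roughly $200 including its integrated modules):

```python
# Cost arithmetic for the redundant pair of paths described above.
fiber_gbic = 800          # per module, figure quoted in the text
copper_dac = 200          # twinax cable with integrated modules, roughly
paths = 2                 # remember: you need pairs of paths

fiber_per_path = 2 * fiber_gbic          # a module at each end of the fiber
print("fiber, both paths: $", paths * fiber_per_path)   # 3200
print("copper, both paths: $", paths * copper_dac)      # 400
```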
And then there are blade servers, some of which now include a pair of 10Gbit FCoE HBAs on the motherboard by default and avoid all that expense and those connections, at least for sets of up to 32 servers at a time. The blade server chassis can provide the first-hop FCoE switch, and a lot of the complexity of the problem goes away, as does the need for HBAs, the vast majority of the GBICs, and almost all of the cables. This is VERY important. I was taught nearly three decades ago in IT that if there's a problem, 90% of the time it's the cables, and my experience has held true to that lesson.
OK, that's enough of a primer. If this post passes El Reg moderation I'll discuss current solutions in this space.
Not Enterprise ready.... yeah
"Now that we know AoE is ATA over Ethernet, isn't broadly supported, lacks enterprise features, isn't generally supported and generally isn't Enterprise Ready, let's talk about the other two."
At what point would you consider something "Enterprise Ready" then? Does the fact that an AoE SAN serves as the backbone for one of the largest private clouds in the world for a US Govt Agency - 2 petabytes and more than 40,000 VMware nodes - not make that statement look just a little rash? Or perhaps customers like NASA, the Human Genome Project and various large academic institutions for their HPC clusters, plus over 1,200 others - mainly large multi-terabyte systems demanding high performance.
By making such a dismissive statement you are missing out on a genuine opportunity to look at something different, for a cost that just might be the key factor if we go and double dip the western economies.
Perhaps you are also in the "never have a Dell in my data center" camp as well. If you look at the attitude to them 10 years ago you see a blueprint for how economies of scale can really disrupt a market.
Exactly which "high level networking protocols" run on top of FC?
While FC does have routing, name services, etc. built in, there are no "high-level networking protocols" that run "on top of" FC, creating overhead. Its frames are fully switched (with routing available, if you do want to pay the overhead and dollar costs), just like their Ethernet counterparts. Given this fundamental error, I mistrust the rest of the original report the AoE section came from.
And if there are no "high-level" protocols, and lossless Ethernet is not required, exactly what mechanism does it use to deal with packet loss? FC uses the SCSI layer to recover from packet loss, and frankly it sucks at it. AoE flow control? How does that work? (Again, FC sucks at this, so I'm interested to know how AoE solves the problem.)
I eat a big bowl of Finisar traces for breakfast every morning, so I know way more than I would like about FC flow control and error recovery. There are no "easy" answers to those problems...
To be pedantic...
There are 7 layers (not 5), if we're talking about the same model.
Ethernet isn't very conformant to the ISO model and extends beyond Layer 2 into Layer 3 where some of its functions are duplicated/obscured by IP. TCP is a layer 4 protocol.
All of which is somewhat relevant because the protocols that run directly on top of Ethernet are extensible only over LAN topologies (which, admittedly, can cover great distances) whereas those that run over IP are extensible anywhere.
So it's horses for courses. You wouldn't *want* to run your high-performance SAN over a slow TCP/IP link - block access in general isn't going to work too well under those circumstances. If you've got a high-speed local network you ought to get marginally better performance running directly over Ethernet, but it really does depend on the quality of the TCP/IP implementation (and indeed whether it is hardware assisted).
Restaurateur with curious streak
I was originally interested in this topic mainly because the owner of this company is a regular at my restaurant. It is just too cool to be explaining ideas I read about on the Register to a geek and getting a "hey, that's my company" response. If the changes Coraid has gone through are not going well then I can assure you that this is growing pains for a small company. I will be happy to pass on any criticism and supportive comments posted here. I also noticed the hidden pricing on their website and was a little surprised, because the gentleman in question is very straightforward and full of common sense.
To be honest, I don't run a data center, but I have never found Fiber Channel compelling - with the actual Fiber Channel cards I've seen, the drivers always seemed really gnarly: hit-and-miss Linux driver support, and Windows support that may work for one version and not for the next. And expensive. FC over Ethernet still tends to use specialized Ethernet cards, and when it doesn't, it still seems to require much more specialized driver support than either iSCSI or AOE.
Out of iSCSI or AOE, the only one I've implemented was AOE. Probably, if I had to deal with Windows boxes I would use iSCSI, but in my ideal environment I wouldn't have them mucking up the works. Otherwise I do like the fact that the AOE spec is so simple and to the point. However, iSCSI versus AOE doesn't make a huge difference to me. If a disk had to be available on a different network segment, obviously iSCSI would be the way to go out of the two. If a disk didn't have to be available off the local network segment, AOE's non-routeability is actually an advantage, since that inherently limits who can possibly access that disk.
For enterprise grade storage FC is the only way to go. Got a dozen VMWare hosts running with dual HBA's (2 paths on each HBA) and two Windows 2008 servers (for backups) that have a HBA plumbed in.
Not a single driver issue.
8Gb FC gives the best performance and reliability when implemented properly. iSCSI and AOE are a shadow of a good FC environment when money isn't a considoration.
... how Enterprise equates to "overpriced solution" to some people. You pay more, so it must be better. "when money isn't a considoration (sic)" - HA!
I've implemented AOE using 10GbE and a Coraid stack and see throughput of 1.2GB/s on a single dual port card. My multi 1GbE test scaled nicely and I've no doubt the 10GbE would scale similarly. But I'm happy with this throughput today. Tomorrow when I need more, I drop in another low-cost Ethernet card. AOE is cost effective, scales, and just plain gets the job done. FC just has no compelling draw for me.
But hey, to each their own. If the ivory tower suits you, enjoy. But some of us "enterprise" guys are doing just fine on a budget!
iSCSI vs AOE
iSCSI has higher overhead and is routable. AoE has lower overhead and is not.
This means that iSCSI can be used in some interesting ways, such as remote SANs (for disaster recovery) and RAID-1 simultaneous writes to local and remote storage (online backup).
AoE provides lower-cost, perhaps slightly higher-performance solutions for a pure local SAN context.
For most purposes that I can see, FCoE is a non-starter except in a shop that has a lot invested in FC.
Routability is a key aspect of this
Sent to me by mail and posted as a comment:
Regarding your article, "Ethernet storage protocol choices", you have made a mistake. AoE doesn't stand for "Application over Ethernet", it stands for "ATA over Ethernet", as the article you link to correctly states.
The key difference between iSCSI and AoE that is overlooked in the article is that iSCSI is routable, while AoE is not. Since iSCSI is a TCP-based protocol, standard IP routing applies. Since AoE communicates directly over the Ethernet protocol, rather than a higher-level protocol, the SAN must be on the same LAN as the clients accessing it. So the reduced overheads and increased performance of AoE come at the price of losing routing capability.
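The addressing point can be made concrete. Here's a minimal sketch of an AoE common header per the published AoE specification (EtherType 0x88A2): the frame is addressed purely by MAC and shelf.slot numbers, and there is no IP header anywhere for a router to act on. The MAC addresses and shelf/slot values below are made up, and a real frame carries an ATA section after this header:

```python
import struct

def aoe_header(dst_mac, src_mac, major, minor, command=0, tag=0):
    # Raw Ethernet header: dst MAC, src MAC, AoE EtherType 0x88A2
    eth = dst_mac + src_mac + struct.pack(">H", 0x88A2)
    # AoE common header: version 1 (high nibble of first byte), error,
    # shelf (major), slot (minor), command, tag
    aoe = struct.pack(">BBHBBI", 0x10, 0, major, minor, command, tag)
    return eth + aoe

frame = aoe_header(dst_mac=b"\xff" * 6,                # broadcast (discovery)
                   src_mac=b"\x02\x00\x00\x00\x00\x01",
                   major=0, minor=0)                   # shelf 0, slot 0 -> e0.0
print(len(frame))   # 24: 14-byte Ethernet header + 10-byte AoE header
```

Compare this with iSCSI, where the same request would sit inside a TCP segment inside an IP packet - which is exactly what makes it routable.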
You also mention TCP offload engines. On modern CPUs (and by modern, I mean anything built in the last decade) this is largely irrelevant. Even the bottom-of-the-range server CPU today can easily saturate a 1Gb Ethernet connection, even without jumbo packets, without a significant impact on CPU load. Additionally, even most desktop Gb NICs provide basic offload functionality (segmentation offload and checksumming). And even good Gb NICs (e.g. Intel e1000) are nowadays priced in the £20 range, hardly an amount that would dent the budget.
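A back-of-envelope check of that saturation claim, ignoring Ethernet framing and preamble overhead for simplicity:

```python
# What a host must sustain to fill a 1Gb/s link with standard frames.
link_bps = 1_000_000_000
mtu = 1500                       # standard Ethernet payload, no jumbo frames
tcp_ip_headers = 40              # IPv4 + TCP, no options
payload = mtu - tcp_ip_headers

packets_per_sec = link_bps / 8 / mtu
goodput_MBps = packets_per_sec * payload / 1_000_000

print(f"{packets_per_sec:,.0f} packets/s")     # ~83k frames per second
print(f"~{goodput_MBps:.0f} MB/s of payload")
```

Roughly 83,000 packets per second is trivial for any CPU of the last decade, which is the commenter's point; at 10Gb/s the same sum gives ~830,000 packets per second, which is where offload starts to earn its keep again.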
As for FCoE - it seems like an unnecessarily expensive solution trying to find a problem it solves better than its much cheaper and simpler-to-implement competitors (iSCSI and AoE). But I guess that makes it more "enterprisey" and buzzword-compliant.
Thanks for this.
Sometimes the best choice is to not choose
I promised more if that Wall 'o Text passed moderation. Not only that but somebody liked it so here goes with the promised follow-up. I'm painfully aware I'm dangerously close to the moderation guidelines for post length, but this particular article was "asking for it."
The article is about storage, and so we're talking iSCSI. iSCSI travels over Ethernet. An iSCSI connection is an Ethernet connection so for the rest of this post I'm going to say Ethernet for iSCSI. Ethernet also gives you other things, which of course you know because somehow this post came to you over Ethernet.
10G Ethernet and 8Gbit Fiber Channel are here now. You can buy them if you want to. Each adapter is going to run more than a thousand dollars, and you're going to pay again on the switch side; you'll pay twice over if you're buying SFP+ modules as well. It will be a few years before we pass beyond 10Gig Ethernet and 8Gig FC, so you have some confidence in your investment. Both standards are fairly new in fields that advance in large, infrequent steps; that they arrived at roughly the same time over the past two years is only coincidence. With these connections you can pay more for the connectivity than for the box and its processors, though probably not once you include the RAM and storage.
Fiber Channel over Ethernet is a fairly new link that, despite the name, isn't just Fiber Channel over Ethernet. It's Fiber Channel and/or Ethernet over a new connection type that's not quite Ethernet. Calling it Fiber Channel and/or Ethernet over a new connection type that's not quite Ethernet yields an unwieldy acronym: FCaoEoancttnqE, and that's hard to sell so we call it FCoE. The switches can be pretty expensive - the ports are all 10Gbps. The Host Bus Adapters (HBAs) are expensive too. But you can put those in your server. The links aren't just fast in bits per second - they're also very low latency, which is even more important.
You can skip all the per-server NICs and HBAs, SFP+'s and cables. Some blade servers, like HP BL465c G7, come with integrated dual 10Gbps FCoE now. Others from several vendors come with onboard dual 10Gbps Ethernet. No adapters, SFPs or cables to buy, only one (or two) blade interconnects in the back of the chassis, and many servers (16 or so) can talk to each other with amazingly low latency high-bandwidth connections. This is pretty cool because when something goes wrong, 90% of the time it's the cables. You don't even have to build out 10Gig and 8Gig infrastructure yet, because uplinks can be slower. Choose the right interconnect modules and you can choose how much of that you want to be Ethernet, and how much Fiber Channel, and change your mind at any time. Another advantage is that the blade interconnect can be "not a switch" so if the server teams need an interconnect between their servers that they can manage without permission or interference from the network team, this is it.
The microseconds latency is the most important thing. Most of the traffic is multiplied many times in your cluster. One client form update request from the uplink turns into dozens of file requests, database reads and writes, logfile updates, SAN block writes and reads amongst your servers before a single, simple next page is returned through the uplink. If you can change dozens of 1 millisecond hops into 20 microsecond hops between request and response they add up to a perceptible improvement in responsiveness to the customer even if his bandwidth to the cluster is limited.
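The multiplication is easy to see with illustrative numbers (36 internal round trips per page is an assumption for the sake of the sum, not a measurement):

```python
# Fan-out arithmetic for the paragraph above: one client request turns
# into many internal file/DB/log/SAN round trips before the page returns.
internal_hops = 36    # assumed internal round trips per page

for hop_latency_us, label in [(1000, "1 ms hops"), (20, "20 us hops")]:
    total_ms = internal_hops * hop_latency_us / 1000
    print(f"{label}: {total_ms:.2f} ms of internal network time per page")
```

36ms versus under 1ms of accumulated network time is exactly the kind of difference an end user can feel, even over a slow uplink.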
With other brands of server you can get the same FCoE in a mezzanine card today, and that's almost as good - the servers might be cheaper to offset it. You still get the same leverage of no SFPs, no cables, and so on, but you use up a precious mezzanine slot. The FCoE adapters don't cost much more than either the 10G Ethernet or the 8G fiber cards, and definitely less than both. Dell sells these, and I'm sure IBM does too. Cisco has one for their UCS. Not sure about the others, but it seems likely.
The way these FCoE interfaces work you can use them as any of 1Gbit Ethernet, 10Gbit Ethernet, 10Gbit FCoE, 8/4/2 Gbit fiber or 4/2/1 Fiber depending on the SFP module. So if you're using Fiber Channel now but migrating away from it, or are pure iSCSI now but might want Fiber Channel also in the future, you're covered. Some of them even have some internal "virtual connections" that allow the bandwidth to be divided up into multiple Ethernet and/or FC ports.
10Gig is the way to go in blades, and FCoE if you can swing it. Choosing has the downside risk that you might choose wrong. The nice thing about choosing FCoE adapters in your blades is that you can change your mind later.
So we're left with "What about rack servers?" Not everybody needs enough servers to justify a pair of blade chassis. Rack servers now typically come with four 1Gbps Ethernet NICs. If you need more than that - and you almost certainly do - or you need FC, you're going to need a NIC or HBA. If money's so tight that you can't think about strategy, you're going to buy the quad-port 1Gbit NIC or the bare minimum FC card you can get today, and this post wasn't for you in the first place.
Here the decision point comes down to how many servers you have and whether you already have the switch. If you don't have the switch and you need to provision links for enough servers, then an FCoE switch like the Cisco Nexus 5010, at 20 ports and $11K, is more likely to give a good return on investment even in the short term. For both 10GbE and 10Gbit FCoE you can use relatively inexpensive copper-based cables with integrated SFPs to the top of rack and keep costs down; those fiber SFP+ modules are pretty spendy in the pairs you need. Regardless, keeping your options open for the future should add some weight to the FCoE side even if it's not the most economical solution today, though that's probably harder to sell to the executive team. The break-even is at about four servers today. If you can't put that over with the E-team, you're back up there with the quad-port NICs and the cheapie FC HBAs, and reading this must be sheer pain. I'm sorry.
Rack servers benefit as much from low-latency connections to each other as blade servers do: it results in a more responsive experience for the end users, who are the point of the exercise.
FCoE is new, and for now it's a one-hop deal. Your FCoE adapter can go to a FCoE switch, but that switch has to break out the connections and diverge the paths into Ethernet and Fiber Channel. It can't yet send it on to another switch still in FCoE form. The standard that allows for the second hop, routing and such things won't be ready for a year or two.
Fair notice: I don't own stock in any company mentioned. I do work for a company that sells solutions in this space including some but not all of the products mentioned, but my opinion is my own and my employer is neither responsible for it nor influenced it. I didn't get paid, nor do I stand to profit, from saying these things.
Coraid rocks !
Always funny to see what people who have never tested Coraid gear can say about it...
We have been working with Coraid gear since 2005. I'm not selling anything to you here (we are focused on the EU market); we (Alyseo) are an independent storage integrator and we offer innovative system solutions based on the best available technologies. But this means we are also users, and we only select the products/solutions which meet our and our customers' requirements and in which we have a very strong level of confidence.
BTW, we have real-world experience with AoE, iSCSI and FC (we are also a NexentaStor partner and have installed EMC and NetApp boxes for some customers, and in a previous life).
Just some comments now:
With AoE, bonding (link aggregation and automatic failover) is done at the protocol layer, which means you just plug in the cable: no admin task to do! With iSCSI, by contrast, you will usually need LACP (802.3ad), which means configuration on each side (target or initiator, and the switches); with AoE, again, nothing to do. Last but not least, LACP is good for failover but it will not increase throughput for a single flow: with LACP/802.3ad your links are logically aggregated, usually via a MACn-MACm association table on both sides, so a single end-to-end pair can only ever achieve one wire's speed. With AoE you get failover plus throughput that scales with the number of NICs between your AoE initiator and the Coraid AoE target.
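The single-flow limitation of hash-based aggregation is easy to demonstrate. In this sketch the hash-on-MAC-pair policy stands in for a real 802.3ad frame-distribution function (real switches hash on various field combinations, but always per-flow), while per-frame round-robin plays the role of AoE's multi-NIC scheduling:

```python
# One initiator-target MAC pair under hash-based (LACP-style)
# distribution vs per-frame round-robin.
links = [0, 1, 2, 3]                       # four 1Gb member links

def lacp_pick(src_mac, dst_mac):
    # Same MAC pair always hashes to the same member link
    return hash((src_mac, dst_mac)) % len(links)

pair = ("02:00:00:00:00:01", "02:00:00:00:00:02")
lacp_used = {lacp_pick(*pair) for _ in range(1000)}    # 1000 frames, one flow

rr_used = {i % len(links) for i in range(1000)}        # round-robin per frame

print("LACP links used by one flow:", len(lacp_used))   # 1
print("round-robin links used:     ", len(rr_used))     # 4
```

Hash distribution keeps frames of a flow in order by construction; spreading a single flow across links, as AoE does, pushes the reordering problem up to the protocol instead.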
"AoE packets aren't routable..." It is true that AoE is not routable and you need layer 2 interlink (or lan2lan) to see LUNs on multiples sites but solutions for DR are in the pipe (AoE tunneling, async replication, currently only sync replication is available).
Coraid gear is affordable, but you also need to understand that it is lightweight (less load on the host), easier to implement, provides a layer of inherent security, and offers higher performance. Jumbo frames, VLANs and switches are not specific to AoE: they are simply the same as for any iSCSI architecture...
I reckon it is always harder to sell alternative/competitive solutions to customers: yes, we fight against "I never got fired for buying XXX" on a daily basis. This is why we do many POCs and Try & Buy deals, and this is really the best way to prove to customers and leads that the solutions we offer can compete with, and beat, legacy storage solutions at a fraction of the cost...
Our point of view, based on customer feedback: Coraid rocks!
I'm not asking you guys to trust me: just give it a try and check these products for yourself, and you'll see what I'm talking about ;-)
Last but not least: for those who think Coraid is a small company whose future is uncertain, read this (Coraid raises $10M):
PS: In our market we do many POCs and Try & Buy deals, so people know what they are purchasing. Contact your local Coraid partner for an evaluation ;-)