The fastest storage area network (SAN) on the planet needs the fastest server-storage network links available. So what are they? There are three candidates: Ethernet, Fibre Channel and InfiniBand.

Ethernet SANs

Ethernet SANs use the iSCSI storage protocol and link servers and storage across Ethernet. Examples are HP’s P4000, …
Speed is a hard thing to grasp. Oftentimes the "filling of a pipe" is done using the aggregate traffic of many users of the SAN. So perhaps a better way of looking at things is whether a SAN serving a single client can saturate the line. It should be fairly easy to saturate a line when aggregating requests from a multitude of clients.
It's possible that a SAN CAN saturate 16Gbit, for example, using ONE client... but how many drives, and in what config, were needed to pull that off? Again, it matters when considering how a particular SAN storage unit scales.
So things to consider:
1. Number of clients
2. Number of pathways
3. Number of drives
4. RAID level
And probably a lot more....
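To make the "how many drives to fill the pipe" question concrete, here is a rough back-of-envelope sketch. The per-link and per-drive throughput figures are illustrative assumptions (nominal FC usable data rates and typical sequential drive speeds of the era), not measurements from any particular array; the `raid_overhead` factor is a crude stand-in for parity/mirroring cost.

```python
import math

# Approximate usable data rate per Fibre Channel link, in MB/s
# (assumed nominal figures, e.g. 8G FC ~ 800 MB/s after line coding).
LINK_MB_S = {
    "2G FC": 200,
    "4G FC": 400,
    "8G FC": 800,
    "16G FC": 1600,
}

# Assumed sequential throughput per drive, in MB/s (illustrative only).
DRIVE_MB_S = {
    "7.2K SATA": 100,
    "15K SAS": 180,
    "SSD": 400,
}

def drives_to_saturate(link, drive, raid_overhead=1.0):
    """Minimum number of drives whose combined sequential throughput
    fills the link; raid_overhead > 1.0 crudely models RAID cost."""
    needed = LINK_MB_S[link] * raid_overhead / DRIVE_MB_S[drive]
    return math.ceil(needed)

for link in LINK_MB_S:
    counts = {d: drives_to_saturate(link, d) for d in DRIVE_MB_S}
    print(link, counts)
```

Even with these generous sequential-only assumptions, filling a 16G link from a single client takes a fair stack of spinning disks, which is the point made above: the drive count and config behind a saturation claim matter as much as the link speed.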
Reality Check - Who is using this bandwidth ???
Having seen statistics from enterprise datacentres across the world, do I see end devices (storage arrays and servers) congesting SAN links? At 2G - yes frequently, at 4G - rarely, at 8G not yet seen one. That's not to say they aren't out there; there are corner cases for specialist applications.
Where bigger and faster pipes can be of benefit is between switches (ISLs), as they can simplify inter-switch and inter-data-centre link design and configuration.
Virtualised servers were supposed to be pushing up the demand for host-side traffic. I'm seeing lots of virtualised servers connecting to SANs; are they driving storage bandwidth significantly higher? Not yet, most likely because customers haven't yet virtualised their I/O-intensive apps.
8G FC is more than enough for the vast majority of datacentres today (and most likely for the next year or so). 16G FC appears to be the industry reaching ahead of market demand to turn over product.
And who pays your wages, I wonder?
"Do I see end devices (storage arrays and servers) congesting SAN links? At 2G - yes frequently, at 4G - rarely, at 8G not yet seen one."
Since the vast majority of all SAN ports are 8G now due to Brocade's market leadership, you must be looking in the wrong places. And the next paragraph about ISLs is typical Cisco BS and simply leads to far greater management and operational overhead and a pretty clunky delivery of what is streamlined from the market leaders.
Looks like a response from someone who has Cisco-coloured dollars in the back pocket. Just because Cisco are unable to produce innovative or market-leading products without acquisition, that is no good reason to limit progress and development elsewhere. As with Japanese car manufacturers who gave you features you did not know you needed until you got them, it looks like 16G is here to stay, and no amount of naysaying from luddites like yourself will change that.
I am sure Cisco have never reached ahead for the sake of a few dollars, nor used marketing to hide the plain deficiencies in so much of their product sets... ask RIM.