HP is removing the need for a Fibre Channel fabric linking its 3PAR arrays and BladeSystem servers with a quasi direct-attach supplied through its Virtual Connect technology. The technology, announced at HP's Discover event in Las Vegas today, forms part of HP's Converged Infrastructure product set. Virtual Connect Direct- …
P4000 is already VC only
Take a look at the P4800 SAN. It's been around for about a year now, and it's a rather clever piece of engineering. It uses normal blades and some MDS600 disk shelves (probably the most dense drive shelf out there) and operates at 10Gb inside the chassis, and outside if needed (quite useful when you want a self-contained environment).
Of course the advantage is that the P4000 runs iSCSI, so it uses the normal VC Ethernet modules. None of this legacy FC protocol (and all its limitations) here, thank you very much :-).
You heard it here
First step, or maybe fifth step, in replacing the menagerie of DAS, DEC EVA, LeftHand, etc. with 3PAR. They will eventually ditch the XP at the high end as well and get to 3PAR across the board. SKU consolidation, as Whitman would say....
SAN is integrated, not eliminated
I heard the way this works under the sheets is simply to enable a traditional FC switch inside the FlexFabric module that's already there (today it just runs in NPIV mode).
I believe that switch ASIC is made by Qlogic in the current model.
So in essence, you are not eliminating the SAN switch; rather than having a large central pair of SAN switches, you are moving the switch out to the edge. Then you use Virtual Connect's own GUI to manage the zoning, by simply attaching a "Fabric" to the Server Profile like you already do today.
And as far as other storage vendors eventually being supported:
One of the things that makes this possible is the fact that even a moderate 3PAR T400 supports up to 64 host ports (the ports facing the SAN, as opposed to the disk shelves).
Maxing out the FlexFabric module with FC, that would be only 8 connections per enclosure.
Which means you can hang at least 8 enclosures from a single T400.
NetApp and EMC arrays generally have fewer than 16 host ports, which would enable what, maybe 2 enclosures? The EMC VMAX 20K can grow up to 128 host ports but only does 16 ports per 20K engine. So that could work in this design, perhaps, but would also cost an arm and a leg.
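The enclosure counts above fall out of simple division. A quick sketch of that back-of-the-envelope math, using the port figures cited in this thread (the array names and counts are just the commenter's examples, not a definitive compatibility list):

```python
# Assumption from the comment: a FlexFabric module maxed out with FC
# gives 8 FC uplinks per BladeSystem enclosure.
FC_PORTS_PER_ENCLOSURE = 8

# Host-port counts as quoted in the thread.
arrays = {
    "3PAR T400": 64,
    "typical NetApp/EMC midrange": 16,
    "EMC VMAX 20K (maxed out)": 128,
}

for name, host_ports in arrays.items():
    # Each enclosure consumes 8 array host ports in this design.
    enclosures = host_ports // FC_PORTS_PER_ENCLOSURE
    print(f"{name}: {host_ports} host ports -> {enclosures} enclosures")
```

Running this gives 8 enclosures for the T400, 2 for a 16-port midrange box, and 16 for a fully grown VMAX 20K, which is the comparison the comment is making.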
So it's not just vendor lock-in by design; simply comparing the competitors' architectures shows they probably wouldn't work well in this design anyway.
SANs...when did it become a 4 letter word?
Wow! What latency are they talking about? SANs don't introduce latency. They provide the fabric that holds it all together and makes online storage available through storage area networks. Haven't we been down the path of DAS before... talk about history repeating itself.
It should be interesting to see how this bet pays off...
Re: SANs...when did it become a 4 letter word?
For certain disk-I/O-sensitive applications (IMHE, ones that are heavily or exclusively random) we still use direct-attached storage because of latency. It's not always a problem, and for some applications it never is, but in those exceptionally sensitive situations going SAN can mean 2 or 3 servers instead of 1 for what we need it to do. Maybe this will help some, but after years of visits from SAN salesmen and failed attempts to move these problem apps to SAN, I am skeptical. From what I recall, the SAN fabric was pretty fast; it was the SAN itself that was (for us, and our highly sensitive random I/O) slow.