SGI already resells this. It's quite nice, limited scalability tho', but the clone/snap functionality looks very funky. It's a bit of an SVC lite!
HP is introducing a SAN storage virtualisation platform, using LSI software, to combine mid-range HP and third-party drive arrays into a single pool of storage. HP's SAN Virtualisation Services Platform (SVSP) mimics IBM's SAN Volume Controller (SVC) in being a Fibre Channel (FC) fabric-attached controller. It can aggregate HP …
IBM SVC and, by your association, this solution are in-band virtualisation controllers: data passes through the controller as it moves between host and storage. EMC's Invista is an out-of-band solution, as it controls the SAN switch to do the smart stuff, and it supports heterogeneous storage system configurations.
So just what is the definition of in-band and out of band, as you and Chris seem to diverge on this point?
SVC forces the fibre packets to travel from the disk to the host via the fabric switch, then the SVC controller, and then the switch again. Invista doesn't add the extra controller loop into the fabric path as the intelligent fabric switch knows what to do with the packets. Doesn't this definition make SVC out of band and Invista in-band?
Do I really want my fibre packets going down additional lengths of glass, thereby adding additional failure points? Why not let the switch do the work - Director class devices are very powerful these days.
If the Invista controller(s) crash then the switch still has all the information to continue virtualising with no degradation; you just can't create new rules until it is back up. If an SVC controller crashes then you lose 50% of your processing capacity and degrade your paths - also you had better pray that the other SVC controller has its battery fully charged (even if it still has full mains power), because otherwise it won't accept the failover - I've seen it happen :(
Paris, as she doesn't care if it's in or out...
>Do I really want my fibre packets going down additional lengths of glass, thereby adding
>additional failure points? Why not let the switch do the work - Director class devices are very
>powerful these days.
No, they are not... they don't have cache, and don't have any ability to do much of anything other than take a packet from ONE port and put it on another.
The only difference between Invista and SVC is that with SVC all the traffic goes through the external Linux appliances, while with Invista all the traffic goes through the Linux appliance BLADE in the switch.
So... SVC is 16Gb of physical fibre connections to the core switch, and Invista is 25Gb of backplane connections to the core switch... BIG whoop.
So here is the secret about Invista that EMC does not tell you... IT ALL GOES THROUGH THE LINUX BOX ON THE SWITCH BLADE... so sure, it's 'IN the switch', but all your traffic on that entire director now has to go through that single 25Gb backplane connection and back out... all your traffic goes through that one blade now.
It's the same design.
Just that Invista can't scale as well as SVC, because I can have up to 8 SVC appliances for only about 20k a pair, and with Invista I can't do that without adding extra directors with hugely expensive proprietary Linux blades from Cisco/Brocade.
And the fact that Invista is about braindead on features... can't do thin provisioning, can't do snapshots, can't do any async/sync replication, only supports about 2 different storage devices, and has about zero referenceable customers... yeah, it's better than SVC... NOT.
Don't let anyone fool you into thinking that Invista is 'out of band'.
Invista traffic is all going through the Linux box on the director blade... the only difference between Invista and SVC is that the inline Linux appliance in Invista is connected directly to the backplane (25Gbit), instead of via physical switch ports like each SVC (16Gbit).
Example: if a server on blade 1 wants to access storage connected to blade 3, the traffic will go in blade 1, across the backplane, into blade 8 (Invista), back on the backplane, out to request from the storage on blade 3, back in the storage port on blade 3, back on the backplane to Invista on blade 8, back out the backplane to blade 1, and out to the server.
Draw it out... it's the same number of hops; just replace the fibre channel cables in SVC with backplane connections on the Invista blade.
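If you don't fancy drawing it, here's the same round trip as a toy Python hop count. The segment names and blade numbers are just the illustrative ones from the example above, not vendor specs; the point is only that both layouts visit the virtualisation engine twice per I/O.

```python
# Toy hop-count model of the two layouts described above.
# Segment labels are illustrative assumptions, not vendor documentation.

# SVC-style: the appliance hangs off ordinary switch ports, so each
# visit to it is a cable run out of the switch and back in.
svc_path = [
    "server -> switch",          # host HBA into the fabric
    "switch -> SVC appliance",   # out a switch port to the external box
    "SVC appliance -> switch",   # back in on another port
    "switch -> storage",         # on to the back-end array
    "storage -> switch",         # response comes back
    "switch -> SVC appliance",   # through the appliance again
    "SVC appliance -> switch",
    "switch -> server",          # and out to the host
]

# Invista-style: the Linux blade sits in the director, so each visit
# is a backplane crossing instead of a cable run.
invista_path = [
    "server -> blade 1",             # host port on blade 1
    "backplane -> Invista blade 8",  # across to the virtualisation blade
    "Invista blade 8 -> backplane",
    "backplane -> storage blade 3",  # out to the array on blade 3
    "storage blade 3 -> backplane",  # response comes back
    "backplane -> Invista blade 8",  # through the blade again
    "Invista blade 8 -> backplane",
    "backplane -> server blade 1",   # and out to the host
]

print(len(svc_path), len(invista_path))
assert len(svc_path) == len(invista_path)
```

Same length either way; the only thing that changes between the two lists is whether the middle hops are glass or backplane traces.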
And with SVC I can scale to 8 using cheap Intel servers; I can't scale any more Invista nodes without having additional DIRECTORS and expensive proprietary Intel blades for them (OUCH $$$).
Biting the hand that feeds IT © 1998–2019