What is HP going to do with the Enterprise Virtual Array? The El Reg strategy department reckons a speed and feed increase is coming but no more than that, because HP doesn't own the clustering EVA functionality, and the newly-acquired 3PAR product has the virtualised data centre/cloud storage on-ramps which EVA lacks. Why do …
Where is the EVA going?
Down the toilet, that's where it's going! Crappy firmware updates, at least three of which were posted and pulled on the same day due to issues. Crappy and expensive licensing. Crappy monitoring and reporting. Worst of all is the crap support: it takes ages to get through, and when you do you're speaking to a script monkey, and it's almost impossible to get your case escalated to someone who knows what they are talking about. Nearly everything about the 4400 range has been rubbish... NetApp here we come...
LSI connection disconnected
HP has started to inform customers that it will no longer be selling SVSP, as all that functionality will be provided via other products following its recent acquisition. HP will continue to support customers who have purchased SVSP, but the last thing I heard was that the final sales were done in December 2010.
It makes sense for them as a company, but you have to wonder how the people who purchased SVSP feel about it.
Adding 2TB disk support???
Maybe they should fix the problems with the 1TB disks first.
Are the 3PAR InServ F-Class and T-Class the right substitute for the EVA?
It's a well-known secret that the current 3PAR F- and T-Class controllers have only three PCI-X buses, 64 bits wide at 133 MHz. They cannot support 8 Gbps / 16 Gbps Fibre Channel, 10 Gbps iSCSI or 10 Gbps FCoE, because of insufficient bandwidth and the unavailability of high-bandwidth PCI-X-based HBAs, NICs and CNAs. This is closely tied to the 3rd-generation 3PAR ASIC, which is PCI-X-based.
The interesting question is therefore what the 3PAR roadmap says about the announcement of a PCI Express-based ASIC controller family with increased bandwidth.
The T-Class was announced on September 2nd, 2008, so after the usual 3-year product cycle we should expect a new announcement in the autumn of 2011. But there are real doubts whether 3PAR can hit this target.
The lead of the PCI Express ASIC design group, Richard Trauben, left 3PAR in September 2009; he is now Principal Architect at Huawei USA, see the following link:
So we suspect design trouble with the PCI Express ASIC. This could require a time-consuming ASIC redesign, or a switch to the new Intel "Jasper Forest" processors, whose storage-related functions would make the ASIC obsolete. Either way, the 3PAR InServ storage servers are no near-term substitute for the EVA models, and it is realistic to expect one or two more EVA refreshes before the death of this famous but aged storage system.
Talk to someone who knows?
Maybe all the ACs replying should talk to someone at 3PAR...
- Why bolt 10Gb FCoE/iSCSI onto an existing array? A Nexus switch will do exactly the same thing, and they only have 4Gb FC links.
- The EVA can't have these upgrades either.
- We have never, ever overloaded 4Gb FC links, and we have a few thousand hosts connected to 3PARs, so what would you want 8Gb or 16Gb for?
- The ASIC runs on a separate bus, not PCI-X.
- The new systems are on target for the date you mentioned, but are undergoing expanded compatibility tests, as they are aimed at replacing the EVAs across the board.
3PAR 3rd Generation ASIC
The ASIC in the 3PAR InServ T- and F-Class controllers is connected to the three PCI-X buses, each 8 bytes wide at 133 MHz. Besides parity computation and controller-internal data transfers, the ASIC is responsible for external command and data transfers.
The ASIC provides seven PCI-X buses, 8 bytes wide at 128 MHz, for connections to the other controllers. If n is the number of InServ controller nodes, the number of necessary inter-node connections is n*(n-1)/2; for the 3PAR T800 the formula gives 8 * 7 / 2 = 28 connections. This architecture closely resembles the EMC Direct Matrix Architecture (DMX).
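The full-mesh link count above follows directly from the formula; a minimal sketch, with the 3PAR T800's 8-node maximum assumed from the text:

```python
def mesh_links(n):
    """Point-to-point links needed to fully mesh n controller nodes:
    each of the n nodes connects to the other n-1, and each link is
    shared by two nodes, hence n*(n-1)/2."""
    return n * (n - 1) // 2

# 3PAR T800 with its maximum of 8 controller nodes:
print(mesh_links(8))  # → 28
```

The quadratic growth of this count is the usual argument against scaling a direct-matrix design much further: doubling the node count roughly quadruples the wiring.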