* Posts by Casper42

6 posts • joined 25 Feb 2012

HPE server firmware update permanently bricks network adapters


Once again El Reg likes to post incomplete information.

For all you people blaming a lack of testing: the combination that bricks the NIC is a brand-new driver paired with firmware that is 2+ years old.

If you follow the DOCUMENTED Recipe for Drivers and Firmware, you'd be fine.

The image and SPP were pulled to prevent customers who don't RTDM from hurting themselves.


Re: HP are getting good at this

HP != HPE, and even if they were the same company, the idea that the same people would be working on both is comical.


HP Ink shrinks workstations to puckish form factor


"HP Ink" - is that a Freudian slip?


HP goes off VMware's EVO:RAIL, stops selling sole appliance


Why bother with Nutanix? You can still get the HC200 with StoreVirtual for WAY less than the EVO:RAIL config, and with or without SSDs depending on your needs.

Besides, this time next year, Nutanix might be nothing more than a software company, and probably owned by Cisco :shudder:


HP freezes out SAN fabric


SAN is integrated, not eliminated

I heard the way this works under the sheets is to simply enable a traditional FC Switch inside the FlexFabric module that's already there but simply running in NPIV mode.

I believe the switch ASIC in the current model is made by QLogic.

So in essence, you are not eliminating the SAN switch; rather than having a large central pair of SAN switches, you are moving the switch out to the edge. Then you use Virtual Connect's own GUI to manage the zoning by simply attaching a "Fabric" to the server profile, like you already do today.

And as far as other storage vendors eventually being supported:

One of the things that makes this possible is the fact that even a moderate 3PAR T400 supports up to 64 host ports (the ports facing the SAN, as opposed to the disk shelves).

Maxing out the FlexFabric module with FC, that's only 8 connections per enclosure.

Which means you can hang a minimum of 8 enclosures off a single T400.

NetApp and EMC arrays generally have fewer than 16 host ports, which would enable what, maybe 2 enclosures? The EMC VMAX 20K can grow to 128 host ports but only offers 16 per 20K engine. So that could perhaps work in this design, but it would also cost an arm and a leg.

So it's not just vendor lock-in by design; simply comparing the competitors' architectures shows they probably wouldn't work well in this design.
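The back-of-the-envelope port math above can be sketched as follows. The figures are the ones quoted in this comment (64 host ports on a 3PAR T400, ~16 on typical NetApp/EMC arrays, 128 on a fully grown VMAX 20K, and 8 FC uplinks per enclosure with the FlexFabric module maxed out); treat them as assumptions of the sketch, not vendor specs.

```python
# Rough sketch: how many blade enclosures each array could feed,
# given its SAN-facing host-port count and 8 FC uplinks per enclosure.
ENCLOSURE_FC_UPLINKS = 8  # FlexFabric module maxed out with FC

arrays = {
    "3PAR T400": 64,            # host ports (SAN-facing, not disk shelves)
    "Typical NetApp/EMC": 16,   # assumed upper bound from the comment
    "EMC VMAX 20K (full)": 128, # 16 per 20K engine, up to 8 engines
}

for name, host_ports in arrays.items():
    enclosures = host_ports // ENCLOSURE_FC_UPLINKS
    print(f"{name}: {host_ports} host ports -> up to {enclosures} enclosures")
```

Which reproduces the comparison: 8 enclosures on the T400, about 2 on a 16-port array, 16 on a maxed-out VMAX 20K.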


Cisco's 3-ring circus: Xsigo CEO on bait and switches



Full disclosure, I work for HP and work with Blades and Virtual Connect every day.

I am curious: how is HP VC not considered "open" but Xsigo is?

Xsigo is a box that sits between the Server and the Network/SAN

VC is the same but happens to fit in the back of an HP Chassis.

Xsigo can connect to any upstream Network equipment

VC can as well.

Xsigo can connect to any upstream SAN environment

VC can too, as long as the upstream SAN supports NPIV.

So is it not open because it only works with HP's blades?

If HP were to partner with Xsigo, how would that change anything either company does today?

HP already offers IB adapters and switches for all their rack and blade servers; nothing is preventing someone from using those with a Xsigo today, is there?

I fail to see how this partnership would benefit anyone but Xsigo.

Not to mention it's yet another thing I have to manage.

Something Cisco's UCS platform commonly uses as an attack point on the competition.



Biting the hand that feeds IT © 1998–2017