* Posts by Lee McEvoy

3 posts • joined 20 Oct 2011

Fibre Channel over Ethernet is dead. Woah, contain yourselves

Lee McEvoy

FCoE was going to take over the world....

....it hasn't, and I've pretty much only seen it used as described by Enrico to reduce spend in chassis (not very often as ToR).

That's not how it was being positioned - it was going to be the protocol that took over the FC world. It hasn't even taken over all the networking in chassis-based compute (where there is a potential case for using it).

Saying that people have bought it and therefore it can't be dead misses the point - a lot of people bought Betamax (and some even bought HD-DVD). Did that stop those formats from being technical cul-de-sacs with no future?


How to get the best from your IOPS

Lee McEvoy
Facepalm

Agree with the above comment.

"stick with what you already use" sounds a bit dogmatic to me.

If you're consolidating a decent number of servers (some with big application workloads) onto a small number of virtualised boxes, sticking with iSCSI might not be the best bet - making the decision based on circumstances and requirements usually is!

I also loved the comment about the overhead on FC - does anyone drive 10GbE at "rated speed" in the real world? Overhead / inefficiency on Ethernet is a bigger issue than on FC!


NextIO punts I/O virtualizing Maestro

Lee McEvoy

apples vs apples

To start off - I don't work with NextIO or sell their stuff...

Do you think that NextIO may have used 1U servers with a little more "oomph" than the single dual-core processor with 2GB memory blades that you configured?

Where we've been involved in building infrastructure for hosting (including one build that used NextIO), we've been using multicore processors (hexacore minimum, sometimes 12-core) across multiple sockets (sometimes quad) with tons of memory; VM density is normally limited by the memory you put in.

In NextIO's example they had approx $200K on the "grunt" server hardware itself (i.e. excluding switches, blade chassis, etc, etc) based on this part of the article:

"the cost of the servers and the Maestro PCI-Express switch together, it costs $303,062, with about a third coming from the Maestro switches and 60 PCI cables."

The "non-compute" blade infrastructure in the basket you produced had a cost of ~$130K, so I'd be comparing that against the ~$101K for NextIO - is it enough of a saving? It might be for some people, and it is certainly lower cost and doesn't have the vendor lock-in that blades do.
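The arithmetic behind that comparison can be sketched out as below. This is a rough illustration only: it takes the article's "about a third" at face value, and the ~$130K "non-compute" blade figure comes from the comparison basket mentioned above.

```python
# Back-of-envelope version of the NextIO cost comparison.
# All figures are approximate, per the article and the comment above.
total = 303_062             # NextIO servers + Maestro switches + 60 PCI cables
maestro = total / 3         # "about a third" attributed to switches and cables
servers = total - maestro   # remaining "grunt" server hardware (~$200K)

blade_noncompute = 130_000  # non-compute blade infrastructure from the basket
saving = blade_noncompute - maestro  # saving from using NextIO's I/O layer

print(round(maestro))   # ~101021 - the ~$101K compared against blades
print(round(servers))   # ~202041 - the ~$200K of server hardware
print(round(saving))    # ~28979 - roughly a $29K saving
```

So on these numbers the NextIO approach saves roughly $29K on the I/O infrastructure, which is the "is it enough of a saving?" question.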

