* Posts by Lee McEvoy

2 posts • joined 20 Oct 2011

How to get the best from your IOPS

Lee McEvoy
Facepalm

Agree with the comment above.

"stick with what you already use" sounds a bit dogmatic to me.

If you're consolidating a decent number of servers (some with big application workloads) onto a small number of virtualised boxes, sticking with iSCSI might not be the best bet - making the decision based on your circumstances and requirements tends to work out better!

I also loved the comment about the overhead on FC - does anyone drive 10GbE at "rated speed" in the real world? Overhead / inefficiency on Ethernet is a bigger issue than on FC!

NextIO punts I/O virtualizing Maestro

Lee McEvoy

apples vs apples

To start off - I don't work with NextIO or sell their stuff...

Do you think that NextIO may have used 1U servers with a little more "oomph" than the blades you configured with a single dual-core processor and 2GB of memory?

Where we've been involved in building hosting infrastructure (including one build that used NextIO), we've used multicore processors (hexa-core at a minimum, sometimes 12-core) in multiple sockets (sometimes quad-socket) with tons of memory - VM density is normally limited by how much memory you put in.
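
To illustrate the memory point with some made-up but typical numbers (mine, not from the article), here's a rough Python sketch of why RAM, rather than cores, usually caps VM density:

    # Assumed host and VM sizes for illustration only
    host_cores = 4 * 12          # quad-socket, 12-core CPUs
    host_ram_gb = 256            # memory fitted to the host
    vm_ram_gb = 8                # average memory per VM
    vm_vcpus = 2                 # average vCPUs per VM, allowing 4:1 overcommit

    ram_limit = host_ram_gb // vm_ram_gb        # 32 VMs before memory runs out
    cpu_limit = (host_cores * 4) // vm_vcpus    # 96 VMs before vCPU overcommit bites
    print(min(ram_limit, cpu_limit))            # memory is the binding constraint: 32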

In NextIO's example they had approximately $200K on the "grunt" server hardware itself (i.e. excluding switches, blade chassis and so on), based on this part of the article:

"the cost of the servers and the Maestro PCI-Express switch together, it costs $303,062, with about a third coming from the Maestro switches and 60 PCI cables."

The "non-compute" blade infrastructure according to the basket you produced had a cost of ~$130K, so I'd be comparing that against the $101K for NextIO - is it enough of a saving? It might be for some people and it is certainly lower cost and doesn't have vendor lock in that blades do.......
