NextIO punts I/O virtualizing Maestro

NextIO, a maker of server I/O virtualization switches based on PCI-Express technologies, has announced the third and probably the most significant of its products. It's called vNET I/O Maestro and is being peddled as a server I/O virtualization appliance that can take the place of Ethernet and Fibre Channel switches at the top …

COMMENTS

  1. Richard C
    Meh

    It's a blade server backplane

    It just happens to be outside the box (literally). I'm all in favour of breaking down blade servers to commodity parts, but a comparison of the costs would be fairer if it was against a blade chassis. I guess that the NextIO kit would be cheaper, but the gap might be smaller.

  2. This post has been deleted by its author

    1. Lee McEvoy

      apples vs apples

      To start off - I don't work with NextIO or sell their stuff.....

      Do you think that NextIO may have used 1U servers with a little more "oomph" than the single dual-core processor with 2GB memory blades that you configured?

      Where we've been involved in building infrastructure for hosting (including one that used NextIO), we've been using multicore processors (minimum hexacore, sometimes 12 core) with multiple sockets in use (sometimes quad) with tons of memory - VM density is normally limited by the memory you put in.
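
      As a crude illustration of that memory-bound density point, here is a quick Python sketch - the host RAM and per-VM sizes below are made-up numbers for illustration, not figures from this thread:

          # Hypothetical host: plenty of cores, so RAM becomes the ceiling.
          host_ram_gb = 256        # assumed RAM per host (illustrative, not from the thread)
          ram_per_vm_gb = 4        # assumed average VM footprint (also illustrative)
          vms_per_host = host_ram_gb // ram_per_vm_gb
          print(vms_per_host)      # 64 - you hit the memory limit well before the CPU one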

      In NextIO's example they had approx $200K on the "grunt" server hardware itself (i.e. excluding switches, blade chassis, etc, etc) based on this part of the article:

      "the cost of the servers and the Maestro PCI-Express switch together, it costs $303,062, with about a third coming from the Maestro switches and 60 PCI cables."

      The "non-compute" blade infrastructure according to the basket you produced had a cost of ~$130K, so I'd be comparing that against the $101K for NextIO - is it enough of a saving? It might be for some people and it is certainly lower cost and doesn't have vendor lock in that blades do.......

    2. NextIO

      Response from NextIO

      Thanks for your posts on this subject. The 1U server we used in the comparison was an HP DL360 G7 (model # 579239-001), with a list price of $6,539. The comparable C-Class blade is an HP BL490c G7. When you add in a 10GbE and a dual-port FC mezz card, the list price for this blade from HP's website is $7,138 each, for a total of $214,140. You would also need two C-Class chassis (each with two FC and two 10GbE switch blades), at roughly $65K each. The total for the similarly-configured blade system would be $344K without any cables, management licenses, etc. Our vNET configuration is about 10% lower than this.

      In addition to the "one throat to choke" vs "best of breed" discussion (and the vendor lock-in that you have with bladed systems), you also need to consider the impact of technology lock-in. While vNET can accommodate 40Gb Ethernet when it comes out (without downing the servers), it is not clear that the current HP C-Class can do the same, or whether a chassis swap-out would be required (which would require system downtime).

      Please let us know if you have any other questions (send to vNET@nextio.com). Thanks!
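
      For reference, NextIO's arithmetic above works out as follows - all list prices are the ones quoted in their reply, and the vNET total is the one from the article:

          # NextIO's blade comparison, using the list prices from their post.
          blade_each = 7_138          # BL490c G7 + 10GbE + dual-port FC mezz card
          chassis_each = 65_000       # C-Class chassis with 2x FC and 2x 10GbE switch blades
          servers = 30

          blade_total = servers * blade_each + 2 * chassis_each   # 344,140 before cables/licenses
          vnet_total = 303_062                                    # from the article
          print(f"{1 - vnet_total / blade_total:.0%}")            # ~12%, i.e. the "about 10%" quoted above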

      1. Richard C
        Happy

        Thanks!

        That clears it up nicely. Given the 10 minutes I spent configuring this system (of which 9 were spent finding the blade systems on the HP website - it seems to think I want a cheap inkjet printer), I'm not surprised that I left some major holes. I've withdrawn my original post to prevent any confusion.

  3. kain preacher

    El Reg

    This has to be the only place where I've seen a site do a review, commentards ask questions about whether the product can really do what the company claims, and then the company being reviewed answer back.

  4. Nate Amsden

    run the test again

    a 1U pizza box does not need 4 fibre ports, no way in hell. Maybe if you're talking quad-socket systems hooked to a massive storage array, but when a single HBA can push 100k+ IOPS and several gigabits/second of throughput you'd better have a fast storage system on the other end, especially if you're running 30 hosts.

    On HP's most recent world-record SPC-1 test they used a total of 8 BL460c G6 servers to drive 450,000 IOPS against a $2 million storage system (after discount). This is with 2 HBAs per server (4 ports each). HP believes this 3PAR storage system can support at least 50,000 VMs (if they were VMs like mine it could probably handle at least 150,000, given the IOPS/VM I see as typical).
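
    A quick sanity check of those SPC-1 numbers (the IOPS and VM counts are the ones quoted above; the per-VM figures are just the implied averages):

        # Implied per-host and per-VM load from the quoted SPC-1 result.
        total_iops = 450_000
        hosts = 8                           # BL460c G6 servers driving the array
        print(total_iops / hosts)           # 56,250 IOPS per host - well under the 100k+ a single HBA can push

        print(total_iops / 50_000)          # ~9 IOPS/VM at HP's 50,000-VM estimate
        print(total_iops / 150_000)         # 3 IOPS/VM at the 150,000-VM figure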

    Also, using 1GbE is no good. It will drive the cost up a bit, I'm sure, but at least use 10GbE - if you're going to build a new VM system, nobody in their right mind should be using 8x1GbE connections.

    In the grand scheme of things the connectivity is going to be a small part of the overall cost. I don't know why anyone would buy a $6500 server and slap a few extra NICs and FC HBAs into it for vSphere. Get a bigger box. Typical pricing would probably start at $20k (with a good box running $40k+), with vSphere licensing being at least $8k/box on top of that.

    If you're going to be cheap, you're probably going to buy that $6500 box, use the onboard NICs, hook them into some switches, and use iSCSI or NFS for storage.

    Me, my latest vSphere environment is a bunch of DL385 G7s with 192GB RAM each, boot from SAN, a small 3PAR storage array, 4x10GbE per server (2 jumbo / 2 standard - I use 10gig for simplicity), and 2x4Gb FC.

  5. bigphil9009

    Am I confused?

    The article states the hypothetical customer wants "to virtualise 30 DL360s". Does this mean that they want to take 30 existing servers and virtualise them? Or are they going to create a VMware server farm with 30 hosts? If it's the former then surely all the subsequent discussion is wrong?

This topic is closed for new posts.