Review: Supermicro FatTwin

My testlab has a new arrival: a Supermicro FatTwin™ F617R2-F73. As always when something lands in my lab, I will valiantly kick the crap out of it on behalf of El Reg's discerning readership. There are already a few different systems in my testlab - let's see how this thing stacks up. I'd like to kick off this review by …

COMMENTS

  1. Justin Stringfellow
    Mushroom

    what luck

    They've been advertising this kit on your website for months, and now they've got a favourable review!

    What were the odds of that, then?

    1. Trevor_Pott Gold badge

      Re: what luck

      Pretty orthogonal, actually. I had to work for six months to get a unit to review. It was worth it. Great bit of gear. If you have some ideas as to tests you'd like me to run, please, let me know! I'll run any tests that I reasonably can. :)

  2. Ian Michael Gumby
    Boffin

    Not too bad on the cost..

    Eight units fully 'kitted' is ~40K, or ~5K per node.

    That's not bad.

    Even their blade systems don't look too bad, either.
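
For readers who want to sanity-check the per-node figure above, here is a trivial sketch of the arithmetic. The ~40K fully-kitted price and the eight-node count come from the comment; the currency is unspecified there, so it is left unspecified here too.

```python
# Rough per-node cost check for an eight-node chassis, using the
# approximate figure quoted in the comment above (currency unspecified).
chassis_price = 40_000      # ~40K fully 'kitted', per the comment
nodes_per_chassis = 8       # eight nodes per FatTwin chassis

cost_per_node = chassis_price / nodes_per_chassis
print(f"Approximate cost per node: {cost_per_node:,.0f}")   # ~5,000
```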

  3. Spoonsinger

    re Supermicro FatTwin

    Oh! It's all muscle. (And, err, fat.)

  4. M. B.

    It's not the hardware itself...

    ...but rather the support, services, and top-tier warranty. The VAR network. The two-hour onsite response. As well as the R&D/testing and reference designs that go along with purchasing IBM, Dell, HP, Cisco, or Oracle servers. Yes, you could duplicate them on Supermicro hardware for much less money, but who is going to stand behind the end result?

    My environment is a mix of medium business and small enterprise requirements, and my infrastructure is a mesh of HCL'd hardware and software at correct revisions, a lot of it straight out of vendor whitepapers. There is simply no room for this white-box-level stuff until it has the same level of R&D and support as the big players.

    Unlike some organizations, we can't simply buy the cheapest thing out there, throw any old version of Linux on it, stick it in a rack, and call it done. I suspect that is the way it is for many organizations out there.

    In other words, cool, but not for me.

    1. Trevor_Pott Gold badge

      Re: It's not the hardware itself...

      It has the same level of R&D as the big players. And they HCL. And they certify. And... pretty much everything. Supermicro isn't exactly "just a whitebox vendor" anymore. Yes, they do sell units on a whitebox basis, but they also have excellent support options, especially if you buy big enough to be doing whole datacenters through them. Might be time you talked with them about the options, rather than relying on assumptions that - it seems - are years out of date.

  5. Prof Denzil Dexter

    How do these compare against, say, the Dell C6220, which I guess would be the Dell equivalent?

    1. Trevor_Pott Gold badge

      Funny you should ask that. The reason this took so long to come together was that Dell was originally supposed to ship me a C6220 to test. We were going to do a head-to-head: showcase each unit in its own article and then really tear into each of them with an array of tests. Dell backed out at the last minute, and so I was down to testing the Supermicro against the rest of my lab.

      Kind of sucks; Dell's switch was quite a nice piece of gear. Supermicro and Dell went pretty head-to-head on that; hard to say one was a clear winner. I would have been interested to see Dell's C6220 in action, especially when it came to the resilience of the power plane and its thermal responsiveness. So I sadly cannot answer you regarding the C6220. It looks nice on paper, but we all know how misleading that can be.

      What I can say is that Supermicro's stuff has come a long way in the past 10 years. More critically, they seem to be putting a lot more time and effort into making their units able to withstand high temperatures (so that you can run your datacenter hotter, thus saving rather a lot of money) and into completely over-engineered power systems. Not only are the power planes resilient, but Supermicro makes their own PSUs, and they are crazy efficient.

      If and when I get equipment from other vendors, you know I'll run it through the wringer. From server stacks like the Fat Twin to the humble USB stick: I've got a test lab; let's break this stuff!

  6. Anonymous Coward
    Anonymous Coward

    Not worth the power they use?

    Care to share the basic maths, Trev? I can't see how this computes at all.

    1. Trevor_Pott Gold badge

      Re: Not worth the power they use?

      Fairly simple: I may own the hardware for these other servers outright, but they are expensive to power. The FLOPS/watt on them versus the Fat Twin units means that, were I to go out and buy a Fat Twin to replace the three racks of older gear I have, the faster and more capable Fat Twin would pay for itself in less than six months simply out of the power savings.

      To me, that means the older systems aren't worth the power they use.
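
To make the payback reasoning above concrete, here is a minimal sketch of the arithmetic, assuming hypothetical figures throughout: the purchase cost, electricity price, and power draws below are placeholders, not numbers from the review, so swap in your own measurements.

```python
# Minimal sketch of the "pays for itself out of power savings" arithmetic.
# All figures used in the example are hypothetical placeholders.

HOURS_PER_MONTH = 24 * 30


def payback_months(purchase_cost, old_watts, new_watts, price_per_kwh):
    """Months until the electricity saved covers the purchase cost."""
    saved_kw = (old_watts - new_watts) / 1000.0
    monthly_saving = saved_kw * HOURS_PER_MONTH * price_per_kwh
    return purchase_cost / monthly_saving


def required_saving_kw(purchase_cost, target_months, price_per_kwh):
    """Continuous power saving (kW) needed to hit a given payback target."""
    return purchase_cost / (target_months * HOURS_PER_MONTH * price_per_kwh)


# Hypothetical example: how much continuous power would a 40K purchase
# need to save to pay for itself in six months at 0.10 per kWh?
print(f"{required_saving_kw(40_000, 6, 0.10):.1f} kW of continuous savings needed")
```

The second function just shows how the payback target translates into a power figure: at those placeholder rates a 40K purchase needs roughly 92.6 kW of continuous savings to clear six months, and the real answer depends entirely on the gear being retired and the local electricity price.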

  7. G Olson

    Virtualization?

    Trevor, did VMware ESXi install? OpenStack? Eucalyptus?

    This looks like a good medium-scale virtualization platform; you might want to include that in your reviews as a standard test.

    1. Trevor_Pott Gold badge

      Re: Virtualization?

      Haven't tried Eucalyptus. VMware ESXi 5.1 works like a hot damn. OpenStack too. Server 2012 works as well.

  8. Prof Denzil Dexter

    Well, given you never got to play with the Dell kit, here's my view.

    I'm not sure what the C6220 is about. It is definitely better than the C6100 it replaced. The C6100s were a bitch to configure (10+ minutes to boot into DRAC config mode per node adds up to hours when you have six chassis to rig up), and I found they had poor MTBF as well, with multiple motherboard swap-outs in particular.

    These (well, the predecessor C6100) were designed by Dell at the behest of a rich client who wanted custom gear, the point being to squeeze as many nodes into a defined space as possible. I've never seen that they actually manage it. You can get 16 blades into an M1000e enclosure, or 10U of space. I rack three chassis in a single 42U rack, leaving 12U for airflow and patch panels. That's a total of 48 servers in a single rack. Cooling is never, ever an issue on these; they have plenty of on-board fans, and even at the top of the rack they don't surpass about 30 degrees.

    The C6220s, however, run much hotter. I had 10 of these in a dev rack recently (i.e. a total of 40 nodes), and the heat they were kicking out at the back of the rack was obscene. Alarms all over the shop, and anything above halfway up the rack was particularly bad. The only way we could get them running consistently was to place the kit in the first rack by the ACU, so it gets more air than most racks.

    Bear in mind as well that the blade chassis includes (in my case) four Ethernet switches and two SAN switches. Adding the equivalent switching in the rack is at least another 6-8U. Oh, and I forgot to add that there are no SFP or SAN ports on the C6220s, so they're out of the question too.

    Obviously I can't compare on the Fat kit, but for our use, four nodes in a 2U space just doesn't seem to do the job unless you have loads of cooling but not loads of space.

    1. Trevor_Pott Gold badge

      Wish I could compare. The Fat Twin emphatically does not kick out a lot of heat. It is the most power-efficient gear I have ever used. I could see four racks of them being a problem whereas four racks of 5U servers is not, but then I would be running 320 2P servers instead of 32 2P servers in the same space. Mind you, living in Canada, that is probably only an issue two months out of the year...
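
For anyone who wants to redo the rack-density arithmetic from the thread above, here is a small sketch. The blade figures (16 nodes per 10U M1000e, 12U kept free for airflow and patch panels, and an extra 6-8U of switching for the non-blade options) come from the comment; the 2U four-node and 4U eight-node rows describe the C6220 and FatTwin form factors, and the exact switching allowance is a placeholder at the top of the quoted range.

```python
# Rack-density sketch for the form factors discussed above. Blade figures
# and the reserved/switching allowances are taken from the comment; the
# rest is straightforward arithmetic.

RACK_U = 42
AIRFLOW_PATCH_U = 12      # reserved in every rack, per the comment
EXTRA_SWITCHING_U = 8     # blades include switching; rack servers need their own

form_factors = {
    # name: (chassis height in U, nodes per chassis, needs external switching)
    "M1000e blade chassis":          (10, 16, False),
    "2U four-node (C6220-style)":    (2,  4,  True),
    "4U eight-node (FatTwin-style)": (4,  8,  True),
}

for name, (height_u, nodes, needs_switching) in form_factors.items():
    usable_u = RACK_U - AIRFLOW_PATCH_U
    if needs_switching:
        usable_u -= EXTRA_SWITCHING_U
    chassis = usable_u // height_u
    print(f"{name}: {chassis} chassis, {chassis * nodes} nodes per rack")
```

With those allowances the blade rack comes out at 48 nodes against 44 and 40 for the 2U and 4U multi-node boxes; drop the external-switching penalty and the multi-node chassis pull ahead instead, which is exactly where the cooling question raised in the thread starts to decide things.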

This topic is closed for new posts.
