Why blades need enterprise management software: Learn from Trev's hardcore lab tests

The value of enterprise management associated with modern blades has been made apparent to me. At the same time, I understand the value that "unblade" systems, such as the Supermicro Twin series or Open Compute systems, can bring. Cost, and what you plan to do with the things, are, as always, the determining factors, but there are no …

  1. cjcox

    Tcp offloading

    If you are using hypervisors, make sure you are not double-dipping on TCP offloading. So check your GSO and TSO values (ethtool -k <if>) and turn those off inside the VMs. I think you might then be happier with your network performance. It's just a guess, but it's often missed and will cause huge problems with 10GbE.
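
    A minimal sketch of the check being described, assuming a Linux guest; "eth0" is a placeholder for whatever interface name the VM actually has:

    ```shell
    # Inside each VM (not the hypervisor), inspect the offload settings.
    # "eth0" is a placeholder interface name.
    ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

    # If they are on, turn them off in the guest so segmentation offload
    # is only done once, by the host, not twice:
    ethtool -K eth0 tso off gso off
    ```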

    1. Trevor_Pott Gold badge

      Re: Tcp offloading

      Oh, been up and down that one. No effect. Same with "are you using vmxnet3 virtual NICs" and every other standard item. It's not the offloading. It's not chimney. It's not anything obvious. It's the damned drivers.

      If ever anyone wanted to know what elements of "vendor fingerpointing instead of actually working to solve the problem" drive me mad, Intel/VMware over this issue makes Trevor something something...



  2.

    Enterprisey?

    Regarding the network issue - have you tried using any proper switch (i.e. one labeled HP or Cisco, *not* Netgear, D-Link or Supermicro)? To me it sounds like bugs in the switch queuing and flow control are conspiring with the NIC driver.

    Also, you get similar issues *with* enterprise blade servers: a customer here had a major breakdown the other month. The root cause? Bugs in the IBM blade servers triggered a MAC address move for the FCoE adapters; the top-of-rack switches got confused; and the SAN went down.

    1. Trevor_Pott Gold badge

      Re: Enterprisey?

      I have, in fact. Same issue. Also tried out my Dell, and about 15 minutes ago I verified that it exhibits the exact same behavior with a Juniper.

  3. jamesb2147

    Mild disagreement

    Certainly everything has its place, but, believe me, blades will bring their own issues. They're just like everything else... everything is unique.

    Just wait until you hit that $VENDOR bug where all your $COMPONENT reset all AT THE SAME TIME! :D

    It's happened before.

    I'm totally sympathetic to your plight, though, and have run into my own share of life-sucking problems. In my case, it's usually a bug in our vendor's software (Cisco/Meraki), and they won't even let us see the logs. For us, the worst crime is when the bug COMES BACK. We've twice (since summer 2012) had regressions with firmware updates MONTHS after an issue we surfaced was patched.

    1. Trevor_Pott Gold badge

      Re: Mild disagreement

      Can't disagree. Vendor bugs happen in all hardware and software. But there's something quite useful about the ability to "push button, receive known good configuration". I don't see why that can't be built into, for example, some form of IPMI-based enterprise management software; it just hasn't been, to date.
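
      A hedged sketch of what "push button, receive known good configuration" could look like with nothing but stock ipmitool; the host names, the admin user, and the PXE-then-reset choice are illustrative assumptions, not an existing product:

      ```shell
      # Hypothetical push-button reprovision over plain IPMI.
      # HOSTS, "admin", and $IPMI_PASS are placeholders for this sketch.
      HOSTS="node01 node02 node03"
      for h in $HOSTS; do
        # Sanity-check the node is reachable and powered on
        ipmitool -I lanplus -H "$h" -U admin -P "$IPMI_PASS" chassis status

        # Point the next boot at PXE so every node pulls the same
        # known-good image, then bounce it
        ipmitool -I lanplus -H "$h" -U admin -P "$IPMI_PASS" chassis bootdev pxe
        ipmitool -I lanplus -H "$h" -U admin -P "$IPMI_PASS" power reset
      done
      ```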

  4. Phil O'Sophical Silver badge

    Blades: future, past, both?

    Funny, but 10 years ago I'd have written similar sentiments. We had several labs crammed full of blades, cPCI, ATCA. They were the future. We were developing software for them, porting OSes, etc.

    There isn't a single blade left in any of our labs now. Everything's flipped to rack upon rack of 2U and 4U servers and floor-to-ceiling disk drives, all virtualized to hell & back. Want a host? Just click a few buttons to select memory, CPUs & OS, and bingo, here's your VM. Where is it hosted? I've no idea, somewhere in that "cloud", or "farm", or "forest", or...

    A decade ago half the industry seemed intent on building clusters of blades to make big systems, while the other half seemed to be virtualizing big systems to create small VMs. Always seemed weird to me; surely they could make up their minds? Now (outside of the telco world) blades seem to have come, and gone. Maybe they're coming back?

    1. P. Lee Silver badge

      Re: Blades: future, past, both?

      > Everything's flipped to rack upon rack of 2U and 4U servers

      I'd hazard a guess that blade vendors got greedy when they saw "enterprise", whereas 2U boxes are built for the low end.

      Surely the manageability is in the homogeneity. Blades enforce it because you can't drop in a competing vendor's blades, and everyone expects that. 2U servers are physically interchangeable, but it makes your management more complex if you don't strictly control the hardware.

      My favourite enterprise kit is still the network load-balancer. The Crossbeam chassis is still one of the most elegant designs I've seen. F5 is also very cool, if slightly less elegant.

      Now, if Netgear could punt a switch with ARM/MIPS doing clever network frontend stuff...

  5. Alistair Silver badge

    enterprise management tools

    These keep changing their garments. And sometimes those garments are fairly transparent.

    I've been using cfengine for low-level stuff for a while - collecting and tracking is everything - including tracking "hiccups" in the datacenter. Since CMDBs are getting more and more detailed, I can get the info in there and provide the documentation that "this iLO firmware and this NIC firmware and this OS revision" fall down and go boom when we do "this". But the vendor fingerpointing... this never, never, never changes.
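
    A rough sketch of the kind of collector that feeds that CMDB documentation, assuming standard Linux tools (uname, dmidecode, ethtool); the flat-file output format is made up for illustration:

    ```shell
    # Gather the OS revision, BIOS firmware version, and per-NIC
    # driver + firmware versions into one file for CMDB ingestion.
    {
      echo "os-kernel: $(uname -r)"
      echo "bios: $(dmidecode -s bios-version 2>/dev/null)"   # needs root
      for path in /sys/class/net/*; do
        dev=$(basename "$path")
        [ "$dev" = "lo" ] && continue
        # ethtool -i prints "driver:" and "firmware-version:" lines
        ethtool -i "$dev" 2>/dev/null \
          | awk -v d="$dev" '/^(driver|firmware-version):/ { print d ": " $0 }'
      done
    } > /tmp/hw-inventory.txt
    ```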

    Single biggest issue is multivendor rollouts. As dangerous a stand as it is to take, I am much happier with a single-vendor pipe: "Okay - look, you provided the hardware bits A, B, C and D, you have the OS support contract, just get your groups together and give us the solution" works so much better than "Vendor A, can you work with Vendor B, and get Vendor C to validate" - it becomes *so* much harder to solve.

    Mind you *I've* learned a HELL of a lot from those multivendor collisions.

    And -- yes -- I've seen the 10Gb/1Gb link thing, and am fighting one right now.

    (Grumpy bastard on 4 way phone calls)

  6. Long John Brass Silver badge

    High hopes

    Once upon a long time ago, when dinosaurs walked the earth, there was a project to open-source PC BIOSes. The original design was to do something similar to Sun's OpenBoot (the FOSS one was OpenBIOS, I think?).

    The hope I had for this was being able to talk to the BIOS firmware settings from the OS.

    Sadly, Sun and OpenBoot/OpenBIOS all seem to be gone now.

    /me sad
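
    For what it's worth, part of that wish survives on UEFI systems: Linux exposes firmware variables through efivarfs, so the OS can at least read settings directly. A small read-only example (the GUID is the standard EFI global-variable GUID):

    ```shell
    # List the firmware variables visible to the OS (UEFI systems only)
    ls /sys/firmware/efi/efivars | head

    # Dump the BootOrder variable; the first 4 bytes are variable
    # attributes, the rest is the boot order itself
    od -An -tx1 /sys/firmware/efi/efivars/BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c
    ```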

  7. dan1980
    Thumb Down

    This kind of thing is one of the big reasons why reference architectures and paint-by-whitepaper are still popular methods of designing a system, despite the clearly superior options available from a price/performance standpoint.

    It really does come down to TCO but the point is that there are many factors that contribute to 'cost' and these are very much dependent on the type of organisation and use.

    No solution is one-size-fits-all, but most types of solution are marketed to a wide spectrum of needs, despite usually only being suitable for a much smaller subset.

    Take blades. You can get cheap blade units, and these are often marketed as being great for small companies. And they can be, but the chassis/blade infrastructure means that two or three standalone servers can be cheaper than the equivalent compute/storage provided by a blade setup. So why get one?

    As Trevor has noted, one of the big benefits of a blade system is the management. But these advanced features (generally) only come with the higher-priced units, which are out of the price range of many who might otherwise consider blades. And that extra price puts you up against other solutions.

    Thus you find that most solutions find their best value in rather specific scenarios.

    Unfortunately, it can be difficult to identify these, and often you just don't understand the real TCO until you've actually tried a system - as with Trevor. He was trying to get great value for money - as we all should - but his time is probably the most valuable resource he has, and he needs to get good value from that too!

    Hyper-converged is good too, but below a certain point it is too expensive. The minimum configuration of two nodes is not really cost-effective as a production environment, but I am not sure you'd deploy an entire row of them either!

    Back to blades, however, it's very difficult to understand the value of the management software until you find yourself with a problem that it would have solved!

  8. elip

    You don't understand the true value of a rack of simple, brain-dead (maybe even running OpenBoot or coreboot), extremely dense 1U servers until after you deploy the shiny, extremely expensive cluster of blade servers (UCS in this case), only to eventually hit an apparent bug (one of many, but this one of the business-crushing variety) in the insanely complex firmware, causing *none* of the blades to be able to successfully PXE boot, after working just fine for *years*. Two years later, the case is still open, and Cisco (who are our close partners, and literally across the street) still have no clue what's causing the issue.

    My recommendation to every IT architect, systems admin, and random-dude-in-charge-of-critical-infrastructure remains the same 15 years into my "career": keep your systems as simple as possible. The less code in the firmware/BIOS, in the firmware on any PCI cards (if you must use them at all), and in the OS, the better off the business is, and the more sleep your operations folks get.

