IBM skips BladeCenter chassis with Power7+ rollout

Even before IBM launched its Power7+ entry and midrange servers in early February, El Reg told you that it was suspicious, although not yet alarming, that no rumors were going around that Big Blue was working on Power7+ kickers to its PS7XX series of blade servers for its BladeCenter enclosures. Well, it looks like the Power …

COMMENTS

This topic is closed for new posts.
  1. Matt Bryant Silver badge
    Facepalm

    BladeCenter has been dead since the Flex announcement.

    IBM just didn't bother telling us customers last April that the Flex designs meant all our current investment in BladeCenter was being made obsolete. They didn't even sell the BladeCenter lines to Lenovo like I suspected they might, but then again they could still do that.

    1. TheVogon
      Mushroom

      Re: BladeCenter has been dead since the Flex announcement.

      Who on earth buys IBM blades?!

      Meanwhile in the real world, the HP C7000 chassis is still unchanged, and the new C7000 Platinum chassis, with its 7Tbit backplane, is fully backwards compatible...

      1. Anonymous Coward
        Anonymous Coward

        Re: BladeCenter has been dead since the Flex announcement.

        Interesting, but HP has replaced BladeSystem's chassis how many times since IBM's BladeCenter came out?

        1. Anonymous Coward
          Anonymous Coward

          Re: BladeCenter has been dead since the Flex announcement.

          "HP has replaced BladeSystem's chassis how many times since IBM's BladeCenter came out"

          That would be zero.

          There is now a higher-bandwidth midplane option (Platinum), but it's optional and fully backwards compatible.

      2. Matt Bryant Silver badge
        Unhappy

        Re: TheVogon Re: BladeCenter has been dead since the Flex announcement.

        "Who on earth buys IBM blades?!......" We do, for lower-end tasks, because IBM discount them so heavily, but we use hp C7000s more. We've looked at the Dell and Fujitsu options but IBM keep dropping their pants so we keep IBM as second choice (as a purchasing strategy we have two possible suppliers for every bit of the stack). If IBM keep the Flex systems prices at the same basement level we'll probably carry on grumbling but buy them. It's just annoying that there is no commonality between BladeCenter and Flex, whilst hp just seem a lot better at designing blade solutions that have longer lifecycles built in. The worrying bit is how long IBM can carry on discounting them before they offload the range to Lenovo like they did a lot of the other x86 lines.

        1. This post has been deleted by its author

        2. Anonymous Coward
          Anonymous Coward

          Re: TheVogon BladeCenter has been dead since the Flex announcement.

          Enough with the subversive IBM slams. Everyone knows you are shilling for HP.

          "IBM equipment isn't that great, but it is so inexpensive that we can't help but buy it", said no one ever.

          1. Matt Bryant Silver badge
            FAIL

            Re: TheVogon BladeCenter has been dead since the Flex announcement.

            "Enough with the subversive IBM slams....." Wow, what a detailed riposte! Just dripping with technical insight - not!

            ".....Everyone knows you are shilling for HP....." Whatever! The minute anyone disagrees with you IBM drones they're "shilling", right? Maybe the market is also "shilling for hp" seeing as they seem to buy an awful lot more hp blades than IBM ones. Or Gartner, maybe, going by their magic quadrant for blades - hp way out in the top right corner, IBM lagging behind. Gosh, what a nasty load of shills we all are! Face it - hp just make better blades, and customers are going to tell you it to your face if you ask us.

            ".....said no one ever." Actually I used to hear it a lot as an excuse whenever someone bought Dell, so I suppose IBM has just become the replacement cheap, "me-too", x64 vendor that Dell was considered to be.

  2. Magellan

    BladeCenter chassis are out of gas

    The BladeCenter chassis, both the H and E, are tapped out on power, cooling, and space. The current Intel blades do not support the high-wattage CPUs, and offer limited memory.

    1. Anonymous Coward
      Anonymous Coward

      Re: BladeCenter chassis are out of gas

      Yes, the BladeCenter served its purpose. I think it came out in 2002, so it is over a decade old. It does IBM credit. While their competitors have gone through multiple chassis redesigns over the last decade, IBM didn't make a chassis change until the engineering required it to be done for electrical and thermal envelope reasons.

      1. Matt Bryant Silver badge
        Stop

        Re: Re: BladeCenter chassis are out of gas

        "...... IBM didn't make a chassis change until the engineering required it to be done for electrical and thermal envelope reasons." You mean apart from having to upgrade the PSUs every five minutes, continually telling us customers that we now had enough power in the chassis, only to admit that we still couldn't have full redundancy and a full set of blades, that would come with the next PSU upgrade, rinse, repeat..... The only blades solution treated with more derision were the laughable attempts from Sun!

    2. TheVogon

      Re: BladeCenter chassis are out of gas

      Rubbish.

      HP offer 2TB in a single blade server (BL680c) whereas IBM can only offer 1TB. In one slot HP offer 1TB, just like IBM, and 512GB in half a slot, just like IBM.

      HP also support 4 x 130W TDP CPUs in a single blade just like IBM do.
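
      To put the memory claims above on a per-slot basis, here is a quick sketch (capacities as claimed in this thread, not checked against spec sheets). The density works out level across vendors; the BL680c's real edge is capacity per single OS image:

          # GB of RAM per chassis slot, using only the figures stated above.
          blades = {
              "HP BL680c (2 slots)": (2048, 2),
              "HP 1-slot blade":     (1024, 1),
              "IBM 1-slot blade":    (1024, 1),
              "HP half-slot blade":  (512, 0.5),
              "IBM half-slot blade": (512, 0.5),
          }
          for name, (gb, slots) in blades.items():
              print(name, "->", int(gb / slots), "GB per slot")  # all 1024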

  3. Anonymous Coward
    Anonymous Coward

    What I don't like

    is that the PureFlex appears to want to take control of the deployment and management of the OS, using a control infrastructure that looks clever, but maybe a little inflexible.

    Maybe I really am becoming the dinosaur that some people accuse me of being, but I really don't like having the control of OS deployment taken away from me, no matter how clever it appears to be.

    Blades would allow me to use xCAT (or previously CSM) to deploy them. But as I understand it, PureFlex uses IBM Director, something I've never liked the look of. Maybe I'm being unfair, but I have great suspicion of anything that says it does everything for you. Simplicity on the surface normally indicates great complexity under the covers, which means that when it goes wrong, it's a bugger to fix.
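
    For what it's worth, the sort of scripted control I mean looks something like this with xCAT (a minimal sketch; the node name, MAC and image name are made up, and exact attributes vary by xCAT release and hardware):

        # Drive xCAT's CLI tools from Python to define and deploy one node.
        import subprocess

        def xcat(*args):
            print("+", " ".join(args))  # echo the command for the log
            subprocess.run(list(args), check=True)

        # Define the node (hypothetical name and MAC; mgt=blade means it is
        # reached via the BladeCenter management module).
        xcat("mkdef", "-t", "node", "blade01",
             "groups=blade,all", "mgt=blade", "mac=00:1a:64:00:00:01")

        # Point it at an OS image, then power-cycle it to PXE-install.
        xcat("nodeset", "blade01", "osimage=rhels6.3-x86_64-install-compute")
        xcat("rpower", "blade01", "boot")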

    1. Anonymous Coward
      Anonymous Coward

      Re: What I don't like

      The Flex systems use what they call Flex System Manager, which is sort of a next-gen version of Director, embedded in a half-width node... instead of a roll-your-own management server as with Director. There isn't any requirement to use the FSM management node. If you don't like it, you just don't use it... much like Director. You can still use chassis and node-by-node (blade-by-blade) management. The roll-a-rack version of Flex, called PureFlex (pre-integrated), has the FSM embedded in it, but you can also buy the Flex System (not pre-integrated, a la carte) chassis and decide if you want to use FSM, V7000 storage, the IBM interconnect switches, etc.

      1. Matt Bryant Silver badge
        Facepalm

        Re: AC Re: What I don't like

        ".....Flex System Manager, which is sort of a next gen version of Director which is embedded in a half-width node...." So, what you're saying is you buy a chassis that can only take fourteen server blades, already two less than the opposition, and then you have to lose another server blade to management?

        1. Anonymous Coward
          Anonymous Coward

          Re: AC What I don't like

          "So, what you're saying is you buy a chassis that can only take fourteen server blades, already two less than the opposition, and then you have to lose another server blade to management?"

          If you want the FSM functionality, you would use the FSM appliance node in a half-width bay. HP has nothing comparable to the FSM native in their c chassis. If you don't want it, take the FSM out and you are back to c chassis management functionality.... or buy a bunch of BMC licenses, put it on one of the c chassis blades, and you are at FSM functionality. The Flex x86 has about 50% more memory capacity per node and 200% more I/O than a c chassis... and the ability to go native four socket (note, native four socket). A Flex node is not directly comparable one-to-one with a c chassis blade.

          1. Matt Bryant Silver badge
            FAIL

            Re: AC Re: AC What I don't like

            "......HP has nothing comparable to the FSM native in their c chassis......" Sorry, but hp's Onboard Administrator and iLO combo is far superior to IBM's options. And, as far as I can see, the FSM "single pane of glass" is just a front end to the usual hodge-podge of IBM management tools.

            ".....The Flex x86 has about 50% more memory capacity per node and 200% more I/O than a c chassis...." Please let us know which blades you are comparing, because all the hp BL4x0c Gen8 blades have two built-in 10GbE flex ports and two PCIe 3 mezz slots, which seems directly comparable to what IBM are putting on the new half-height Flex blades, so your "200% more I/O" claim sounds like just marketting baloney. And as regards memory, the x220 blade has LESS memory than the hp Gen8 blades, it's only the x240 that has more, but I bet that comes with some lovely IBM gotcha around limiting memory speed if you go for the maximum, just like the old IBM blades. Oh, and where is the IBM Flex blade with AMD Opteron to compare to the hp BL465c Gen8?

            Of course, the real limitation in the IBM Flex chassis is the same as the old BladeCenter: interconnects. Since you want to use them in pairs, you end up with limited choices. You want just LAN for the onboards, FC for SAN and Infiniband for fast clustering? Not with redundancy in the IBM chassis; that would require six switch slots for three pairs, and they only have four slots. The C7000 can handle up to four pairs of switches if required. That's 100% more USEABLE interconnect capacity than IBM Flex.
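
            To spell out that slot arithmetic, a quick sketch (bay counts as claimed in this thread, not taken from vendor spec sheets):

                # Each fabric needs a redundant pair of interconnect modules.
                fabrics = ["LAN", "FC SAN", "InfiniBand"]
                bays_needed = 2 * len(fabrics)  # three redundant pairs = 6 bays

                # Switch bay counts as claimed above.
                for chassis, bays in {"IBM Flex": 4, "hp C7000": 8}.items():
                    verdict = "fits" if bays >= bays_needed else "does not fit"
                    print(chassis, "-", bays, "bays, need", bays_needed, "->", verdict)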

            "..... and the ability to go native four socket....." <Yawn>. Is that the new IBM troll buzzphrase, "native four socket"? Go look at the hp BL660c Gen8 - four sockets, same Intel cores. Try again!

          2. TheVogon
            Mushroom

            Re: AC What I don't like

            You got that the wrong way round. The Flex x86 has about 50% LESS memory capacity per node and 200% LESS I/O than a c chassis.

            HP supports 2TB per node in BL680C servers, and the current HP C7000 Platinum backplane has 7Tbit of bandwidth...

            1. Jesper Frimann
              Thumb Down

              Re: AC What I don't like

              You are rather clueless, aren't you? Let me guess: HP marketing drone?

              You are taking a full-height, double-wide BL680C, which surely is a superb blade server that you can plug 192Gbit worth of IO into. But being a full-height, double-wide blade, you can only have 4 in a C7000. And you are comparing that with the smallest node in the competitor's solution.

              Now a PureFlex node, if we take the toughest one like the p260, will house 1TB of RAM, can have 16x10Gbit=160Gbit worth of IO+management, and you can have 14 of those in a chassis.
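
              Run the aggregate numbers per chassis (using the figures as claimed in this exchange, not vendor spec sheets):

                  # Per-chassis aggregate IO, from the in-thread figures only.
                  bl680c_gbit, bl680c_per_c7000 = 192, 4   # full-height, double-wide
                  p260_gbit, p260_per_flex = 16 * 10, 14   # 16 x 10Gbit per node

                  print("C7000 of BL680Cs:", bl680c_gbit * bl680c_per_c7000, "Gbit")  # 768
                  print("Flex of p260s:   ", p260_gbit * p260_per_flex, "Gbit")       # 2240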

              BLEH. If you want to diss other vendors' products, then please at least do the basic homework, and don't just consult the marketing material, taking the best for your own company and the worst from the company X you want to compare it to.

              // Jesper

              1. Matt Bryant Silver badge
                Stop

                Re: Jesper Re: AC What I don't like

                "You are rather clueless aren't you ?....." Sorry, Jesper, but you'll have to be a bit more specific as to which AC poster here has got your knickers in a twist.

                "...... Let me guess HP marketing drone ?....." <Sigh> What is this IBM troll reflex to accuse anyone that doesn't share their blind devotion of being a duplicitous hp marketing employee in disguise? Can't you get it through your heads that a LOT of us customers actually prefer the hp kit for a reason (well, several reasons actually)? Not seen the blade server market figures lately?

                "......and can have 16x10Gbit=160Gbit worth of IO+management....." OK, rather than blindly bleating on about maximum backplane figures, why don't you please explain how the two PCIe 3 slots on the p260 are going to give you the same amount of useable bandwidth as the two onboard dual-port Flex LOMs and three PCIe 3 slots on the BL860c i4? Face it, the p260 is another unbalanced IBM design - in order to get more blades into the chassis they had to chop bits off the design, and now they have an IO-starved blade in the p260. Another failed compromise from IBM, it doesn't even have built-in LAN ports, you have to use up mezz slots just to get connected!

                http://www-03.ibm.com/systems/flex/compute-nodes/power/bto/p260-p460/specs.html

                http://h10010.www1.hp.com/wwpc/us/en/sm/WF06a/3709945-3709945-3710102-1146345-3722789-5330436.html?dnr=1

                1. Jesper Frimann
                  Headmaster

                  Re: Jesper AC What I don't like

                  Matt....

                  Yawn.

                  RTFM

                  The BL860c i4 -> 3 x PCIe 2.x x8 slots + 4 onboard CNA ports (up to 10 ports)

                  The x240 -> 2 x PCIe 3.0 x16 slots

                  The p260 -> 2 x PCIe 2.0 x16 slots (up to 16 CNA ports)

                  And oh yes, the BL860c i4 is a full-length blade, 8 per c7000; the p260 is half-length, hence 14 per chassis. Oh, and then there is memory, where the BL860c i4 only supports 384GB of RAM.

                  // Jesper

                  1. Matt Bryant Silver badge
                    Stop

                    Re: Jesper AC What I don't like

                    ".....The x240 -> 2 x PCIe 3.0 x16 slots, The p260 -> 2 x PCIe 2.0 x16 slots, (up to 16 CNA ports)....." LOL, I see you're still dancing away from the lack of IO options and the lack of redundancy! So, how do you do pure LAN, FC SAN and Infiniband on a p260 or x240 when you only have two slots and no onboard IO? Can you do a SAS option out to a shared SAS switch module? Can you address more than one fibre switch so you can split your backup SAN traffic (which the IBM offering needs as you can't do a SAS tape drive) so that it does not use the same ports and switch modules as your production SAN traffic? No, you can't. The hp offering not only has more options you can use more of them at once, making it more flexible and not IO starved like the IBM designs. Try again!

                    1. Jesper Frimann
                      Pint

                      Re: Jesper AC What I don't like

                      Typical Matt.

                      When he is proven wrong, he can't acknowledge it but quickly grasps at straws.

                      "Can you address more than one fibre switch so you can split your backup SAN traffic (which the IBM offering needs as you can't do a SAS tape drive) so that it does not use the same ports and switch modules as your production SAN traffic? No, you can't"

                      Sure, 2 adapters will address all four switches in the back. Try reading a manual.

                      And repeating something that is clearly wrong ("IO starved like the IBM designs") doesn't make it any more right. Again, a half-length node has more IO bandwidth, more memory and more processing power than a full-height HP blade.

                      You aren't fun to discuss things with anymore. You do know that, don't you? That is why people mostly ignore you.

                      // Jesper

                      1. Matt Bryant Silver badge
                        Stop

                        Re: Jesper AC What I don't like

                        ".....When he is proven wrong....." Que? How is showing that the p260 only has two IO slots and no onboard LAN ports being shown to be worng when it illustrates the glaring compromise of the p260 design? In an HA environment, when I will want two LAN mezz cards and two FC cards minimum for redundancy, your only answer is to double up on converged cards. Leaving no other slots for high-speed interconnects like Infiniband, not to mention the SAS option if Flex even had a SAS option. The hp Itanium blades all come with two onboard dual-port converged adapters, which means I have redundant LAN and SAN just by adding a second converged mezz card, leaving two slots for linking to a second SAN or a SAS switch, or for redundant Infiniband cards. You can pretend that not having redundancy is not an issue if you like, but I seem to remember it being quite key to HA clustering.

                        "......Sure 2 adapters with will address all four switches in the back. Try reading a manual....." Oh dear, back to RTFM for you! Two IO slots on the P260 means NO options, it is simply too few. Does IBM offer a mezz card that can magically do LAN, SAN and Infiniband? No. So if you have one converged card and one Infiniband card, where is the redundancy required for clustering? You have none. One card failure and your p260 is a dead duck.

                        But back to the manuals you mentioned. When the original BL860c came out, I asked the hp guys in Vegas if they could design a half-height Itanium blade, and they said they could, but it would be too much of a compromise. The p260 simply proves that point. You mentioned manuals, so go look here (http://www.redbooks.ibm.com/abstracts/tips0880.html#locations) at the layout of the p260: note how the CPUs are at the front and their heatsinks sit across the airflow to the memory? Once again, IBM have designed a blade where the CPUs will cook the memory! And the disks, which hinge over the memory banks, are not hot-swappable (as they are on the hp Itanium blades). Yes, yet again, IBM have designed a blade where you have to take it out of operation and open it up to remove a failed disk - have they learnt nothing from their previous failures? The disks also limit the height of the memory and the airspace around it for cooling, meaning more pricey, special low-profile memory for the IBM Power blades.

                        But whilst we're on the disks, does the p260 have hardware RAID for the onboard disks as the hp blades do? Nope, software RAID only! This, however, seems to be not so much a design compromise as just a lack of design, the full-height p460 also having no onboard RAID. So not only do you have to yank a blade if a disk fails, you have no RAID protection for the data on the internal disks unless you give up CPU cycles to software RAID.

                        So let's summarise: the p260 has no onboard adapters, no hardware RAID, no hot-swappable disks, and not enough IO slots for real flexibility and redundancy at the same time, and it will probably cook its expensive memory (along with those non-hot-swappable disks) as soon as you start thrashing it. Yeah, no compromise at all - NOT! But even funnier is that IBM still haven't worked out how to make a big blade, even though the p-series rack servers have been modular for ages - four sockets is the best they can manage, whilst hp's blades have managed eight sockets as one hardware instance for years! That means whilst hp can run an eight-socket, octo-core blade, with IBM if you want to scale that high you have to throw away the Flex chassis and go buy an IBM rack server and a whole lot of external switches.

                        Stop pretending that the p260 is anything other than a compromise design to counter the Xeon BL460c Gen8, nothing more. Even the IBM literature describes it as a "compute node", exposing their intention for it to be HPC and nothing else. It simply does not have the HA feature set for enterprise UNIX clustering.

    2. Anonymous Coward
      Anonymous Coward

      Re: What I don't like

      Apart from all this talk about cloud, deployment patterns, expert integrated systems and so forth, Flex System is a great blade system: 40G uplinks, best backplane bandwidth, native four-socket Intel/Power nodes without the QPI link bottleneck, best memory per server node, integrated storage-interconnects-servers and platform software (if you want it), etc. I think IBM confused/concerned some people by making this out to be a pre-integrated, centrally managed data center in a box, which it can be but doesn't need to be. There was also a rumor going around that this precluded the use of Cisco switching, which caused every CCNA on the planet to start convulsing on the floor. You can use the IBM interconnects, formerly BNT, but they're not required.

      1. Matt Bryant Silver badge
        Facepalm

        Re: AC Re: What I don't like

        ".....interconnects....." Oh yeah, thanks for reminding me - and after you're down to thirteen useable server blades, you then have only four interconnect module slots for switches compared to six for the Dell M1000e and eight for the Fujitsu BX900 S2 or the hp C7000. Even the old BladeCenter had four.

        1. Anonymous Coward
          Anonymous Coward

          Re: AC What I don't like

          The raw number of blades/switches doesn't equal more capacity or throughput. Flex has four full-length I/O bays and 42 x 10G internal-facing ports... the first 1Tbps-plus switch in a blade chassis. Flex will handle 100G in its current design... not sure anyone else can say they are 100G certified in their current chassis. Whenever Cisco and co get around to pumping 100G out, HP will need a new chassis. Another nice bit about Flex interconnects is that they handle all interconnect within the chassis, east to west, whereas the traditional/legacy blade systems, including UCS, go up to the TOR and back down, north to south, for blade-to-blade communication. A big latency saving.

          1. Matt Bryant Silver badge
            Facepalm

            Re: AC Re: AC What I don't like

            Whoops! Looks like IBM also forgot a SAS switch module option and a tape blade option for Flex! At least the old BladeCenter had a SAS option, so you could use an external SAS tape drive.

      2. Anonymous Coward
        Anonymous Coward

        Re: What I don't like

        HP BladeSystem offers 56Gbit FDR InfiniBand to blades and uplinks (as well as 40G).

    3. Anonymous Coward
      Anonymous Coward

      Re: What I don't like

      OK, OP of this comment trail (AC 16:44)

      I must admit I hadn't realised that there was a difference between Flex and PureFlex. I must do some more reading beyond the initial IBM announcement blurb. I understand what has been said, and if I can still use tailored deployment tools, I am less unhappy.

      It's the Power side I am most interested in myself, and the Intel flavour of this comment trail actually leaves me a bit cold. It's interesting hearing your thoughts everybody, though.

  4. David 14
    Happy

    PureFlex != Flex Chassis

    Okay... it was alluded to in another comment, but it should be made clear. PureFlex is not equal to the BladeCenter... though the Flex chassis, arguably, is.

    The Flex chassis is a 14-slot chassis with a lot of I/O bandwidth, power, cooling, etc. It is 10U but supports higher-power, higher-density machines, though it is not the densest equipment IBM sells.

    The Flex chassis is a component of the PureFlex, but PureFlex is actually a fully integrated package offering that includes servers, storage, networking and a management appliance.

    Other components of PureFlex include:

    Flex System Manager (FSM) - a single-bay-sized appliance that provides a single pane of glass to manage the servers, storage and network of the system... can support (currently) up to 8 chassis of equipment, and can also run the full Cloud Management Suite - SmartCloud - from IBM.

    I/O Modules - these are the Ethernet or fibre modules that would be installed in the chassis. They are designed with full redundancy and very high throughput. They also support east-west traffic, so that intra-chassis traffic can work at backplane speed rather than network/fibre wire speed.

    Chassis Management Module (CMM) - like the older blade equipment, a management interface for chassis-based hardware management; can also be installed in redundant pairs.

    V7000 Flex module - basically, a V7000 that fits in 4 node bays rather than in an external unit.

    Compute nodes - xSeries or Power nodes in several versions, including single and double-wide nodes.

    I have installed these already, and while there are always a few growing pains with firmware updates and the like, which are to be expected, the systems seem to be fine replacements... even if that is not what IBM had targeted them as (officially, at least), as can be seen by IBM's difficulty in getting its own support staff trained up on implementation services.

    1. This post has been deleted by its author
