Why blade servers still don't cut it, and how they might

Sometimes, a good idea just doesn't take off. OK, this is information technology, not philosophy, so let me rephrase that more accurately. Sometimes, ideas and habits that were once laudable have an immense inertia that prevents a new and perhaps better idea from building momentum in the market. Such is the case with the …

COMMENTS

This topic is closed for new posts.
  1. Flocke Kroes Silver badge
    Boffin

    There is a reason for not having modular main boards

    Power = Capacitance x Voltage² x Frequency

    Chips have capacitance. Tracks on a PCB have a little capacitance. Chip sockets have some capacitance. Connectors have capacitance for each half of the connection. If you can add more and more DIMM sockets, you have some nasty choices (a rough worked example of the scaling follows the list):

    1) Put some huge power transistors on your CPU (and memory chips) to drive the enormous capacitance of a row of daisy-chained DIMM holders - even when few DIMM holders are present.

    2) Divide the frequency by the number of DIMM holders daisy-chained together.

    3) Put bus drivers on each DIMM holder (expensive, power-hungry, and it increases the latency).
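
    As a rough worked example of that scaling (every figure below is an illustrative guess, not a datasheet value, so treat it as a sketch of the shape of the problem rather than real numbers):

        # Dynamic switching power: P = C x V^2 x f, summed over the bus lines.
        # All values are invented for illustration.
        V = 1.5            # signalling voltage, volts
        f = 800e6          # bus frequency, Hz
        c_chip = 2e-12     # fixed on-package load per signal line, farads (assumed)
        c_socket = 3e-12   # extra capacitance each DIMM socket adds per line, farads (assumed)
        lines = 64         # signal lines on the memory bus

        for sockets in (1, 2, 4, 8):
            c_total = (c_chip + sockets * c_socket) * lines
            watts = c_total * V ** 2 * f
            print(f"{sockets} DIMM sockets: ~{watts:.1f} W just to switch the bus")

    The absolute numbers matter less than the trend: the power to drive the bus grows with every socket you add, whether or not the sockets are populated.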

    In short, a modular motherboard is not competitive. A better way to go would be to reduce the number of connectors and sockets. This can be done by putting the DRAM on the same chip as the CPU. Much of the complexity of a CPU is there to deal with the latency of the chip package + chip socket + PCB + another chip package + northbridge + back off the chip package + PCB again + DIMM socket + DIMM PCB + memory chip package. Scrapping all of that latency-hiding machinery gives you a healthy chunk of space on your CPU for DRAM.

    Now your CPU only needs connections for power, cooling and communication.

    DC power may not be the best answer. If there were a standard voltage, and if you do not mind asphyxiating a few techies when the liquid nitrogen boils, I would say low-voltage DC through superconductors. The other choice is high-frequency AC. Humans are far more resistant to high voltages at about 1kHz than at 50/60Hz, so you could use thin (expensive) copper cables covered in thick (cheap) PVC carrying a few hundred volts at high frequency without risk of electrocuting people. Converting the voltage only requires a transformer (smaller than you are used to, because of the high frequency). Converting to DC still requires a filter, but the capacitors would be smaller than you are used to, again because of the high frequency.
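
    As a back-of-the-envelope illustration of the smaller-filter point, using the rough rule C ≈ I / (ripple voltage x ripple frequency); the load current and ripple target below are invented for the example:

        # Smoothing capacitor needed after a full-wave rectifier, very roughly
        # C = I / (ripple_voltage * ripple_frequency). Illustrative numbers only.
        I = 10.0        # load current, amps (assumed)
        ripple_v = 0.5  # acceptable ripple, volts (assumed)

        for supply_f in (50, 1000):        # 50Hz mains vs. the 1kHz suggested above
            ripple_f = 2 * supply_f        # full-wave rectification doubles the ripple frequency
            C = I / (ripple_v * ripple_f)
            print(f"{supply_f} Hz supply: roughly {C * 1e6:,.0f} uF of smoothing capacitance")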

    A nice dream. Google have the income to make it happen, but I do not see them leaping into low margin standards based hardware development at present.

  2. Matt Bryant Silver badge
    Pirate

    Rack servers a bad comparison.

    Rack servers did not follow any common standards other than the 19-inch rack sizing, and seeing as blade server chassis are usually 19-inch, that makes them similarly compliant there. PCI, PCI-X and PCI-e came about for interconnects, yes, but everything else, like motherboards, was proprietary. The CPUs, RAM and disks were common, but they are on blades too. And switches/routers for interconnection were external to the rack servers, so they could be anything, with no promise of commonality (10Base-T, 100VG, FDDI, 100Base-T, 1Gb FS, ESCON, etc.). There was no lack of completely proprietary interconnects with rack servers either, such as HP's Hyperfabric.

    The real big reason that there are no real standards for blades is that they would stifle innovation. HP and IBM have carved up the market by making blades better than Sun, Dell and smaller players like RLX, through innovation as well as market muscle. Standardisation too early would have crippled developments like HP's Virtual Connect. In the meantime, the lack of standardisation seems to be a very effective Darwinistic means of development - unpopular or inefficient designs just don't get bought.

  3. Anonymous Coward
    Paris Hilton

    Blade servers don't catch on because...

    they are too bloody expensive. I see plenty of blade chassis that don't offer much in the way of density beyond 1U servers and cost 2-3 times as much by the time you've accounted for the chassis and the blades. How many manufacturers' individual blades cost more than the equivalent 1U server - for a card that doesn't even have a case, power supply, etc.? Yes, it's more tightly integrated, but in plenty of cases that's done to save money elsewhere anyway. Let's see the manufacturers provide a decent-density blade system that saves money on the hardware.
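
    A toy break-even sum for that argument - every price below is a made-up placeholder, so plug in real quotes before drawing any conclusion:

        # Chassis-plus-blades vs. equivalent 1U servers. Prices are placeholders.
        chassis_price = 6000.0    # empty blade chassis with switches (assumed)
        blade_price = 3500.0      # one blade (assumed)
        server_1u_price = 2800.0  # equivalent 1U rack server (assumed)

        for n in (4, 8, 12, 16):
            blades_total = chassis_price + n * blade_price
            racks_total = n * server_1u_price
            print(f"{n:2d} servers: blades ~{blades_total:,.0f}  1U boxes ~{racks_total:,.0f}")

    With per-blade prices above the 1U equivalent, the chassis cost never gets amortised away, which is exactly the complaint.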

    Paris because she's out of my price range too.

  4. P. Lee
    Linux

    A large pool of computing resources...

    massive I/O, fine-grained resource management and scalable CPU power in a box.

    Ladies and Gentlemen, I give you... The Mainframe!

    Complete with linux virtual machines for standard APIs.

    I thought the point of blades was to take cheap, standard hardware used by commodity servers out of its box and pop it in a slot. With all the engineering required for this wishlist in terms of scaling and resource management, the point of the exercise will be missed. I'm not convinced that blades ever promised to be much more than power and space-saving servers with faster, neater connectivity.

  5. Philip Buckley-Mellor

    I'm not normally one to criticise... but

    One socket to rule them all - I see this happening very shortly after world-peace. Just because it makes sense (does it really?) doesn't mean it gives the chip-makers any advantage at all, and without an advantage they won't do it.

    Modular motherboards - most people buy blades because they're cheap, and one of the cheapest parts of a blade is its motherboard. Introducing the required connectors/interfaces between the various parts of a modular motherboard would inevitably make them more expensive, more prone to failure, and even harder to configure and buy.

    Blade and sub-blade standards - you want standards AND innovation? I'm not saying it can't be done, but again, what is in it for the manufacturers who are making a fortune on their custom NIC/HBA/etc. cards?

    No more AC power inside the data center - changing out 32A AC commando PSU lines in an existing data centre for DC ones makes little sense, as the majority of other devices will still need AC, so you end up leaving those AC feeds running in each rack too - doubling the cabling you rely on, plus the power-switching circuitry itself. If it were a greenfield data centre and everything you wanted to put inside was DC, then fine, good idea - but that rarely happens. Plus, in my experience, M&E work is the slowest part of any change, so you end up with a very long time-to-change in your delivery plan.

    Integrated refrigerant cooling for key components - so you suggest that datacentres have normal AC for most things and direct coolants for the hotter things? I worry enough about single cooling systems, never mind two of the things. Isn't it cheaper, more resilient, quieter, and just more '2008', to draw in ambient air and forcibly expel the exhaust air, just like Switch in Vegas do (http://www.theregister.co.uk/2008/05/24/switch_switchnap_rob_roy/)?

  6. Andrew Duffin

    Why bother?

    Blade Servers?

    How 20th century.

    Has none of you heard of virtualisation?

  7. Phil Williams

    Why bother? Well...

    @Andrew:

    What would you virtualise on, pray tell? Blades are a far better fit for virtualisation than traditional servers - have a look at VMotion, to name just one huge advantage.

  8. Andrew Baines Silver badge
    Stop

    One socket to rule them all:

    Back in the days of the 486, we had multiple vendors using the same socket. Then they saw a competitive advantage in locking users in.

  9. Anonymous Coward
    Thumb Down

    Odd article

    I mean, for example, vendor-specific blade-dedicated switches from Brocade and Cisco? Wtf? *Who cares*?

    As for a multi-vendor processor socket: AMD and DIGITAL/DEC (via Samsung, iirc) had that in the days when Alpha was still around - same socket would take an AMD or an Alpha. It was a lovely idea but it didn't take the world by storm - there was and is more to enterprise-class systems than compatible CPU sockets, and non-enterprise customers just want "cheapest to buy, now". Can you really imagine an HP or IBM or even Sun doing the necessary qualification to certify someone else's hardware in their blade boxes? Dell certainly won't.

    Don't give me any BS about standards either. The real world knows that conformance to standards doesn't even guarantee interoperability between two products from the same vendor, never mind two different vendors. Lots of things that are seemingly standards-based don't work quite right together when push comes to shove; how many enterprise customers are going to want to take that risk, and who do they hold to account when it doesn't work (especially if both vendor responses are likely to be "you're on your own, we never qualified your config")?

    There are at least two other areas where the "processor independence" track has been tried and failed. They're logically similar although physically different: the NLX motherboard and the PICMG passive backplane concept. In both cases you put the processor and the stuff which needs to be close to it on a plug-in. The slower more remote stuff (eg PCI stuff) is on a separate board or boards. You can still get PICMG kit from various vendors. NLX is afaict dead. The fact that 99.3% of readers will never have heard of these illustrates their historic significance, and as is well known, those who refuse to learn from history are doomed to repeat it.

    Then there's the basic physics. Ye canna change the laws of physics, capn, as the comment re volts and Hertz and power duly noted. You want fast, you run hot. No "third way" here.

    Summary: Good, fast, cheap. Pick two, and hope for the best.

    There's more, but that'll do for now.

  10. Brian Miller

    Remember VME?

    Once upon a time, there was a backplane called VME (VERSAbus-E). 32-bit bus, and basically an extension of the Motorola 68000. You could plug multiple CPUs into it, networking, storage, all that good stuff. A supercomputer could be packed into it. Total blade system, there. And where is it now, even with extensions like 10Gb serial interconnect? No place. Nobody mentions it.

    Blades as a denser rack system is fine. Blades as a pluggable low-end mainframe have not gone anywhere, and will not go anywhere.

  11. Anonymous Coward
    Anonymous Coward

    VME => PICMG => CompactPCI ?

    Might CompactPCI (or a variant thereof) perhaps be the unnamed "standards based bus used in the Telco industry"? Or maybe the author has a long memory and remembers FutureBus+, which is another bus that the military and telco industries once looked upon as a Holy Grail (back in the 1980s) because it did exciting things like distributed cache coherency and live insertion support. It has, however, long since vanished without trace, and unless you have wandered across FB+ in a DEC 4000 or DEC 10000 AlphaServer you are unlikely to have come across a real implementation. Again, processor-independence for CPU cards doesn't seem to sell (or hasn't sold so far), except perhaps in certain niche markets.

  12. Tam Lin

    Blade servers are like flights from Chicago to Milwaukee

    All the trouble to spec them out, design everything, prototype, check for compatibility, purchase, build out, start to relax ... whoa, obsolete and can't be upgraded. Time to replace, er, land already. Would have been cheaper and faster to take the train or drive. And gotten you closer.

    So I agree with the article, a longer, wider, and more leisurely life-cycle is needed. But, I also agree with @Philip Buckley-Mellor, except I'm not as confident it will happen as soon as "very shortly" after world peace (no hyphen).

  13. Anonymous Coward
    Anonymous Coward

    various other reasons it hasn't caught on

    (a) blade vendors want to charge a premium but customers are very price sensitive, so no sale

    (b) power density goes up so the customer needs more a/c. if the vendor says that's not a problem, his lips are moving

    (c) good old 1U and up are well proven and very customizable. if something breaks or you want a new feature or better performance you can swap in an industry standard part (cheaply) and off you go.

    (d) blade configs are limited not just by marketing but by what the backplane can power up and interface with.

    (e) blades sometimes (maybe always?) have vendor-specific KVM that won't integrate into your existing solution(s)

    (f) really large datacenters have to cater to the needs of their customers in turn. forcing this next level of customers to a blade standard, and not being able to just de-rack and deliver a working machine independent of the backplane, is scary to some people.

  14. Anonymous Coward
    Happy

    @ AC - blades too expensive?

    AC,

    I'm not sure whose blades you've been looking at, but you may want to take a look at Dell's M1000 range..

    if you price out a chassis with blades and internal switches, I think you'll find a significant saving over the rack-dense equivalent, not to mention the speed-of-deployment and ease-of-management advantages.

    I can't speak to IBM and HP blades as I don't work for HP/IBM, but typically we've seen them costing as much as or more than the rack-dense equivalent.

  15. Glen Turner

    DC power is not the future

    -48V DC is common enough in high-end routers, which pull about 14,000W per rack. At that low voltage this is a substantial amount of current, and thus substantial cabling costs. Yet high-voltage DC makes no technical sense either -- the transmission loss will be higher than the losses from AC-DC conversion.
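
    As a rough sense of the scale, here is a simple I² x R comparison for feeding 14kW per rack at different voltages (the cable resistance is an assumed figure for a short feeder run, not a measurement):

        # Feeder loss = I^2 * R for a fixed 14 kW load. R is an assumed round-trip value.
        P = 14000.0   # watts per rack
        R = 0.005     # round-trip feeder resistance, ohms (assumed)

        for volts in (48, 230, 400):
            current = P / volts
            loss = current ** 2 * R
            print(f"{volts:3d} V feed: {current:6.1f} A, ~{loss:5.1f} W lost in the feeder")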

    High-voltage AC offers a cheap way to move electrons around.

    Furthermore, the input to the facility is AC. Either from generators attached to the grid or from a local backup generator. What makes you think that the losses of AC-DC rectification at the point of generation would be less than the losses of AC-DC rectification at the point of use?

    The high loss from the DC-AC inverters in battery UPS systems can be ignored. These are only in service for a few seconds whilst riding out brown-outs or whilst the backup generator comes online. So the total amount of lost energy is low.

  16. Alfred Loo

    Niche market

    Useful when you are adding applications faster than your real estate (the server room) can grow. Otherwise it would be a hard sell against a traditional server.

    Here is some food for thought: a 42U rack can theoretically take 42 1U servers. Has anyone ever checked the back of the rack? It is simply not possible to run 42 power cords to the PDUs.
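
    A quick cord-count sketch (the PDU outlet count is a typical figure assumed for illustration, not any particular product):

        import math

        # How many PDUs a fully loaded 42U rack of 1U servers actually needs.
        servers = 42
        outlets_per_pdu = 24   # common 0U rack PDU size (assumed)

        for psus_per_server in (1, 2):   # single-corded vs. dual-corded for redundancy
            cords = servers * psus_per_server
            pdus = math.ceil(cords / outlets_per_pdu)
            print(f"{psus_per_server} PSU(s) per server: {cords} cords, so at least {pdus} PDUs")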

  17. Anonymous Coward
    IT Angle

    Re: DC power is not the future

    One thing that would help AC is to increase the frequency. At 1kHz the transformers can be much smaller and lighter, and somewhat more efficient too.

    You don't want to go much higher than 1kHz, since the inductive losses will sap the power, but while 60Hz is great for overhead power lines, it's a little on the low side for just running power around a data center.
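
    A rough sketch of why the transformers shrink, using the standard transformer EMF relation V ≈ 4.44 x f x N x A x B; the voltage, turns count and flux density below are illustrative, not a design (and at 1kHz you would likely want a different core material anyway):

        # Core cross-section needed for a fixed winding voltage, turns count and
        # flux density: A = V / (4.44 * f * N * B). Illustrative values only.
        V = 230.0   # winding voltage, volts RMS (assumed)
        N = 200     # turns (assumed)
        B = 1.2     # peak flux density, tesla (typical for mains-grade steel)

        for f in (50, 60, 1000):
            area_cm2 = V / (4.44 * f * N * B) * 1e4   # square metres -> cm^2
            print(f"{f:4d} Hz: core cross-section ~{area_cm2:.1f} cm^2")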

This topic is closed for new posts.