IBM is looking for the new Nehalem EP-based servers to kick-start its System x rack server and BladeCenter blade server business, which saw a steepening decline in sales as 2008 wound down. Big Blue will today announce two new rack servers, a blade server, and some configurable compute and storage nodes for its avant-garde …
... should just dump their x86 line and sell Sun's excellent piece of engineering.
This of course includes Solaris.
IBM falls further behind in blades
The chassis really limits what they can offer on a blade, especially in memory capacity.
IBM sales' answer, of course, is to go to a double-wide blade, and although this doesn't pass the sniff test, those who blindly bleed IBM blue will buy this argument. The reality, though, is that in the blade space IBM is losing market share faster than the Titanic took on water, to better competition from HP and, to a lesser degree, Dell.
".....which runs on 120-volt power....."
Back to the old days of power supplies that went spectacularly <BANG> when plugged in then?
Or are they really 120/240 autosensing, like everything else made since Noah got out of the business? It's still a ".co.uk" here, so it is important.
Many moons ago, I was with a small training company, and a colleague and I were given the task of genning up a roomful of XT clones for a course. We assembled them all on the desks, stuck a DOS disk in each one and then went down the aisle firing them all up in sequence for the fun of it. Mine went: <Click><Whir><Click><Whir><Click><Whir><Click><Whir>... all down the line. His went: <Click><Whir><Click><Whir><Click><Whir><Click><BANG!!!>... Cue vapours and a North Sea cod-on-trawler-deck impression.
Cisco California blades and fake 384 GB...
This Cisco configuration is based on 32GB (non-existent!) DIMM modules. Using the biggest available (8GB) modules, they will be capable of handling up to 96GB of memory.
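For what it's worth, the two figures in that claim are consistent with a blade carrying 12 DIMM slots (the slot count is my assumption; the comment doesn't state it). A quick sketch of the arithmetic:

```python
# Check of the memory claims above, assuming a 12-DIMM-slot
# two-socket Nehalem blade (the slot count is an assumption,
# not something stated in the comment).
DIMM_SLOTS = 12

def max_memory_gb(dimm_size_gb, slots=DIMM_SLOTS):
    """Maximum memory with every slot filled by one size of DIMM."""
    return dimm_size_gb * slots

print(max_memory_gb(32))  # 384 GB -- needs the (non-existent) 32GB DIMMs
print(max_memory_gb(8))   # 96 GB -- with the biggest shipping modules
```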
IBM looking very old!
I think Nehalem is finally showing BladeCenter up as the outdated design it is.
Lack of memory density and, more importantly, the inability to run the larger processors because the chassis can't handle the thermals! (I seem to remember them having to cut the POWER6 down to fit in it?)
That, and the fact that if you want any expandability you have to use double-width blades, reducing your chassis density to little more than traditional racked boxes (which have more memory density and can run the hotter processors).
Sorry IBM you'll have to do better than that!
Corrections to some previous FUD comments
The HS22 is a *single wide* blade, with 12 DIMM slots supporting up to 96GB RAM.
How 14 of those in a 7U or 9U chassis is less dense than 16 similarly spec'd blades in a 10U chassis, I'll let you decide.
Also, the top Nehalem SKU (Xeon W5580, 130W TDP) that is not supported in the HS22 doesn't seem to be supported in HP's or Dell's blades either. Do your homework next time, guys!
Re: Corrections to some previous FUD comments
14 HS22s with 96GB RAM each (I'm going to be nice; the article did say it only supported 48GB at launch (IBM having difficulty with memory thermals in its chassis?)) in 9U vs 16 BL490c blades with 128GB RAM each in 10U. That's much higher memory density and slightly higher CPU-per-U density (not by much, I'll admit), both Nehalem two-socket.
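Worked out per rack unit, the comparison above goes like this (figures are the commenter's, not independently verified):

```python
# Per-rack-unit density for the two configurations cited above:
# 14 x HS22 @ 96GB in a 9U chassis vs 16 x BL490c @ 128GB in 10U.
def density(blades, gb_per_blade, chassis_u):
    """Blades-per-U and GB-of-RAM-per-U for a chassis configuration."""
    return {
        "blades_per_u": blades / chassis_u,
        "gb_per_u": blades * gb_per_blade / chassis_u,
    }

ibm = density(14, 96, 9)    # ~1.56 blades/U, ~149.3 GB/U
hp = density(16, 128, 10)   # 1.6 blades/U, 204.8 GB/U
print(ibm)
print(hp)
```

On these numbers, HP comes out clearly ahead on memory per U and only marginally ahead on blades per U, which matches the commenter's "much higher memory density and slightly higher CPU/U density" claim.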
Oh, and the HP blades, unlike the Dell and IBM ones, come with 2 x 10Gb/s NICs as standard, with Flex-10 if I want it, so I can have up to 8 vNICs (the OS sees them as proper hardware paths) in a two-socket box without even having to buy any extra I/O cards, unlike the IBM or Dell. I can still choose up to 2 mezzanine cards as well if I want to.
Oh, and if you don't want a virtualisation-tailored blade, why not buy a BL460c: still two-socket with 12 DIMMs (same as the HS22), so slightly higher CPU density within a chassis (16 vs 14), but again I get 2 proper hot-plug disk drives accessible from the front and 2 x 10Gb/s onboard NICs!
Oh, and the top Nehalem SKU is supported in HP's blades; I ordered some last week after doing my homework and laughing at the IBM solution, as it doesn't appear to have moved on by much! There is a reason that for every blade IBM sell, HP sell three! Face it: the IBM blade offerings are a bit of a lame duck, lacking blade diversity, manageability (believe me, I have used both, and IBM's management interface is a joke!) and simplicity.
Re: Re: Corrections to some previous FUD comments
1. 14 HS22s may fit into 7U as well (BladeCenter E).
2. Lack of IBM blade diversity??? Are you serious? HS with Intel, LS with AMD, JS with PowerPC or POWER6, QS with Cell, PN41 as a network inspector, and do not forget that the BladeCenter specification is open (blade.org): there is even a blade server with a Sun T2 processor. So, how many does HP have?
Re: Re: Re: Corrections to some previous FUD comments
BladeCenter E does not have all of the networking capabilities of the BladeCenter H, so it's not a fair comparison; if you want the IBM equivalent of the c7000, you have to specify the H.
Sorry, I didn't clarify what I meant by diversity. HP also have a wide range of processor options (usually more Xeons and Opterons supported) plus Itanium (of which they have the highest market share of RISC/EPIC blades). Of course, HP don't have to cut any processors down to fit their chassis thermals, unlike IBM.
What I meant was that if you want a virtualisation platform, you can buy an optimised one from HP, the BL490c/495c, that gives you much higher memory density per socket than anything IBM can offer. If you want an HPC grid, you buy a BL2x220c, which gives you 4 CPU sockets in a half-height blade so you can have 256 cores in 10U. If you want to do VDI, there is a blade with an inbuilt graphics card. If you want cost-effective and power-efficient, you buy a BL280c. So on and so forth. As I stated, if IBM blades were really that good, why have they consistently lost market share? My meaning was that HP have a lot more specialist blades for particular tasks; IBM have some generic blades that you have to try and tweak for a specific task.
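The "256 cores in 10U" figure above works out if you assume quad-core Nehalem parts in each socket (the core count per socket is my assumption; the comment gives only sockets and the total):

```python
# Where "256 cores in 10U" comes from: a half-height BL2x220c holds
# 4 CPU sockets (two 2-socket nodes per blade), a 10U c7000 chassis
# holds 16 half-height blades, and quad-core Xeon 5500 parts are
# assumed (4 cores per socket is an assumption, not stated above).
SOCKETS_PER_BLADE = 4
CORES_PER_SOCKET = 4
BLADES_PER_CHASSIS = 16

cores = SOCKETS_PER_BLADE * CORES_PER_SOCKET * BLADES_PER_CHASSIS
print(cores)  # 256
```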
That's before I even start talking about any of the chassis technologies like Virtual Connect, or the power and thermal technologies (which IBM can't even get close to!). Oh, and IBM Open Fabric Manager isn't a patch on Virtual Connect (no pun intended), particularly now Flex-10 is around.
Oh, and you still have not answered the thermals issue!
Oh, and blade.org is a joke; HP has far more partners as part of its HP BladeSystem Solution Builder than blade.org has members. blade.org is just IBM's way of trying to force their (inferior) product as the industry standard, which, based on their repeated market share decline, is not working.
Anonymous - Get real and get your facts straight
Obviously an HP bigot and an IBM hater - look at the real facts:
1) It took HP 3 tries to finally get their blades to where IBM was 6 years ago
2) You want 16 blades for max density? Sacrifice redundancy, as each blade has only one connection for power and I/O to the midplane
3) Want to mix half-height and full-height blades in the same chassis? You have minimal choices
4) IBM side-car options are unique in the industry - add PCI slots, add memory, add hard drives, only if you want. No one else can scale this way
5) HP Solution Builder - 56 members; IBM blade.org - 86 members (last I went to school, 86 was the bigger number)
6) Virtual Connect - proprietary, and 3 times the cost of IBM's Open Fabric Manager
7) Noise level - OUCH for HP, 68-72 decibels; IBM as low as 60
8) Want to add storage blades to HP? Sacrifice a blade - the IBM BladeCenter S has separate bays
Anonymous - Get real and get your facts straight
9) HP blades have thermal issues with memory DIMMs (they run at 98 degrees)
10) Oh, and what idiot installs a hard drive on top of a heat sink (surely you jest!!!!)
11) The GET THE FACTS website on HP had 7 articles about IBM blades one year ago, and amazingly only 2 items are there now? Hmmm, could it be that they were WRONG? Oh, and by the way, the last 2 items are also wrong
12) IBM did not announce the highest Nehalem speed bin, and NEITHER DID HP
13) Latest rumour is that HP is coming out with yet another chassis design within the next 12 months, once again adhering to their RIP AND REPLACE strategy
Get your facts right next time
Last page for response to Anonymous
Why even mention Itanium? It just shows how much you really DON'T know - it's a dead horse, never made it, never will. Want proof? Ask Intel what their roadmap is for Itanium - there is none. Ask Microsoft where all the applications are - very few, and no new ones coming. You want to compare Itanium performance versus IBM Power? What a joke, not even in the same class. So why does HP still have Itanium? Because they invested $22B over 7 years, so they are committed to it. Plus, you will see Alpha and Tru64 going away soon, so HP HAS to keep something for their 64-bit customers. Read between the lines, moron.
Re: Last page for response to Anonymous
Just to clarify the Itanium situation: the last roadmap I saw had Itanium going for at least another 3 generations; the last IBM roadmap I saw had POWER6+ and didn't even mention POWER7, and that was 2 months ago! Oh, and let's bring some real-world economics into this: do you know how much it costs to start making a new processor design, including new fab technology and R&D? About $2 billion! Unfortunately, Intel are VERY good at manufacturing processors and making it pay; to be brutally honest, I don't think IBM can afford to stay in the game for much longer, particularly as their last balance sheet showed them making a loss in their microchip business!
Oh, and Intel don't care about MS products (well, other than SQL) because they have Oracle, SAP, etc. on their side (just look at the licensing to see how unfair it is when trying to size an IBM system). The last exercise we did on our systems showed that just by moving from POWER6 to Itanium we would save 30% in licences for our Oracle, and at Oracle's prices that's enough to pay for the hardware! As for performance, don't go there; you're just showing yourself to be the script kiddie you really are. As far as you're concerned it's all about MHz, isn't it ...................... tit!
Anonymous - Get real and get your facts straight
Oh look, an IBM salesgrunt! Either that or a user who is so far up IBM's a*se everything looks blue.
Well, let me tell you, Mr IBM salesperson: I work for quite a large company which until recently was predominantly IBM. That is, until we came to refresh time and started looking around to see what else there was.
I don't have time to address all of your "facts", but let's take a few and see how wrong you are.
"2) you want 16 blades for max density- sacrifice redundancy as each blade has only one connection for power and I/O to the midplane"
WRONG! You see, what you fail to understand is that the midplane on an HP is passive, which means there is nothing on it to go wrong; all of the components are in separate modules, easily accessible at the back. What happens if an IBM midplane has some of its components short out? Oh sure, there is redundancy, but as when it happened to us, you end up having to have the whole midplane replaced, which means taking everything in the chassis down; that's before you lose half of your I/O to every single blade in the chassis!
"6) Virtual Connect - proprietary, and 3 times the cost of IBM's Open Fabric manager"
You see, this is the one that makes me think you are an IBM employee. Virtual Connect is an endpoint on the network; that means any networking device sees it as a NIC or HBA. The only thing proprietary about it is the HP badge on the front and the fact that IBM didn't think of it!
"7) Noise level - OUCH for HP, 68-72 decibels, IBM as low as 60."
What's wrong, are you running a yoga class in your DC or something? Of all the complaints about kit, this has to be the lamest ever!
"9) HP blades have thermal issues with memory dimms (run at 98degrees)
10) Oh and what idiot installs a hard drive on top of a heat sink (surely you gest !!!!)"
Ah, the old power and thermals. Well, all I can say is that our DC runs about 2 degrees cooler now that we chucked that BladeCenter rubbish out. Oh, and HP actually give us real-time data on our power and thermals, unlike IBM's guesstimate, which is quite useful when I have to talk to our facilities people!
"12) IBM did not announce the higest Nehalem speed bin and NEITHER DID HP"
Sorry, just ordered it!
"13) Latest rumour that HP is coming out with yet another chassis design within the next 12 months once again adhereing to their RIP AND REPLACE strategy"
I've seen the NDA; I would be very, very worried if I was IBM!
Oh, congratulations, by the way, on cutting and pasting IBM sales FUD; they really are masters of the brown stuff, aren't they? If only they put as much effort into their products, they wouldn't have to slag everyone else's off!
To the HP bigot
Actually, I am not an IBMer, I am a customer - a major engineering company with over 1500 blades, all converted to IBM after a disastrous year with HP blades - overheating, drive failures, backplane failures (passive backplanes, my butt!!), but what irked me the most was the unethical sales practices of our HP rep. We had enough and turfed them out - you will soon see. If there's FUD out there, it's all HP. Like I said, it took HP 3 tries and 5 years to get to where IBM was 5 years ago. If you've seen the NDA, then you know you will have to rip and replace all your existing infrastructure in the next 18 months. Good luck