A long time ago, in a data center far, far away, blade servers ran on laptop chips from Transmeta. These days companies such as HP pop double-wide, scorching hot Itanium-packed blades into their chassis. HP today rolled out the four-socket BL870c blade server. This puppy complements the existing two-socket BL860c rolled out at …
4 is better than 2?
With 4 screws instead of 2, does this mean the Itanic will sink faster?
Mine is the yellow one... yup, that's it... taxi!!
It's all about consolidation!
Me, I'm looking forward to Tukzilla blades! When they stopped making 8-way Xeon boxes a few years back we consolidated MS SQL Server instances onto Integrity Windows clusters (200+ SQL instances on 170+ Wintel servers, lots of local-attach or SAN storage, consolidated onto a pair of 4-way rx5670s with existing XP arrays), but when the rx5670s are due to be replaced I'd like to try 8-core or 16-core Tukzilla blades. We looked at Linux on Integrity and MySQL, but Mickey$haft made the detach-reattach option in SQL Server so easy it made more sense, especially as we didn't have to change any of the front-end apps. Darn, did I just say something nice about a Mickey$haft product!?!?
Bet the TCO is great on these,...NOT!
"But you'll only be able to squeeze four of the tubby BL870c systems into the same chassis, and two into HP's smaller C3000 chassis. So, the extra sockets come with a price."
Is there a market for this?
I have to ask the obvious question, because the market for 2-socket Itanium is EXTREMELY small, so I have to imagine the market for 4-socket Itanium is even smaller.
I mean, who buys Itanium any more? The price-performance you can get from Intel Xeon and AMD is so far beyond Itanic's, it's absurd. Moreover, the ease of migration from HP-UX to Linux and Windows should drive any cost-sensitive (i.e. all) customers there.
I also think this is pushing the definition of blades a little too far. Four in a 10U chassis? A blade that takes up 2.5 rack units? Duh.
RE: Is there a market for this?
"A blade that takes up 2.5 rack units? Duh." Well, you do still save on rackspace due to all the integrated switches. Suppose I want a cluster with two LAN and two SAN switches for resilience, well I can have those in the back of the blade chassis, saving me another 4U. In fact, I can have eight interconnect switches in the back of a c7000, saving 8U of rackspace! Duh.
And then, if I already have a blades environment, and only require one or a few 8-core Itanium servers (for those tasks Xeon/Opteron just cannot do, like really big Oracle or SQL consolidations, or OpenVMS), then I can make use of that existing blades infrastructure and avoid the expense of separate racks, power, switches and cabling. Duh.
As for the health of the Itanium ecosystem, please go read the IDC market figures, you'll see that plenty of 2-socket Itanium servers have shipped, especially the HP rx26x0, rx16x0 (HPTC environments in particular love them), and rx3600 series. Duh.
In fact, all the main UNIX vendors are packing RISC or EPIC blades into their blade chassis, so they obviously believe there is a market for real 64-bit chipped blades rather than 32-bit extended. The difference is HP can now fit a four-socket blade into the same chassis as all their other blades (either c3000 or c7000), whereas Sun can't (different Sun Blade 6000 chassis for 2-socket and Sun Blade 8000 chassis for 4-socket blades) and neither can IBM (the JS22 is only 2-socket, with no 4-socket Power blade option). Need I say it - duh!