10 servers in 10U
Now, 16 servers in 3U is what I'd consider "compact". Like the B1600 chassis Sun produced, what, 4 years ago? Whatever happened to them?
Contrary to public perception, Sun Microsystems does want to play in the mainstream blade server market. Sun today revealed yet another take on blade systems, showing a compact chassis that holds Opteron-, Xeon- and UltraSPARC T1-based servers. The new Sun Blade 6000 chassis compares favorably with existing systems from blade …
Until someone introduces a good value-for-money blade setup, I'm sticking with 1U servers. Blades are usually 20-50% more expensive than a normal 1U server, plus you have to pay a hefty whack for the chassis.
Sure, you get less spaghetti with a fancy blade system, but is it really worth *that* much money?
The only way blade systems are going to come down in price is if the manufacturers agree on a standard design for chassis and blades.
Someone commented about the B1600 blade chassis from Sun. Well, it didn't sell well because it was plagued with heating/cooling problems and the blades were underpowered, so Sun abandoned that design.
One of the most basic benefits of blades is that the overall energy consumption is lower compared to rack-mount servers, "for the same amount of computing power". Basically, what they do is use a common power supply and fans shared by a bunch of servers. You get the added benefits of better cable management, integrated switches, etc. When you consider the whole datacenter, you can pack more computing power into less space and with less electricity - this is very important for some customers. So even though you get only 10 2S servers in 10U of space, the total power consumption would be lower compared to 10 1U rack-mount servers. BTW, the HP c-Class enclosure
can hold only 8 full-height blades in 10U of space, and each full-height blade is actually less powerful than the new Sun blades (only 6 DIMMs per socket compared to 8 for Sun). Of course, you can use 16 half-height blades with HP, but each blade would be underpowered.
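The shared-power-and-fans argument above can be sketched as a back-of-envelope calculation. All of the wattage and efficiency figures below are hypothetical round numbers I've picked for illustration - only the pattern (one big efficient PSU and one set of large fans beat ten small ones) reflects the comment's claim:

```python
# Rough comparison of shared vs per-server power overhead for 10 servers.
# Every number here is a hypothetical assumption, not a measured figure.
N = 10                        # servers, same compute load in both cases
compute_watts = 250           # assumed DC draw per server
psu_eff_1u = 0.75             # assumed efficiency of small per-server PSUs
psu_eff_blade = 0.90          # assumed efficiency of large shared PSUs
fan_watts_1u = 30             # assumed draw of small high-RPM 1U fans, per server
fan_watts_chassis = 120       # assumed draw of one set of large shared chassis fans

# Wall power: DC load divided by PSU efficiency, plus fan overhead.
rack_total = N * (compute_watts / psu_eff_1u + fan_watts_1u)
blade_total = N * compute_watts / psu_eff_blade + fan_watts_chassis

print(f"10 x 1U rack servers: {rack_total:.0f} W at the wall")
print(f"10-blade chassis:     {blade_total:.0f} W at the wall")
```

With these assumed numbers the chassis comes out a few hundred watts ahead; the real gap depends entirely on the actual PSU and fan figures.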
I see the new Sun blade as unique in many ways:
- You can use the highest-performance CPUs, compared to some blades which have to use the LV versions of the CPUs.
- They offer 8 DIMMs/socket, so you can actually buy a large-memory blade config more cheaply. For example, if you need 16G, you can use 16 1G DIMMs, which will be cheaper than 8 2G DIMMs. And you can go to 32G by using 16 2G DIMMs, and the DIMM cost would be 50% cheaper than using 8 4G DIMMs. Not to mention, you can go up to 64G, something you can't do on HP/IBM/Dell blades.
- They use PCI Express modules, which allow you to have unique I/O per blade. You can't do this on other blades, which must use the same I/O for each blade. Use of PCI Express modules also makes servicing much easier, since each module is at the back of the chassis. Compare that with other blades, where the I/O modules reside on the blade itself. In addition, the PCI Express modules are hot-plug capable.
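The DIMM-cost point in the list above is just arithmetic, and it can be checked with a small script. The per-DIMM prices below are hypothetical round numbers (smaller DIMMs are assumed cheaper per GB, with the largest size carrying a steep premium, which was the usual pattern at the time); only the 16-slot vs 8-DIMM comparison comes from the comment:

```python
# Cheapest single-size DIMM config for a 2-socket blade with
# 8 DIMMs/socket (16 slots total, as claimed for the Sun blade).
# Prices are hypothetical assumptions for illustration only.
price = {1: 50, 2: 120, 4: 480}   # assumed $ per DIMM, keyed by capacity in GB
SLOTS = 16                         # 8 DIMMs/socket x 2 sockets

def cheapest(total_gb):
    """Cheapest way to hit total_gb exactly, using DIMMs of a single size."""
    fits = []
    for gb, p in price.items():
        n = total_gb // gb
        if n * gb == total_gb and n <= SLOTS:
            fits.append((n * p, n, gb))   # (total cost, count, DIMM size)
    return min(fits)

for target in (16, 32, 64):
    cost, n, gb = cheapest(target)
    print(f"{target} GB: {n} x {gb} GB DIMMs = ${cost}")
```

Under these assumed prices, 16 GB lands on 16 x 1 GB DIMMs, 32 GB on 16 x 2 GB DIMMs at half the cost of 8 x 4 GB, and 64 GB is reachable only because all 16 slots exist - matching the comment's reasoning.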
Overall, yes blades are more expensive than rack-mounts, yet the blade market is growing by leaps and bounds, because there are customers who value the density/power advantage of blades and don't mind the extra upfront cost.
The swing point is usually five "servers" - at that point the savings, not just in "spaghetti reduction" but also in less obvious things like greater efficiencies in power supplies and cooling, mean blades come out better. Been there and done the maths, and I started as a BIG doubter of blades; now I'm a solid convert. I'd be interested in the T1 blades as possible webservers - I expect they'd do very well there - but would I really trust a company like Sun to make a decent Windows or Linux server when others such as HP, IBM and even Dell have so much more experience in the field?
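The "swing point at five servers" claim above is a simple break-even calculation: the chassis is a fixed up-front cost, offset by a per-server saving. The figures below are hypothetical assumptions chosen only to show the shape of the sum, not anyone's real pricing:

```python
# Break-even sketch: blades carry a fixed chassis cost and a small
# per-blade premium, offset by per-server lifetime savings.
# All three numbers are hypothetical assumptions for illustration.
chassis_cost = 4000       # assumed one-time blade chassis price, $
blade_premium = 500       # assumed extra cost per blade vs an equivalent 1U server, $
per_server_saving = 1400  # assumed lifetime saving per blade (power, cooling, cabling), $

def blades_win(n):
    """True if n blades beat n standalone 1U servers on total cost."""
    extra = chassis_cost + n * blade_premium
    saved = n * per_server_saving
    return saved > extra

swing = next(n for n in range(1, 100) if blades_win(n))
print(f"With these assumptions, blades break even at {swing} servers")
```

With these particular numbers the crossover happens at exactly five servers, consistent with the commenter's rule of thumb; different price assumptions shift the swing point accordingly.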