It may be hard to imagine why, but some customers who buy blade servers don't want the fastest x64 processors they can get their hands on. They may not be virtualizing the blades, and they may only be running simple print, file, and Web infrastructure workloads, and therefore don't need lots of memory or even two processor sockets. …
Two disk bays
...the HS12 has two disk bays (I've got four HS12s; one is a Solaris NAS serving the other three, which run ESX4). All the ESX hosts are boosted to 24GB RAM, with CPU running at ~20%. Who needs the HS22?
seems a very expensive way to roll out low-end apps
I don't see the justification for low-end blades like these. The whole point of blades is to pack workloads densely.
You may as well buy a 1U pizza box low-end server and save the blade slot and a few thousand.
With a 1P-style workload that doesn't require a lot of resources, why would you buy a blade chassis and a bunch of 1P blades? Wouldn't a single server with virtualization be a better play?
Blades solve a density problem, but I can't see 1P workloads driving a need for blades unless a.) they were ultra-dense blades (i.e. ~4U chassis height) or b.) there was a legal requirement to have standalone hardware.