It may be hard to imagine why, but some customers who buy blade servers don't want the fastest x64 processors they can get their hands on. They may not be virtualizing the blades, and they may only be running simple print, file, and Web infrastructure workloads, and therefore don't need lots of memory or even two processor sockets. …
Two disk bays
...the HS12 has two disk bays. I've got four HS12s: one is a Solaris NAS serving the other three, which run ESX4. All the ESX boxes are boosted to 24GB RAM, with CPU running at ~20%. Who needs the HS22?
seems a very expensive way to roll out low-end apps
I don't see the justification for low-end blades like these. The whole purpose of blades is to get to dense workloads.
You may as well buy a 1U pizza box low-end server and save the blade slot and a few thousand.
With a 1P-style workload that doesn't require a lot of resources, why would you buy a blade chassis and a bunch of 1P blades? Wouldn't a single server with virtualization be a better play?
Blades solve a density issue, but I can't see 1P workloads driving a need for blades unless a.) they were ultra-dense blades (i.e. ~4U chassis height) or b.) there was a legal requirement to have standalone hardware.