It may be hard to imagine why, but some customers who buy blade servers don't want the fastest x64 processors they can get their hands on. They may not be virtualising the blades, and they may only be running simple print, file, and Web infrastructure workloads. Therefore, they don't need lots of memory or even two processor sockets. …
Two disk bays
...the HS12 has two disk bays. I've got four HS12s: one is a Solaris NAS serving the other three, which run ESX 4. All the ESX hosts are boosted to 24GB RAM, with CPU running at ~20%. Who needs the HS22?
Seems a very expensive way to roll out low-end apps
I don't see the justification for low-end blades like these. The whole point of blades is to pack in dense workloads.
You may as well buy a low-end 1U pizza-box server, save the blade slot, and pocket a few thousand.
With a 1P-style workload that doesn't require a lot of resources, why would you buy a blade chassis and a bunch of 1P blades? Wouldn't a single server with virtualization be a better play?
Blades solve a density issue, but I can't see 1P workloads driving a need for blades unless (a) they were ultra-dense blades (i.e. ~4U chassis height) or (b) there was a legal requirement for standalone hardware.