HP trotted out its newest line of x86-based ProLiant systems this week. These new boxes, fueled by Intel Sandy Bridge processors, sport speedy PCIe 3.0 slots and custom HP disk controllers (for tri-mirroring and error correction), and provide a wide range of features aimed at improving system flexibility and manageability. Our …
"HPHPCHH for short"
Anyway, "all of this monitoring and automated management has a cost in terms of lost cycles. And that overhead, even if only a few percent, is probably unacceptable to the typical HPC customer. Moreover, they'll definitely balk at paying more for the hardware to support these features, mainly because that money could be better used to buy more/faster CPUs or memory."
Disclaimer: I'm no HPC guy; I've never touched it. However... I've found that the absolute quintessence of tuning anything I've ever sped up (sometimes by orders of magnitude) is feedback, and the relatively low cost of gathering it is repaid many times over in increased speed and/or reduced memory. I'd be amazed if that's different here.
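To illustrate that point loosely (this is not anything HP ships, just a sketch of the kind of cheap feedback the commenter means), here is a minimal Python timer that accumulates per-section wall time so hot spots become visible without a full profiler. The `Timings` class and section names are hypothetical:

```python
import time
from collections import defaultdict

class Timings:
    """Accumulate wall-clock time and call counts per named code section."""
    def __init__(self):
        self.totals = defaultdict(float)  # section name -> total seconds
        self.calls = defaultdict(int)     # section name -> invocation count

    def measure(self, name):
        # Returns a context manager that records elapsed time under `name`.
        return _Span(self, name)

class _Span:
    def __init__(self, timings, name):
        self.timings, self.name = timings, name
    def __enter__(self):
        self.start = time.perf_counter()
        return self
    def __exit__(self, *exc):
        self.timings.totals[self.name] += time.perf_counter() - self.start
        self.timings.calls[self.name] += 1
        return False  # never swallow exceptions

def work(timings, n):
    # Example workload wrapped in instrumentation.
    with timings.measure("sum_squares"):
        return sum(i * i for i in range(n))

t = Timings()
result = work(t, 10_000)
print(result, t.calls["sum_squares"])
```

The overhead is two `perf_counter()` calls and two dict updates per section entry, which is the sort of "few percent or less" cost being weighed against the speedups the feedback enables.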
but... <repeats disclaimer>
Re: "HPHPCHH for short"
Firstly, I'd echo your disclaimer and state that I'm not an HPC guy either.
That said, it seems to me that in enterprise computing, so-called "raw grunt" (or "megahurtz", as Intel marketing bods used to push it) is the least important factor when choosing server hardware.
Usually memory, disk, and network IO (not to mention the aforementioned monitoring and management) are where the money goes.
In the HPC world, however, I suspect that monitoring and management is far down the buyers' wishlist. I expect the ability to fail gracefully is important, so that a failed node in the cluster doesn't fsck the entire job run, but even that is more down to the overarching OS and clustering software running the system. (The only one I know of is Beowulf, and that was all the rage a decade ago; I have no idea what the latest and greatest is. As I said, I'm not an HPC guru.)
It sounds to me like this is enterprise gear masquerading as HPC gear, because I don't expect HPC buyers are much interested in anything other than megaFLOPS per watt or megaFLOPS per 1U of rack space.
Tiered management (re 360/160/1xx)
HP have tiered their support for a long time: the 3xx series with full enterprise support in management and deployment, the 1xx series with limited deployment support and good management support, and various others with lightweight management capability. That makes virtually identical server chassis usable in three areas: compact 19" rack-dense compute (or blade variants), entry-level low-cost servers, and enterprise-ready servers.
While my experience is with Generation 5 1U and 2U servers, other than dropping some DL1xx servers the G7s (and probably the G8s) follow the same model.
This article seems to have digested a press release without really understanding HP product lines at all.