The new Xeon E5 processors from Intel pack considerable oomph, but if you want to squeeze the most performance out of them, particularly in virtualized server environments doing real transaction processing and web front-end workloads, you have to remember the oldest bit of advice for the systems racket: don't skimp on the main …
Is this what's used on Omicron Persei 8?
These young'uns just don't understand!
But what about clustered systems?
Having a 'cluster-centric' view of the universe, I suspect that running the same benchmark on a 4-system cluster using cheap memory would get much the same performance profile (or better) as LRDIMM, and for a lot less money.
That's not to say LRDIMM doesn't have a place in the sun. Quite the opposite: it may well become the mainstream, at reasonable pricing of course, since it is denser and runs cooler.
Another study I saw, by Intel, suggests that a mix of DRAM and SSD will give dramatic performance boosts for many applications, and is a viable alternative to lots of DRAM. Whether this holds up with VMs is a bit uncertain, but again, it will save a lot of cash.
Re: But what about clustered systems?
I would like to see the same benchmark run versus a microserver tray (something like Dell C5220).
The base price of a 12-microserver chassis with 32GB RAM per node is far less than the cost of a 4-socket box specced out to its maximum RAM capacity.
one of the most idiotic benchmarks I've heard about lately
So they were comparing VMs with 38-32GB of physical RAM and 4-5 HDs available per VM against a 50GB database (i.e. severely disk-bound) vs a VM with 64+GB RAM and a 50GB database (i.e. where the DB fits in RAM).
Did it ever occur to them that RAM is faster than disk?
Or that if they'd used TWO 2-CPU (Xeon E5-2690) servers with 384GB RAM each (24x16GB), it would be both faster and cheaper than their contrived and ridiculously overpriced LRDIMM config?
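The arithmetic behind this comment can be sketched as a quick back-of-envelope. Note that the per-DIMM prices and the LRDIMM slot layout below are hypothetical placeholders for illustration only, not quoted market figures; only the 2 x 24 x 16GB config comes from the comment itself.

```python
# All prices are assumed placeholder values, not real quotes.
RDIMM_16GB_USD = 200      # assumed price of a commodity 16GB RDIMM
LRDIMM_32GB_USD = 1000    # assumed price of a 32GB LRDIMM

# Commenter's alternative: two 2-socket Xeon E5-2690 boxes, 24 x 16GB each
rdimm_gb = 2 * 24 * 16
rdimm_cost = 2 * 24 * RDIMM_16GB_USD

# Hypothetical LRDIMM route: one box filled with 24 x 32GB LRDIMMs
lrdimm_gb = 24 * 32
lrdimm_cost = 24 * LRDIMM_32GB_USD

print(f"RDIMM pair: {rdimm_gb} GB at ${rdimm_cost / rdimm_gb:.2f}/GB")
print(f"LRDIMM box: {lrdimm_gb} GB at ${lrdimm_cost / lrdimm_gb:.2f}/GB")
```

Under these assumed prices the two cheap boxes deliver the same total capacity at a fraction of the cost per GB, which is the commenter's point; plug in real street prices to check it for any given moment.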
Virtualised database servers?
Don't really want that, as the license & support costs of multiple servers will eat through any savings on idle CPUs & electricity.
Surely you'd just put multiple db's on the server without virtualisation (or with just one VM, for hardware independence).
While it might be fun to stack 'em high, watching them fall is less amusing. If you have lots of virtualised systems it makes sense to spread them out over more, cheaper systems. It also allows 1-to-many failover, which means better utilisation than a mirrored pair sitting 50% idle.
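The utilisation point can be made concrete: if one spare's worth of capacity is shared across a pool of N nodes rather than dedicated to a single mirror, the idle fraction shrinks from 1/2 to 1/N. A minimal sketch (the pool size of 8 is an arbitrary example):

```python
def utilisation(nodes: int, spares: int) -> float:
    """Fraction of purchased capacity doing useful work,
    assuming `spares` nodes are held idle for failover."""
    return (nodes - spares) / nodes

# Mirrored pair: one of two nodes idle -> 50% utilisation
print(utilisation(2, 1))

# 1-to-many failover across a pool of 8 cheaper nodes -> 87.5%
print(utilisation(8, 1))
```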