Regarding your DIMM failure rates, there are a few things to consider.
First, if you have one failure every two months out of 600 in service, your (very rough, given the small sample size) MTBF is in the region of 600 * 30 * 24 * 2 ≈ 860,000 hours. That would be respectable for a hard disk (if only they still actually made million-hour-MTBF hard disks and not the offensive lumps of crap that get sold now).
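The arithmetic above can be sketched as a back-of-the-envelope device-hours calculation (the 600 DIMMs, two-month interval, and single failure are the figures from your question; the 30-day month is an approximation):

```python
# Rough MTBF estimate: total device-hours divided by observed failures.
dimms_in_service = 600
failures_observed = 1
interval_hours = 2 * 30 * 24  # two months at ~30 days each

device_hours = dimms_in_service * interval_hours
mtbf_hours = device_hours / failures_observed
print(f"~{mtbf_hours:,.0f} hours MTBF")  # ~864,000 hours
```

With one failure the confidence interval on that number is enormous, so treat it as an order-of-magnitude sanity check, not a spec.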
I very much doubt that dodgy power is doing anything to your DIMMs. They sit behind both the server PSU and an onboard voltage regulator; any noise that gets through both of those will cause bigger problems than failing DIMMs, and you would be losing server PSUs at an alarming rate first.
From experience, what you are probably seeing is that the memory in your servers is not well matched. Modern memory is very sensitive to timing, and timing drifts slowly over time, generally on an exponential curve. When memory is matched into sets (chips matched for each DIMM, then DIMMs matched for each server), not only does the timing need to match when the parts are grouped up, the drift rate needs to match as well. If they have come off a single, well-controlled line and are of the same age, then it will. If not, your "matched server memory" will fairly quickly un-match itself and you will end up with what appears to be a failed DIMM; when it goes back to the vendor, they will not be able to find any fault with it.
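A toy model of that mechanism, purely illustrative (the 50 ps drift asymptote, the 5 ps tolerance, and the two decay-rate constants are my made-up numbers, not vendor data): two DIMMs that start identical but drift at different exponential rates will eventually exceed the controller's timing margin, at which point one of them "fails" despite being individually healthy.

```python
import math

def timing_drift_ps(t_hours, rate):
    """Timing drift after t_hours, saturating exponentially (assumed model)."""
    max_drift_ps = 50.0  # assumed asymptotic drift
    return max_drift_ps * (1 - math.exp(-rate * t_hours))

tolerance_ps = 5.0            # assumed controller margin between paired DIMMs
rate_a, rate_b = 1e-5, 3e-5   # one DIMM drifting faster than its partner

for t in range(0, 20001, 5000):
    skew = abs(timing_drift_ps(t, rate_a) - timing_drift_ps(t, rate_b))
    status = "out of spec" if skew > tolerance_ps else "ok"
    print(f"{t:>6} h: skew {skew:5.2f} ps -> {status}")
```

In this sketch the pair is fine for the first few thousand hours and then drops out of spec, which matches the "worked when installed, RMA'd later, no fault found" pattern.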
Try buying some decent-quality memory for a few of the servers and see whether that fails too; if it does, then maybe you have some other problem. I don't know whose servers you are buying, and I don't expect the lawyers will let you say, but if you have fallen for the whole "high density" gag then you may well be toasting the innards of your 1U servers so that you can have most of the rack empty... (And no, high density is not high efficiency; it is quite the reverse.)