Re: EVGA SR-2
There are two classes of problems on the SR-2:
1) It struggles to POST with more than 48GB of RAM. The CPUs are rated for up to 192GB each, but a BIOS bug in MCH register initialization and timeouts prevents it from reliably POST-ing with 96GB. It can be made to work most of the time, but only at the cost of running the memory command rate at 2T instead of 1T, which has a significant impact on memory performance.
2) Some of the settings profiles in the BIOS clobber each other (IIRC 4 and 1, but I'm not 100% sure - it's been ages since I did any settings changes on any of mine).
Hardware bugs, substandard components, and design faults:
1) The clock generators get unstable long before the rest of the hardware does. It is supposed to be an OC-ing motherboard, yet above about 175MHz bclk the clock stability falls off a cliff.
2) The SATA-3 controller with its 2x 6Gbit ports sits behind a single PCIe lane with 5Gbit/s of bandwidth. So if you run two SSDs off the SATA-3 ports, the total bandwidth will be worse than running both off the SATA-2 ports hanging off the ICH10 SB.
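To put rough numbers on that bottleneck, here is a back-of-envelope sketch. The per-port and per-drive throughput figures are my assumptions (typical values for Gen2 PCIe after 8b/10b encoding, SATA-2 ports, and a decent SATA-3 SSD), not measurements from the board:

```python
# Back-of-envelope comparison of the two SATA topologies on the SR-2.
# All throughput figures below are rough assumptions, not measurements.

PCIE_GEN2_X1_MBPS = 500   # 5 GT/s minus 8b/10b encoding ~= 500 MB/s usable
SATA2_PORT_MBPS = 300     # ~3 Gbit/s per ICH10 SATA-2 port
SSD_MBPS = 550            # a typical SATA-3 SSD doing sequential reads

def total_throughput_sata3(n_ssds):
    """Both SATA-3 ports share one PCIe Gen2 x1 uplink."""
    return min(n_ssds * SSD_MBPS, PCIE_GEN2_X1_MBPS)

def total_throughput_sata2(n_ssds):
    """Each ICH10 SATA-2 port caps a drive at ~300 MB/s; no shared uplink."""
    return n_ssds * min(SSD_MBPS, SATA2_PORT_MBPS)

print(total_throughput_sata3(2))  # 500 - capped by the single PCIe lane
print(total_throughput_sata2(2))  # 600 - two independent SATA-2 ports
```

Under those assumptions, two SSDs on the SATA-3 controller top out around 500 MB/s combined, while the same two drives on the ICH10 SATA-2 ports manage about 600 MB/s.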
3) The SR-2 is advertised as supporting VT-d. This is questionable, because there is a serious bug in the Nvidia NF200 PCIe bridges the SR-2 uses: DMA transfers seemingly bypass the IOMMU in the upstream PCIe hub. That means that when running VMs with hardware passed through, once the VM writes to a guest address range that happens to overlap the host physical address range of a hardware device's memory aperture, the VM writes its memory contents straight into the device's aperture. If you are lucky, that results in a write to a GPU's aperture and brief screen corruption before it crashes the GPU and the host with it. If you are unlucky, it writes to a disk controller's memory aperture and sends random garbage to a random location on the disks.
This can be worked around to a large extent - I wrote a patch for Xen a couple of years ago that works around the issue by marking the memory between 1GB and 4GB in the guest's memory map as "reserved" to prevent the PCIe aperture memory ranges from being clobbered, but it is a nasty, dangerous bug.
Similarly, most SAS controllers will not work properly with the IOMMU enabled, for similar reasons (I tested various LSI and Adaptec SAS controllers, and none worked properly).
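The idea behind the workaround can be sketched in a few lines. This is a toy illustration of the concept, not the actual Xen patch: it builds an e820-style guest memory map with a "reserved" hole from 1GB to 4GB (where the host's PCIe apertures typically live) and relocates the remaining guest RAM above 4GB, so guest writes can never land on an aperture address:

```python
# Toy sketch of the workaround: punch a "reserved" hole into the guest's
# memory map between 1GB and 4GB so guest RAM never overlaps the host's
# PCIe aperture ranges. NOT the real Xen patch, just the concept.

GB = 1 << 30
HOLE_START, HOLE_END = 1 * GB, 4 * GB

def reserve_aperture_window(guest_ram_bytes):
    """Return an e820-style map: RAM below 1GB, a reserved 1-4GB hole,
    and any remaining RAM relocated above 4GB."""
    e820 = []
    low = min(guest_ram_bytes, HOLE_START)
    if low:
        e820.append((0, low, "RAM"))
    e820.append((HOLE_START, HOLE_END, "reserved"))
    remaining = guest_ram_bytes - low
    if remaining:
        e820.append((HOLE_END, HOLE_END + remaining, "RAM"))
    return e820

for start, end, kind in reserve_aperture_window(8 * GB):
    print(f"{start:#014x}-{end:#014x} {kind}")
```

For an 8GB guest this yields 1GB of low RAM, the 3GB reserved hole, and the remaining 7GB starting at the 4GB boundary.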
4) The NF200 PCIe bridges act as multiplexers, in that they pretend there is more PCIe bandwidth available than there actually is. The upstream PCIe hub only has 32 lanes wired to the two NF200 bridges, 16 to each, but the NF200 bridges each make 32 lanes available to the PCIe slots. So if you are running, say, four GPUs each at x16, the net result is that even though each GPU shows up as being in x16 mode, only half of the bandwidth to the CPUs actually exists. This isn't so much a bug as dishonest marketing/advertising, similar to the supposedly SATA-3 controller.
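The lane arithmetic above works out like this (lane counts are from the topology described; the ~500 MB/s per-lane figure is my assumption for usable Gen2 bandwidth after encoding overhead):

```python
# Rough oversubscription arithmetic for the NF200 topology on the SR-2.

UPSTREAM_LANES = 32          # hub -> two NF200 bridges, 16 lanes each
DOWNSTREAM_LANES = 4 * 16    # four x16 slots, as presented to the GPUs
MB_PER_LANE = 500            # assumed usable MB/s per PCIe Gen2 lane

advertised = DOWNSTREAM_LANES * MB_PER_LANE   # what the slots claim
actual = UPSTREAM_LANES * MB_PER_LANE         # what reaches the CPUs

print(advertised, actual, advertised // actual)  # 32000 16000 2
```

So with all four slots busy, the slots collectively advertise twice the bandwidth that the upstream links can actually carry - exactly the 2:1 oversubscription described above.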
5) The SB fan is prone to seizing up. This has happened on all of my SR-2s within 2-3 years - not great when the warranty on them is 10 years, and even the stock of refurbished replacements ran out over a year ago, with some motherboards still having 7 years of their supposed warranty left.
There are more issues, but the above are the biggest ones that stuck in my mind.
FWIW, I just ordered an X8DTH6 to replace the last of mine. There are too many issues for it to be worth the ongoing annoyances.
But I guess if the requirements are simple (no virtualization, mild or no overclock (so why buy this board in the first place?), <= 48GB of RAM, no more than one SSD hanging off the SATA-3 controller), it might just be OK enough for somebody who doesn't scratch the surface of what they are working with too much.