Ah, you're right, I goofed on the calculations. Still, $30k for such a system isn't bad at all, mate. That's a hell of a lot of compute for that price.
"Is there any other reason to choose this server unless you need to drive 4 Tesla cards in 2U? And, as you propose scenario of a rack full of such servers why not go for 4U units such as the 4028GR-TR that can take 8 such cards? What would be the benefit for either approach?"
There isn't really a universally good reason to pick one over the other. The world isn't quite so black and white, and each of these systems has a reason to exist. It really boils down to "can your workloads make use of X amount of oomph per node?" To be perfectly honest, we're running into issues driving 4 GPUs in a single node, and these aren't the top-end GPUs. 8 per node starts getting into "having specialist HPC code" territory.
I suspect 2 GPUs is the sweet spot for many workloads, with 4 GPUs being the border of what's workable with today's software.
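To give a flavour of what "making use of the oomph" actually means, here's a rough, purely illustrative CUDA sketch (the kernel and the sizes are made up, not anything from the review): nothing spreads your work across the cards for you, so every extra GPU in the node is another slice you have to carve off, place, launch and clean up by hand. That's easy enough at 2, annoying at 4, and "specialist HPC code" at 8.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Stand-in kernel; imagine your real workload here.
__global__ void scale(float *x, size_t n, float a) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);                 // 2, 4 or 8 depending on the box
    if (ngpu == 0) return 1;

    const size_t n   = 1ULL << 28;             // ~268M floats, illustrative only
    const size_t per = n / ngpu;               // naive even split across devices

    std::vector<float*> buf(ngpu, nullptr);
    for (int d = 0; d < ngpu; ++d) {
        cudaSetDevice(d);                      // every slice is managed by hand
        cudaMalloc(&buf[d], per * sizeof(float));
        // host->device copy of this device's slice omitted for brevity
        scale<<<(unsigned)((per + 255) / 256), 256>>>(buf[d], per, 2.0f);
    }
    // Wait for every card, then clean up each one separately.
    for (int d = 0; d < ngpu; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(buf[d]);
    }
    return 0;
}
```

And that's the trivially parallel case. The moment the cards need to talk to each other, the per-node count matters even more.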
"You mention that "the maximum amount of compute capacity you can cram in to 2U of rack space" but that's only the amount of GPU compute power you can cram into this - there are 2U servers with 4 Xeons and which can take more RAM."
GPUs > CPUs in terms of raw compute. Though I do take your point on the RAM. Believe it or not, I have a pretty in-depth article on the trade-off between the two and why sometimes RAM is what matters (holding billions of variables in memory, and so on).
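The "billions of variables" point is easy to put numbers on. A back-of-the-envelope sketch (the four-billion figure is just an example I've picked, not from the article): four billion doubles is roughly 30 GiB, which is comfortably more than the VRAM on the sort of Tesla cards we're talking about, but pocket change for a quad-Xeon box stuffed with RAM. Something along these lines is how I'd sanity-check it:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Hypothetical working set: four billion double-precision variables.
    const size_t n_vars = 4ULL * 1000 * 1000 * 1000;
    const size_t need   = n_vars * sizeof(double);     // 32 GB

    size_t free_b = 0, total_b = 0;
    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) return 1;

    printf("working set : %.1f GiB\n", need    / (1024.0 * 1024 * 1024));
    printf("card memory : %.1f GiB\n", total_b / (1024.0 * 1024 * 1024));
    if (need > total_b)
        printf("-> doesn't fit on one GPU; host RAM (or tiling across cards) it is\n");
    return 0;
}
```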
"Why didn't SM settle for PCIe M.2 if they oppose industry standard USB and SD?"
Actually, this is an emerging standard where power is supplied through the SATA port itself. It's quite common on a number of systems. NVMe setups use a similar approach, for example (though in the case of NVMe there's also a connector for the PCIe lanes).
"Those (M.2) are not capacity limited, they're cheaper, abundant and several times faster"
M.2 connectors are, for the most part, better. There's the issue of "where do you put the smeggling thing" on a system crammed this densely, which is why I suspect they've stuck with the SuperDOM. I should point out, however, that SuperDOM is not entirely proprietary: plenty of folks make SATA flash drives that are powered by the SATA port in much the same way. For reasons I don't entirely understand, the industry seems to be moving to M.2 for consumer/SMB hardware and to these powered SATA drives for server kit. I've actually been gathering info for a deep dive into the whys of it.
"So it worked according to the specs. How about that!"
Which is actually quite amazing. Plenty of systems I've tested don't work within the claimed operating temperatures. Supermicro consistently does. Given the power density of this unit, I'm quite surprised that they managed it.
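For what it's worth, one way to keep an eye on the cards during that sort of soak test is just to poll the sensors the whole time the box sits at the top of its claimed temperature range. A rough sketch using NVML (GPU side only; chassis and CPU readings would come from IPMI, which I'm not showing, and I'm not claiming this is how the lab does it):

```c
#include <stdio.h>
#include <nvml.h>   /* link with -lnvidia-ml */

int main(void) {
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    /* One pass over every card; in a real burn-in this sits in a loop for hours. */
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        unsigned int temp = 0;
        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        printf("GPU %u: %u C\n", i, temp);
    }
    nvmlShutdown();
    return 0;
}
```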