Resilience
I'm generally a big fan of shared memory systems. However, there are a couple of issues. First is performance (of course): off-board memory access and cache coherence overheads can be a big problem, although the right workload and clever system software can help a lot (as with NUMA).
The bigger potential problem, though, is availability. In general, a failure anywhere in the physical memory space of a shared memory machine can bring everything down, and if you double the number of components you roughly double the number of hardware failures in a given period. Big iron hardware has lots of very expensive features to minimise this through error correction, redundancy and so on; a large shared memory machine built from commodity two-socket servers isn't going to have such features.

With non-shared memory models and an appropriate application design (such as Oracle RAC or horizontal load balancing), the failure of a single node won't bring the service down. With shared memory models (unless they've been able to do something very clever), the loss of a single node is likely to bring the whole thing down.
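To make the availability argument concrete, here is a minimal back-of-the-envelope sketch. It assumes independent node failures and a made-up per-node annual failure probability `p`; the point is only the shape of the curves, not the specific numbers.

```python
def p_all_up(n, p):
    """Shared memory: any node failure takes the whole machine down,
    so the system survives the period only if every node survives."""
    return (1 - p) ** n

def p_at_most_one_down(n, p):
    """Shared-nothing with failover (RAC-style or load-balanced):
    the service survives as long as no more than one node fails."""
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

p = 0.05  # hypothetical: 5% chance a given node fails in a year
for n in (2, 8, 32):
    print(f"{n:>2} nodes: shared memory {p_all_up(n, p):.3f}, "
          f"shared-nothing {p_at_most_one_down(n, p):.3f}")
```

The shared memory survival probability decays geometrically as you add nodes, while the single-failure-tolerant design stays much flatter, which is the whole argument in two lines of arithmetic.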