Re: Shark jumped
Well, no Trev, it's not a joke. It all comes down to how resources are allocated and used. I'm sure you've seen this post (http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx) explaining that on bigger servers, the .NET framework (which underpins much of Exchange nowadays) allocates memory and CPU threads inefficiently. After all, it was posted the same day as the updated calculator to which the article refers.
5000 users with 5GB mailboxes on 3-4 servers STILL won't drive more than about 15 cores per server (even on slightly older kit), but might need 128GB of RAM. Or you scale out by ONE more VM/node and come in within the recommended guidelines. How many users do you want in a single fault domain anyway?
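To put rough numbers on the scale-out point, here's a quick back-of-envelope sketch. Every per-mailbox factor in it is an illustrative placeholder I've picked just to show the shape of the curve - these are NOT values from the sizing calculator or the article, so plug in your own:

```python
# Back-of-envelope: per-server CPU and RAM as you add nodes for 5000 mailboxes.
# All per-mailbox factors are illustrative assumptions, NOT calculator output.

USERS = 5000
CORES_PER_1000_USERS = 9     # assumed core requirement per 1000 active mailboxes
RAM_PER_MAILBOX_GB = 0.08    # assumed ~80 MB of RAM per mailbox
BASE_RAM_GB = 24             # assumed OS + Exchange baseline per server

def per_server(nodes: int) -> tuple[float, float]:
    """Cores and GB of RAM each node needs if users are spread evenly."""
    users_per_node = USERS / nodes
    cores = users_per_node / 1000 * CORES_PER_1000_USERS
    ram_gb = BASE_RAM_GB + users_per_node * RAM_PER_MAILBOX_GB
    return cores, ram_gb

for nodes in (3, 4, 5):
    cores, ram = per_server(nodes)
    print(f"{nodes} servers: ~{cores:.0f} cores, ~{ram:.0f} GB RAM each")
```

The point isn't the exact figures - it's that one extra node pulls the per-server RAM back under whatever ceiling the guidance gives you, while cores were never the constraint in the first place.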
Alternatively, if you're going to go to 20GB and 50GB mailboxes on a small number of massive servers, you need to understand that you'll run out of databases in the DAG before you run out of CPU and RAM (you're probably looking at SMALLER servers, but still with 100TB+ of disk each). Why allocate 64 CPUs and 256GB of RAM to a server that will end up running at 2% CPU and 5% RAM usage? And who would buy those servers in preference to smaller, cheaper ones?
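The database-count ceiling is just as easy to sanity-check. The two limits below (database copies per server, recommended maximum database size) are what I understand to be the commonly cited Enterprise figures - treat them as assumptions and check them against the documentation for your version:

```python
# Rough check of "you run out of databases before CPU/RAM" when you
# consolidate big mailboxes onto a few huge servers.
# Both limits are assumptions - verify against your Exchange version's docs.

MAX_DBS_PER_SERVER = 100   # assumed database copy limit per mailbox server
MAX_DB_SIZE_TB = 2         # assumed recommended max database size in a DAG

def db_copies_per_server(users, mailbox_gb, servers, copies):
    """How many database copies each server has to host in this design."""
    total_tb = users * mailbox_gb / 1024
    unique_dbs = -(-total_tb // MAX_DB_SIZE_TB)   # ceiling division
    return unique_dbs * copies / servers

per_server = db_copies_per_server(users=5000, mailbox_gb=50, servers=3, copies=3)
print(f"~{per_server:.0f} database copies per server (assumed limit: {MAX_DBS_PER_SERVER})")
```

With 50GB mailboxes for 5000 users and three copies spread over three servers, you're already well past the assumed per-server database limit while the CPUs on those same boxes sit idle.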
Even MS says to virtualize Exchange if you're planning to deploy on massive hardware platforms - it's right there in the article. The article is also very clear that bare-metal 2U commodity servers are how Microsoft deploys at (much larger) scale, far beyond 99.9% of organisations' internal deployments.
But hey, the software's architects, the support teams who troubleshoot this stuff day in and day out, and the guys who have deployed that system for many millions of users - clearly they don't know what they're talking about. Whereas some VMware guy - I guess he must be an _expert_.