Call me "Admiral of Developing", please...
When will someone put a stop to silly titles?
Microsoft's Exchange Server has a reputation as being hard to virtualise under hypervisors other than Redmond's own Hyper-V, a state of affairs that has led to more than occasional suggestions that Microsoft might just like it that way. But of course Microsoft still tries to keep Exchange humming as best it can, so in late …
When having "only" 24 processors and 96 GB of RAM is an "itsy bitsy" mail server, there's something wrong. That's more or less enough to store a thousand users' inboxes (excluding attachments) completely in RAM. Just how many organisations have enough email users to need more horsepower than one of those?
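A rough sanity check of that "inboxes in RAM" claim - all figures below are illustrative assumptions, not Exchange sizing guidance:

```python
# Rough sanity check: how much RAM per user does a 96 GB box leave,
# once you set aside some (assumed) headroom for the OS and services?
total_ram_gb = 96
overhead_gb = 16               # assumed headroom for OS + Exchange services
users = 1_000
per_user_mb = (total_ram_gb - overhead_gb) * 1024 / users
print(f"~{per_user_mb:.0f} MB of inbox cache per user")  # ~82 MB
```

Around 80 MB of text-only inbox per user is generous for most organisations, which is the commenter's point.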
It wouldn't surprise me if the overheads got excessive above that point, so you'd actually be better off scaling out rather than up. Probably what MS do in their own "cloud" Exchange offering ... so maybe that's the scale they tune it for, rather than individual mega-box servers?
Maybe for your home user. Even for a small office, using shared mailboxes (with proper security, so the whole place isn't logging on as "firstname.lastname@example.org"), shared calendars, and scheduling are pretty common features.
Double the memory (8GB, whoopee) and a modern processor, and yes, you can service those 500 users quite well. POP3 isn't even worth discussing. It was crap in 1998, much less now.
"When having "only" 24 processors and 96 GB of RAM is an "itsy bitsy" ..."
Expanding from www.theregister.co.uk/2015/03/27/supermicro_twin_server_review/
I can cram 18 cores per socket by two sockets by two threads for a total of 72 threads with 1TB per node. That's all in 1U, or 2 in 2U, 4 in 4U, etc. (I haven't seen any of the 1/2U units do 1TB RAM quite yet.)
24 threads and 96GB of RAM is a joke. A joke. Especially with NVMe SSDs out and I/O able to meet pretty much any demand you can throw at it.
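The density arithmetic behind the comment above, spelled out (figures taken from the comment itself):

```python
# Threads per node = cores/socket x sockets x hardware threads/core,
# per the Supermicro twin-server figures quoted above.
cores_per_socket = 18
sockets = 2
smt = 2                        # Hyper-Threading: two threads per core
threads = cores_per_socket * sockets * smt
print(threads)                 # 72
```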
I'm not enough of an Exchange admin to weigh in on the VMware-versus-Microsoft view here, but I will say that on this one thing - limiting Exchange to such low core/RAM counts - Microsoft doesn't serve its customers well. Even cheap-o Supermicro systems can spank those specs today, so there wasn't a heck of a lot of future-proofing built into Exchange 2013.
Which may be what Microsoft wanted. Who knows?
Well, no Trev, it's not a joke. Because it all comes down to how resources are allocated and used. I'm sure you've seen this post (http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx) explaining that with bigger servers, the .NET framework (which underpins much of Exchange nowadays) allocates memory and CPU threads ineffectively. After all, it was posted the same day as the updated calculator to which the article refers.
5000 x 5GB users on 3-4 servers is STILL not enough to hit more than about 15 cores (even on slightly older kit) but might need 128GB RAM. Or, you scale it out ONE more VM/node and get within the recommended guidelines. How many users do you want in a fault domain anyway?
Alternatively, if you're going to go to 20 and 50GB mailboxes, and a small number of massive servers, you need to understand that you'll run out of databases in the DAG before you run out of CPU and RAM resources (you're probably looking at SMALLER servers being needed but still with 100TB+ of disk each). Why allocate 64 CPUs and 256GB of RAM to a server that will end up running at 2% CPU and 5% RAM usage? And who would purchase those servers in preference to smaller/cheaper?
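A rough illustration of the disk-bound point above, with figures taken from the comment or assumed for illustration:

```python
# If each server carries ~100 TB of mailbox databases (the "100TB+ of
# disk each" figure above) and mailboxes average 50 GB, a server tops
# out at roughly 2,000 users - far too few to stress 64 CPUs and
# 256 GB of RAM. All numbers are illustrative, not sizing guidance.
disk_per_server_tb = 100
mailbox_gb = 50
users_per_server = disk_per_server_tb * 1024 // mailbox_gb
print(users_per_server)        # 2048
```

With so few users per box, the big server idles - hence the comment's 2% CPU / 5% RAM estimate.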
Even MS says to virtualize Exchange if you're planning to deploy massive hardware platforms. It's right there in the article. It's also very clear that bare-metal 2U commodity servers are the way they deploy at (much larger) scale, far beyond 99.9% of organisations' internal deployments.
But hey, the software's architects, the support teams who troubleshoot this stuff day in and day out, and the guys who have deployed that system for multiple millions of users - they don't know what they're talking about. But some VMware guy - I guess he must be an _expert_.
" the .NET framework (which underpins much of Exchange nowadays) allocates memory and CPU threads ineffectively"
Which is a fucking joke. Especially considering the hardware that's available today.
"But hey, the software's architects, the support teams who troubleshoot this stuff day in and day out, and the guys who have deployed that system for multiple millions of users - they don't know what they're talking about. But some VMware guy - I guess he must be an _expert_."
Yeah, you know, "some VMware guy" very well might be the expert here. Microsoft and its developers and systems administrators don't need to care about money, or efficiency. They don't pay for the software licenses and they don't seem to give a bent damn about making the most use out of the hardware or datacenter space.
Put simply: Microsoft's priorities are clearly not the same priorities as actual businesses. So yes, I don't believe Microsoft are the experts here.
Here's an idea: cut Microsoft in half. Azure Public Cloud to be its own thing, and "them who sell software" to be another. Now let's give it a year and see what the Azure teams have to say about the software after they start having to pay for it, and they start having to sustain and grow only on the backs of their own profits.