Dell M1000e Chassis Specs
The M1000e can hold 16 two-socket servers (Skylake) or 8 four-socket servers (Haswell). Unlike HP or Huawei, you cannot hardware-partition the Dell M1000e nodes.
Huawei has unveiled a more powerful version of its top-end KunLun server at CeBIT, amongst a raft of other big-iron-ish hardware and software announcements. KunLun is Huawei's big freaking box of a server, with up to 32 CPUs in a rack. This V5 edition is basically a Xeon SP refresh with NVMe drive support. This is the latest …
I should also have added that Cisco do sell an 8-socket server (the Cisco C880 M4), although it is OEM'd from Fujitsu - it is basically a Fujitsu PrimeQuest server. Possibly they will OEM the Skylake version as well.
Dell also sell 8- to 16-socket servers from Bull (Atos) - I wouldn't call it an OEM deal, though, as I don't think Dell even rebrand these systems.
All of the up-to-8-socket (glueless) designs just use standard Intel UPI (formerly QPI) links between the processors - the same as in a 4-socket box, except you get extra NUMA latency domains because there aren't enough UPI links to directly connect every CPU to every other one.
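To make the "extra NUMA latency domains" point concrete, here is a small sketch using made-up topologies (not any specific vendor's wiring): operating systems see these tiers via the ACPI SLIT distance table, where 10 means local memory and larger values mean farther away. A fully connected 4-socket box shows two tiers per node; an 8-socket box with only three UPI links per CPU (modelled here as a plain 3-bit hypercube for simplicity) shows several.

```python
# Hypothetical SLIT-style NUMA distance tables (10 = local, per the
# ACPI convention; larger = more hops away). Topologies are illustrative.

def latency_domains(slit):
    """Return, for each node, the sorted set of distinct distances it sees."""
    return {node: sorted(set(row)) for node, row in enumerate(slit)}

# 4-socket, fully connected: every remote CPU is exactly one hop away.
slit_4s = [
    [10, 21, 21, 21],
    [21, 10, 21, 21],
    [21, 21, 10, 21],
    [21, 21, 21, 10],
]

def hop_distance(a, b):
    # In a plain 3-bit hypercube, hop count is the Hamming distance
    # between node IDs (each CPU has links to 3 of its 7 peers).
    return bin(a ^ b).count("1")

# 8-socket with 3 links per CPU: some peers are 2 or even 3 hops away,
# so each node sees several distinct latency tiers, not just local/remote.
slit_8s = [[10 + 11 * hop_distance(a, b) for b in range(8)] for a in range(8)]

print(latency_domains(slit_4s)[0])  # [10, 21]         - two tiers
print(latency_domains(slit_8s)[0])  # [10, 21, 32, 43] - four tiers
```

On a real Linux box you can see the machine's actual table with `numactl --hardware` or by reading `/sys/devices/system/node/node*/distance`.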
For the boxes that scale beyond 8S (like the KunLun and HPE's Superdome Flex), typically one of the UPI links on each processor is connected to custom silicon (think FPGAs) that acts as an agent/proxy to filter the coherency protocol traffic and, in some cases, cache data as well. This is why these systems can typically use Gold Intel CPUs as well as Platinum ones - the Gold parts' limit of 4 UPI device IDs per group of processors isn't a problem, because remote processors are reached via the agent/proxy silicon.
I find this old 'big iron' very interesting, having begun my working life with mainframes, but its niche is narrowing as standard platforms become more and more capable. A two-socket unit from loads of vendors can have >50 cores and 1.5TB of RAM, and there are quad-socket boxes with >100 cores and 3TB of RAM. I would suggest that if an application has been written in such a way that it doesn't fit on a number of these servers, it's probably very badly put together. I know the 'big iron' folk speak about ease of management, but Nutanix has taken that to a whole new level - it scales to 1,000s of cores and many TB of RAM in a single cluster, and makes managing multiple clusters a (relatively) trivial task. The future is software driven; hardware does matter, but it no longer needs to be fancy. SuperMicro, Lenovo, Fujitsu, Dell, HPE and others all produce decent kit - what matters is the software. I still wish my friends in the 'big iron' world well, and it will be with us for many years, but IMO for a decreasing set of applications.
You're thinking about CPU here a lot more than memory, and memory is the key on these types of system. The ability to ingest huge amounts of data and process it without having to go through complex data partitioning is what differentiates them; in many cases they only have as many CPUs as they do because those CPUs have to be there to provide that much memory (i.e. they're constrained by current x86 processor design). As we move into the world of persistent memory (3D XPoint etc.) combined with new memory-semantic interconnects (like Gen-Z), we'll see these sorts of systems focus much more on delivering many thousands of TB of memory, with the right quantity and type of SoCs composed from a fabric to fit specific workloads. Much more interesting than stitching 2-socket systems together with software (and all the constraints that brings).
I realise your point is about more than bloat, but bloat is still one of the key issues here.
'Big Iron' is a manifestation of something seen all the way up from the phone in your pocket, through tablets and laptops and desktops up to workstations: fantastic amounts of CPU power, memory and storage compensating for the fact that so much modern software, whether it's a phone app to take notes, a Windows program to edit images or a big red database running across a datacentre, is badly written, obscenely bloated and grotesquely inefficient.
I feel sure there is a computer science thesis brewing somewhere for a postgrad team prepared to take a couple of widely-used application systems at each current hardware point, refactor them from the ground up with a view to compactness, speed and efficiency, and demonstrate that it's possible to do all the same stuff with the same data just as fast—on kit that's three years old (or runs at a tenth the speed with a hundredth of the RAM).
Sometimes it's as if there is a kind of bizarre "waste race" going on, to see how much unnecessarily colossal computing power is needed for the latest generation of morbidly obese, unfit code.
But I'm realist enough to know that it all depends upon incentives. If computing power continues to get more bang for buck, and cheap second- and third-rate coders can knock out stuff that sort-of just about works, no matter that 100MB of logic is delivered in a 10GB fatberg of shonky code and endless libraries, with their many sins obscured by freakishly quick computing—who, if anyone, has a reason to seek efficiency?
As a one-time Ada practitioner (doing what you'd expect with Ada in the early 90s), I wonder how the kind of code found in, say, modern commercial aircraft systems compares with the stuff written for the corporate world's CRM, ERP and other systems. Obviously, I personally suspect that the efficiency of the former is orders of magnitude beyond that of the latter.
Maybe that's another thesis for someone?
Biting the hand that feeds IT © 1998–2019