Re: Obviously not
"...Buy an SGI Ultraviolet. Simples.
http://www.sgi.com/products/servers/uv/configs.html
Seriously - you can spec one of these with 64 Tbytes of memory...."
Well, the SGI UV2000 and its predecessor Altix servers with 64TB RAM are clusters, and they are only fit for HPC number-crunching, embarrassingly parallel workloads. Even SGI says so. There is another Linux server as big as the SGI UV2000: ScaleMP sells a Linux server with gobs of RAM and 10,000s of cores, and guess what? It is ALSO a cluster. All servers bigger than 32 or 64 sockets are clusters (sure, they run a single-image Linux kernel, but they are clusters, yes).
If you are going to do enterprise workloads (large databases, ERP systems, etc.), then you need to go to SMP-like servers, and the largest SMP-like servers have 32 sockets (IBM P795, HP Itanium Superdome/Integrity, SPARC M6-32) or even 64 sockets (Fujitsu M10-4S).
HPC number crunching typically runs a tight for loop in the cache on each node, so the nodes do not communicate much; this is great work for clusters such as the SGI or ScaleMP servers. Enterprise workloads have code that branches everywhere, so execution visits code all over the place, and every CPU communicates with every other CPU all the time; for this type of workload you need SMP-like servers (IBM P795, Oracle SPARC M6-32, etc.). There are clustered databases running on clusters, but they cannot replace an SMP database. A cluster can never replace an SMP server.
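To make the distinction concrete, here is a minimal Python sketch. It is purely illustrative (the sizes and the two toy kernels are my own assumptions, not anything these vendors actually run), but it shows the two access patterns being contrasted:

```python
import random

N = 100_000

# HPC-style kernel: a tight loop over contiguous data. The access
# pattern is sequential and predictable, branches are trivial, and
# each cluster node can run its own chunk without talking to others.
data = list(range(N))
hpc_result = 0
for x in data:
    hpc_result += 2 * x  # cache-friendly streaming arithmetic

# Enterprise-style kernel: pointer chasing through randomly linked
# records, like walking a database index or an ERP object graph.
# Every step is a data-dependent load that defeats prefetching, and
# on a cluster any step may land in a remote node's memory.
links = list(range(N))
random.shuffle(links)
i, hops = 0, 0
while hops < N:
    i = links[i]  # next address depends on the previous load
    hops += 1

print(hpc_result)  # 2 * (0 + 1 + ... + N-1) = N * (N - 1)
```

The first loop splits across nodes trivially; the second forces constant, unpredictable memory traffic, which on a cluster means constant interconnect traffic.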
So the largest Linux servers are ordinary 8-socket x86 SMP-like servers, plus the SGI and ScaleMP clusters with 10,000s of cores and 10s of TB. There are no 32-socket Linux servers on the market, and there never have been. Linux scaling is inefficient beyond 8 sockets. Sure, you can compile Linux for 32-socket Unix servers, with terrible results: Google "Big Tux" and read how Linux got ~40% CPU utilization on the 64-socket HP Integrity/Superdome server (I always mix up HP Superdome and Integrity); that is really bad. When you compile Linux for Unix servers, you get less than optimal results. And besides, SMP servers are very, very expensive, so who would run Linux on an SMP server? For instance, the 32-socket IBM P595 used for the old TPC-C record cost $35 million (no typo). Who on earth would run Linux on a big SMP server, when Linux does not scale well beyond 8 sockets in the first place? It is better to buy a Linux server with 10,000s of cores for a fraction of the price of a large SMP server. Linux servers with 10,000s of cores are very cheap; 16/32-socket SMP servers (Unix/Mainframes) are extremely expensive in comparison. A cluster costs you basically the price of each node plus a fast switch. An SMP server needs lots of tailor-made tech to scale to 16/32 sockets, and that costs a great deal, because ordinary CPUs do not scale.
So, there is a reason you will never see Linux take enterprise market share: that market is dominated by large SMP servers running enterprise business workloads (Unix / Mainframes), and unless Linux scales beyond 8 sockets, there is no chance in hell Linux will venture into the high-end enterprise segment, where the really big money is. Otherwise, all the Wall Street banks would simply buy cheap Linux clusters with 10,000s of cores to replace their extremely expensive 16/32-socket Unix / Mainframe servers.
.
SGI and ScaleMP say their largest servers are clusters (that is, they are only used for HPC number crunching, and cannot do SMP enterprise workloads):
http://www.realworldtech.com/sgi-interview/6/
"...The success of Altix systems in the high performance computing market are a very positive sign for both Linux and Itanium. Clearly, the popularity of large processor count Altix systems dispels any notions of whether Linux is a scalable OS for scientific applications. Linux is quite popular for HPC and will continue to remain so in the future, ... However, scientific applications (HPC) have very different operating characteristics from commercial applications (SMP). Typically, much of the work in scientific code is done inside loops, whereas commercial applications, such as database or ERP software are far more branch intensive. This makes the memory hierarchy more important, particularly the latency to main memory. Whether Linux can scale well with a SMP workload is an open question. However, there is no doubt that with each passing month, the scalability in such environments will improve. Unfortunately, SGI has no plans to move into this SMP market, at this point in time..."
ScaleMP's large Linux server is also only used for HPC:
http://www.theregister.co.uk/2011/09/20/scalemp_supports_amd_opterons/
"...Since its founding in 2003, ScaleMP has tried a different approach. Instead of using special ASICs and interconnection protocols to lash together multiple server modes together into a SMP shared memory system, ScaleMP cooked up a special software hypervisor layer, called vSMP, that rides atop the x64 processors, memory controllers, and I/O controllers in multiple server nodes .... vSMP takes multiple physical servers and – using InfiniBand as a backplane interconnect – makes them look like a giant virtual SMP server with a shared memory space. vSMP has its limits....The vSMP hypervisor that glues systems together is not for every workload, but on workloads where there is a lot of message passing between server nodes – financial modeling, supercomputing, data analytics, and similar parallel workloads. Shai Fultheim, the company's founder and chief executive officer, says ScaleMP has over 300 customers now. "We focused on HPC as the low-hanging fruit..."
A developer in the comments explains:
"...I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency...."
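A quick back-of-the-envelope model shows why that commenter's result was so bad. The latency figures below are my own assumptions for illustration (roughly 100 ns for local DRAM, roughly 2000 ns for an off-board access over InfiniBand through the vSMP layer, not measured values): with 8 boards and a working set spread uniformly across the shared address space, 7/8 of all accesses are remote.

```python
# Back-of-the-envelope model of shared-memory access cost on a
# ScaleMP-style cluster of 8 boards. Latencies are assumed values
# for illustration only, not measurements.
LOCAL_NS = 100    # assumed local DRAM latency
REMOTE_NS = 2000  # assumed off-board latency via InfiniBand/vSMP
BOARDS = 8

local_frac = 1 / BOARDS  # uniformly spread data: 1/8 is local
avg_ns = local_frac * LOCAL_NS + (1 - local_frac) * REMOTE_NS
slowdown = avg_ns / LOCAL_NS

print(f"average latency {avg_ns:.1f} ns, {slowdown:.3f}x slower than local")
```

Even with these rough numbers the average access is more than an order of magnitude slower than local memory, which is exactly what "it might look like shared memory, but access to off-board bits has huge latency" means in practice.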
.
If you really need a true single 32TB RAM server which is not a cluster, you need to go to the Oracle SPARC M6-32, which is the only SMP-like server on the market with that much RAM. Running databases from memory will be extremely fast. (The in-memory database HANA is a cluster and does not count; it cannot do SMP workloads, as every node has to communicate with every other.) Also, the new SMP-like Fujitsu Solaris M10-4S server with 64 sockets has 32TB today and will have 64TB RAM with the new memory sticks. I think the largest IBM Mainframe has 3.5TB RAM? And the largest IBM P795 with 32 sockets has 4 (or is it 8) TB RAM? And the largest HP Superdome/Integrity has 2 or is it 4TB RAM? Matt Bryant, can you please fill us in?