Re: systemd a copy of Solaris SMF
Yes, you are correct that Linux typically runs on the supercomputers on the Top500 list, with tens of thousands, or even hundreds of thousands, of CPUs. I was sloppy and for that I apologize.
I meant: "nobody uses Linux on large 16/32-socket servers to run business ERP systems, such as SAP, databases, etc."
The supercomputers are all clusters, consisting of many compute nodes on a fast switch. You simply add another node, and the performance of the supercomputer increases. These are called scale-out servers. Scale-out servers are only fit for embarrassingly parallel workloads, such as HPC number crunching, scientific computations, etc., where there is not much communication going on between the nodes. Each node runs a tight for loop which fits in the cache, and when the computation is done, the result is finally sent off for aggregation and summation. Like SETI@home, which runs distributed work on many PC nodes. This code seldom branches, so everything can fit into the cache.
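The embarrassingly parallel pattern described above can be sketched in a few lines of Python (a toy stand-in for real HPC code: each worker process plays the role of a compute node, crunching its own chunk independently, with a single aggregation step at the end):

```python
from multiprocessing import Pool

def crunch(chunk):
    # Tight loop over a small chunk of data: fits in cache,
    # no communication with other "nodes" while it runs.
    total = 0
    for x in chunk:
        total += x * x
    return total

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the work into independent chunks, one per "node".
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(4) as pool:
        partials = pool.map(crunch, chunks)  # nodes work in isolation
    result = sum(partials)  # the only aggregation step, done at the end
    print(result)
```

Because the workers never talk to each other mid-computation, adding more of them scales almost linearly, which is exactly why clusters suit this kind of workload.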
However, I was talking about scale-up servers: one huge server with 16 or even 32 CPUs. They are not clusters, but a single large server. Business software (databases, ERP, SAP, etc.) is monolithic and the code branches very heavily, going all over the place, so there is a lot of communication going on between the CPUs. This type of code makes scaling very difficult; the limit is typically 16/32 sockets, and this domain belongs exclusively to large Unix boxes, such as the IBM E880, Oracle SPARC M6, Fujitsu M10-4S, mainframes, etc.
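The contrast in memory-access patterns can be sketched roughly in Python (cache effects are muted in an interpreted language, so the timings are illustrative only, not a real benchmark): a tight sequential loop over contiguous data versus a pointer-chasing random walk, a crude stand-in for business code that "goes all over the place" in memory:

```python
import random
import time

N = 1_000_000

# HPC-style: a tight loop over contiguous data, friendly to caches.
arr = list(range(N))
t0 = time.perf_counter()
s = 0
for x in arr:
    s += x
sequential = time.perf_counter() - t0

# Business-code-style stand-in: follow a random permutation one hop at a
# time, so each access lands somewhere unpredictable in memory.
nxt = list(range(N))
random.shuffle(nxt)
t0 = time.perf_counter()
i = 0
for _ in range(N):
    i = nxt[i]
random_walk = time.perf_counter() - t0

print(f"sequential: {sequential:.3f}s, random walk: {random_walk:.3f}s")
```

On real hardware running native code, the random-walk pattern is dominated by memory latency rather than arithmetic, which is why the memory hierarchy matters so much more for branch-heavy commercial workloads.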
In fact, until recently, the largest Linux server I had ever seen was an 8-socket x86 server from IBM, Oracle, Dell, etc. Now there is a 16-socket Linux server out there, as far as I know: the Bullion, from Bull. But the scaling is awful, as Linux cannot handle 16 sockets well in a scale-up server. There are no SAP benchmarks this server has won; it is not even on the SAP list. On scale-out clusters, Linux scales excellently.
Ideally, to scale well, every socket needs a connection to every other socket, and the number of connections increases as O(n^2). That means that with 32 sockets you already have hundreds of connections! This makes constructing large scale-up servers very difficult. Now imagine an SGI UV2000 server, which has 256 sockets. This Linux server is used exclusively for scientific computations; no one uses it to run business software. With 256 sockets, a full mesh would need over 32,000 connections, which is clearly not doable. So there are lots of shortcuts in the SGI UV2000; maybe there are only a few hundred connections. So the SGI Linux server cannot run code that branches much. Hence it is only for scientific computations.
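As a back-of-the-envelope check, a full point-to-point mesh of n sockets needs n(n-1)/2 links, which grows as O(n^2):

```python
def full_mesh_links(sockets):
    # Each of the n sockets links to the other n-1; dividing by 2
    # avoids counting each link twice: n * (n - 1) / 2.
    return sockets * (sockets - 1) // 2

print(full_mesh_links(32))   # 496
print(full_mesh_links(256))  # 32640
```

So a 32-socket box already needs roughly 500 direct links, and a 256-socket machine over 32,000, which is why large designs fall back on partial interconnects with extra hops.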
SGI themselves explain this about their large Linux server (the Altix is the predecessor to the UV2000):
"....Typically, much of the work in HPC scientific code is done inside loops, whereas commercial applications, such as database or ERP software are far more branch intensive. This makes the memory hierarchy more important, particularly the latency to main memory. Whether Linux can scale well with a ERP workload is an open question. However, there is no doubt that with each passing month, the scalability in such environments will improve. Unfortunately, SGI has no plans to move into this [scale-up] market, at this point in time...."
ScaleMP has a similar server, a huge Linux server with hundreds of sockets. It is also used exclusively for HPC number-crunching workloads.
"...The vSMP hypervisor that glues systems together is not for every workload, but on workloads where there is a lot of message passing between server nodes – financial modeling, supercomputing, data analytics, and similar parallel workloads. Shai Fultheim, the company's founder and chief executive officer, says ScaleMP has over 300 customers now. "We focused on HPC as the low-hanging fruit..."
No one runs SAP on these Linux clusters. SAP is for monolithic scale-up servers. On the SAP benchmark top list, there are no SGI/ScaleMP Linux scale-out servers. All the top benchmarks are Unix scale-up servers with 16/32 sockets; Fujitsu even has a 64-socket SPARC server.
BTW, SAP HANA is a clustered database, and clustered databases have lower performance than a monolithic database such as Oracle Database. If you run a 64 TB SAP HANA cluster vs. an Oracle SPARC M7 server with 32 sockets and 64 TB RAM, the Oracle scale-up server will crush the cluster in terms of performance.
Scale-up servers with 32 sockets are incredibly expensive, while scale-out servers with hundreds of sockets are very cheap. For instance, the single 32-socket IBM P595 server used for the TPC-C record cost $35 million. You can buy many SGI clusters for that sum. Large business servers cost a great deal, and that is where the big money is. Look at IBM's incredibly profitable mainframe business. Clusters are not as profitable; a cluster is just a bunch of PCs strung together on a fast switch.
>>Still don't understand why businesses buy [big SPARC M6 servers with 32 sockets] instead of scaling-out [SGI 256-socket clusters]. Cost? Complexity?
>I'm not saying that Oracle hardware or software is the solution, but "scaling-out" is incredibly difficult in transaction processing. I worked at a mid-size tech company with what I imagine was a fairly typical workload, and we spent a ton of money on database hardware because it would have been either incredibly complicated or slow to maintain data integrity across multiple machines.
>Generally it's just that it's really difficult to do it right. Sometimes it's impossible. It's often loads more work (which can be hard to debug). Furthermore, it's frequently not even an advantage.
Regarding Linux being bloated: well, Linus Torvalds himself says so.
"...Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds."