Oracle tunes Solaris for Intel's big Xeons

Word is trickling out of Oracle that its recently acquired Solaris has been heavily tuned to support Intel's new eight-core Nehalem-EX Xeon 7500 beasties. Pity, then, that Oracle does not yet seem to have a Nehalem-EX box in the field. But the indications are that Oracle is working on an eight-socket box. First, said eight-socket …

COMMENTS

  1. Anonymous Coward

    Larry

    ...is doing the right thing. Now terminate the Slow Processor Architecture and the strategy is right. Work with Intel and AMD to scale to hundreds of processors.

  2. Jon Whiteoak

    M Series Hot Plug CPUs

    "The current Fujitsu-designed Sparc64-based Sparc Enterprise M servers support hot plug CPU and memory cards already running Solaris 10, and Sun itself has supported hot plugging of these features since the UltraSparc-III systems nearly a decade ago. Support could be even older than that, with the Starfire E10000 high-end servers. If so, add it to the comments at the end."

    I think you'll find that feature is only available in the M8000 and M9000, not the M4000 and M5000; happy to be corrected, though. With UltraSPARC-III it was available from the F3800 upwards, not in the V-class systems (V480/V880 etc.).

    1. Anonymous Coward

      Re: M Series Hot Plug CPUs

      True, the M4000 and M5000 don't support hot swap of CPU/memory boards. I think they did this as a cost saving on the midrange Sparc64-based systems, since it wasn't a feature most people used on the SunFire midrange line. Just look at the cost of an M8000/M9000 CPU/memory board compared to an M5000.

      You can still, however, use hardware domains on the M4000 and M5000 servers and use dynamic reconfiguration (DR) to move components between the domains. But you will need an outage if you have to replace a component that has failed.

    2. Jesper Frimann

      yes...

      Now Starfire, that was a cool name for a server. The E10K rocked.

      // jesper

  3. Anonymous Coward

    E10000 DR

    The E10000 already supported hot add and removal of system boards, which therefore includes CPU, memory and I/O cards. Its smaller brothers (E3x00 to E6x00) also supported dynamic reconfiguration, but hardly anybody used it as there were some problems with that functionality. On the E10K, however, it worked fine and was used quite often.

  4. Magellan

    History of Hot Add and Remove with Solaris

    This SPARC/Solaris capability goes back to 1993 with Cray's Business Systems Division (BSD) Cray Superserver 6400 (or CS6400), the SuperSPARC-based predecessor of the E10000, and Cray's OEM version of Solaris 2.3. The CS6400 had the ability to hot remove and hot add CPU/memory boards on a running system. This was not a mature capability, as it required briefly suspending the operating system during the actual physical removal, and some applications could not tolerate the OS suspension. The E10K resolved that, removing the need for a pause. The pause-based hot swap was brought to the UltraSPARC-II E3500-E6500 with Solaris 7; pauseless hot swap on midrange servers came out with the UltraSPARC-III F3800-F6800.

  5. Kebabbert

    x86 sucks

    Actually, x86 requires you to support over 1,000 instructions in its instruction set. It is really buggy, bloated and old. It takes many millions of transistors to decode and figure out where the next instruction is, and many more transistors to support old instructions that no one uses, etc. x86 is buggy and bloated.

    But Intel and AMD have poured lots of money into it, so it is getting helluva fast! Nehalem-EX is extremely fast for a cheap price. Soon the 32nm version will come, with even higher performance. Soon x86 will be the fastest commodity CPU on earth. The pace is extreme - much higher than for any other CPU architecture.

    If half of that money went into a clean architecture such as SPARC, we would have faster CPUs for much less power. It is only a matter of resources. If someone developed a badly designed OS that crashed all the time, but poured huge amounts of money into it, then that OS would take over the world with a market share of 90%. Even though that OS would suck big time.

    It is only a matter of resources. Right now, x86 has the most resources, so it will crush everything else soon.

    Regarding 8-socket Nehalem-EX and Solaris: that is a match made in heaven. Solaris has a long-standing reputation for scaling well, with excellent performance and stability. I expect this machine to be quite cheap, with a high efficiency ratio. It will surely best other, more expensive Unix machines.

    1. fch

      x86 hoovers ...

      ... them all up. If that's what you mean by "sucks", I fully agree.

      Instruction sets these days are much less of a differentiator than they used to be. You say "transistor count" - well, add two million or so additional transistors to the next-gen x86 for improvements on the CISC decoder stages, i.e. roughly quadruple them - that's far less than 0.1% of the total additional transistor budget for a new-gen CPU (remind me, how many billions of transistors does an 8-core chip have?). In the big picture of things, whether 1 million or 10 million transistors do instruction predecoding - so what? There are orders of magnitude more in caches, buffers, interlinks and other glue these days.
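
      Back of the envelope, in Python (the ~2.3 billion total is Nehalem-EX's published transistor count; the two million is the decoder addition assumed above):

        # Two million extra decoder transistors against a ~2.3B-transistor
        # die: well under 0.1% of the chip.
        extra_decoder = 2_000_000
        total_transistors = 2_300_000_000   # Nehalem-EX, as published
        print("%.3f%%" % (100.0 * extra_decoder / total_transistors))  # 0.087%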

      If instruction efficiency were really key to the success of anything, then we'd never have seen Java get to where it is - its bytecode, being a stack machine, is just about as unfriendly to scalable hardware implementation as could be. Who (apart from JVM/JIT implementors, who have both my pity and admiration) cares?
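
      To make that concrete, here is a toy stack machine in Python (purely illustrative - not real JVM bytecode): every instruction implicitly reads or writes the top of the stack, so each one depends on its predecessor, which is exactly what an out-of-order core does not want to see.

        # Toy stack machine evaluating a*b + c*d.
        # Every op works on the implicit stack top, so each instruction
        # depends on the previous one; in register code, a*b and c*d are
        # independent and could execute in parallel.
        def run(program, env):
            stack = []
            for op, *args in program:
                if op == "push":
                    stack.append(env[args[0]])
                elif op == "mul":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                elif op == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
            return stack.pop()

        prog = [("push", "a"), ("push", "b"), ("mul",),
                ("push", "c"), ("push", "d"), ("mul",), ("add",)]
        print(run(prog, {"a": 2, "b": 3, "c": 4, "d": 5}))  # prints 26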

      Also, wrt what current x86/x64 actually are:

      First, both Intel and AMD have used RISC engines in their cores for many years (variously called µops, micro-ops, R-ops or some such), including superscalar/VLIW-style instruction bundling. The x86 instruction set compatibility is only a shim layer.

      Second, current x86 optimization guides from both Intel and AMD actually state (if phrased less in-your-face): you want this thing fast, use simple instructions and order them for no/little interdependency - in short, use RISC; the x86/x64 instructions that map 1:1 to what the low-level engine does are by far the best for you.

      Third, agreed that x86/x64 may have a few thousand instructions (the instruction set reference manuals run to 1600+ pages these days). But have you ever checked how many of them are actually used in the executable binary code on your systems? When teaching low-level debugging and x86 assembly language, one of my favourite experiments was to let students write a little perl script that found all ELF i386/x86-64 executables, disassembled the code, sorted by instruction and then created an instruction set histogram. Invariably, one found that 99.9% of the code was made up from a set of no more than about 50 opcodes. Programming a CPU using only ~50 instructions - where have I heard that before? Ah, yes, I think it's called "RISC".
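
      Something along these lines - a minimal Python sketch of that experiment (assuming GNU objdump is installed; the directory and output format are just for illustration):

        # Instruction-set histogram: walk a tree, disassemble every ELF
        # executable with objdump, and count opcode mnemonics.
        import os
        import subprocess
        from collections import Counter

        def is_elf(path):
            # Check the 4-byte ELF magic without reading the whole file.
            try:
                with open(path, "rb") as f:
                    return f.read(4) == b"\x7fELF"
            except OSError:
                return False

        histogram = Counter()
        for root, _dirs, files in os.walk("/usr/bin"):  # any tree will do
            for name in files:
                path = os.path.join(root, name)
                if not is_elf(path):
                    continue
                out = subprocess.run(["objdump", "-d", path],
                                     capture_output=True, text=True).stdout
                for line in out.splitlines():
                    # Code lines look like: "  4004d6:\t55\t\tpush   %rbp"
                    parts = line.split("\t")
                    if len(parts) == 3 and parts[2].strip():
                        histogram[parts[2].split()[0]] += 1

        total = sum(histogram.values())
        for opcode, count in histogram.most_common(50):
            print("%-12s %10d %6.2f%%" % (opcode, count, 100.0 * count / total))

      Run it over /usr/bin of any x86 box and count for yourself how much of the thousand-plus instruction set actually shows up.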

      That's not to say splashing resources and giant transistor counts all over it is the only way to get decently performing CPUs. It's just the one that, by experiment, has proven over the last 20 years to create the best-performing server/workstation-class CPUs. But you only need to look beyond that space, and what do you see? Set-top boxes, mobile phones, tablet devices, consoles - the entire "embedded" space uses MIPS, ARM, PPC and other, elsewhere forgotten/abandoned architectures. And Intel's attempt to push x86 into that space isn't anywhere near as simple as Intel dreamt it to be.

      To sum this up:

      x86/x64, in the high-end, "stationary" systems space, has proven to be just about efficient and performant enough to beat everything else. There, it sucks ... up all the competitors.

      x86/x64, in the mobile/embedded space, never made inroads. There, SoC solutions and tiny power/thermal/form-factor budgets are mandatory, and implementors want to license modular designs to combine cores, graphics, comms etc. into a single package. Off-the-shelf components are frowned upon there. Here, x86 sucks.

      Gosh, you're right after all. No matter how you look at x86/x64, it always sucks!

    2. Jesper Frimann

      8 Sockets?

      You mean 4-socket Nehalem-EX, right?

      I mean, Nehalem scaling isn't really that nice going above 4 sockets - not from the data that has been released up until now.

      // Jesper

  6. Anonymous Coward

    @x86 sucks

    Mr Kebap, living up to your reputation as a food with rotten meat, I suggest you check your facts. Indeed, x86 is an old instruction set, but neither Intel nor AMD has problems delivering chips that do what they are intended to do.

    Amazingly, even though this ISA is so complex, they have managed to deliver silicon which is much better than anything else, except POWER.

    As long as the silicon is very fast, reliable AND CHEAP, I could not care less about the ISA. x86 will take over the CPU world, now that sub-30nm design is getting extremely costly.

    Old instruction sets often live longer than expected, as the S/360/390 ISA demonstrates. It's from the 1960s!

    1. Anonymous Coward

      except POWER?

      Are you sure? I'm underwhelmed by the performance of the POWER5/6 boxes at work, given their massive GHz lead.

      They sure don't feel fast. Maybe it's something else that slows them down - maybe the LPAR tech itself, or maybe it's the downtime to rejig that flaky interconnect that needs reseating every few months?

      1. Jesper Frimann

        Funny.

        Have the exact opposite experience where I work.

        But well, Keb', you do like your AC posts that ditch POWER.

        But sure, you can have performance problems... like the one I am looking at right now... UUUHHH... SAP BW runs slow!? Jesper looks... why the F*** do you only use 11 GB for SAP and Oracle when there is 60 GB of RAM in the machine?

        Lesson 1 on POWER: DAMN, YOU NEED MORE RAM. Apply your normal 2-4 GB per CPU core rule of thumb from other platforms and you will have CPUs idling.

        // Jesper

    2. Kebabbert

      jlocke

      "...Mr Kebap, living up to your reputation as a food with rotten meat, I suggest you check your facts...." Well, I suggest you check up YOUR facts. I have always explained that I am able to backup whatever I claim. You, OTOH makes unsubstantiated claims and is therefore revealed as a Liar and someone who can not back up your "facts". It is even better when you accuse a mathematician with double Masters degree that always talks about the importance of being able to prove one's claims. In the future, a hint: dont FUD about a mathematician that he is does not know what he is talking about, ok? It only makes you look silly, when he is able to show links that back ups his claim, which you have denied.

      "...Indeed x86 is an old instruction set, but neither Intel nor AMD has problems with delivering chips that do what they are intended to..." You clearly have never heard about the many bugs that x86 has had. AMD has had many bugs, for instance, AMD could not release their first true quad core Phenom because of a TLB bug. A quick googling shows:

      http://en.wikipedia.org/wiki/AMD_Phenom

      And Intel has also had lots of bugs. You missed the famous division bug that showed up in Excel?

      http://en.wikipedia.org/wiki/Pentium_FDIV_bug

      Intel had to withdraw the affected Pentium CPUs and swap them for bug-free ones. You missed that too? It cost lots of money. If I really took the time and googled, I could present many links showing a list of expensive bugs in x86. But I am not convinced that list would make you change your mind.
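
      For the record, the classic FDIV test is tiny. Here is a minimal Python sketch (the two magic operands are the well-known test case; everything else is just illustration):

        # The classic Pentium FDIV check. On a flawed Pentium the FPU
        # returned 4195835/3145727 = 1.33373906... instead of the correct
        # 1.33382044..., so this expression came out as 256 instead of 0.
        x, y = 4195835.0, 3145727.0
        residue = x - (x / y) * y
        print(residue)  # 0.0 on a correct FPU, 256.0 on a flawed Pentium
        if residue != 0.0:
            print("FDIV bug detected!")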

      In fact, an Intel engineer said about the Phenom TLB bug that "Intel has much worse bugs, I don't understand the fuss about the TLB bug, it is nothing". I think there is a huge errata list for Intel x86 somewhere?

      Here are people complaining about all the bloat in the buggy x86 instruction set and arguing that it should be cleaned up:

      http://www.anandtech.com/show/3593

      - "The total number of x86 instructions is well above one thousand" (!!)

      - "CPU dispatching ... makes the code bigger, and it is so costly in terms of development time and maintenance costs that it is almost never done in a way that adequately optimizes for all brands of CPUs."

      - "the decoding of instructions can be a serious bottleneck, and it becomes worse the more complicated the instruction codes are"

      - "The costs of supporting obsolete instructions is not negligible. You need large execution units to support a large number of instructions. This means more silicon space, longer data paths, more power consumption, and slower execution."

      There are managers who would never let x86 into their computer halls. You clearly missed that, too. Totally out of touch with reality. Jeez.

  7. Anonymous Coward

    But only for Sun's x86 hardware

    The latest licensing changes seem to indicate that you can no longer purchase support or get an entitlement for Solaris 10 x86 for any version after Solaris 10 10/08 on non-Sun hardware.

  8. Anonymous Coward

    @except POWER?

    I have to admit that it's been quite some time since I used a POWER CPU for my own programs. Very, very fast back then. Best hardware I ever used, actually.

    Looking at the current SPEC results makes me think this is still the case. Did you really perform a proper test?

    I am sure IBM gets LPARs right, as they invented virtual machine technology. Maybe your system is overloaded or badly configured? Or is another LPAR eating all the CPU cycles?

    If you want to try POWER yourself, there is IBM's "Virtual Loaner Program", which gives you remote access to new hardware for free!

    Just tell them you would like to port some Free Software of your own to POWER and they will make it available. At least they did it for me without any problems.
