Oracle revisits Sparc T processor roadmap

Three weeks ago, Oracle co-founder and chief executive officer, Larry Ellison, gave us all a preview of the upcoming Sparc T series processor roadmap as part of the rollout of the Sparc SuperCluster, an Exadata-style parallel database machine based on the current Sparc T3 processors. In the wake of Ellison's revelations, Rick …

COMMENTS

This topic is closed for new posts.
  1. Matt Bryant Silver badge
    Boffin

    Overclocking is not just the CPU.

    Whilst the rest of the article was pretty yawntastic, the bit about overclocking caught my attention. Overclocking is a tricky business in designs not built from the ground up for OCing. Your ideal is a design with a separate power feed and clock for the socket, so you can ramp up the power and frequency as you go (in simplistic terms, pushing cycles through a chip faster means using higher voltages, which is why OC'd chips get so hot). The problem is that many designs use the same clock and power feeds for the memory and IO bus as they do for the CPUs, which means changes in clock and voltage for the CPUs impact the memory and IO devices. And that can be real trouble, causing signalling issues, overheating or just plain failures in components built to standards that the OC'd system now exceeds.
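
    To put rough numbers on the heat problem - this is the textbook first-order model, not vendor data - dynamic power scales roughly as P ≈ a·C·V²·f, where a is the activity factor, C the switched capacitance, V the supply voltage and f the clock. Because reaching a higher f usually needs a higher V as well, power grows roughly with the cube of frequency: a 20% overclock with a matching voltage bump costs about 1.2^3 ≈ 1.7x the dynamic power, all of it dumped as heat.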

    So I don't think it will be quite that easy to OC the SPARC Ts. If you look at PC mobos designed for overclockers, they have software tools and hardware additions that let you change the socket clock and voltages without affecting the memory or IO. I'm not sure those exist today for the SPARC Ts, so that may mean a redesign of their mobos. This isn't just a problem for SPARC; it's the same for Itanium and Power, whose mobo designs are usually built for reliability rather than overclocking.

  2. Anton Ivanov
    Flame

    The scheduler hogging is old

    It used to be possible to do that back in Solaris 8. There were special scheduler tables which allowed you to alter the priorities and keep specific apps more or less nailed to always-on, as well as alter the thread scheduler's behaviour. This is not particularly new to Solaris. I have not played a lot with 9 and 10, but it was definitely around in the days of 8 and people used it.
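
    For anyone who hasn't seen them: the tables Anton means are the dispatch tables you dump and reload with dispadmin(1M) - "dispadmin -c TS -g" prints the time-sharing class table so you can edit the quanta and priority levels, then load the result back with -s. The programmatic sibling is priocntl(2). A minimal sketch of the latter (Solaris-only, needs root; the RT priority of 15 is an arbitrary value for illustration):

        #include <sys/types.h>
        #include <sys/procset.h>
        #include <sys/priocntl.h>
        #include <sys/rtpriocntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Nail the calling process into the real-time (RT) class at a fixed
           priority - roughly the "always-on" trick Anton describes. */
        int main(void)
        {
            pcinfo_t  pcinfo;
            pcparms_t pcparms;
            rtparms_t *rtp;

            /* Look up the class id of the RT scheduling class. */
            (void) strcpy(pcinfo.pc_clname, "RT");
            if (priocntl(0, 0, PC_GETCID, (caddr_t)&pcinfo) == -1L) {
                perror("PC_GETCID");
                return 1;
            }

            /* Move this process into RT at priority 15, default quantum. */
            pcparms.pc_cid = pcinfo.pc_cid;
            rtp = (rtparms_t *)pcparms.pc_clparms;
            rtp->rt_pri = 15;
            rtp->rt_tqnsecs = RT_TQDEF;
            if (priocntl(P_PID, getpid(), PC_SETPARMS, (caddr_t)&pcparms) == -1L) {
                perror("PC_SETPARMS");
                return 1;
            }
            return 0;
        }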

    What strikes me on the diagrams is not all this. Is it just me, or do they show just what a spectacular failure Java is on a conventional CPU? You throw 40x the raw CPU performance at it and get a mere 10x increase.

    So let's say you have a Java app which has hit the scalability buffers and you want to increase its performance by throwing hardware at it instead of rearchitecting it. That means that to grow the performance 10 times you are looking at at least 40 times your initial up-front investment in hardware (probably more, as you have to move to "bigger and more expensive" boxes).
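
    Put another way, the hardware cost per unit of throughput quadruples: 40x the kit for 10x the work is a scaling efficiency of 10/40 = 25%.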

  3. Anonymous Coward
    Anonymous Coward

    @Anton

    It was 40x the database transactions, not CPU, surely? Java is hardly a failure for server apps, and if you've built an app that requires vertical scalability then, as a generalisation, that's pretty stupid.

  4. Anonymous Coward
    Coffee/keyboard

    10x Java Ops per second

    It doesn't look good for Java, does it? And, given this is an Oracle chart, the 10x could be 'optimistic'.

    But if you really need that 10x improvement, you probably don't have the time to re-architect your system, so you'll just pay what Oracle demands. Or move hardware.

  5. Kebabbert

    Not overclocking

    Oracle is not going to overclock its way to 3x better thread performance.

    I talked to the chief architect of the T4, and he said that the T4 cores are redesigned. Earlier Niagara cores have basically been the same: simple, with no out-of-order execution, etc. Now, for the first time, the T4 will have a totally redesigned and much beefier core with out-of-order execution, etc.

    So the 3x better thread performance comes from a beefy core. It is not the same simple cores at a much higher clock; it is a new beefy core.

    1. Jesper Frimann
      Grenade

      So ...

      Basically admitting that the CoolThread concept was wrong.

      // Jesper

      1. Matt Bryant Silver badge
        Thumb Up

        RE: So ...

        Wrong for enterprise computing is probably more precise. To be fair, the Niagaras do give great performance in niches such as webserving.

      2. Kebabbert

        @Jesper Frimann

        Oh yeah? How did you arrive at that conclusion? CoolThread CPUs such as the Niagara T3 hold world records at this moment - how does that make the CoolThread concept a failure? And, in case you missed it, if you need the world's highest TPC-C performance, you have no choice but to use.... CoolThread CPUs. IBM has no chance of catching up with the superior CoolThread CPUs on the TPC-C benchmark. "Wrong concept", eh? Is this an example of your fine logic when you "think out-of-the-box", as you claim you do?

        What I call a wrong concept and a failure is the IBM Mainframe z196 CPU. It has in total almost half a GB of cache (376MB) and it runs at 5.26 GHz - and still it is slower than an Intel Nehalem-EX running at half the clock speed with 10% of the cache. What has IBM done with their transistor budget? A total mess and a failure. IBM achieved nothing with its huge transistor budget - which is the sign of a bad concept. The z196 Mainframe should be put out of its misery.

        Not to mention the IBM CELL CPU, which has been discontinued. Now that is a concept gone wrong, because it is terminated. It had 8 cores and ran at 3.2GHz - and still you needed thirteen (13) CELL CPUs to match one Niagara T2 running at less than half the clock speed - in string pattern matching. Now that is a total failure. How can IBM perform so badly with such a high clock speed? And how can the CoolThread CPUs outclass the IBM CPUs by so much? 10x or more? For a fraction of the price? And you say CoolThread is a "wrong concept"? Thinking out of the box, again?

        "Gee, let me see.... CoolThread cpu are 10x faster or more... Hmm... that must mean the CoolThread concept is a failure? Yes it is! It is a failure. Wow! I found it out by myself, with some aid from the IBM marketing department, and IBM marketing never lies, they always speak true. I can trust IBM marketing!"

        Another concept that was totally wrong from IBM is when they mocked Sun for using many lower-clocked CPUs, and declared that one or two beefy cores at very high clock speeds were the future. IBM said 5GHz was not enough, and talked about even higher clock speeds. And look at IBM now: IBM is using many cores at lower clock speeds. They have abandoned one core at a very high clock speed. Basically, IBM is taking the same route Sun did, with many lower-clocked cores. Now, IBM's earlier approach is what I call a failure and a "wrong concept". When Sun was first with many lower-clocked cores, IBM laughed. Now IBM is doing the same thing, only years later, and suddenly it is the way forward.

        .

        .

        Regarding the Oracle T4, it will be a blend of beefy cores and the CoolThreads concept. Oracle has not abandoned the CoolThreads concept. It is too successful and too good to be abandoned. Only IBM, with their aggressive marketing, would say it is a failure.

        1. Pony Tail

          wake up

          Kebabbert.......you obviously do not understand computer architecture.

          The notion that all that matters is the CPU is myopic. The mainframe is the perfect example of a system of CPU chips / cache chips / I/O subsystem chips / service processor chips / etc..... it is not about a single chip.

          As far as the T goes...yes it is good for WebLogic but sucks for the rest of Oracle's "data intensive" products.

          The SPARC64 is dead and Oracle has to do something to keep those Sun customers paying thru the nose for SPARC maintenance. And who cares about poor core performance as long as all the profit is based on core licensing.

          They cannot clock the chips to 5GHz without major problems. The chips are made in China then shipped to Mexico, and have a huge dead-on-arrival percentage. Unfortunately for Oracle, they are still paying for the mistakes Sun made during the last 10 years. KKR forced manufacturing to Mexico, and TI refused to keep up on fab technology because DLP does not benefit from smaller process geometries.

          Only time will tell if this 4-year-old core, baked in a Chinese fab then assembled in Mexico with an aggressive clock speed, will bring us back to the SPARC III eCache memory-reliability days.

          Keep stressing how the T chip saved Sun and how they are the leaders, though. We just got two PS3s and another Xbox for Kinect... so two more Cell chips and another Power chip at the house this Christmas. I hear my BMW has a bunch of Power chips too, but that is just rumour. Maybe Larry's next project could be going to Mars, since IBM has 100% market share there with Power chips.

        2. Jesper Frimann
          Troll

          Coolthread

          Let's see... the T1, T2, T2+ and T3 all implement simple in-order SPARC cores that rely on fine-grained multi-threading, among other techniques, to hide memory latencies. Another technique is actually having more L1 and L2 cache per thread per unit of throughput than, for example, POWER7.

          From what I've been able to dig up on the T4, it's going to be a more complex core that implements out-of-order execution and perhaps even SMT. That sounds a lot more like Nehalem-EX and POWER7 than the original CoolThread concept.

          So you haven't really understood the whole CoolThread concept, or the beauty of the design. Niagara is a brilliant concept for what it was designed for. But again, the problem is that it is now peddled for workloads it wasn't designed for.

          And the power usage of the T2+ and T3 is in the same range as, for example, Nehalem-EX.

          // Jesper

  6. Beachrider

    Isn't T3 already cache/memory starved?

    If I recall correctly, the current T3 machines have many cores and a dearth of cache. How does overclocking them help?

    1. Kebabbert

      @Beachrider

      No, that is not really correct.

      If you think about it, it doesn't add up. Something is logically wrong. Can you spot the error?

      A) Oracle competitors FUD: "Niagara is cache starved and suffers from a too small cache"

      B) Fact: Niagara T3 holds several world records, for instance:

      http://blogs.sun.com/BestPerf/entry/sparc_t3_4_sets_world

      And other world records, including the highest TPC-C score ever achieved, 30 million tpmC.

      So... if the Niagara T3 is cache starved, how can it hold several world records? How can a T3 at 1.6GHz, with 6MB of cache per chip in total, compete with - and even surpass - CPUs at 5GHz with huge caches?

      Something is not right, right? What is your conclusion? Does the Niagara T3 suffer from too small a cache, or does it do perfectly fine with a small cache? Are Oracle's competitors correct, or are they FUDing? What do the benchmarks and real-life results say? Think about this for a while. It is not too hard.

      .

      .

      FYI, each core has access to 0.4 MB of cache. That is not much. But if you know how to design CPUs, that suffices.

      The Niagara is a new and revolutionary CPU design. As soon as there is a cache miss, a core switches threads and continues to work while waiting for data from RAM. So, in effect, the Niagara does work 90% of the time. It never idles.

      Studies by Intel show that a normal 2GHz x86 server idles 50% of the time under full load - it waits for data from RAM half the time. The faster the CPU, the bigger the penalty for a cache miss: RAM is slow, the CPU is fast. A 5GHz CPU might wait for data from RAM 75% of the time under full load.

      No one has ever managed to reduce the waiting time; everyone uses huge caches and complex prefetch logic to avoid cache misses. No one has ever succeeded - except Oracle with the new Niagara design. The facts say it holds several world records - therefore Niagara does not suffer from a small cache.
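
      A toy model makes the "90%" claim easy to sanity-check (the numbers below are assumed for illustration, not Sun's published figures): if a thread runs R cycles between misses and each miss stalls for M cycles, a core with T hardware threads achieves a utilisation of roughly min(1, T x R / (R + M)).

          #include <stdio.h>

          /* Toy model of fine-grained multithreading hiding memory latency.
             R and M are assumed, illustrative values - not measured data. */
          int main(void)
          {
              const double R = 25.0;   /* cycles of useful work between misses */
              const double M = 200.0;  /* cycles stalled per memory miss */

              for (int T = 1; T <= 8; T *= 2) {
                  /* Each thread wants R cycles out of every R+M cycle window;
                     T threads share one issue pipeline, so cap at 100%. */
                  double util = T * R / (R + M);
                  if (util > 1.0)
                      util = 1.0;
                  printf("threads=%d  utilisation=%.0f%%\n", T, util * 100.0);
              }
              return 0;
          }

      With those assumed numbers, one thread keeps the core about 11% busy, while eight threads per core (a T2/T3 core) get it to about 89% - which is plausibly where a "90%" figure comes from.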

      1. Matt Bryant Silver badge
        FAIL

        RE: @Beachrider

        Ah, Kebbie simply doesn't recall what has been pointed out to him on these forums so many times. Can someone please print this out and ensure he reads it the next time he thinks of posting any more Sunshine benchmarks?

        "....Niagara does work 90% of the time...." Assuming it is running an app that fits the Niagara work profile. The problem is that most enterprise apps don't, they are largely single-thread heavy, and that means that the Niagaras spend a lot of time idling in real-World situations because the cores are too weak. Rock was supposed to be the "heavy-lift" enterprise Sun CPU with single-thread performance, not Niagara. The new cores T4 look like being base cores salvaged from the Rock carcass bolted onto the Niagara's interfaces - a Frankenstein hack to try and make Niagara more competitive.

        "....Studies of Intel Corp, shows that an normal 2GHz x86 server idles 50% of the time...." Maybe, but when that "50% idle" CPU manages to churn through a task faster because it has better single-threaded performance, then the Xeon makes the Niagara look expensive and slow. The simple proof of that is that the market bought and continues to buy massively more Xeons than Niagaras. If the so-called advantages of Niagara that Kebbie blathers on about meant SFA then the reverse would be true. Great technology is only really great when it meets or exceeds the market's requirements, and the mismatch between Niagara and the applications that us customers want to run mean the Niagara is not great tech, no matter how "revolutionary" some people think it is. The Sunshiners can console themselves by thinking that Niagara is the Betamax of CPU design if it makes them feel better, but it's still a failure.

        /SP&L

        1. Mike Timbers

          Enterprise Apps

          I don't know what enterprise apps you're talking about, but most large-scale enterprises run applications that are massively multi-user, where single-threaded performance is unnecessary. Do Java apps need single-threading? Try running a J2EE ecommerce app sometime and watch how fast Niagaras (even T1s) can blaze through JSPs.

          1. Matt Bryant Silver badge
            Go

            RE: Enterprise Apps

            Mike, your example, "a J2EE ecommerce app", is exactly the webserving, NON-enterprise app that I already said Niagara is good at. You're looking at the edge, not the enterprise core. Real enterprise apps are stacks like you would use with Oracle and SAP CRM. Despite some lovely Sunshiner benchmarks with wildly unrealistic setups, the market has shown zero interest in Niagara for such tasks. Indeed, so poor was Niagara's performance with old-school enterprise apps that Sun was forced to bring out the M3000 with a SPARC64 CPU in an attempt to stop the haemorrhaging of the Slowaris low-end. It didn't work. Snoreacle are now trying to make a "beefier" SPARC-T core because they have failed to swing the developers behind the idea of the parallelised, multi-threaded app that SPARC-T needs to shine.

      2. Anonymous Coward
        WTF?

        Can't buy the SPARC cluster until next June

        "No, that is not really correct.

        If you think about it, it doesnt add up. Something is logically wrong. Can you spot the error?

        A) Oracle competitors FUD: "Niagara is cache starved and suffers from a too small cache"

        B) Fact: Niagara T3 holds several world records, for instance:

        http://blogs.sun.com/BestPerf/entry/sparc_t3_4_sets_world

        And other world records, including highest TPC-C record ever achieved with a score of 30 million tmpc.

        So... if Niagara T3 is cache starved, how can it hold several world records? How can a T3 at 1.6GHz which has in total, 6MB of cache per chip - compete with, and even surpass cpus at 5GHz with huge caches?

        Something is not right, right? What is your conclusion? Does the Niagara T3 suffer from a too small a cache, or does it do perfectly fine with a small cache? So are Oracle's competitors correct, or are they FUDing? What does the benchmarks and real life results say? Think about this for a while. It is not too hard."

        You keep crowing about the TPC-C record, and never mention the availability dates:

        Rank  System                                       tpmC        Price/tpmC  Watts/KtpmC  Availability
        1     Oracle SPARC SuperCluster with T3-4 Servers  30,249,688  1.01 USD    NR           06/01/11
        2     IBM Power 780 Server Model 9179-MHB          10,366,254  1.38 USD    NR           10/13/10

        The Niagara config isn't even for sale today. IBM's is for sale now.

        It must have bugs or they'd be selling it now ;)

  7. David Halko
    Go

    hmmm... over-clocking a SPARC T3?

    Most of the circuitry required to build a system is on board the T3 chip.

    If the T3 chip is over-clocked, there are really not that many external components to address (in comparison to other architectures), with the exception of memory chips and I/O components.

    Let the external I/O components burn; most are unnecessary on a SPARC T3 server anyway. Many of the external I/O components (USB ports, 4x 1 GigE, hard disk controllers, etc.) could be ignored in an over-clocked T3 system, and the user would just concentrate on the embedded T3 components (network boot, embedded T3 10 GigE, faster memory chips for faster clock rates, Sun Ray for console and USB ports, etc.)

    It could be interesting... pump a T3 up to 3.3GHz and maybe get 20 GigE as an unintended side-effect??? :-) Might need a custom Ethernet switch... :-( Maybe just another server with a bunch of over-clocked Ethernet cards acting as a switch! ;-)

    I would love to see a project publication from Tom's Hardware on over-clocking a T3 system!

    [GO] icon, for obvious reasons!

    1. Matt Bryant Silver badge
      Stop

      RE: hmmm... over-clocking a SPARC T3?

      "....Let the external I/O components burn..." Erm, how exactly do you expect to communicate with anything outside the mobo without interfaces? You wouldn't even be able to talk to local disk, let alone SAN, if they can't talk to each other! LAN connections are probably also gone as anything that needs a clock to synch is going to be out-of-synch with external parties. Sunray won't work without a LAN connection, and you can forget a keyboard if the console and USB are out-of-synch. I spent a lot of time doing stupid overclocking with chips like the old 486s, it is not trivial even with a mobo designed for overclocking.

      "....maybe get 20 GigE as an unintended side-effect...." Please, leave the eggnog alone and get at least a toehold in reality. A LAN standard doesn't just depend on doubling the operating frequency, it needs interaction and communication in a standardised and synchronised manner. I lived through the fun of the early 100-BaseT releases, belive me an experience like that teaches you that smart technolgy is just expensive junk if it can't comminicate. Overclocking the Niagara cores would require either segregation from all the I/O interfaces or completely new interface redesigns. IIRC, the T2 had one clock unit per chip, not per core, which synchronised everything with the mobo, so upping that clock would affect all the interfaces on the chip (memory, built-in 10GbE, PCI-e, even the cache). Unless the new Niagara cores have a completely separate clock just for core frequency then Snoreacle are looking at a lot of work.

      /SP&L

  8. David Halko
    IT Angle

    RE: Isn't T3 already cache/memory starved?

    Some processor architectures are impacted more than others by a small cache.

    SPARC T processors are not always pushing registers to memory when there is a miss. Each thread can stall in its place and continue once the data is available, and the availability of many threads keeps the CPU cores busy on a loaded system. SPARC also uses register windows, reducing the need to push registers to memory. And the bus between the CPU and memory usually has really high capacity on the T systems.

    Other processor architectures don't have the ability to reduce register pushes to slow memory. And when a thread of execution stalls on other architectures, it is normally very expensive, since the CPU core sits idle. Other architectures use larger caches to keep the CPU pipelines full, often to compensate for low memory bandwidth on the bus from the CPU.

    Since the SPARC T processors are not impacted by cache size as greatly as other architectures, cache is not a relevant apples-to-apples metric for comparing architectures. It was a design decision to spend the die's transistors on additional threads, to keep the cores more busy, instead of on additional cache, to keep the cores less busy.

    1. Matt Bryant Silver badge
      WTF?

      RE: RE: Isn't T3 already cache/memory starved?

      Yes, all very interesting if you live in some theoretical Wonderland, but back here in reality the simple truth is that CMT does not run today's typical commercial apps as fast as Xeon, Opteron, Power or Itanium. There are a few cases where CMT is actually the best solution - apps such as webserving use lots of small, easily-parallelised threads that suit the CMT model, and there we see Niagara shine. But the majority of cases don't match the CMT model, and then we see even the hottest Niagara being outperformed by last-generation x64. You can warble on about how clever the SPARC-T designs are until you're blue in the face, but the only real apples-to-apples measure is market adoption, as that is what signifies profits available for re-investment in future developments, and it was the lack of profits from hardware sales that killed Sun. Larry can try using Oracle software sales to prop up the old Sun hardware biz, but I hope he doesn't risk it all on a poor bet like CMT, because I'd hate to have to unplumb all our Oracle DBs and replace them with something like DB2 (<shudders at the thought>).

      /SP&L

    2. Kebabbert

      Yes

      So, the net effect is that Niagara does not need a huge cache like the legacy-designed CPUs such as POWER and Itanium do. If their caches were cut down to 6MB, they would perform very badly. And they all idle 50-60% of the time or more, under full load, because of cache misses. To reach high clock speeds you need a deeply pipelined CPU, and deeply pipelined CPUs are punished even more severely by cache misses. So the x86, which has a short pipeline, waits for data 50% of the time under full load; the POWER6, at double the clock speed, probably waited for data 75% of the time under full load. Because of cache misses.
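
      The clock-speed point is simple arithmetic (the latency figure is illustrative, not measured): a miss that takes 75ns to satisfy from RAM costs 75ns x 2GHz = 150 lost cycles on a 2GHz chip, but 75ns x 5GHz = 375 lost cycles on a 5GHz chip. The faster the clock, the more work a single miss throws away.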

      Has it not occurred to you people, who heard that Niagara is cache starved, that it is strange how the Niagara can perform so well and hold several world records if it were truly cache starved? Something is not right. The conclusion, if you look at the benchmarks, is that Niagara is not cache starved. Simple conclusion.

      .

      It is as if someone says, "I hear that guy over there is poor - is it true?" Then I would say: "Look at the real-life results and benchmarks. He owns more Rolls-Royce cars than anyone in the world, and more buildings than anyone else - do you really think he is as poor as IBM claims? Think a bit. What is your conclusion?"

    3. Jesper Frimann
      Coffee/keyboard

      Cache.

      "Some processor architectures are impacted more than others with a small cache. ". Unless "Other processor architectures" only refers to AMD's x86, then your statement is wrong.

      POWER implements SMT, with POWER7 implementing 4 way SMT.

      Intel x86 implements 2 way SMT.

      SPARC64 implements 2 way SMT.

      Niagara-style processors have up until now implemented fine-grained multi-threading, which of course helps greatly with throughput when hardware threads encounter a cache miss.

      Now, a T series machine running the workloads it was designed for - for example, a single application with a fairly small code footprint and possibly many instances of the app using many threads - will not be as affected by its small cache as it will be when running, say, a myriad of different applications with a lot of updates.

      // jesper

  9. Beachrider

    More on cache...

    OK, there are several kinds of cache.

    The SPARC T2/T3 have 8KB of L1 cache per core and 4MB/6MB of L2 per socket.

    Compare that to the SPARC64 VII+, which has 128KB of L1 per core and 12MB of L2 per socket.

    No FUD, no defensiveness. Both are sold as current by Oracle.

    In that context, we have two very different implementations of the same instruction set. One is relatively core-rich and the other is relatively cache-rich.

    My point was that IF the SPARC64 VII+ validly needed 128KB of L1 to run at its GHz range, why WOULDN'T the T3 need similar cache to overclock to high GHz?

    I hope that helps.
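
    Working those numbers through per thread makes the contrast sharper (assuming the 16-core T3 and the quad-core SPARC64 VII+): the T3's 6MB of shared L2 works out at 6MB/16 ≈ 0.4MB per core - the figure Kebabbert quotes above - and with 8 hardware threads per core that is roughly 48KB of L2 per thread, against 12MB/4 = 3MB of L2 per core on the VII+.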

    1. Billl
      Happy

      re: More on cache...

      You seem to make a valid point, but miss the point in the end. The point is that non-CMT procs, such as the SPARC64 VII+, need more cache because the CPU stalls while waiting for memory. When a thread is waiting for (relatively slow) memory, CMT just runs another thread that already has its data, and the waiting thread can wait as it normally would - there is no idle-CPU penalty for the cache miss. With SPARC64 (and other non-CMT CPUs), if the CPU has a cache miss (likely), the whole CPU must stall and nothing gets done until the memory arrives. The instruction set has nothing to do with it (it's not the cores so much as the threads).

      Very simple really. They could add more cache to a CMT processor, but it really would not make much difference (theoretically, of course). If you did increase the speed of the CMT processor, then it is possible (I guess) that more cache would help a bit, but without doing it, who knows?

      1. Jesper Frimann
        Coat

        Well

        In what way are Tukwila, POWER7, Nehalem-EX, Magny-Cours or SPARC64 VII not CMT processors?

        In what way do the same processors above not implement multi-threading (if you count Magny-Cours out)?

        I don't really think you understand why people say Niagara is cache starved. If used for what it was originally designed for - running many threads that all execute the same code, on data with a relatively small memory footprint - then Niagara really excels.

        Using the 'Niagara' CoolThread concept for, say, running webservers is a very good idea. That is the niche the Niagara-style processors fit into.

        BUT running 8 different programs on each of the HW threads of a single core, especially if that code uses data with a big footprint, is a terrible idea.

        There is no magic here that allows Niagara to disregard the laws of physics, although there are a lot of posters here who seem to think so.

        // jesper

  10. Billl
    WTF?

    re: So ...

    Jesper, do you spend a lot of time at the pub with your other marketing buddy, Matt, coming up with nonsense FUD? If increasing the clock on SPARC means Sun/Oracle was wrong, does Intel and IBM going to more threads and cores mean they were wrong????? Take a basic logic course. This is the logical progression of chip design. Do a new chip design, figure out how to get more cores/threads on a processor, then figure out how to make it more general-purpose by increasing the per-thread perf....

    1. Matt Bryant Silver badge
      WTF?

      RE: re: So ...

      "Jesper, do you spend a lot of time at the pub with your other marketing buddy, Matt...." Bill, you're just not paying attention. Jesper has already stated that he works for a major outsourcer in Denmark, which makes him "the opposition" as far as I'm concerned. Having said that, he does speak the odd bit of sense every now and again. ;) If you want a laugh you can search for some of the shots we've exchanged over Itanium vs Power in other forums threads here.

      "....If Increasing the clock on SPARC means Sun/Oracle was wrong...." Where did the clock speed bit come from? What Jesper and I are saying is that, in our opinions, Snoreacle continuing the SPARC-T push is wrong, not the clock speed increases. Sun lost their real enterprise contender when Rock was killed and it doesn't look like Fudgeitso is going to help Larry out with anything other than a speedbump of the current SPARC64 designs, so now they have to bridge the gap left by trying and re-engineering SPARC-T. Snoreacle has effectively admitted this by their change in core design in an attempt to make the next gen SPARC-Ts more beefy, but the reason they are doing this is because they need to bridge the gap between where Niagara works now and where us customers want to go. IBM and hpo are already there.

      In short, I'm saying I don't think a speedbump or overclocking for next-gen Niagara will do anything to stop x64 eating the Slowaris base from below, nor stop IBM and hp converting the high-end Slowaris base with Tukzilla and Pee7. And, as Jesper pointed out, a solution needs to be the whole system and stack, not just a core. In my opinion, IBM and hp have done better jobs on getting the whole solution right by producing products (and services) more suited to what us customers want.

      /SP&L

    2. Jesper Frimann
      Pint

      RE: Bill

      Yup, we sit and plan the downfall of Oracle....

      Intel and IBM have been pretty consistent over the last many years with beefy cores and good single-threaded throughput.

      And yes, the next logical evolutionary step for the TX processors is to get beefier cores, OoO execution, etc. etc. Just like Xeon and POWER. But in doing so you are basically abandoning the whole CoolThread concept.

      And if you disagree, then perhaps it's because you never really understood the CoolThread concept.

      // Jesper

  11. Matt Bryant Silver badge
    Pirate

    Note for the Sunshiners.

    Before you get all shirty and post more hyperventilating worship at the altar of SPARC, let's just get a few things clear. If you wish to disagree afterwards then go ahead, but please try to read and comprehend the following.

    1. Niagara, the SPARC-T concept, the whole CMT thing, is very clever engineering. I'm sure you'll actually read, understand and agree with that bit; your difficulty is going to be accepting what comes next. The problem (for Larry and you) is that, whilst it's very clever, it's just not what us customers want when compared to other products from Intel, AMD and IBM that run the applications we currently need to run better and usually cheaper. This is quite starkly shown by the market figures for new server sales. It is also clearly shown in the decline of Sun. You can hug yourselves and feel all superior that CMT is the Betamax solution, the cleverer technology, but that doesn't change the fact that Betamax lost to VHS, and CMT has lost to x64. Snoreacle's attempts to beef up SPARC-T just look like too little, too late, unless there is a massive swing in developer programs, which is unlikely.

    2. Us customers are untrustworthy, disloyal, self-centred, money-grubbing capitalists. We do not believe in charity purchasing; we use scientific tools and methods to buy the best solution for our needs. Even when we show loyalty to a brand it's for self-serving reasons - we used to love the combination of SPARC and Slowaris because it gave us a business edge and made our lives easier. Now other vendors are doing a much better job of making our lives easier. Again, this is shown in the Sunset and in the fact that Snoreacle server sales just haven't rebounded after the recession the way IBM's and hp's have. Indeed, Snoreacle is being outgrown by the whitebox vendors.

    3. Developers, developers, developers. Developers also want an easy life. Unless you pay them with large wads of cash (as Intel, Microsoft and IBM do), they do not go out of their way to develop on new platforms, especially if that new platform requires them to completely rewire their apps to suit it. Yes, I know you're going to repeat the same tired line about SPARC-T Slowaris being binary compatible with trad SPARC, but the truth is that any app for CMT has to be rewritten to be multi-threaded or the performance is pants. If a developer has limited development funds (and most do after the recession), they will want to develop as cheaply and with as little risk as possible, and CMT means added expense and risk, so they are not bothering. Larry has not shown any inclination to throw large wads of cash at what is often his competition, so that is unlikely to change.

    Now, argue those points if you like, but please at least try to read and comprehend them first.

    /SP&L - 'cos I know the worship won't end!

  12. Billl
    FAIL

    re: RE: re: So ...

    It was just a joke. Yes, I know you guys are on opposite sides of the discussion when it comes to Itanic and Power, but your tired FUD when it comes to SPARC is overly repetitive.

    The fact is, Sun stated that they designed the "beefy" cores 5 years ago, and are only now getting them to market. That does not sound like a change in direction; it sounds like clear planning. Increase the thread count, then make those same threads faster once the technology can handle it.

    Not a change, just a natural progression.

    1. Matt Bryant Silver badge
      Happy

      RE: re: RE: re: So ...

      "....Sun stated that they designed the "beefy" cores 5 years ago....." So the nice way to say it is "Sun realised five years ago that they had got it wrong", which means the current design steps are just five years behind where the other chip vendors are already at. Bad enough. But five years ago, Sun assumed they would have Rock for the enterprise battleground and SPARC-T would be the junior partner it in the same way x64 is to Itanium. It's just now Snoreacle doesn't have a big buddy for SPARC-T. So a more damning version would be that Snoreacle would be five years behind the other vendors if it started a new enterprise SPARC chip from scratch, but instead they are trying to modify an existing non-enterprise design to make it competitve with the other vendors that have that five year march on them. So not just bad but also a kludge in progress. Bill, you're really not helping yourself here!

      /SP&L

  13. Jesper Frimann
    Coat

    Because it takes 5 years.

    No FUD, Bill; I think you are being unfair. It's very easy to cry FUD rather than trying to counter arguments. And if I were spreading FUD, I wouldn't call Niagara 'ingenious' and 'brilliant' for the workloads it was designed for, now would I?

    Bill, that is because it takes many years from when design work starts on a processor until it hits the market, and your argument just strengthens mine: when the T1 was actually used for real workloads, Sun quickly realised its shortcomings, and thus started taking the processor in another direction - the same as everyone else.

    // Jesper
