HP revs up Integrity, Superdomes for Itanium 9500s

Now that Intel's "Poulson" Itanium 9500 processors are out and Oracle is supporting its database on HP-UX 11i v3 running atop those processors, HP CEO Meg Whitman has two fewer things to worry about. The life of Ric Lewis, the new general manager of the Business Critical Systems division who took over that job late last week, is …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    IT Angle

    2.7x the performance of Tukwila systems isn't much to brag about... that still leaves Itanium at half the per-socket performance of first-generation POWER7 (which also scales to 1024 cores, not 256).

    1. Matt Bryant Silver badge
      FAIL

      "......half the per-socket performance of first-generation POWER7...." If only IBM could actually build a system that could balance out performance throughout the whole design, rather than concentrating on harping on about fictional core performance and ignoring the bottlenecks that make their AIX servers no faster than Integrity ones at best. Oh, and don't mention the Power blades - an even worse case of bottleneck city, all wrapped up in a blade chassis that can't deliver enough power to actually run all the blades at full whack!

      /Ain't this FUD stuff fun?

    2. Anonymous Coward
      Anonymous Coward

      Double the cores = double the performance... again

      The second generation in a row where HP... er, I mean Intel (being held by the collar by HP so they don't run off the stage), has doubled the performance of the CPU!*

      *Double the performance with twice the number of cores = zero per-core performance improvement

      1. Matt Bryant Silver badge
        FAIL

        Re: Double the cores = double the performance... again

        "....Double the performance with twice the number cores = zero performance improvement." Well, going on the hp press release (I haven't got a demo box to verify with yet), there is more than double the performance increase. But the actual article quotes; "We have systems in the labs that are significantly above 3X." You did read the article before dribbling, didn't you?

  2. Allison Park
    IT Angle

    a few questions from folks stuck with a few Itanium systems

    1) Is Itanium the last major chip to get to 8 cores? Better 3 years late than never, I guess.

    2) Does it still have the strange 5 QPI links? You need 7 for the blades, and they only use 3 for Superdome.

    3) Do they have hardware-based virtualization yet? IVM needs hardware assists.

    4) When will HP have an SAP benchmark? Three generations without a benchmark?

    5) What is the performance per Oracle license? It seems to be the lowest in the list.

    6) After the Tukwila delay we were promised socket compatibility through Kittson as a reason; what happened to that promise?

    7) Are they still planning to have two versions of Kittson, the first one handicapped, to pretend there are two more chips?

    to-da-loo

    1. Matt Bryant Silver badge
      Facepalm

      Re: a few questions from folks stuck with a few Itanium systems

      Well, on the plus side the IBM FUD is getting briefer, even if it is still just as much fantasy as the last launch. You know IBM are worried about a competitor when they start FUDing so hard!

      "1) Is Itanium the last major chip to get to 8 cores? Better 3 years late than never I guess." But the first in a balanced design without the performance bottlenecks crippling Power systems, which are already being outperformed by cheaper Xeon systems.

      "2) Does it still have the strange 5 QPI links? You need 7 for the blades and they only use 3 for superdome" Meaning what? That it worked before and works now with even faster QPI links? Please, try and actually come up with a point to your FUD, Alli.

      "3) Do they have hardware based virtualization yet? IVM needs hardware assists" You know that hp has had a far better partitioning story for years than Power, having full electrical isolation hardware partitioning (nPars) years ago and which IBM still can't match. There's also software partitioning (vPars) and virtual machine hosting (IVM) and shared resources (Containers). IBM simply cannot match that. I'm always amused that the IBM trolls even want to try arguing this point seeing as AIX has been playing catchup on partitioning for years.

      "4) When will HP have an SAP benchmark? three generations since a benchmark?" Dunno, ask hp. Which benchmark do you have in mind as I'm just dying to point out it's probably one IBM gamed with $5m of storage doign the real work. I always prefer benchign in my own environment, with my own data, but if you want to blindly follow benchmarks then I suppose it's no surprise you buy IBM.

      "5) What is the performance per oracle license? Seems to be the lowest in the list." Compared to what? Seeing as you don't have any benchmarks to compare that is pulling a statement on performance out of your anus.

      "6) After tukwila delay we were promised socket compatibility thru kittson as a reason, what happened to that promise." They are socket compatible, it's just the new servers have extra motherboard tech to maximise the advantage of the new core design. Now, concentrate, and try and remeber an IBM upgrade that wasn't a fork-lift one...? Trolls in glass houses and all that.

      "7) Are they still planning to have two versions of kittson the first one handicapped to pretend there are two more chips?" What on Earth is that about? Seeing as Intel haven't released any such details (I get to see the NDA stuff from hp and Intel) I'd have to say that's more info pulled out of your ar$e. Please provide a link to backup (any of) your claims. I'm betting you can't!

      1. Allison Park

        Re: a few questions from folks stuck with a few Itanium systems

        HP instructed Intel to break Kittson (renamed “Kittson22” or “K22”) into “two sequential HP system product releases” separated by one to two-and-a-half years “with the timing as requested by HP . . . .” The obvious and intended purpose of this is to further the illusion of a longer roadmap—and again, extend the end of life visibility date that was so important to customers. In HP’s words, “HP will be able to extend the Itanium roadmap by releasing a follow-on to Kittson about 2 years later (dubbed K22+ for now)” which will “[e]xtend our BCS and TS profit pool longer (this takes us to about 2017).” Importantly, the “second” Kittson chip (K22+) will not reflect incremental development and functionality. Under the agreement, the aggregate functionality of the two releases is established first, with HP retaining the right to withhold known Kittson functionalities until the ostensibly next-generation chip. HP did not reveal any of this to the marketplace.

        19. The new agreement also clearly allows Intel to disinvest in Itanium, immediately. A key part of this is that K22 is to be a “Xeon socket compatible” microprocessor. That means that Intel is only developing a new Itanium “core,” which will then be combined with Xeon components (“uncore”) to create the full chipset solution. The typical reason this is done—and the reason here—is that it is cheaper, here for Intel, to reuse uncore from another product (Xeon) rather than build specialized uncore for Itanium.

        1. Matt Bryant Silver badge
          Happy

          Re: Re: a few questions from folks stuck with a few Itanium systems

          LOL, Alli is mainlining the IBM FUD! I particularly like an IBMer trying to FUD the idea of a speed step when that is all the Plus versions of the Power chips are! What, can I claim IBM have "given up on" Power just because they released a Power7+? Truly a desperate attempt at FUD. Look at the Xeon roadmap, look at the "tick-tock" design philosophy - a die shrink on the tick, then a new microarchitecture on the tock - and you'll see the K22+ idea is simply business as usual, as used on the previous generations of Itanium. IBM really are getting desperate!

          I also like the bit where the long-term hp plan of Xeon and Itanium socket-compatibility, first discussed back in 2002, is somehow "disinvesting"! Could it be because the benefits to Intel's partners that make both Xeon and Itanium servers - having only to develop one line of servers for both - won't be available to IBM, because IBM will still have to pay the extra to develop a completely separate Power server line alongside Xeon ones? Well, that's if IBM haven't sold all their x64 server bizz to Lenovo by then. Come on, Alli, go give your Elmers a kicking and tell them they need to do a lot better than that effort. If anyone is "disinvesting" it would seem to be IBM's marketing people as, going by this poor effort, they obviously can't pay for serious competitive analysis.

          Of course there is the other option - IBM aren't FUDing Kittson too hard because they plan to have their own future Xeon servers also have Itanium chip options. After all, they do need a way to get off the Power bandwagon before it gets completely overtaken by x64. And the last time they shipped Itanium servers, without even trying, they sold 10,000+ X445 units DESPITE only selling it as an option when their customers said no to Power. Just think how much IBM might sell if they actually didn't have to worry about the politics of the Power lobby? Maybe even half as much as hp.

          1. Allison Park
            Mushroom

            Re: a few questions from folks stuck with a few Itanium systems

            Matt has gone completely insane.

            My questions are not IBM's, they are customer questions.

            I jokingly copied the Kittson remarks from Oracle's website and he thinks they're from IBM marketing. Google the Kittson comment and you will see they are Oracle's. The Oracle rep here is still telling us they are doing the minimum development possible, and since HP-UX is not getting any new licenses, customers would be insane to put new releases on something that does not have critical mass, because they might get nuked.

            Then Matt goes on to say IBM might have Itanium servers in the future. Matt, are you on vacation in Colorado smoking something? Last I checked the only OSes running on Itanium were HP-UX, VMS and Tandem NonStop. I am sure there is some obscure OS, since they are showing Bull, NEC, Hitachi, some Chinese vendor or other. I think NEC is really just an HP OEM so they don't count.

            Socket compatibility with Xeon is a divestment by Intel, not a customer benefit, since they broke their promise of socket compatibility from Tukwila to Kittson.

            well so much for my weekend in FL, gotta catch my flight back to NY. I miss FL

            Too-da-loo

            1. Matt Bryant Silver badge
              FAIL

              Re: a few questions from folks stuck with a few Itanium systems

              "....I jokingly copied the kittson remarks from Oracle's website...." Oh, so you simply switched to mixing Oracle's FUD in with your usual IBM FUDfests? I can't say I noticed any increase in either relevance or reality.

              ".....The Oracle rep here is still telling us they are doing the minimum development possible....." The minimum is that Oracle is contractually bound to deliver their software on Itanium for hp-ux. There is no such binding contract for AIX. In fact, hp-ux and OpenVMS on Itanium are the only OSs with the guaranteed availability of Oracle software. I'm sure the US legal authorities would love to talk to your Oracle rep about his statements as they seem to be in breach of the US court judgement of September 20th 2012; "A California judge has ruled that Oracle's decision to drop support for the Itanium processor in future versions of its database products was a breach of contract, that Oracle is required by its agreement to continue to develop products for Itanium "until such time as HP discontinues the sales of its Itanium-based servers," and that "Oracle is required to port its products to HP’s Itanium-based servers without charge to HP.""(http://arstechnica.com/information-technology/2012/08/hp-wins-judgement-in-itanium-suit-against-oracle/). Looks like your Oracle FUD is just as easily debunked as your IBM FUD, will you try Dell's next?

              "....... Last I checked the only o/s's running on itanium were HP-UX, VMS and Tandem non-stop....." A very carefully crafted answer, as you should be well aware that IBM was on the Itanium bandwagon before they realised in 2002 it would kill their mainframe golden goose. A version of AIX 5L was booted on Itanium, IBM even sold forty licences for it before they killed the project, and it wouldn't take much development work to get the current releases working on Itanium.

              ".....since they broke their promise of socket compatibility from tukwila to kittson....." But they didn't. They are socket-compatible, it's just that you need the new mobo's with their additional tech to get the advantage of the new Poulson CPUs. But it's not like this line of FUD is new, it's the same with all Intel chip releases, you get one or the other from the IBM Elmers - if the new CPU fits in the old socket in exactly the same way the IBM FUD is "there is no development, that chip is dead"; and if there is any change then the IBM FUD is "they have not kept socket-compatibility". LOL, so predictable, and just as lame. Seriously, if this is the best FUD you can get then IBM seem to be the ones disinvesting in marketing.

              "....Too-da-loo." Ah, I think I have just spotted the source of your new FUD - the loo!

              1. Anonymous Coward
                Anonymous Coward

                Re: a few questions from folks stuck with a few Itanium systems

                I don't think you can call it "Oracle FUD" when Oracle takes exactly what HP executives are writing to each other about scamming their customers and then copies/posts it. I suppose it does create Fear, Uncertainty and Doubt... but completely justifiably. If you are on Itanium and don't have any doubts, you are either on the 4 person Itanium development team or you haven't been paying attention for the last decade.

                1. Matt Bryant Silver badge
                  FAIL

                  Re: a few questions from folks stuck with a few Itanium systems

                  "I don't think you can call it "Oracle FUD" when Oracle takes exactly what HP executives are writing to each other about scamming their customers and then copies/posts it....." But that's not what Oracle did. As the judge in teh case ruled, they took unrelated quotes and info, wound it into a completely incorrect story, and then tried to use it to influence the market and sell their own servers. The outcome was not what Oracle intended - hp's Integrity line is now the ONLY one with a guarantee of availability of Oracle software. IBM's Power does not have that, even Oracle's own CMT range, nor Fujitsu's SPARC64.

                  ".....you are either on the 4 person Itanium development team....." From the article; "That's because some of the largest financial networks employ Itanium-based servers in one form or another, as do most of the Fortune 100." Seem a lot of us customers are quite fine with Itanium, and we're in the Fortune 100 before you ask.

  3. Matt Bryant Silver badge
    Boffin

    "....what happened to the i3 generation of machines....."

    Apparently there is an Intel chip with that name and hp didn't want journalists getting confuzzled. According to our hp salesgrunt, they also wanted to stress the doubling of the number of cores per socket, hence i2 to i4.

  4. Jesper Frimann
    Headmaster

    Well.....

    Nice article!

    IMHO, the problem for HP is going to be that SD2s are going to compete against POWER 770s and 780s, and that is going to be tough on price, as you are competing against a midrange system with your best system.

    I think this is a nice upgrade, but it is too little, too late.

    // Jesper

    1. Matt Bryant Silver badge
      WTF?

      Re: Well.....

      "....SD2's are going to compete against power 770 and 780's...." Why? The modular 770 and 780 can be outmatched by the hp Integrity blades. To go up against the SD2 with Poulson IBM will have to go to the expensive 795 and still not have an answer to either hp's superior partitioning options.

      1. Allison Park
        Alert

        Re: Well.....

        Show proof that Itanium can compete with POWER7+ and is not one-third of the performance. Everything we have seen from IBM and HP would say POWER7 cores are 2.5x the performance of Tukwila cores, and POWER7+ will be at least 3x the performance of Poulson cores. When do Poulson systems ship? We took delivery of POWER7+ systems last month.

        Do the math, and SD2 can only compete with the 770, not with the new 780 POWER7+ box.

        Still trying to find out why the Tukwila/Poulson chips have 5 QPI links, since there are not enough for the blades and only 2 (3) are used on SD2. Just seems bizarre.

        1. Matt Bryant Silver badge
          FAIL

          Re: Well.....

          ".....show proof that itanium can compete with Power7+...." Actually, I'd like you to show a real World case where this is true.

          "..... Everything we have seen from ibm and hp would say Power7 cores are 2.5x the performance of tukwila cores and Power7+ will be at least 3x the performance of Poulson cores....." What, you mean those really realistic IBM benchmarking sessions, where they switch off all the CPUs in the systems except one, but keep all thirty-two sockets of cache switched on to distort the performance figures? Or the ones where they have $5m of short-stroked storage doing the actual work in the backend? Yes, we all know how believable the IBM benchmarks are. The truth is IBM will not backup their benchmarks with guarantees. If you ask IBM to guarantee you will see the same performance on site then they suddenly start backtracking and making all types of excuses. I know, I have asked them.

          The truth is, despite the Oracle media hype during the court case, and despite IBM's best efforts to both FUD Itanium and poach hp SD customers during the case, hp still kept on selling Itanium servers. With the court judgement removing the Oracle threat, and with the main plank of IBM's attack removed, it's pretty easy to see that hp will soon be winning sales of SD2 against the 795.

          ".....5 qpi links since ther are not enough for the blades...." I love how you keep repeating this statement without supplying any actual technical argument as to why! Do you really believe that if you repeat your fondest wishes enough some fairy godmother will alter reality to make them true? Sorry, darling, that's not how the real World works.

          1. Jesper Frimann
            Devil

            Re: Well.....

            "where they switch off all the CPUs in the systems except one, but keep all thirty-two sockets of cache switched on to distort the performance figures?"

            Eh? You need to go and read some manuals, Matt. When turning off cores in a POWER7 chip, you are also turning off the part of the distributed L3 cache that is held close to each core. Kind of a bugger, IMHO.

            And as unfair as I think the whole Oracle versus Itanium story is... it has cost Integrity; it will still have the smell of "dead man walking" about it, which I take absolutely no pleasure in stating. But again, people don't buy hardware, they buy solutions to run a software stack on, and when the higher parts of the solution stack state that they don't like the lower parts... then it's the lower parts that have the problem.

            And as for guarantees, that is typical EDS sales tactics, and we've seen that in action a few times... Sure, we got our performance guarantees wrong by a factor of 2... dear client, here are some more HP x86 blades for you as we promised; oh, yes, you have to pay for us managing them... and the Oracle licenses?

            Now that is also you who has to pay the extra cost there.. check your contract.

            Guarantees are not about having the best hardware but about having the most scrupulous lawyers.

            // Jesper

      2. Jesper Frimann
        Holmes

        Re: Well.....

        You've gotta be kidding Matt.

        Yes, I've also seen the HP sales manual for how to sell against POWER; that's what you get from partnering with everyone. That doesn't mean that it's right.

        Do you seriously think that an 8-socket Poulson blade with limited IO and limited memory can compete against a POWER7+ based POWER 770/780 with 16 sockets, full-blown IO and memory? And hot-swap of everything?

        That is in Kebabbert territory of fanboiship. You should know better.

        I think the HP Poulson blades look like a solid product, but they are not remotely in a position to compete against POWER 770s or 780s.

        Now I have no doubt that the bl890 i4 will be able to beat the PS704 blade, but again that is a 2.4GHz POWER7 system, in a form factor that I would never use for enterprise computing.

        And you say it yourself... partitioning... POWER stopped doing partitioning back in 2005. You know... it's kind of a bugger to claim that you have the best moped when trying to win a race against someone in a race car. Nobody... except Sun and HP... talks about partitioning anymore, and HP does actually have a decent virtualization option.

        come on....

        // Jesper

        1. Matt Bryant Silver badge
          Happy

          Re: Re: Well.....

          ".....You've gotta be kidding Matt....." You wish. Unlike Alli, I can back my statements up.

          ".......Do you seriously think that a 8 socket Poulson Blade with limited IO......" How is the IO limited? A BL890c i4 will have 12 PCIe mezz slots, each of which can hold quad-port cards if required. And that's on top of the built-in Flex LOMs which give 16 Flex-10 converged network adapters. A four-box p780 has to start adding expansion IO bays in a second rack to get even close to the single BL890c i4's IO capability, and then still does not have built-in CNAs. For the P780 you have to add in dual-port PCIe FCOE cards to get a CNA option, and doing so reduces the number of PCIe slots available to other adapters - only six per box unless you add expansion IO cages. And when the P780 adds IO expansion cages it does not add bandwidth but merely spreads the same bandwidth across more IO slots. That's not scale-out IO, that's inevitable IO contention.

          ".....limited Memory....." <Yawn> The existing BL890c i2 can do 1.5TB with the old style memory, and it looks like the new RAM options for the BL890c i4 will at least match the maxed out P780's 4TB.

          ".....a POWER7+ based POWER 770/780 with 16 sockets...." Ah, but you didn't mention the limits on your P780 when you start trying to load it up to sixteen sockets. To have octo-core CPUs you have to drop the speed down to 3.8GHz, and then those CPUs only come with 4MB of L3 cache per core compared to the 4.2GHz CPU's 8MB. But then that's a trick in itself - what IBM do is supply a proecessor card for the P780 with two octo-core 3.8GHz CPUs which you can then run in either MaxCore mode (as a slower octo-core with 4MB cacher per core) or TurboMode (four core's switched off, the remaining four cranked up to 4.14GHz and sharing the cache out at 8MB per core). Once again, another IBM design, just like the IBM blades, where you can't have speed and scale becasue the system can't supply enough power to run all the cores flat out and keep them cool. Once again, the P780 is another case where IBM forces you to choose either scale or performance, but you can't have both. Hilariously, IBM tries to sell this inability to deliver as a feature! With the new eight-core Poulson Itanium CPUs actually drawing LESS power than the Tukwila quad-cores you can still maximise both scale and performance with the hp option, being able to use the fastest octo-core Poulsons without having to switch cores off or choose less cache and less CPU grunt.

          "....POWER stopped doing partitioning back in 2005...." I think that's more of a case of IBM throwing in the towel when they realised the limitations of the Power architecture meant they could not match the hp option of nPars (true share-nothing hardware partitioning), vPars (software partitioning), Integrity Virtual Machines (hosted VMs) and now Containers (shared OS image supporting multiple sandboxed instances). Please do explain to Alli that IBM gave up in 2005 as she seems to be running about eight years behind everyone else.

          1. Matt Bryant Silver badge
            Pirate

            Re: Re: Re: Well.....

            Oh come on, Jesper, I've been waiting all day for you to fall into my trap, please get a move on! What, one post stopped you dead? Even Alli keeps on FUDing longer!

            OK, let's take it another step and look at what Turbomode on the P780 actually does and - more importantly - how, and then why IBM tries so hard to keep the realities of Turbomode a secret from their customers. First, look at how IBM sells Turbomode, it's all about "increasing performance by increasing the core frequency". By how much? A paltry 7% increase in clock. Yet IBM will talk wildly about 10-20% increase in single-threaded applications (just don't ask them for a guarantee of this). So how can a 7% increase in clock make that much difference? Truth is it doesn't, all the increase in performance is in the increase in cache and memory per core that you get when you switch half the cores off. Instead of eight cores per socket fighting for the same cache and memory bandwidth it's now divided between four cores. It probably also helps that the congested IO structure on the P780 is eased when half as many cores are trying to drag data through. In truth, the 7% clock hike is just there to make you think the cores are generating the extra oomph, when the reality is it is just IBM smoothing out the bottlenecks in their system.

            And that's just the start of why this is a big secret that IBM tries to hide. The Power architecture has awful bottlenecks, and it is a truly terrible piece of design when switching off half the cores actually makes the overall system faster. But the even bigger secret IBM will not tell you about Turbomode is the impact on licensing, and if you bring this up in the sales conversation it will really make your IBM salesgrunt cry. Since Alli is so fond of mentioning Oracle, maybe she'd like to explain what happens with Oracle Enterprise licensing when you switch off half your cores? No? OK, I'll tell you - Oracle doesn't give a f*ck and still charges you for all eight cores per socket! Yes, that's right - if you want the same thread count and plan on using Turbomode it will not only mean doubling up on the P780 modules you need (extra hardware cost) but also effectively doubles your licensing cost. And that is why the BL890c i4 will win against the P780 in Turbomode, because the savings on Oracle licenses and support costs will pay for another server!

            1. Jesper Frimann
              Trollface

              Re: Well.....

              Amazing... you haven't even understood the difference between TurboCore mode and Intelligent Energy Optimization (which corresponds to Intel's Turbo Boost). There is no such thing as "Turbomode" for POWER.

              And you built a whole post up around this... amazing...

              TurboCore mode is mostly useless unless you have bought a fully loaded system where you have only enabled half the cores; then you have the option of letting the half of the cores you have activated run at a constant higher frequency... IMHO not particularly useful. But again, it's not a feature that costs anything. It's IMHO mostly marketing bull. But hey, it seems to have worked... it got you all fired up.

              And I am very well aware of the fact that Oracle will charge you for the full 8 cores, which IMHO is not that unfair... you have bought all the cores.. and I don't think that Oracle recognises activations as a hard partitioning technology.

              // Jesper

              1. Matt Bryant Silver badge
                Happy

                Re: Well.....

                "....There is not such thing as something called Turbomode for POWER...." Apologies, I admit it's been a few months since the IBM salesgrunt got laughed out of the room for suggesting we try it.

                ".....TurboCore mode is mostly useless...." Really? You posted a specintrate2006 benchmark that used exactly that, a fully-stacked P780 with only four cores per CPU active, i.e. in TurboCore mode! In fact, it seems to be the preferred way for IBM to configure their system for all benchmarks. So what you're saying is that all IBM benchmarks are useless? Glad we agree on something at last. Care to explain why you think a system where you have to turn off half the CPUs to get it to go faster is a good design when it is very obviously a move to surmount bottlenecks in the system? Surely you would like to think that more cores would mean more processing power, not less, especially when IBM like to harp on about their superduperfast cores?

                "...... it's not a feature that costs anything....And I am very well aware of the fact that Oracle will charge you for the full 8 cores......" LOL! So you simply chose to ignore the point on Oracle Enterprise licensing being per core for all eight cores in each Power CPU even when you switch off half of them? That would seem to be a feature that costs a small fortune! Looking at the list price for RAC it's $23000 per core without software upgrades and support - ouch! So that's $736,000 dollars flushed down the drain for your benchmark P780 in TurboCore mode just on RAC, what about the rest of the software stack? No wonder you don't want to talk about it.

          2. Jesper Frimann
            Trollface

            Re: Well.....

            Actually you are not backing up your claim.

            And before I start, I have the utmost respect for the BL8x0c i2/4 blades - these are great products - but it is what it is, a blade server.

            Memory.

            Well, the memory you can buy now in a bl890c i4 is 1.5 TB; many of the HP Itanium servers have always had good memory sizes. Sure, they needed it because of the bloated code, but that is another story. In a POWER 770-MMD it's 4 TB. Both machines will most likely double the amount of memory they can hold, but the POWER server has a trick up its sleeve, and that is memory compression. And with POWER7+ that is done in hardware, outside the core. Sure, it's not a miracle cure, and although it supports up to a factor of 10 and IBM marketing material will put in big numbers, we've seen good results ranging from 1.5-2x.

            And then there is HPVM. Let's see what the sizing guide says: oh yes, set aside 18% for virtualization overhead, and then there is the usual few percent for bloated Itanium executables. So your 1.5TB of RAM is all of a sudden down to 1.2TB. Not that a POWER server doesn't use memory for virtualization, but not in the ranges that HPVM does.

            IO.

            Come on. You have 1 QPI link going off each chip to a single Intel IO chip; that is a total of 8 QPI links, versus the 16 GX++ links in the 770-MMC, and I haven't seen the MMD layout but potentially it can go to at least 32 links, as each chip has two GX++ buses.

            And the BL890c i2/4 cannot even hotswap an adapter.. honestly.. it's a blade.

            And as for the number of adapters... the POWER 770/780 can house up to 184 PCIe x8 adapters... versus 12 for the bl890c i4. Honestly... how can you even start to compare?

            And your ramblings about limits in the POWER 780.. RTFM. Sorry, not to be harsh but ....

            As for Processor power..

            Again, according to Intel the specintrate2006 number for Poulson is ~2.3 times that of Tukwila, which suggests a result of less than 1250 specintrate2006 for a bl890c i4 blade. And that sure is good documentation that the bl890c i4 blade is just as fast as a POWER 780-MHD, which does a documented 6130 specintrate2006.
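
            (Back of the envelope: 1250 / 2.3 ≈ 540 specintrate2006 for the Tukwila baseline, so even the 3x uplift HP is claiming would only put a bl890c i4 somewhere around 1600 - still well short of the 6130 quoted for the 780-MHD.)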

            That difference is so big...... that it's really not a contest. You should start to be a bit more sceptical about marketing material.

            Furthermore, the bl890c is a blade... it can't hot-swap memory, SMP links, processors or IO; it doesn't have redundant clocks, service processors and so on...

            Now virtualization...

            The fact that you still think partitioning is king just shows that you don't get it. Sure, partitioning has advantages when it comes to limiting and controlling software licenses, but the whole electrically isolated carving of a machine up into smaller, more inefficient bits is... well... so 90s. Sure, HPVM is on the right track, and far superior to anything that Oracle is doing, but it's not in the same class as POWERVM; it's a freaking HP-UX OS running guest HP-UXes. It's not like people are doing different firewall zones, mixing and matching test, development and production inside the same VM host without blinking, as people are on POWERVM. Try to get out in the real world a bit; you've been doing too many x86 blade solutions.

            // Jesper

            1. Matt Bryant Silver badge
              Happy

              Re: Re: Well.....

              ".....the memory you can buy now in a bl890c i4 is 1.5 TB...." Not very convincing start when you start with a mistake! You can't order BL8x0c i4 models yet, only the i2 versions. Which does kinda blow a big hole in the rest of your comparisons.

              ".... and that is memory compression...." Oh dear, do you really want to go there? I'm sure you don't want to tell any potential customers about the performance hit you get when you try compressing memory in p-series.

              "......lets see what does the sizing guide says oh yes set aside 18% for virtualization overhead...." What, first generation Integrity Virtual Machines? Times have moved on. But how about you compare to hp npars or vpars, which don't require any overheads, which can be booted, started, stopped and patched independently of each other, then try and find an IBM equivalent? Don't mention Lpars as PowerVM means all your eggs are in one basket. And does PowerVM run on smiles and pixie dust? Er, no, it requires a service partition which is an overhead, just like Integrity Virtual Machines. And when you need to patch that PowerVM service partition? All the PowerVMs have to come down then!

              "....as each chip have two GX++ busses...." Each chip might, but both chips on a module are connecting to the same two port bus on the back of the module. You also need to add in more power (and cooling) into your rack for the P780 expansion cages, whereas with the BL8x0 blades the power (and cooling) is in the blade chassis and adding mezz cards and/or more chassis switches does not mean running more power cables into the rack. Oh, and don't forget the cost of those external switches you have to add to your P780 solution to get to close to the same capability as are already in the blades chassis. Compared to hp's i4 Integrity blades P780 will be pricey and use far too much rack space, and when compared to Superdome2 with Poulsons it will be slow and use far too much rack.

              "....specintrate2006...." And we're off into the usual hidey-hole for IBMers, the Benchmark Zone! Not to be confused with the Twilight Zone though there is a similar amount of smoke and mirrors and a marked lack of reality. Why does an IBMer like talking about specintrate2006? Because it is as far from the reality of daily enterprise computing as possible. If your job entails running specintrate2006 all day, then you might be interested in actually running a specintrate2006 comparison for a Poulson-equipped BL890c i4 (hp haven't yet, so Jesper is just guessing). Let's ignore that the hp blade Jesper does want to compare results with was not only running the older CPU but also wasn't even running the fastest version of that CPU, so his scaling for Poulson performance is already way off, but instead ask is your company is willing to buy a P780 and run JUST ONE CORE? Not one CPU, but only one core is used in the SPECint tests. Now, what happens on Power when you run one just one core? You get all the cache and all the memory. Does any customer in the World run their business software on just one core of a fully-stacked P780? Of course not. But even if it was your business to run nothing but specintrate2006 all day, you couldn't actually run the same test as the software used by IBM at the time of the test (Oct 2012) isn't available for sale ("Software availability" is Nov 2012 in the test report but not listed on the IBM website). So not only is Jesper's use of the benchmark a farce, he couldn't even reproduce that test for a customer that wanted it!

              ".....That difference is so big...... that it's really not a contest...." Well of course it's not, seeing as you actually haven't even run the test on Poulson! Maybe you should wait until the Poulson systems are orderable and then buy one to test. Better still, try some real World benchmarking with real data and applications. I would never think of making a purchase without doing just that, but I can see why you would prefer pushing IBM benchmarks.

              "..... it can't hotswap memory...." That one always makes me smile. IBM will say they can hotswap EVERYTHING! Then ask them to do it on a live system with penalties in the support contract if it falls over. Guess what they ask you to do then - stop the system!

              Then you have to consider what happens with the P780 when you hotswap items such as adapters, because they can't offer Virtual Connect or any similar capability on either LAN or SAN on the P780. When you hotswap an HBA on a P780 you have to first stop and move all the SAN devices connected to it if you want to stay connected - but your app probably won't like that, so you have downtime for your app even if the box is still up. After you "hotswap" it you then have to go through all the SAN switch zoning and change all the WWN entries before you can restart your application. With LAN "hotswaps" on the P780 it's the same story, the MAC address changes. In short the so-called hotswap is just the start of a larger administration exercise. On the hp BL8x0 blades I can use Virtual Connect and FlexFabric to ensure the same WWN and MAC addresses are still presented to the same slots, meaning I can actually change the WHOLE server out and still have a new one boot up with the same identity WITHOUT any changes to the LAN or SAN. But I suppose the extra admin tasks keep IBM Global Services admins in work. And, at the end of the day, seeing as the BL890c i4 is likely to be a lot cheaper than the P780 even without you spending twice as much on licenses for the P780 cores you can't even use, I can always build a cluster of two BL890c i4s and still save on the cost of a P780 solution.

              "......partitioning has advantages when it comes to limiting and controlling software licenses...." What, after you cost the customer TWICE in Oracle licensing because you had to turn half the cores of to make the P780 go faster, now you want to NOT talk about license savings? LOL! A lot of the UNIX market out there is consolidating older systems onto new and more powerful ones, which means partitioning if you are to get the best utilisation. Of course, seeing as you don't seem to care about license costs you probably don't care about utilisation either. Sure, there are customers out there that actually want 256 cores in one OS instance but they're a much smaller amount of market than the ones doing consolidations. And for those that do want large instances hp will simply put up a SD2 option, probably two for the same cost as you wasted on core licenses you can't even use on your P780!

              1. Jesper Frimann
                Thumb Down

                Re: Well.....

                Oh.. just what I needed after a terrible day at work with dorks that

                "Not very convincing start when you start with a mistake! You can't order BL8x0c i4 models yet"

                Big clients get privileges; you can be quite sure that if I call HP and ask for 100 BL890c i4 blades, fully equipped, I'll be able to put in an order - you can bet I would. And with regards to how much RAM the bl890c i4 can have versus the POWER7+ based 770/780, I am stuck with what information is out there.

                Part of my job is to evaluate and interact with different vendors - this includes HP, IBM, Cisco, Oracle and others - hence I talk about what is out in the open, not the various vendor NDA info I have.

                "Memory compression"

                Actually it performs quite well, especially on POWER7+. Sure, there isn't anything like a free ride, but it actually cuts down on memory bandwidth needs. Again, it's not for all software stacks, but it works.

                "18% HPVM/IVM overhead("

                The source is current and valid, no bull here, and it's even with 64K page size. Again... not 4K pages, *hint* *hint*. So no worst-case bullshit from my side; I am being deadly honest and serious. And no, we do not run 6.1 yet; an n-1 strategy works. I know that if you look at sizing a single partition it's 8.0-8.5% you have to allocate, but for the whole server the sizing guidelines from HP say 18%. It is what it is.

                "nPars/vPars"

                Sure they require an overhead. It's a partition layout overhead, but it's still an overhead.

                One of the big advantages of big servers is that you are able to 'pack' the virtual machines better. This is first year computer science Matt. And you know it.

                "POWERVM"

                Service partition? POWERVM? Ehhhh... it can do IO virtualization through any number of Virtual IO Servers - no single point of failure, *hint* *hint* - if you choose to do so. You don't have to. And POWERVM can do processor pools, which basically is partitioning, especially good for limiting licenses, but which still sits under the shared processor pool concept. We had a client who financed two POWER 795s in license savings, simply by having me and their WW lead architects sit down and design things right.

                The whole logical/virtual/physical processor abstraction layer actually protects against errors, simply by putting another real abstraction layer between the hardware and the OS. Hence it's basically not the OS that primarily has to worry about failing hardware.

                AFAIK it's much the same functionality you get in HPVM. So your whole talk about vPars and nPars just shows that you are out of touch with reality.

                "Each chip might, but both chips on a module are connecting to the same two port bus on the back of the module."

                No. I don't know what you are talking about. Again, you need to read a manual. I think I recognise that exact wording... isn't that from the HP attack kit against POWER servers? It looks like the line that is meant to counter MultiChipModules. But again, it hasn't the slightest relevance to current POWER7+ servers.

                And sure, to house 184 adapters you need IO drawers, which means you can have close to 800 physical LAN/SAN ports - so far beyond what you can have in a bl890c i2/4 that you are being ridiculous. But generally the internal IO is more than enough, and the modular design means that you are not limited to what you can fit in a blade chassis.

                Furthermore, if you've touched any serious recent literature on datacenter design you would know that high heat-density blade chassis, be they IBM/HP/Dell, will have serious consequences for your datacenter. I remember a project 6 years ago where a client insisted on buying blades, due to the huge savings; the only problem was that their datacenter couldn't handle the heat density, so they built an extra cooling unit just where the blades were, which kind of ruined their business case. And just as a side story, I remember their HP hardware technician was furious because the dorks hadn't taken airflow into account, so the temperature in their racks of rx4640s (and IBM and Sun equipment too) actually meant that they violated their service contract.

                And besides, your whole premise that a BL890c i4 is just as fast as a POWER 780 is a load of bull.

                Now, your whole rambling about specint is just... well... you can't keep using "Oracle won't let us do benchmarks" as an excuse all the time. And I am relating to the Intel-released numbers, which are not done on recompiled code... Again, Poulson is a new compiler target for compiling code for Itanium, so we should see better numbers... perhaps up to the factor of 3 that HP is claiming... but it's still way behind POWER7+. And you don't want to talk about real-life performance... or price/performance... it just makes the case for POWER better.

                As for hotswap.

                Sure... with the skill you are displaying with regards to POWER, surely they will ask you to stop the system. That actually sounds rather responsible of the HW technician. If your client doesn't seem to have a clue, then don't put their system in jeopardy.

                When I design solutions where RAS is of the essence, you can be damn sure that I'll involve my IBM/HP/Sun (whatever) hardware technicians, get their opinion, and let them know the system they are to service in the future. The knowledge base those people can draw upon is far greater than what I have access to, so I would be a fool not to involve them. And sometimes my HP/Sun/IBM hardware technician will say, "I know the manual says xxx, but we have a gut feeling that... so we think we need to close down the machine" - then I'll not hold it against them; on the contrary, I'll know that I am dealing with a professional vendor.

                And we do hot-swap memory and cores and hot-patch microcode etc. on POWER, but it's not something you just do... you have to know what you are doing.

                And your ramblings about hot-swapping adapters on a POWER 780 are... well, laughable... they have nothing to do with reality. Basically, the way you are describing things is how you would have done them on a POWER4-based p690 or an SD. It has nothing to do with how things are done today. Again, you are comparing blade servers with old, old iron. RTFM, Matt. Don't project the shortcomings of how you did things back when you actually had anything to do with systems onto your competitors' current offerings.

                And even if you want to run with physical adapters then there is something called MPIO.

                But again, we run virtual machines with up to 168 logical cores on a slice of a POWER 780, with fully virtualized IO, doing several GB/sec of IO sustained. Sure, such large single system images need a bit of special care, but it's not a problem.

                As for licenses.

                Why do you keep on talking about TurboCore? Don't you get it? You are rambling. We do not have a single system in production that runs TurboCore; sure, the few 780s and 795s we have can do it - again, it's a feature code that doesn't cost you anything, so why not order it. You are trying to use the worst-case scenario on POWER, a scenario that no sane architect would ever use.

                Your comparison is fundamentally flawed. Consolidation means partitioning? Why... why not simply virtualize? I have POWER 770s and 570s here that run hundreds of different virtual machines from perhaps 20-30 different clients - production, test, development... located in different firewall zones... all on the same physical boxes.

                Sure, we do need smaller machines - for example we have some 740s - for those people that have license restrictions, but the virtual machines are more expensive on the small hardware. (Again, a good reason for using POWER is that you can get away with perhaps an Oracle Standard [One] edition, where you have to cough up for a more expensive license on Itanium.)

                And you are still talking about partitioning when doing consolidation..... Amazing.. how can you stay competitive using such antiquated methods ?

                And as for 256 cores, who would want that? 99% of the time that is a sign of a bad design, but perhaps the architects where you work aren't any better?

                If you really want to talk about a problem with the POWER servers, it's that they are getting too powerful - the capacity growth rate is higher than the growth in demand for capacity from many of our clients. So even for our largest clients, who used to have hundreds of UNIX servers, we are today contemplating a move to a shared environment. And we are trying to cannibalize other solutions.

                But what stands when the dust clears is that your comparison between an entry-level product like blades and high-end/midrange UNIX servers is fundamentally flawed.

                // Jesper

                1. Matt Bryant Silver badge
                  FAIL

                  Re: Well.....

                  "....Big clients get privileges, you can be quite sure that If I call HP and ask for 100 BL890c i4 blades.. fully equipped I'll be able to put in an order, you can bet you I would...." Great, but you haven't which means you're just guessing. It's OK, you can admit it, everyone on the forum knows it. Well, except probably Alli as she seems to be still on the loo.

                  "......sure there isn't anything like a free ride....." So you admit there is a performance hit. Good. Next point?

                  "....And no we do not run 6.1 yet....." So you're not talking about the latest version, as I said.

                  "....."nPars/vPars" Sure they require an overhead. It's a partition layout overhead, but it's still an overhead....." Wrong! The nPars tech has zero overhead as it's managed by the frame, not the hardware used by the OS images. And vPars you only use the system when you create or change the vPars, which means no overhead in operation. Which means both are superior to PowerVM, which does have overhead, and you still haven't answered the original question and provided any p-series comparable tech. I'm not surprised you'd avoid that comparison though.

                  ".....with the skill you are displaying with regards to POWER, then surely they will ask you to stop the system....." LOL! Our Power stacks are designed, POC'd and implemented with IBM, so if they are duff designs as you contend then so are IBM's best practices! It says a lot when IBM themselves have no faith in their own so-called "hotswap" tech!

                  "......And your ramblings about hotswapping adapters on a POWER 780...." Gosh, I was rambling, yet you didn't manage to disprove a single point! You claim it was that way on Power4, that it's not the same on P780, yet fail to actually supply any information to back up this claim. Want to try again?

                  ".....Why do you keep on talking about TurboCore...." Well, apart fromt eh way IBM try and sell Power by saying it's so fast, look at the GHz rating, over 4GHz (in quad-core mode, not eight-core), but also because YOU insisted on basing your argument around a specintrate2006 result. That result was attained using TurboCore to work round the bottlenecks in the Power architecture. You then salted as TurboCore as "useless" - make your mind up! And now you admit "We do not have a single system that in production runs TurboCore"! So the whole range of P780 benchmarks using TurboCore is completely unrealistic. And yet you saw fit to use it as the basis of your argument that P780 was the better system..... You sure you know how to design systems for customers? How many of them did you feed that specintrate male-bovine-manure to? But I do note your whole rambling non-argument STILL does not address the licensing issue.

                  So, let's review - you admit you don't have a BL890c i4 to test; you admit the memory tech you tried to tell us was "great" actually has a performance hit; you admitted you were not talking about the latest version of Integrity partitioning; you failed to answer the challenge of matching nPars and vPars; you failed to answer the point about IBM not having faith in their own hotswap tech; you failed to answer the problem with adapter swaps on P780 as it can't do Virtual Connect; you fail to counter the licensing point; and finally you fail (again) to note that the tech you described as "useless" is how IBM avoid the bottlenecks in Power. Looks like you must have had a very bad day as that whole rambling post of yours was just more evasion. Try again!

                  1. Jesper Frimann
                    Big Brother

                    Re: Well.....

                    "Great, but you haven't which means you're just guessing"

                    No, I am not. I don't have the prices for the individual parts or very detailed info, but I do know which Itanium processors will be available for the blades, and I know minimum and max RAM sizes, etc. Again, there are benefits to being a business partner.

                    "So you admit there is a performance hit. Good. Next point?"

                    Sure there is; why would I claim otherwise? If you want to criticise the POWER7 platform, it should be that at least the 'B' versions had too much processor power compared to memory - you simply ran out of memory before you ran out of processor resources. Here we've had very positive effects using that excess processor power to level out the playing field.

                    Try looking at this SAP paper:

                    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b074b49a-a17f-2e10-8397-f2108ad4257f?QuickLink=index&overridelayout=true&51346334076479

                    On the 'C' versions of the POWER7 machines, things got better with 4TB RAM support on the machines we use (770s and 780s).

                    "So you're not talking about the latest version, as I said"

                    No, you said "What, first generation Integrity Virtual Machines?" And no we don't, we do n-1. And I haven't seen any references to a change in memory overhead in 6.1. So.. nice try to brush off. I'm right and you know it.

                    "Wrong! The nPars tech has zero overhead as it's managed by the frame"

                    You don't listen to what I say. I'll try to cut it out into cardboard for you.

                    I have two SDs; let's call them SD1 and SD2, each with 128 cores.

                    I carve SD1 up into 16 nPars of 8 cores each; inside those I can run vPars that match the virtual machine sizes I want. On SD2 I simply run one large IVM of 128 cores.

                    Now let's say I want to fit 'n' client virtual machines onto these machines, with an average size of 3 cores. Then the overhead I am wasting per pool is, on average, something like the average virtual machine size divided by 2.

                    On SD2 the 'waste overhead' will be around ~1.5 cores. On SD1 it'll be around 16 x 1.5 cores ≈ 24 cores.

                    It's really not that complicated; this is why partitioning is not cutting edge any more. Again, this is pretty basic capacity planning. That is why technologies like IVM/HPVM and POWERVM are kicking the butt of partitioning technologies like LDOMs/vPars. And we haven't even begun talking about overcommitment yet.
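
                    A rough sketch of that packing arithmetic (illustrative only - the fixed 3-core VM size and the whole-VM packing rule are assumptions for the example, not anyone's real sizing tool):

                    # Illustrative sketch: how many cores get stranded when you carve a
                    # 128-core frame into fixed partitions versus running one big pool.
                    PARTITION_CORES = 8    # each of the 16 nPars carved out of SD1
                    PARTITIONS = 16        # 16 x 8 = 128 cores in total
                    POOL_CORES = 128       # SD2 run as one large pool
                    VM_CORES = 3           # average client VM size from the example above

                    def stranded(capacity, vm_size):
                        """Cores left over once no further whole VM fits."""
                        return capacity % vm_size

                    print("SD1 (16 x 8-core nPars):", PARTITIONS * stranded(PARTITION_CORES, VM_CORES), "cores stranded")
                    print("SD2 (one 128-core pool):", stranded(POOL_CORES, VM_CORES), "cores stranded")

                    With fixed-size VMs that comes out at 32 stranded cores versus 2; with a realistic mix of VM sizes the waste averages out at roughly half a VM per pool, which is where the ~1.5 and ~24 core figures above come from.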

                    "Gosh, I was rambling, yet you didn't manage to disprove a single point! You claim it was that way on Power4, that it's not the same on P780, yet fail to actually supply any information to back up this claim. Want to try again?"

                    Are you kidding? Are you using your obvious lack of knowledge of something as basic as MPIO (that is, multipath I/O) to try to fabricate an issue? Do you want me to sit down with you and run through the whole freaking manual?

                    RTFM, Matt... it's not that hard. And with virtualization and NPIV and MPIO it's even easier, because then all the devices are virtual.

                    "Turbocore"

No, the specintrate2006 result I mentioned was not using TurboCore mode. You just have to look it up:

                    http://www.spec.org/cpu2006/results/res2012q4/cpu2006-20121002-24653.html

                    "CPU(s) enabled: 128 cores, 16 chips, 8 cores/chip, 4 threads/core"

                    That wasn't that hard was it ? And no.. the POWER 795 running in TurboCore mode with 128 4.1GHz cores is not the same as the POWER 780 running with 128 cores with Intelligent Energy Optimization enabled and thus a maximum frequency of 4.1 GHz.

Again you haven't got your facts straight; the whole premise for your argument is dead wrong.

And no, I don't use specint.... in my work. We pay a company called Ideas International to use their independent measurement figures to do sizings. For example, they have a measurement called RPE2. And I can't praise them enough. But I am not allowed to post their RPE2 numbers for the different platforms on the web. It's in their contract; you have to understand this is their bread and butter. And IMHO it's money well spent. Although I think their numbers for the Oracle T4 systems are too high.

The reason I used specint in my post here is that it is more or less the only Tukwila benchmark that HP has posted.

And btw, IBM does sell an 8-core-per-chip POWER7 running at 4.1 GHz. There is one in the Flex System p260 Compute Node.

                    Your "let's review"

So.. there is no licensing issue, no TurboCore issue (other than that we agree it's mostly useless), no hotswap issue, and the male-bovine-manure comes from you. And if your local IBM staff is crap, then hire a business partner, or take responsibility and educate your people, or don't use their products. What do I care..

                    // Jesper

                    1. Matt Bryant Silver badge
                      Happy

                      Re: Re: Well.....

".....If you want to criticise the POWER7 platform, then it should be that at least the 'B' versions had too much processor power compared to memory...." So, as I said, it's an unbalanced design with bottlenecks, and IBM's (and yours and Alli's) constant harping on about core speed is pointless, as the bottlenecks in the design mean the core performance is neither here nor there. Thanks for confirming that point. Which is why I would suggest anyone looking to buy Power (or any server) benchmark it in their own environment, with their data, and not believe the fairytales that are IBM benchmarks and core performance blather.

                      "....no we don't, we do n-1. And I haven't seen any references to a change in memory overhead in 6.1...." So, as I said, you don't use the latest version of hp's partitioning software and don't have proof it is still the case with the latest version.

".......Now let's say I want to fit 'n' client virtual machines onto these machines, of an average size of 3 cores....." Which just goes to show you don't understand hp's partitioning or the SD2. In your carefully constructed worst case, three cores, I would simply create an nPar of three Superdome 2 blades (twelve sockets) and then split those down into 3-core vPars without any wasted cores whatsoever, and no overhead. Indeed, I could go a step further and include iCAP or TiCAP cores so I can have exactly the right number of cores at the time of implementation and then activate extra cores as required should the solution grow, adding those cores online or even migrating them online between vPars. Now, do you really want to compare that to Power (no nPars)? And I bet you don't want to compare that with wasting four cores per socket with P780 TurboCore!

                      ".....It's really not that complicated....." Not when you know what you're talking about.

"......And with virtualization and NPIV and MPIO it's even easier, cause then all the devices are virtual....." You mean using the FCOE adapters, the only CNA option on the P780, which just doesn't have the flexibility or feature set of Virtual Connect or FlexFabric? Nice try. What about the other LAN and SAN cards on the P780, all of which have fixed MAC and WWN addresses? Sure, you can multipath, but it's not the same as FlexFabric.

".....http://www.spec.org/cpu2006/results/res2012q4/cpu2006-20121002-24653.html....." Oh, you mean the one where IBM had to dial down the core speed to 3.724GHz? Because that's what they have to do with the Power design - limit core frequency as they scale up - because the system cannot supply enough power or cooling to run eight cores continually at 4+GHz, and nowhere near the 4.4GHz they blather on about in their marketing slides when they implement TurboCore. And all Intelligent Energy Optimization does is let the cores run very short bursts at high frequency, and then not all the cores at once. In short, it's another IBM fudge. I didn't realise you were talking about that particular specintrate result because it was with RHEL (evidently it's not just SLES that is faster than AIX then), rather than the fastest 780 result with AIX, which was what I linked. My sincere apologies for not realising you wanted to show that AIX is slower than that Linux distro too.

".....And no, I don't use specint....." So you admit that you wouldn't use TurboCore because it is "useless", and wouldn't use specint in your work (so why mention it then?). Yet your argument is based around specint and you quote frequency figures for TurboCore......

"......But I am not allowed to post their RPE2 numbers for the different platforms on the web....." So, having quoted a benchmark you wouldn't use at work, you now want to refer to a benchmark you can't share.... Still not convincing! What's next, are you going to channel the dead for some performance stats? Why not ask IBM what they say about benchmarks - "All performance estimates are provided “AS IS” and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks and application sizing guides to evaluate the performance of a system they are considering buying. Actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration. IBM recommends application-oriented testing for performance predictions." (IBM Power Facts and Features: IBM Power Systems, IBM PureFlex and Power Blades, October 2012). Looks like IBM don't actually put any worth in the specintrate blather either!

"....IBM does sell an 8-core-per-chip POWER7 running at 4.1 GHz. There is one in the Flex System p260 Compute Node...." Which is basically a specialist blade with very limited scaling capabilities for AIX and a very limited choice of specialised low-height RAM modules. The BL860c i4 will trounce it in every way. So IBM can build a two-socket Power7+ blade and run all eight cores per socket at 4.1GHz? No, they can't. Again, it's "up to 4.1GHz" in the p260 because they're using Intelligent Energy Optimization to hit the 4.1GHz figure; the average speed you can actually get is closer to 3.7GHz. And the four-core option for the p260 goes up to 4.4GHz per core, which again hints at what the eight-core could do if IBM could figure out power and cooling. BTW, weren't IBM marketing talking about 5.1GHz Power7+ at launch? Where did that go then?

The likely reason Power7+ has bottleneck issues is that IBM chose to use cheaper and more space-efficient eDRAM for L3 cache rather than faster SRAM (which is what Poulson uses). In their mainframes they add an off-die L4 cache to get around the bottleneck of slow L3 cache, but that's too pricey to implement in p-series. Cache is very important (unless your name is Kebabbert), as you need cache to keep your cores spinning productively; otherwise you're just wasting cycles and electricity whilst waiting on memory or disk to provide data. So you want lots of cache, and as fast as possible. IBM compromised and went for lots of slow eDRAM as L3 cache so they could fit it inside the power and cooling envelope of the Power7+ package. To have used SRAM on Power7+ within the same package size would have meant reducing the L3 cache (meaning more cycles spent waiting for data, making the Power bottleneck even bigger) or reducing the number of cores to make room on the die and provide power for the cache (sales suicide with Poulson going eight-core and Xeon already at ten). So instead IBM compromised with TurboCore, which allows half the cores to get twice as much (slow) L3 cache. Whilst Poulson can do a full-speed eight-core with lots of fast cache, in a balanced system design, with IBM it's compromise, compromise, compromise - choose either performance or scale; they just can't seem to get round those bottlenecks in the Power design.

                      1. Jesper Frimann
                        Thumb Down

                        Re: Well.....

                        "So, as I said, it's an unbalanced design with bottlenecks..."

*CACKLE* Now that is kind of pathetic. No, you were doing some rather incoherent rambling about TurboCore and how that means an 8-core POWER7 doesn't perform. And now that I say there was too much processing power in the earlier versions of the B machines, in comparison to how much RAM the machines could hold, then that is what you said ?

                        Have you considered running for office in the US ?

                        "So, as I said, you don't use the latest version of hp's partitioning software and don't have proof it is still the case with the latest version."

It's the same. The 6.1 manual refers to the same whitepaper as the 4.3 manuals. And actually there are examples in the 6.1 manuals that indicate the average memory overhead has gone up.

But at least this time they talk about overcommitment. So sorry Matt, you are wrong.

                        "Which just goes to show you don't understand hp's partitioning or the SD2. In your carefully constructed worst case"

Who ever talked about an SD2 ? And it's not the worst case; it could have been much worse. I've seen much worse, but that was on Sun kit back in the late 90s.

                        But I can evaluate your proposition. First I talked about an 'average' size of 3, not an exact size of 3.

Again this means that on each nPar you'll have a waste of ~1.5 cores due to not getting an exact fit. Furthermore, each nPar will have to have its own iCAP and TiCAP cores rather than drawing from a single pool.

Hence on a machine with 4 nPars you will have an overhead of 4 x ~1.5 cores = 6 cores, plus an overhead because you'd have to have 4 pools of iCAP processors, not just one. Let's be super optimistic and put that waste at half a core for each nPar, again because we are nice to Matt; that gives us 2 cores.

Then there are the vPars. Again, we talked about an average size of 3 cores. That means that all the virtual machines that need 2.1-2.9 cores will get 3 cores; hence approximately 0.5 cores are wasted per vPar. And inside your 12-socket nPar you have 32 vPars, hence that is roughly 16 cores wasted.

Now, as we are talking partitions, not virtual machines capable of running overcommitted, the processor time these vPars aren't using is wasted.. it is wasted.. if a virtual machine isn't using its CPU cycles, nobody else can use them. That means that if a virtual machine runs at 100% utilization for 1 minute and at 0% utilization for the next 4 minutes, then 80% of that processor power is wasted. Sure, normal workloads are a bit different, but the general picture is the same.

Everybody that runs VMware, IVM, Xen, POWERVM, zVM .... knows this for a fact. So on top of the partitioning waste you are stuck at perhaps 20-30% physical utilization, rather than the 50-70% you would get if you ran a fully virtualized environment.
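
To put rough numbers on that point (a purely illustrative workload pattern, not a real trace, and the 2x headroom factor is just an assumption):

# Illustrative only: each VM runs flat out for 1 minute in every 5, as in the
# example above, and needs 3 cores when it is busy.

busy_fraction = 1.0 / 5.0   # 100% for 1 minute, 0% for the next 4
vm_cores = 3
n_vms = 32

useful_work = n_vms * vm_cores * busy_fraction        # ~19.2 core-equivalents

# Dedicated partitions: every VM keeps its 3 cores even while idle.
dedicated_cores = n_vms * vm_cores
print(round(useful_work / dedicated_cores, 2))         # ~0.20 -> ~20% utilization

# Overcommitted pool: size for aggregate demand plus headroom for coinciding
# peaks (the 2x headroom factor here is an assumption, not a vendor rule).
pooled_cores = useful_work * 2.0
print(round(useful_work / pooled_cores, 2))            # ~0.50 -> ~50% utilization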

So basically, Matt, your SD2 here, run the antiquated way you want to do things, will have a massive overhead - most likely somewhere around 50% of all the cores inside the machine.

                        If this is how you design solutions, how the hell can you stay in business.. you guys need to be outsourced.

                        "You mean using the FCOE adapters...."

No, I mean virtual devices inside the virtual machines. You know, where with a few quick commands or clicks of a mouse or a script - whatever you want - you have created new virtual networks, with virtual network adapters and virtual storage adapters.

                        "Specint"

You blew it.. just admit it. And who cares about clock speed; people care about capacity. And no matter how much TurboCore and GHz FUD you throw, it still stands that Intel's numbers (you know, the guys who make Itanium) claim that the top-bin Itanium will do around 1250 specintrate2006 with 8 sockets and 64 cores in a BL890c i4 blade. And you are claiming that that blade is just as fast as a POWER 780 with 16 sockets and 128 cores doing 6130/6134 specintrate2006. Again.. the claim is ridiculous, and the fact that you just don't admit you are wrong is.. actually kind of sad.

                        RPE2.

I really don't understand why you need to put down Ideas International, and the fact that I use an independent vendor when doing sizings for our clients. I personally think it's a great idea, and I would recommend that clients use, for example, Ideas International when benchmarking vendors against each other. It is IMHO a great investment, as it lets you cut through the vendor crap.

                        And sure all performance depends... if you don't size things right and utilize the features which the sizing depends upon.. then things will differ. You are FUD'ing.

                        "Which is basically a specilaist blade with very limited scaling "

So ... when everybody but HP is doing blades, then it's specialist.. and crap? *CACKLE* Actually the 2-socket p260 blade is OK. It supports 512GB RAM and a dual Virtual IO Server setup.

And yes, it's 4.1GHz, not up to 4.1GHz with 'boost', which again makes your ramblings about TurboCore and speeds.. kind of funny.

                        "The BL860c i4 will trounce it in every way."

Oh ? ... *cackle* You do know that the p260 beats the ProLiant BL460c Gen8 with Intel Xeon E5-2680 by around 35% ? So you are actually predicting that the BL860c i4 will basically crush the current Intel Xeon processors ?

                        Perhaps you should think a bit about that, and contact your local Intel sales representative for a confirmation on that wet dream.

Your last TurboCore SRAM cache story is.. well, so far out there that it belongs in the same category as orbital mind-control lasers and the faked moon landing.

Personally I think that increasing your per-core on-chip L3 cache from 4 to 10MB is better than reducing the cache size, as happened from Tukwila to Poulson (6MB to 4MB per core).

                        Again you ramble on, without checking your facts.

                        // Jesper

            2. Anonymous Coward
              Anonymous Coward

              Re: Well.....

Chill, Jesper. Honestly, you, Matt and Alli are descending into a nonsensical rant.

              > Sure they needed it cause of the bloated code but that is another story.

Why would you want to run HP-UX or AIX anyway and pay extraordinarily high costs? Except for the few folks with bags of money, the world+dog doesn't give a sh!t about UNIX in the long term.

              Show me 1 workload that won't run on x86 ... and I'll show you 1000s that do...

              1. Jesper Frimann
                Headmaster

                Re: Well.....

                Well, IMHO your comment is actually very valid.

                But for large companies and people who do sourcing of IT for clients it does make sense.

We have the advantage of having the sheer volume that actually makes large UNIX servers a viable option.

For example, in the UNIX shared environment that I am responsible for, it's actually cheaper for you to buy a virtual server that runs AIX and has XXX capacity units than it is to buy a virtual Windows server.

Even though our UNIX hardware platform is a huge hotswap-everything UNIX machine, compared to 2-socket, somewhat good quality x86 servers.

Simply because we have the volume to do things efficiently, and thus expensive UNIX servers make sense. So it's not for everyone.

Now, having 1000 Wintel servers that all run at lousy utilization, compared to a few large UNIX boxes, is.. well.. expensive. You could be using that money on something else.

                // Jesper

                1. Matt Bryant Silver badge
                  WTF?

                  Re: Re: Well.....

"....We have the advantage of having the sheer volume that actually makes large UNIX servers a viable option....." I nearly fell off my chair laughing at that! I'm guessing you designed the Windows environment, because the idea of making Power even match x64 prices is laughable. Shall we compare? A top-end Xeon already scales to ten cores and costs a lot less than a Power CPU. A sixteen-core AMD costs even less. I guess what you are trying to pretend is that you are using PowerVM on a large p-series system and comparing it to the most expensive two-socket Xeon box you can find. You are no doubt using PowerVM to produce lots of insecure instances with one common service partition (which means if you have to work on the service partition then all the instances have downtime). That is easy to exceed in VMware, which is more feature-rich than PowerVM, more secure, and - frankly - more stable, and a whole lot cheaper, and I can do it on Xeon systems that scale to hp DL980 size (eight ten-core Xeons) for a fraction of the price of p-series, and maximise utilisation with VMware. Or I can scale out on cheaper 2-socket blades like the BL460 or the SL6500 scalable platforms, both of which will be massively cheaper than p-series, and still maximise utilisation with VMware (or KVM, or Xen).

                  But don't take my word for it, let's look at real large-scale environments like Google, are they using Power? No, they are using x64, they just design their solutions a lot better than your architects seem able to. For you to even try and pretend that AIX on Power can be cheaper than Windows on x64 is too silly to contemplate.

                  1. Jesper Frimann
                    Pirate

                    Re: Well.....

                    "You are no doubt using PowerVM to produce lots of insecure instances with one common service partition"

*CACKLE* You really have no bloody clue how POWERVM works, do you ? You are quoting the insecurities of HPVM and projecting them onto POWERVM. *CACKLE*

And sure, plastic virtualization like VMware has more features and bells and whistles than all other virtualization technologies put together.

                    But still what is left here is the fact that you really have no clue. You don't.

And the Google comment is even more hilarious. I have great respect for the stuff that Facebook, Google and others are doing. But the problem they are solving is more akin to supercomputing than 'business computing'. The mechanics are different.

And yes, x86 hardware is cheap, but sometimes cheap is expensive. Again, pulling the DL980 out of the hat is textbook HP sales tactics. Again.. I get to see the slides: "when you fail to lead with Itanium first (SD2/BL890c i2/i4), then pull out the DL980". What became of all the Itanium benefits ? What became of all the HP blade system advantages ? No.. now it's the DL980.

Again, you are not trying to champion one platform over another; you are simply trying to FUD the POWER platform. Your sudden change of direction is kind of a tell.

                    Sorry Matt but you are kind of predictable.

                    // Jesper

                    1. Matt Bryant Silver badge
                      FAIL

                      Re: Well.....

"....You really have no bloody clue how POWERVM works...." Really? So what is the VIOS, the Virtual I/O Server, in PowerVM? It's the IBM marketing name for the service partition that has to sit below ALL instances in PowerVM to handle virtualising the hardware for the software instances! Shall we see what IBM says in its Redbooks about VIOS? "The Virtual I/O Server as a virtualization software appliance provides a restricted, scriptable command line interface (IOSCLI). All Virtual I/O Server configurations must be made on this IOSCLI using the restricted shell provided......A Virtual I/O Server partition with dedicated physical resources to share to other partitions is also required......" So not only is it an overhead (it requires system cycles and RAM to operate), but you also have to go play with it in a CLI, no GUI! How last century! Don't believe me? Then go read here, pages 218 and 223 - http://www.redbooks.ibm.com/redbooks/pdfs/sg247940.pdf

                      But it gets better! Not only is VIOS an overhead, but IBM actually advise you to mirror the service partition as they know it is a vulnerability that could take down ALL the hosted VMs, so you actually end up with DOUBLE the overhead! Please do try and deny it, Jesper, just for fun. Wait, don't tell me, you're now going to claim IBM knows nothing about PowerVM too.....

"....And the Google comment is even more hilarious...." More evasions. You stated you could produce a large AIX environment on Power and make it cheaper than 2-socket Windows instances. The cookie-tray type of x64 server hardware used for the Google systems is exactly what can be used with Windows, and that is being done by hosting companies. You just don't want to admit you were wrong. Again.

                      ".......Again pulling the DL980 out of the hat is actually again text book HP sales tactics...." So providing proof of a large-scale x64 system that is a lot cheaper than your IBM equivalent can't be mentioned because hp might actually have told their salesgrunts to sell it? Oooh, winning argument - NOT! You brought Windows into the discussion with your claim you could make AIX and Power cheaper, don't get upset when I show how easy it is to debunk that.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: Well.....

                        "but IBM actually advise you to mirror the service partition "

                        It learns! Up till now you have either been lying or just shown a complete lack of competence. Which is it?

                        "but you also have to go play with it in a CLI, no GUI! How last century!"

                        I retract the question.

                        1. Matt Bryant Silver badge
                          FAIL

                          Re: Re: Well.....

"....It learns! ...." You haven't, seeing as you still haven't supplied any counter to the point.

                      2. Jesper Frimann
                        Holmes

                        Re: Well.....

                        *CACKLE*

Man, it's fun to watch you quote things from the manual and try to twist them into something that fits inside your misguided world view. And fail, because you don't understand it.

The Virtual IO Server is what it is: a server that serves virtual IO to the virtual machines running on the physical server. It's been there for what, 7+ years. In my standard design we don't have 1.. or 2 or 3, we have 4 - one residing inside each System Unit, owning the IO devices that are in that System Unit. Which is very smart when doing concurrent maintenance on the physical machine. We don't yet use the federated capability of the new VIO code, but we will. So for now it's SEA failover for network and MPIO for SAN traffic. Works like a charm. No single point of failure. And I can just reboot a VIO server if I want to... no loss of packets, no loss of traffic; sure, virtual machines that are using this VIO server will cough and complain as they 'route' traffic to another VIO server if I 'just' pull the plug, so a controlled, managed takedown is clearly preferred.

Now, you don't have to use VIO servers if you don't like them. You can just dedicate an adapter, no problem, or use the IVE adapters for network (hardware-virtualized adapters). But if you are running a multi-tenant environment you want the flexibility and the security VIO servers give you. It does not run beneath the virtual machines like IVM; it runs beside the virtual machines. So your whole double-overhead thing is hilarious.. again you are taking the shortcomings of IVM and projecting them onto POWER - shortcomings that aren't there.

Is there an overhead ? Sure there is. Memory-wise it's usually 16-32GB for a machine with 2-4TB RAM, and CPU-wise, who cares? It does normally reserve perhaps 2-3 cores in entitlement. That is the real overhead, and the real price you pay - well, kind of, because some of the IO CPU work normally done by the virtual machines is actually shipped to the VIO servers, so the overhead is perhaps more like 1-2 cores. But again, compared with HPVM/IVM it's peanuts... I mean 18% compared to what, 1-2% memory overhead for POWER. *CACKLE*
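
A quick sanity check on those percentages (my own arithmetic, using the figures mentioned above rather than any vendor sizing data):

# Sanity check of the memory-overhead percentages quoted in this thread.
# The 32GB / 2TB figures are the ones mentioned above, not vendor documentation.

vios_overhead_gb = 32          # upper end of the 16-32GB claimed for VIO servers
machine_ram_gb = 2 * 1024      # a 2TB POWER box

powervm_pct = 100.0 * vios_overhead_gb / machine_ram_gb
print(round(powervm_pct, 1))                     # ~1.6% memory overhead

hpvm_sizing_pct = 18.0                           # the HPVM/IVM sizing figure discussed above
print(round(hpvm_sizing_pct / powervm_pct, 1))   # roughly an order of magnitude apart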

And with regards to the CLI - now, what is wrong with a CLI ? IMHO it's actually nice, and it keeps the cowboy GUI "just click here, here and here and then..." riders away from the VIO part.

But you can actually GUI-configure your whole machine using the System Planning Tool, in which you design the whole virtualization layout of the physical machine. You then just point and click and push that design out onto your physical machine via the HMC, and hey presto.. you have a machine with VIO servers installed and all your virtual devices, virtual networks, virtual machines etc defined. Ready to install AIX/Linux, whatever.

Sure, it isn't that popular with your local cowboy consultant sysadmin, because it takes away time where he could be clicking away for hours, sipping coffee and making error 40s.

But again.. so your whole POWER attack plan, in defence of Poulson, comes down to the fact that the main VIO server management is done via a CLI ?

                        now the price stuff is in another post.

                        // Jesper

                        1. Matt Bryant Silver badge
                          FAIL

                          Re: Well.....

                          ".....The Virtual IO server is what it is, it's a server that serves virtual IO to the virtual machines running on the physical server......In my standard design, we don't have 1.. or 2 or 3 we have 4....." Hmmm, now didn't you just insist a few minutes ago that the VIOS was "optional":

"......There are no service partitions in POWERVM; there are optional VIO servers....." Posted Monday 19th November 2012 08:55 GMT, Jesper Frimann.

                          So optional you need four of them.....

"....you don't have to use VIO servers if you don't like them. You can just dedicate an adapter....." So an IO adapter for each partition, which can't be shared without a VIOS instance (or two, or four - is the IBM code really so flaky that you need four for comfort?). So instead of sharing an adapter, you have to add dedicated adapters for each instance, driving up the number of adapter cards in the solution and pushing down the efficiency and utilisation of the IO infrastructure, which is a pretty poor method compared to hp's FlexFabric.

".......Is there an overhead ? Sure there is. Memory-wise it's usually 16-32GB for a machine with 2-4TB RAM, and CPU-wise, who cares?....." Don't forget the disk you need for the VIOS, at least 30GB. And the MINIMUM core overhead is 5%, more if you use virtual SCSI. Is that you saying "I'd rather not talk about the core overhead"? So you make a big fuss about an overhead in an OLD version of IVM but don't want to talk about the overhead in the current version of PowerVM? Trolls in glass houses, me thinks.

                          ".....you can actually GUI configure your whole Machine, using the System Planning tool...." Would that be the IBM SPT that you need to use to validate the design and push it to the HMC? The SPT requires a whole separate p-series system to reside on. Wow, those IBM guys really do know how to inflate an order!

                          1. Anonymous Coward
                            Anonymous Coward

                            How to measure success

                            Power/AIX people are a conservative bunch. Anything new that may have an impact on stability and performance is regarded with a high amount of scepticism.

Then it is rather telling that about 70% of Power systems employ advanced virtualisation. If you leave out small 1-socket systems and older POWER5 systems, I think it is close to 100%. Flexible, performant, rock-solid virtualisation, architected from the ground up.

I don't know the Itanic numbers, but from what I have seen I would guess the percentage there is close to zero. One might speculate why, but Itanic's piss-poor virtualisation seems like a probable explanation.

Btw, one of our customers migrated from Itanic to an Oracle solution. Talk about being at the bottom of the food chain when that is considered a step up the ladder.

                            1. Matt Bryant Silver badge
                              Facepalm

                              Re: How to measure success

                              "....70% of Power systems employ advanced virtualisation...." It's really funny when the IBM trolls start to contradict each other. First Jesper says big single instances are a must, then this troll says "close to 100%" of the p-series systems are virtualised. IBM needs to spend more on troll training.

"....I don't know the Itanic numbers....." Well, despite IBM and Oracle FUDing Integrity right through the Oracle trial, and despite customers putting off purchases whilst they waited for Poulson, hp still kept on selling Integrity servers. Now, with Poulson out and Integrity being the only platform with GUARANTEED Oracle availability, IBM have SFA chance of stealing those hp customers.

".....one of our customers...." Ah, obviously an IBM salesgrunt or reseller masquerading as an impartial observer. Not just a troll but a stupid one!

                              1. Jesper Frimann
                                Devil

                                Re: How to measure success

                                "Now, with Poulson out and Integrity being the only platform with GUARANTEED Oracle availability"

That is like saying... "Now that I have a restraining order against my paranoid schizophrenic homicidal boyfriend, I am safe".

When that is said... I am truly delighted that HP won; I think all UNIX people have to be. I really, really liked that win. Having grown up on HP9000 s300s with 68030s, an HP9000 s430, and Embla, the lovely HP9000 735 with its lightning-fast PA-RISC 71x0 processor (can't remember if it was the 7100 or the 7150).... but that was 20+ years ago.

                                Back when HP could make 'UNIX machines'

                                // jesper

                          2. Jesper Frimann
                            Headmaster

                            Re: Well.....

                            Hmmm, now didn't you just insist a few minutes ago that the VIOS was "optional":

It is. I use it.. because it enables me to run hundreds of virtual machines, from tens of different clients, in different firewall security zones on the same hardware - securely and with low TCO. Again, a virtual net card costs just some euro cents in memory usage, compared to the thousands of euros you'll pay for your HP Virtual Connect Flex on your i4 blades. It's all about TCO.

                            "So instead of sharing an adapter, you have to add dedicated adapters for each instance, driving up the number of adapter cards in the solution and pushing down the efficiency and utilisation of the IO infrastructure, which is a pretty poor method compared to hp's FlexFabric."

Again.. you are so far behind in your comprehension of anything non-HP that all your comments about "We also have POWER and bla bla.." seem hollow.

POWER servers have IVE/HEA adapters, which are hardware-virtualized Ethernet adapters - typically 8x 1Gbit + 8x 10Gbit ports for a POWER 770. Sure, I can use those to have hundreds of logical network ports inside the machine, all going out of the machine through the IVE ports.

                            Again I have all the options that you get all wet and hot about when it's in your own HP hardware, but I also have more and better options that I use.

                            "So you make a big fuss about an overhead in an OLD version of IVM but don't want to talk about the overhead in the current version of PowerVM? Trolls in glass houses, me thinks."

*BZZZZZ* Wrong - the memory overhead is the same in IVM 6.1. No change there. Again, read the manual.

                            And it's actually shocking that you don't understand the concept of physical and virtual capacity, as this is also how IVM does things.

What I don't want to give up is entitlement. Try reading some of the more detailed IVM documentation; it AFAIR uses much the same type of virtual and physical allocation system as POWERVM. Virtual capacity I'll throw around like Monopoly money, as long as I don't hit 100% physical hardware utilization. Hence "I'd rather not talk about the core overhead", because it's pretty much irrelevant as long as you have free physical capacity.

                            Entitlement on the other hand I want to keep in check.

                            "Don't forget the disk you need for the VIOS, at least 30GB"

Eh ? 30 GB ? I have 8 internal disks; are you trying to say that a few cheap internal disks are a problem ?

                            "The SPT requires a whole separate p-series system to reside on. Wow, those IBM guys really do know how to inflate an order!"

*CACKLE* RTFMSF. It requires (unfortunately) someone with a Windows PC. For me it's a virtual machine running on my Linux laptop. It then generates a compressed XML file that contains your design.

I think it's pretty clear to everyone that you're not really that sharp on POWER, but that one about the SPT needing a POWER server is new.

                            // Jesper

                            1. Matt Bryant Silver badge
                              Happy

                              Re: Re: Well.....

                              So, to summarize - VIOS is "optional" unless you want to virtualise IO to LAN and disk, which is pretty much what every virtualization job requires; it's "optional", which implies "not needed", but you need to use four per system; it has no overhead, apart from RAM, CPU cycles, disk and Ethernet (but it doesn't cut into your stock of pixie dust!); you claim you can make instances on PowerVM so efficient and cheap that they beat all possible x64 Windows solutions, but only want to compare to 2-socket x64 servers and not hyperscale x64 implementations or even large x64 servers like hp's DL980; and for you the gradual decline of UNIX is a myth......

                              If your defence wasn't so tragic I'd be laughing for hours!

                              1. Jesper Frimann
                                Angel

                                Re: Well.....

                                Well, laughing is good for you. I am glad that I can bring some joy into your life.

And actually I don't care what you think about me, or what I do. I am good at what I do, and I've successfully cut the prices of the products I am responsible for by 70%, simply by utilizing the technology right. And there is still fat left to be cut off.

                                // Jesper

                  2. Jesper Frimann
                    Devil

                    Re: Well.....

                    "I nearly fell off my chair laughing at that! I'm guessing you designed the Windows environment because the idea of making Power even match x64 prices is laughable"

                    Yes sure it is:

                    http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=111050501&layout=

                    and

                    http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=110041301&layout=

A 6% price difference per transaction between an HP 2-socket server with a pure Windows stack and a hotswap-everything, high-end POWER system.

                    "You are no doubt using PowerVM to produce lots of insecure instances with one common service partition"

No insecure instances with one common service partition.. that is HPVM.. you've got things mixed around. There are no service partitions in POWERVM; there are optional VIO servers, but those are exactly for ensuring security.

                    "Google ?"

*CACKLE*, you do know that those guys are trying to solve 1-4 different problems where they have total, 100% control of their software stack, right ?

We have almost everything. The mechanics of the business are very, very different. Again, Google/Amazon etc are more supercomputing-like installations.

                    // Jesper

                    1. Matt Bryant Silver badge
                      FAIL

                      Re: Well.....

".....A 6% price difference per transaction between an HP 2-socket server with a pure Windows stack and a hotswap-everything, high-end POWER system....." Oops, we've lost Jesper, he's hiding in the Benchmark Zone again! But, hold on a sec - Jesper claimed he could make his IBM solution CHEAPER than Windows 2-socket, but IBM can't manage that even with their infamous benchmarking hi-jinks. Jesper is truly an IT guru - not! What makes it worse is that was a bench for the older G7 version of the ProLiant, not the latest and faster Gen8. Looks like even the fantasy world of the Benchmark Zone isn't being friendly to Jesper today!

".....There are no service partitions in POWERVM; there are optional VIO servers....." Without a Virtual I/O Server instance you cannot virtualise the adapter cards, shared disk, tape drives, etc. In short, without VIOS you're just creating VMs that can't actually connect to anything. The typical partition layout, with dual VIOS instances for redundancy, is shown on page 370 of the IBM Redbook on PowerVM (http://www.redbooks.ibm.com/redbooks/pdfs/sg247590.pdf). Once again, if you want to disagree then go moan at IBM. You could do micropartitions via the PowerVM Hypervisor, but then that's really putting all your eggs in one basket as you cannot have a redundant Hypervisor.

"....*CACKLE*...." I'm getting a bit worried by that - is it some IBM salesgrunt standing over you and zapping you with a cattleprod whilst screaming "more FUD, more FUD!"?

"......you do know that those guys are trying to solve 1-4 different problems where they have total, 100% control of their software stack, right ?....." Yes, with x64 tech, not Power. And the reason is because they can do it better and cheaper with x64.

".... The mechanics of the business are very, very different...." Yes, backtrack some more why don't you. So it looks like you can make Power cheaper than x64 right up until the point where actual reality steps in.

                      1. Jesper Frimann
                        Headmaster

                        Re: Well.....

                        "Oops, we've lost Jesper, he's hiding in the Benchmark Zone again! But, hold on a sec -"

Heh.. so having an HP x86 benchmark from 2011 battle it out on price against a two-generations-old POWER7 system from Q1 2010 isn't a big enough advantage for you ? *CACKLE*

In real life, besides this, the systems I have are.. well, serial-number-wise, for the most part 7+ years old; hence they are upgrades of upgrades... so perhaps I save as much as 50% of the price of the system, due to the fact that I reuse old software licenses, core and memory activations, adapters and other parts. I never buy new, I upgrade.

Furthermore, I don't use the POWER 780; I use the POWER 770, which is only around 40% of the price of a POWER 780 but delivers 85% of the capacity (CPU-wise). Memory is the same.

                        So before I even start to talk about discounts... I am already at 20% of the price of the system you are looking at there.

Furthermore, I can drive the utilization - again, in the real world - to a higher level than you can on your little x86 machine, especially if we take the same type of workloads. Sure, consolidating X number of idle servers.. which I sometimes see on VMware farms.. nobody can really match that.

                        "Hypervisor, but then that's really putting all your eggs in one basket as you cannot have a redundant Hypervisor."

*CACKLE* Again you haven't got the faintest clue.. And sure, I use multiple VIO servers, I've written that already.. is it a problem ? No.. does it limit IO ? Naahh, not really.

Try looking at number 9 in the list of best results for the SPC-1 benchmark (it was number 1 when it was released quite a few years ago):

                        http://www.ideasinternational.com/Benchmark-Top-Ten/SPC-1-SPC-1-E

It's... POWERVM with VIO servers serving storage to a virtual machine... Now where is the result for IVM ?

Oh.. all they talk about in the manuals is overhead and overhead and ohh... yeah right. What POWERVM did was just go out and prove that it was the fastest storage system. So let's just put the 'VIO is bad' thing to rest.

And as for a dual hypervisor ? You don't really know how POWERVM works, do you ? It's not an independent program that executes beneath the virtual machines. It's more like a shared library that you call, and all execution is done by the virtual machine.. in hypervisor mode. Again, it's fully supported and has special support in the POWER hardware.

And just for the record.. the hypervisor is mirrored... in different parts of memory on different books/system units... Again.. you are projecting the weaknesses of the solutions you normally work with onto a system that does not have these weaknesses. Again.. and I am getting tired of this.. try reading a manual on the subject.

                        "I'm getting a bit worried by that, it is some IBM salegrunt standing over you and zapping you with a cattleprod whilst screaming "more FUD, more FUD!"

Actually, they do not like me; rumour from a reliable source has it that after I took over as design authority on POWER, IBM's POWER hardware sales to us declined by more than 25%. Honestly.. they do not like me; sure, the number of upgrades and POWER 770s sold has increased, but revenue is down. No shit.

                        "Yes, backtrack some more why don't you. So it looks like you can make Power cheaper than x64 right up until the point where actual reality steps in."

Again, just bull from you. I do not claim to be able to solve the problems that Google, Facebook etc. are solving better than them, with the tools I am using. The bull here is you claiming that their problems and their tools can sustainably solve the problems I am solving.

Again, we are seeing UNIX clients we have lost to competitors with x86 scale-out blade solutions coming back, asking us to help them, because they got screwed over and have problems caused by these lousy solutions - problems that are visible in their yearly financial reports.

                        // jesper

                        1. Matt Bryant Silver badge
                          FAIL

                          Re: Re: Well.....

".....so having an HP x86 benchmark from 2011 battle it out on price against a two-generations-old POWER7 system from Q1 2010 isn't a big enough advantage for you ?....." You mean your apples-and-oranges comparison? Please try and keep at least one foot in reality.

"......Furthermore, I can drive the utilization - again, in the real world - to a higher level than you can on your little x86 machine....." Oh puh-lease, take your meds! Yes, PowerVM is just such the best choice for virtualization that the world's favorite and preferred choice for virtualization is.... not PowerVM. It's VMware on x64. It's a pretty safe bet that there are more instances of KVM out there than PowerVM. And how can you claim PowerVM is going to get better utilization when you yourself admitted you use FOUR separate instances of VIOS - that's the four VMs lost just for virtualising IO!

"....So let's just put the 'VIO is bad' thing to rest....." Que? It was you who first insisted a VIOS wasn't required, that it was "optional", then admitted you put FOUR separate VIOS instances on each system! And it was you who insisted VIOS had no overhead and then backtracked to admitting it requires lots of RAM and CPU (and you didn't mention the dedicated disk and Ethernet needed per VIOS, as I showed in the IBM manual). And how did we get to VIOS? Because it was you who tried to switch the thread from FUDing Poulson to FUDing Integrity Virtual Machines by highlighting the memory overhead, hiding the fact that you were quoting the overhead for an old version. And yet now you're running scared when I counter by pointing out the problems with PowerVM in the current version! ROFLMAO!

"......Again, we are seeing UNIX clients we have lost to competitors with x86 scale-out blade solutions coming back....." Well, even if all that shows is that Danish companies can't spec and implement x64 solutions, it still ignores the fact that x64 is growing and has been growing for years, whilst UNIX is in gradual and continual decline. Now, if what you claimed was true - that PowerVM was the best and cheapest solution, ignoring all the other fantasy claims you make for Power's and AIX's mythical features - it would be the reverse and Power would be out-selling x64. Please do try and deny it, if only for the comedy value.

                          Sorry, but for such silly denial of reality I can only assign the FAIL icon.

                          1. Jesper Frimann
                            Devil

                            Re: Well.....

                            "Please try and keep at least one foot in reality."

But I am.. Are you saying that I shouldn't upgrade my machines, but should throw the old POWER6 machines out ? A big part of my job is cost optimization, and I am pretty good at it - again, to the dismay of our HW vendors.

                            You think that the forklift upgrade from SD -> SD2 is better somehow than rebuilding a machine ?

                            I don't...

The difference between you and me seems to be that I am actually working with what I claim to be working with.

                            If I do an ls -1 in my standard POWER hardware directory.. there are actually files that look like this:

                            STD-power-570-9117-MMA-v0.1.txt

                            STD-power-570-9117-MMA-v0.2.txt

                            STD-POWER-740-8206-E6C-v0.1.txt

                            STD-POWER-740-8206-E6C-v1.0.txt

                            STD-POWER-770-9117-MMB-v0.9.txt

                            STD-POWER-770-9117-MMB-v1.0.txt

                            STD-POWER-770-9117-MMB-v2.0.txt

                            STD-POWER-770-9117-MMB-v2.1.txt

                            STD-POWER-770-9117-MMC-v1.0.txt

                            STD-POWER-HMC-7042-CR6-v1.0.txt

And I need to add an STD-POWER-770-9117-MMD-v1.0 file when I get around to it, but there's no hurry - we have plenty of capacity to service our capacity pipeline.

                            "And how can you claim PowerVM is going to get better utilization when you yourself admitted you use FOUR separate instances of VIOS - that's the four VMs lost just for virtualising IO!"

Again, your lack of technical understanding is shining through. For a machine with 64 cores and 4TB RAM, four VIO servers is the right choice IMHO. Again, a fully redundant, scalable design that lets me turn something like replacing hardware parts on a running machine into a trivial change, saving time and money. Sure, with your nPars and vPars on your BL890c i4 you can just do the same, right ?

                            And for the record I like KVM, run it myself. VMware is kind of expensive.

                            "And yet now you're running scared when I counter by pointing out the problems with PowerVM in the current version!"

No, I am not really scared. And please provide me a link that shows the change in memory overhead in IVM. Please do. At least then I'd have a reason to force our HP-UX guys to upgrade to 6.1.

Now, if you really could point out some problems with POWERVM that were real, I'd listen. I have no problem with real problems; as I pointed out, I thought that the M[M|H]B versions of the POWER 770 had way too little memory. 2TB was not enough.

                            "We'll, even if all that shows is Danish companies can't spec and implement x64 solutions,

                            Can't really remember if it was one of the clients where you claimed to have been in as a consultant.....

                            " it still ignores the fact that x64 is growing and has been growing for years, whilst UNIX is in gradual and continual decline. "

Yes, it is sad.. but the real problem is that the UNIX market is becoming a one-horse race.. the ones that are losing market share are HP and Oracle.. IBM is growing.. so it looks like at least there'll be one UNIX vendor left in 20 years.

                            http://esj.com/articles/2012/03/12/ibm-power-systems-sales.aspx

Not that I wouldn't have liked all UNIX vendors' revenue to increase. I honestly would.

                            // Jesper

                            1. Matt Bryant Silver badge
                              FAIL

                              Re: Well.....

Nothing shows up your continual FUDing more than your reuse of the same debunked arguments over and over again.

"....You think that the forklift upgrade from SD -> SD2...." You mean the SD frame that had been unchanged for ten years? Please do name an IBM p-series system that stayed on sale with the same frame for that period. I know you can't, because you have played that card before and been trumped every time.

"....The difference between you and me seems to be that I am actually working with what I claim to be working with...." Really? So you can copy a screen that could have been lifted/edited from any IBM forum? Hmmmmm, proof beyond doubt! Your problem is that you don't stick to whatever you claim to work with; you spend time FUDing stuff you obviously haven't touched. For example, you admitted you haven't touched the latest version of Integrity Virtual Machines yet made claims about it. And you haven't touched Poulson or an SD2, but that still doesn't stop you making the most stupid of statements.

                              "....And please provide me a link that shows the change in memory overhead on IVM....." Actually, it was you that brought up that bit of FUD and when challenged was unable to provide a link!

"....Can't really remember if it was one of the clients where you claimed to have been in as a consultant....." I don't disclose the clients I have previously worked for directly or contracted for, so you're just making things up now.

".....but the real problem is that the UNIX market is becoming a one-horse race...." LOL! Like I said, you know the IBM trolls are scared when they start spewing the FUD! If Power and AIX were as peachy as you maintain, then NO-ONE would have bought ANY Itanium, SPARC or x64 in the last ten years! They most obviously did. FAIL!

                              1. Jesper Frimann
                                Devil

                                Re: Well.....

                                "Please do name an IBM p-series system that stayed on sale with the same frame for that period."

I've never claimed that there were. Going from POWER4 to POWER5 was a forklift upgrade. The same frame is currently at 8+ years and counting. Look how easy it is to simply admit that this is how things are.. not trying to spin some FUD.

                                "you spend time FUDing stuff you obviosly haven't touched. For example, you admitted you haven't touched the latest version of Integrity Virtual machines yet made claims about it. And you haven't touched Poulson or an SD2 but that still doesn't stop you make the most stupid of statements."

                                Is quoting the manual stupid and FUD ?

Let's look at the manual then (the admin guide):

                                http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03233037/c03233037.pdf

                                "As a result, the formulas used to calculate virtual machine capacity are outlined in the white paper Hardware Consolidation with Integrity Virtual Machines",

                                The current version of that whitepaper can be found here:

                                http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=ca&docIndexId=64179&taskId=101&prodTypeId=18964&prodSeriesId=4146186&printver=true

                                and the actual paper here:

                                http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02157777/c02157777.pdf

On page 14, in the chapter "Identifying the Capacity Requirements for a New VM Host":

                                Physical memory ≥ 1.18 × (aggregate VM memory size) + 1.3 GB

                                It's pretty clear isn't it ?

And just to make sure that it isn't a fluke, let's go back to the admin guide.

                                On page 31:

                                "In terms of overall system memory, the VSP memory overhead typically equates roughly to 1500MB + 8.5% of the total physical memory."

                                and it continues on page 32:

                                "In addition to the VSP memory overhead, individual vPars and VMs have a memory overhead depending on their size. A rough estimation of the individual guest memory overhead can be done using the following formula:

                                Guest memory overhead = cpu_count * (guest_mem * 0.4% + 64M)"

So let's pull up your beloved BL890c i4 blade. Now we run with 25% overcommitment, so we'll make 10 virtual machines, each having 8 cores and 128 GB of RAM.

They then account for roughly 48 GB of RAM in additional guest overhead, bringing the total to roughly 180 GB of memory, or roughly 12%.

So why does the sizing whitepaper then say 18%? Because the above is with 64K pages only; if you use 4K pages you'll need more VSP memory. Let's look at the HPVM 6.1 release notes:

                                http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03233044/c03233044.pdf

On page 26 the release notes talk about a problem that might arise if you use 4K pages (running out of, or low on, VSP memory); you can then adjust HPVM_MEMORY_OVERHEAD_PERCENT to, for example, 15%.

Hmmm... that is an additional 6.5% overhead to be added to the 12% we calculated before, hence we get to approximately 18% overhead. So HP is actually being a responsible vendor and basing their sizing recommendations on the assumption that you are using 4K pages.

But that still leaves you with sizing numbers of ~18%, whereas you might get away with 12% if you optimize things right.
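To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python (purely illustrative, not from any HP tool), assuming a BL890c i4 populated with 1.5 TB of physical RAM and the 10 x 8-core x 128 GB guest layout described above. The formulas are the ones quoted from the admin guide and whitepaper; the 8.5% and 15% figures are the 64K-page default and the 4K-page HPVM_MEMORY_OVERHEAD_PERCENT example from the release notes.

# Sketch only: HPVM memory-overhead formulas as quoted from the admin guide.
# Assumed config: BL890c i4 with 1.5 TB physical RAM, ten guests of
# 8 vCPUs and 128 GB each (the example above).
phys_gb = 1536.0                 # assumed total physical memory, in GB
guests, vcpus, guest_gb = 10, 8, 128.0

# Whitepaper sizing rule: physical memory >= 1.18 * aggregate VM memory + 1.3 GB
print("whitepaper minimum:", round(1.18 * guests * guest_gb + 1.3), "GB physical")

def vsp_overhead(pct):           # "1500MB + <pct> of total physical memory"
    return 1.5 + pct * phys_gb

# Per-guest overhead: cpu_count * (guest_mem * 0.4% + 64M), summed over all guests
guest_overhead = guests * vcpus * (guest_gb * 0.004 + 0.0625)

for label, pct in [("64K pages, 8.5%", 0.085), ("4K pages, 15%", 0.15)]:
    total = vsp_overhead(pct) + guest_overhead
    print(label, "->", round(total), "GB, about",
          round(100 * total / phys_gb), "% of physical RAM")
# Prints roughly 178 GB (~12%) for 64K pages and roughly 278 GB (~18%) for 4K pages.

On those assumptions both of the figures being argued over, roughly 12% with the 64K default and roughly 18% with 4K pages, fall out of the same published formulas.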

                                Again.. I am not doing FUD. I am simply an irritating Infrastructure Architect that Reads The F****** Manual.

                                Is quoting official Intel estimations on performance stupid and FUD ?

                                http://www.xbitlabs.com/picture/?src=/images/news/2012-11/intel_itanium_poulson_2.jpg

                                // Jesper

                                1. Matt Bryant Silver badge
                                  FAIL

                                  Re: Re: Well.....

                                  ".....I've never claimed that there were. ...." No, you implied hp were somehow "cheating" customers by finally changing the frame after ten years, when it is very obvious that IBM must have "cheated" a lot more customers by having much shorter lifecycles for their fork-lift upgrades. Oh dear, did another bit of your FUD just come back and bite you in the a$$? Yet again?

                                  ".....But that still leaves you with sizing numbers of ~18%, where as you might get away with 12% if you optimize things right...." No, with the DEFAULT 64K page size you get 12%, it's only with the 4K page size setting that it goes up to @18%. So, once again your FUD is exposed - you took the worst possible case and tried to pass it off as the ONLY case.

                                  "....Is quoting official Intel estimations on performance stupid and FUD ?...." You're quoting an Intel SIMULATION (read the text at the bottom of the page) and NOT performance testing from an hp system. One of the reasons hp has cleaned up in the Itanium market is becasue hp has always designed better motherboards and always exceeded the performance of the Intel whitebox motherboards. As an IBM troll you should know that well as one of the systems hp thrashed were IBM's own Itanium servers. So it's still NOT accurate and you know you will only have accurate figures when you actually benchmark with a real hp system. Oh, and that's still FUD.

                                  1. Jesper Frimann
                                    Happy

                                    Re: Well.....

                                    "No, you implied hp were somehow "cheating" customers by finally changing the frame after ten years, when it is very obvious that IBM must have "cheated" a lot more customers by having much shorter lifecycles for their fork-lift upgrades."

                                    No what I am doing is exposing your double standards. When I use the fact that I have been able to do upgrades of POWER machines for the last 8+ years, then you label it as:

                                    "You mean your apples and oranges comparison? Please try and keep at least one foot in reality."

                                    But when it's the SD frame we are talking about then it's such a positive thing....

                                    Again I have no problem with calling an upgrade from POWER4->POWER5 a forklift upgrade.

And I have no problem with calling an SD to SD2 move a forklift upgrade. I don't see everything in pitch black and pure white as you do.

                                    "No, with the DEFAULT 64K page size you get 12%, it's only with the 4K page size setting that it goes up to @18%. So, once again your FUD is exposed "

Besides the fact that, again, it was me who did the 12% calculation, not you, the user guide and the sizing whitepaper still say 18%. And I did not take the worst case: the worst case would have been 20 virtual cores per physical core in the calculation of the memory overhead for the virtual guests. That would have raised the overhead from 12% to 14.5%, and then adding the 6.5% for 4K pages would have gotten you to 21%.

That would have been a valid worst case. And using 4K pages is not something that nobody does; there are still a lot of commands on the VSP that will show incorrect values if you use them on, for example, HPVM 6.0. It's only in 6.1 that the default page size changed to 64K. Soooooo.... no matter how much you try to wiggle, the overhead is huge compared to, for example, POWERVM. Perhaps you want to do the memory overhead calculations on POWERVM?

                                    You can be quite sure that they won't be in the 180-300+ GB range for a machine with 1.5TB RAM.

                                    "You're quoting an Intel SIMULATION (read the text at the bottom of the page) and NOT performance testing from an hp system. "

That is why I use the term 'estimations', because that is what they are. But as I have also pointed out, even if HP's own "up to a factor of 3" claims hold water, it's still too little, too late.

                                    // Jesper

                                    1. Matt Bryant Silver badge
                                      FAIL

                                      Re: Re: Well.....

Usual Jesper denial cycle - make a statement, then when it is debunked try and circle round and deny you made the statement.

                                      ".....No what I am doing is exposing your double standards....." You were the one that started on SD2 being a frame change from the original SD so how is that my double standards. Honestly, it would be nice if you even had one standard.

                                      "....Besides the fact that again I did the 12% calculation...." After your blanket claim of 18%, which turned out to be wrong.

                                      ".....worst case would have been 20 virtual cores per physical cores...." Oh don't be pathetic, there is simply no real World application for such tiny VMs. As I said before, try and keep at least one foot in reality.

                                      ".....It's first in 6.1 that the default page size changed to 64K...." So, like I said, you were wrong to simply rabbit on about an old version.

                                      ".....That is why I use the term 'estimations', cause that is what they are. But as I have also pointed out, then no matter if HP's own "up to a factor of 3" claims hold water....." When hp claimed 3x performance that was on REAL SYSTEMS IN THE LAB, not a simulation. I'd try explaining the difference but you seem to have a problem with discerning the real from the imaginary.

Another summary - you squeal double standards on a point you originally raised; you made a blanket claim for IVM memory usage and then got caught out; you then tried to deny you got caught out; you then had to admit the matter only applied to earlier versions, which you had previously denied; you then couldn't understand the difference between estimates from simulations and real figures from a lab test. Please do fail some more, it's very entertaining.

                                      1. Jesper Frimann
                                        Thumb Down

                                        Re: Well.....

                                        "After your blanket claim of 18%, which turned out to be wrong."

Nope, it was right; that is still what the sizing guideline for HPVM 6.1 states. Clearly documented. Again, RTFMSF.

                                        "".....worst case would have been 20 virtual cores per physical cores...." Oh don't be pathetic, there is simply no real World application for such tiny VMs. As I said before, try and keep at least one foot in reality."

Which is why I didn't do the calculations that way. Again, as I wrote, I could have if I wanted the worst possible result, but I didn't, so honestly you are just being childish.

                                        "So, like I said, you were wrong to simply rabbit on about an old version."

Again, the sizing guidelines still say 18%. And if you need or want to run with 4K pages, which has been the standard in every single version of HPVM since the start except the last, then the value used by HP themselves in their own sizing documents is really, really valid.

So it's really not me you have a beef with, it's HP. Sorry for just doing an RTFM. But again, you obviously know better than the manuals.. or..

                                        "When hp claimed 3x performance that was on REAL SYSTEMS IN THE LAB, not a simulation. I'd try explaining the difference but you seem to have a problem with discerning the real from the imaginary."

Up to 3x for some unspecified performance measurement on an unspecified server, in a marketing announcement, compared to very specific numbers for very specific workloads provided by Intel themselves (yes, based upon simulations). Brrrrr... it's not really a clear-cut case. If HP had just released some benchmark numbers that people could relate to... but no.

And you are still trying to dodge the fact that the HPVM 6.1 manual clearly refers to a whitepaper that states that the sizing overhead of RAM on HPVM should be 18%.

                                        Geee..

                                        // Jesper

                                        1. Matt Bryant Silver badge
                                          FAIL

                                          Re: Re: Well.....

                                          ".....Nope, it was right that is still what the sizing guideline for HPVM states for 6.1. Clearly documented. Again RTFMSF......" You did read it, indeed you posted the bit that destroyed your claim. Instead of requiring 18% the default choice requires only 12% in 6.1, so you were only out by a factor of 50%. Of course, in the long history of IBM fudges that's virtually nothing. It's only if the user selects the unrealistic 4K page size that the 18% figure comes into play. Do you need me to draw you a picture in crayon? Default, 12%. Special and unusual requirment, 18%. I know maths isn't your strong suit but even you should be able to see that 18 and 12 are different figures!

                                          "....Which was why I didn't do the calculations the way I did them...." No, you didn't do any calculations, you just repeated some easily debunked IBM FUD and got caught out. Don't moan at me, go tell IBM to update their FUD.

                                          "......Up to 3 for some unspecified performance measurement on an unspecified server, in a marketing anouncement...." Yeah, I know, how silly, using an actual system and real apps! Surely hp should have instead done a totally unrealistic benchmark using just one core in a 128-core server, where they could fudge the figures by using all the cores' cache and memory. Oh, like the IBM specintrate2006 benchmark you trumpeted right up until I pointed out how unrealistic a scam it was! LOL! Keep diggin your own hole, at this rate you'll be in China by next week.

                                          ".....And you are still trying to dodge...." Jesper, you haven't spent the whole thread dodging issues, denying and then having to admit to features (and then denying them again!), that for you to even try that is simply ludicrous. Usual, massive, unmitigated fail. I'm not even sure I want you to try again - it's getting so tiresome correcting you and pointign out your evasions it's wearing a bit thin.

BTW, here's the Reg link to the public IBM Power roadmap (http://regmedia.co.uk/2011/08/28/ibm_power_processor_roadmap.jpg), where's the socket commonality with anything? Oh, and is that Power8 with SFA details? What, still only eight cores? Alli will be screaming about that! Oh, hold on a sec, she's just another IBM troll, so she'll be quiet as a dormouse! And where has Power9 gone? It has disappeared from IBM roadmaps. The last time the public saw it was here (http://www.theregister.co.uk/2012/07/16/ibm_power7_plus_preview/page2.html) but it hasn't been seen since. Has IBM disinvested in Power? Is that because, when Itanium and Xeon are common socket, IBM will HAVE to offer Itanium servers again or be massively undercut in the UNIX market? ROFLMAO!

                                          1. Jesper Frimann
                                            Facepalm

                                            Re: Well.....

                                            "Default, 12%. Special and unusual requirment, 18%.."

Again, in every version of HPVM prior to 6.1 the default value of the VSP memory page size has been 4K. The sizing guidelines say 18%. If I were to state something, it would most likely be something like: minimum overhead is ~12%, maximum is ~21%, which again makes the 18% of the sizing guidelines seem like a good recommendation. But surely the Great Matt knows better, and so surely you should use the minimum requirements.

                                            "Yeah, I know, how silly, using an actual system and real apps! Surely hp should have instead done a totally unrealistic benchmark using just one core in a 128-core server, where they could fudge the figures by using all the cores' cache and memory.

specintrate2006 is what it is, an audited industry-standard benchmark, which admittedly has limited usability for estimating anything but raw processor power, but compared to HP's UP TO 3x performance claim (well, an undocumented claim in some "marketing material") it's still a hell of a lot better.

So let's surely go with the undocumented marketing claim of HP. Right...

                                            Amazing....

                                            "Jesper, you haven't spent the whole thread dodging issues, denying and then having to admit to features (and then denying them again!),"

No, the problem is your lack of technical understanding and your amazing double standards. You only need a vague indication of something that fits your frame of mind and it's the truth, while when everybody else clearly documents things from the vendor's own manuals, it's wrong.

Again, if we take your hilarious TurboCore rant, I've tried to explain to you how you misunderstood things. You have claimed TurboCore can be used on POWER7+; it can't. You have claimed there is a 4.4GHz TurboCore solution; there isn't. The only product you have been able to hook your whole twisted argument on is the POWER 780-MHB.

Btw, a product that isn't sold anymore (http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS912-109). Hence, using your own words: "So, like I said, you were wrong to simply rabbit on about an old version."

                                            // Jesper

                                            1. Matt Bryant Silver badge
                                              FAIL

                                              Re: Re: Well.....

                                              Jesper, you're still playing with evasions and denials. You said IVM had the problem as blanket FUD; I challenged you on the version; you said yes the latest version; I showed you were wrong; and now you're backtracking again! "....Again, every version of HPVM prior to 6.1...." So NOT the latest version, 6.1, as you insisted. You repeated a piece of dated and inaccurate IBM FUD and got caught, just admit it and move on.

                                              ".....specintrate2006 .... has limited usability for estimation anything but raw processor power ....." So you admit it in no way represents the performance of the system, but that doesn't stop you trying to mislead people. When you originally posted it you did not caveat the statement by adding that it was a test only of core performance and did not represent overall system performance (and also ignored how IBM gamed the benchmark with TurboCore and $17m of discounted kit), you simply insisted it represented how p780 would outperform a BL890c i4 in real World use. I've no doubt you regurgitated it straight out of the IBM brochures so it's not really your fault, but it would be nice if you made an effort and put in some real analysis rather than just dribbling brochure bumph.

                                              ".......Again you have claimed TurboCore can be used for POWER7+....." Erm, no! Again I pointed out that the IBM docs say so, and so does the Reg article linked to. Please do say the Reg author has a "lack of technical understanding, and .... amazing double standards" as it was the Reg's resident IBM cheerleader, Tim Pickett-Morgan!!! Careful where you swing that rant cannon, Jesper, you're liable to hit one of your own!

                                              1. Jesper Frimann
                                                Thumb Down

                                                Re: Well.....

                                                "So NOT the latest version, 6.1, as you insisted. "

                                                Again, the sizing information says 18%, which IMHO seems reasonable, given how HPVM works.

Again, you could most likely construct scenarios where 100% of the physical memory on the blade was consumed simply by doing a stupid setup. For example, making a lot of small virtual machines that each have the maximum memory allowed for dynamic allocation set equal to the total physical memory.

                                                I have never claimed that or even tried to construct such an unrealistic scenario. Again I simply consulted the manual. If you have problem with the manual.. contact HP and get it changed.

And just to add fuel to the fire, we haven't even started adding memory to handle IO devices on the VSP. You are so fond of your 10Gbit mezzanine cards... they also require quite a few GB of memory on the VSP.

                                                But again it's always nice to hear how much more clever and skilled you are than HP themselves.

                                                "and did not represent overall system performance (and also ignored how IBM gamed the benchmark with TurboCore and $17m of discounted kit), you simply insisted it represented how p780 would outperform a BL890c i4 in real World use."

                                                Again, the specintrate2006 measurement I referred to did not use turbo core. That is all something that goes on inside your head. Please check your facts.

                                                "Erm, no! Again I pointed out that the IBM docs say"

But it doesn't; what you linked to is this:

                                                http://www-03.ibm.com/systems/resources/systems_i_pwrsysperf_turbocore.pdf

It has nothing to do with POWER7+; it only talks about the POWER 780-MHB, a product that you haven't been able to buy since August.

                                                You are actually getting kind of boring.

                                                // Jesper

                                                1. Matt Bryant Silver badge
                                                  FAIL

                                                  Re: Well.....

                                                  ".....Again, the sizing information says...." What, for not the latest version, again? Dear Mrs Potter, please do try and focus on the matter and don't carry on with these pointless evasions and denials.

                                                  "....making a lot of small virtual machines that have the maximum memory allowed for dynamic allocation set to equal that of the total of physical memory....." <Sigh> You're just demonstrating how much you DON'T know about IVM - it wouldn't let you over-commit. Try again!

                                                  ".....we haven't started adding memory to handle IO devices on the VSP...." Nope, the 10Gb mezz would be handled by the host OS, not the IVM layer which would simply manage the virtual LAN switch and connections to the hosted VMs and the host OS.

                                                  ".....the specintrate2006 measurement I referred to did not use turbo core...." True, that one didn't, but the majority of them do. But, seeing as the test just uses one core anyway, and IBM deliberately game it by channelling all the cache and memory to that one core, it's just as bad as Turbocore and just as unrepresentative of a real World setup. Please try and pretend anyone would pay $17m to run one core.

                                                  ".....You are actually getting kind of boring...." Sorry, was that aimed at me or TPM seeing as he also stated Power7+ in his articel (which you have ignored, again, again, boringly yet again). Maybe you should go have a lie down, Mrs Potter.

                                                  1. Jesper Frimann
                                                    Holmes

                                                    Re: Well.....

                                                    "What, for not the latest version, again?"

YES, for the latest version. Again, if I am to construct a brand spanking new solution (being an infrastructure architect) I will consult the manual, and the amount of memory I will buy in my HP BL890c i4 will be based upon the recommendations in the sizing guidelines. Again, the most likely memory overhead if I use 64K pages will be in the 12% range. If I am to do an upgrade of an earlier version of HPVM, I almost certainly wouldn't change a parameter like the VSP page size, so as not to touch too many variables in a migration.

Doing so is pure cowboy IT. And my overhead would be in the 18% range, depending on how big my installation is. If it's just a smaller, light installation, the overhead won't reach 18%, but will be somewhere in between 12 and 18 percent. And I actually sent a mail to one of our sysadmins to ask what the overhead was on one of our clients' smaller test HPVM installations: it was 14%, and that is on HPVM 6.0.

But again, back to the essence: the administrative overhead on IVM is much bigger than it is on PowerVM.

                                                    "<Sigh> You're just demonstrating how much you DON'T know about IVM - it wouldn't let you over-commit. Try again!"

That is not what I am trying to do. Are you telling me that the sum of ram_dyn_max values has to be less than the total amount of RAM physically in the machine? This value is usually there to tell your hypervisor how much administrative overhead it should set up for a virtual machine when it's started.

Hence artificially increasing this to too high a level can normally allocate unrealistic amounts of memory for overhead. I've seen it done by external consultants that didn't know sh*t..

                                                    "Nope, the 10Gb mezz would be handled by the host OS, not the IVM layer which would simply manage the virtual LAN switch and connections to the hosted VMs and the host OS."

That is actually a valid argument. Again, I wouldn't use a non-hot-swappable network adapter for something important/useful. But again, there are no hot-swappable cards on a BL890c i4 blade. But if you are using the internal virtual network on the VSP, then I have a point.

                                                    ".....the specintrate2006 measurement I referred to did not use turbo core...." True, that one didn't, but the majority of them do.

                                                    No that simply isn't correct.

11 of the submitted specint/fp results, rate and non-rate, use TurboCore, out of 95 submitted POWER7 results, and for specintrate2006 specifically it's 5 out of 47. So... no, Matt.

                                                    "But, seeing as the test just uses one core anyway, and IBM deliberately game it by channelling all the cache and memory to that one core"

No... it's a rate benchmark. Come on.

                                                    "it's just as bad as Turbocore and just as unrepresentative of a real World setup. Please try and pretend anyone would pay $17m to run one core."

Again... no, you are totally off.

                                                    "Sorry, was that aimed at me or TPM seeing as he also stated Power7+ in his articel (which you have ignored, again, again, boringly yet again). Maybe you should go have a lie down, Mrs Potter."

No. There is no POWER7+ TurboCore. RTFMSF. And I'll just wait a bit before going to bed; I need to see the Patriots beat the sh*t out of the New York Jets.

                                                    // Jesper

                                                    1. Matt Bryant Silver badge
                                                      Facepalm

                                                      Re: Well.....

                                                      ".....If I am to do an upgrade of an earlier version of HPVM...." So, you start out with a blanket statement of all IVM, but now it's only if you're upgrading.... OK, so when you upgrade from AIX 5 to AIX 7.1, did you calmly refuse to use any new features of the newer version? Not retune memory? I bet not.

                                                      "....one of our clients smaller test HPVM installations it was 14%, that is on HPVM 6.0....." Again, not 18%, and again, not the latest version, 6.1. Yawn!

                                                      "....back to the essense, the administrative overhead on IVM is much bigger than it is on powervm...." Well, the essence was debunk the anti-Poulson FUD spewed by your fellow trolls, it was you that took the detour into virtualisation by making unsubstantiated comments about IVM, then being unable to prove them when challenged. Now you want to switch to administrative overjead? OK, how many panes of glass of IBM tools does it take to match hp SIM?

                                                      "....Are you telling me that the sum of ram_dyn_max values has to be able to be less that then total amount of RAM physically in the machine ?...." If you try and add static values - as in your example - and they are more than the total memory available you get an error message and it stops you creating the VM. You can overcommit memory by using Process Resource Manager and setting maximums, minimums and priority values for RAM, and Global Workload Manager to set service SLAs to control the overcommit. If you play smart with IVM layouts and mixing apps, and migrate VMs between IVM instances, you can avoid running out of memory.

                                                      "....Again I wouldn't use non hotswappeble network adapter for something important/usefull..." With hp blades I have redundancy built-in, with two independent dual-port Flex LOMs per blade, so total sixteen Flex ports (as in 10Gb LAN and FCOE SAN capability) built-in on a BL890c i2or i4. Ports are split out to different switch modules for added redundancy. Look, I know IBM aren't big on UNIX blades, so I'll let it go that you didn't consider that.

                                                      "....."Please try and pretend anyone would pay $17m to run one core." Again.. no your are totally off....." What, you DID pay $17m to run a solution through one core of a fully-stacked P780!?!?! I suggest you don't provide proof as every VAR in Europe will be calling to replace your system. Even Dell!

                                                      "....No. There is no POWER7+ TurboCore...." As I said, take it up with TPM.

                                                      ".....Patriots beat the sh*t out of New York jets." What, like Italy did to Denmark, despite being reduced to ten men? http://www.goal.com/en-india/match/92437/italy-vs-denmark/report :P

                                    2. Matt Bryant Silver badge
                                      Happy

                                      Re: Re: Well..... and!

                                      ".....You can be quite sure that they won't be in the 180-300+ GB range for a machine with 1.5TB RAM. You can be quite sure that they won't be in the 180-300+ GB range for a machine with 1.5TB RAM.........." Oops, forgot to go back and poke holes in that bit of your post. So, you're trying to imply that a system with 1.5TB of RAM running IVM 6.1 would require 180-300GB of RAM for the IVM as overhead? Really? So you completely failed to read the QuickSpecs for IVM 6.1 (http://h18004.www1.hp.com/products/quickspecs/12715_div/12715_div.HTML): "For VM Host: 1.2 GB plus 8.5% of physical memory" - hmmmm, that seems a lot less than 18%.

                                      So then I checked the IVM 6.1 Administration Guide (http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03233037/c03233037.pdf), and on page 31 it says the following:

                                      ".....T ypically , about 92% of free memory available at the Integrity VM product start time (after HP-UX

                                      has booted up on the VSP) is reserved for the vP ar/VM memory pool...." That seems a lot less than 18% too! What, could it be that Jesper has been talking male bovine manure yet again!?!?!?

                                      1. Jesper Frimann
                                        Thumb Down

                                        Re: Well..... and!

                                        Amazing you are not even capable of reading a manual.

                                        ".T ypically , about 92% of free memory available at the Integrity VM product start time (after HP-UX"

Yes? You forget that each time you start a virtual machine, there is an overhead associated with that virtual machine. Again, I've already referenced the parts of the manual where the formulas are listed. You know... I actually read the manual.

                                        Come on....

                                        // Jesper

                                        1. Matt Bryant Silver badge
                                          FAIL

                                          Re: Re: Well..... and!

                                          "Amazing you are not even capable of reading a manual....." Amazing you are not even capable of comprehending what you read in a manual. Or are you trying to evade and mislead again? You stated IVM host had an 18% overhead, now you want to count it twice? There is 92% of memory free after IVM startup for use by the OS instances created in the VMs.

                                          "....Again I've already referenced the parts of the manual...." I've ref'd the Administration Guide, go follow the link and learn.

                                          Oh, by the way, I'm not going to claim the IVM host is "optional" as you did with VIOS, though I don't need to use four of them per host system as you eventually admitted you do!

                                          1. Jesper Frimann
                                            Holmes

                                            Re: Well..... and!

                                            "I've ref'd the Administration Guide, go follow the link and learn."

Again, I did, and I understood what it said. You obviously didn't.

                                            Again:

                                            "In addition to the VSP memory overhead, individual vPars and VMs have a memory overhead

                                            depending on their size. A rough estimation of the individual guest memory overhead can be done

                                            using the following formula:

                                            Guest memory overhead = cpu_count * (guest_mem * 0.4% + 64M)"

                                            "Oh, by the way, I'm not going to claim the IVM host is "optional" as you did with VIOS, though I don't need to use four of them per host system as you eventually admitted you do!"

Again... you don't get that IVM and the VIO servers are not the same thing. What the VIO servers do is a subset of what IVM does. They serve IO, if you choose to have them do so.

And you still didn't get the reason for having 4: availability. In our design, 2 for network and 2 for SAN; you can also do 2 if you want to use converged. But 4 for a large machine is IMHO appropriate. Not a single point of failure, as your VSP is. 4 VIO servers will, for example, let me do a full memory upgrade of a system without closing it down: first shutting down one VIO server, fencing the system unit it had its IO in, doing my memory upgrade, reintegrating the system unit, and booting up the VIO server, etc. etc.

To be able to do this for a midrange system like the POWER 770 is actually pretty cool. And it saves a shitload of money because you don't have 'pork layer' mid-level managers and the like having to talk to clients and try to find out when they can make a service window and and and...

Again, our industry has way, way too many middle 'pork layer' managers, and way too few technical people who actually know what they are talking about.

                                            // Jesper

                                            1. Matt Bryant Silver badge
                                              FAIL

                                              Re: Re: Well..... and!

                                              <Yawn> Mrs Potter, when are you just going to learn and stop flogging a dead horse?

                                              "....Guest memory overhead = cpu_count * (guest_mem * 0.4% + 64M)...." Still not seing that magic 18% you claimed! Try again!

                                              "......And you still didn't get the reason for having 4, availability...." But they still sit on top of one great big SPOF, the hypervisor. If your hypervisor needs patching then all the VIOS and all the PowerVMs come down. Yes, you need a service window for ALL the hosted VMs. With hp Integrity there is the much neater option of using nPars, hardware partitions, which can each host their own vPar or IVM system. If I want to patch or reboot or completely power off one nPar it has no effect on the others. That's called hardware partitioning, you may have heard of it? Then again, probably not seeing as you can't do it on IBM p770s or p780s.

                                              1. Jesper Frimann
                                                Holmes

                                                Re: Well..... and!

                                                "Still not seing that magic 18% you claimed! Try again!"

Again, it's not me, it's HP themselves; why don't you write to the guy who writes the manual?

                                                "But they still sit on top of one great big SPOF, the hypervisor."

Again, the hypervisor is mirrored in memory. And it's not an independent program that is running beneath all the virtual machines; it doesn't function that way.

                                                " If your hypervisor needs patching then all the VIOS and all the PowerVMs come down. "

Not for all patches, and again, with the VIO part in independent virtual machines running beside the normal virtual machines, you have taken the most frequently changing part out of the equation, and thus greatly reduced the amount of patching needed. Again, POWERVM is included in the firmware for the server.

                                                "Yes, you need a service window for ALL the hosted VMs. "

Again, the hypervisor is a part of the microcode for the system. With the parts that get changed the most located in the VIO servers, you actually only patch the microcode of the system when you have the system down anyway. Which is very seldom; you basically only do it to fix critical errors. Yearly... perhaps. And you do have the option of just moving the virtual machines to another machine while they run, without any downtime; it works like a charm and has been there for years.

                                                "With hp Integrity there is the much neater option of using nPars, hardware partitions, which can each host their own vPar or IVM system. "

Neater? Why buy a big SMP machine and carve it up into hardware partitions? Why not buy individual smaller servers?

                                                "If I want to patch or reboot or completely power off one nPar it has no effect on the others. That's called hardware partitioning, you may have heard of it? "

Sure, I've heard about it; it was something that mainframes did in the '60s and '70s, and that some UNIX vendors started doing 30 years ago or so... and never really moved on from.

                                                "Then again, probably not seeing as you can't do it on IBM p770s or p780s."

No, it has moved on, and actually the idea of putting in an abstraction layer between the physical hardware and the virtual machine executing on it actually isolates the virtual machine from hardware failures, whereas the idea of hardware partitioning just limits the damage done, and also adds a hell of a lot of overhead in the form of wasted space that cannot be used. Again, if you buy into HW partitioning, you need to do n+1, and do n+1 on several levels.

                                                // Jesper

                                                1. Matt Bryant Silver badge
                                                  Devil

                                                  Re: Re: Well..... and!

                                                  "....Again POWERVM is included in the firmware for the server...." And what happens when you need to update the firmware....? Face it, it's simply not as elegant as vPars (no shared hypervisor) or nPars (no shared hardware).

                                                  ".....Why buy a big SMP machine and carve it up into hardware partitions ?...." More efficeint power useage and simpler administration for a start.

                                                  ".... it was something that Mainframe did in the 60ies and 70ies...." So you have the tech and you STILL can't do it with AIX? What kind of crippled cast-off is AIX? :P

                                                  ".....actually isolates the virtual machine from hardware failures, where as the idea with hardware partitioning just limits the damage done...." Yes, because when your firmware issue takes down all your PowerVMs, that is so much better than just a quarter orr an eighth of them as with nPars? Yeah, that's about as believable as the one where you said you spent $17m on a one-core solution.

                                                  ".....you need to do n+1. and do n+1 on levels...." But I can. I can do nPars and then vPars and/or IVM, and use ServiceGuard to fail between VMs or even whole vPars, and PRM and GWLM to make sure it all happens inside set SLAs. As you said, just like on IBM mainframes, only better and cheaper. Which is the real reason you and your fellow trolls are FUDing Poulson and SD2 - it threatens not just AIX but also IBM's captive golden goose, the mainframe market.

                                                  Enjoy!

                                2. Anonymous Coward
                                  Anonymous Coward

                                  Re: Well.....

                                  It seems that Matt, who is now rapidly approaching the full state of Kebabbert, is doubting Intel's own performance figures for Poulson.

                                  http://www.xbitlabs.com/picture/?src=/images/news/2012-11/intel_itanium_poulson_2.jpg

Usually companies tend to overstate the performance of their own products, so in this case the laughable performance increase of 2.4x for Poulson (which, with twice the cores, means only about 20% per core) would normally be an exaggeration and thus represent an upper limit for the few remaining Itanic customers.

But for the sake of discussion, let's assume that Matt is right. One would ask why Intel would downplay the performance of their own Poulson. One reason that immediately pops up is that Intel wants out of this fiasco deal as soon as possible and would rather focus on products where real value can be gained.

It must be sad for Itanic customers to know that the only reason that CPU isn't put to death, and that they still can run Oracle on it, is not any technical reason but that the courts and lawyers have had their say.

                                  1. Matt Bryant Silver badge
                                    FAIL

                                    Re: Re: Well.....

                                    ".....doubting Intel's own performance figures for Poulson...." Did you just wake up? Only just joined the thread and not bothered to read Jesper's previous frothing? That slide has alerady been discussed, it's Intel's whitebox estimation and in no way relates to the performance of Integrity seeing as every generation of Integrity has had far superior performance to the Intel whitebox kit. Do try and keep up.

                                    ".....Usually companies tend to overstate the performance of their own products...." Well, IBM may do that with their Inelegant Energy Optimization scam, but hp is painfully careful NOT to overestimate performance as they tend to guarantee it and write penalties into their support contracts. Hence, when hp (and Intel) talked about Itanium they use the frequency you are guaranteed to get for all eight cores 100% of the time, and not the peak value you could get for short bursts. We've been over this already, why don't you actually read some of the posts before making yourself look any more silly.

                                    ".....One reason that immediately pops up is that Intel want out of this fiasco deal as soon as possible...." So, having a public roadmap that is not only longer and has more detail than IBM's is "wanting out"? Preparing the systems so they can be common-socket with Xeon, which will offer hp the ability to really cut IBM off at the knees, all that is "wanting out"? You must be a very confused chap indeed.

                                    ".....they still can run Oracle on it...." GUARANTEED availability! On hp-ux, OpenVMS and NonStop. Now, do remember to ask your IBM rep if Power has the same (it doesn't). There is nothing to stop Larry turning round and trying the exact same trick with IBM's Power! And don't try pretending that DB2 offers IBM an out, it's like taking a banana to a knife fight.

                                    What an amusingly sad little troll you are!

                                    1. Anonymous Coward
                                      Anonymous Coward

                                      Re: Well.....

                                      " it's Intel's whitebox estimation and in no way relates to the performance of Integrity "

                                      Yeah, slapping some metal chassis with an Integrity sticker on top of that CPU will surely do wonders with those poor SPECint numbers.

                                      "There is nothing to stop Larry turning round and trying the exact same trick with IBM's Power!"

That is correct. But the real difference between Power and Itanium is that one of them has lots of customers who want to stay on that platform, while the other is limited to hard-core fanbois like you.

                                      1. This post has been deleted by its author

                                      2. Matt Bryant Silver badge
                                        FAIL

                                        Re: Re: Well.....

                                        "....Yeah, slapping some metal chassis with an Integrity sticker on top of that CPU...." Ah, I can see now why you might be confused - you know nothing about hp Integrity! Just to clear it up for you, hp design their own motherboards for all their Itanium products rather than use the Intel whitebox ones that other Itanium server vendors use. Due to their historic part in developing Itanium, hp have always had an advantage over the whitebox versions and hence they have cornered so much of the Itanium market. BTW, I believe IBM used the Intel whitebox mobo in their X445 Itanium server, so it's no surprise the hp equivalent was 20% faster (my own bench at the time, with Oracle and Weblogic stack and real, production data). So using an Intel estimate based on a simulation would provide a figure well short of what hp are likely to produce.

                                        "....will surely do wonders with those poor SPECint numbers....." Oh dear, it's another IBM troll lost in the Benchmark Zone. Once again, show me a company that runs specint as their business and hp might be worried, but, as mentioned in the article, "some of the largest financial networks employ Itanium-based servers in one form or another, as do most of the Fortune 100". Looks like they're not too worried about benchmark fetishism either. Try doing some work in the real computing World, it might help.

                                        ".....That is correct...." Blimey, an IBM troll admitting there is nothing stopping Oracle shafting the Power franchise! Hold on a sec, I anticipate some weasel-worded evasion coming shortly, probably wrapped in a sulky rant!

                                        ".....But the real difference between Power and Itanium is that one of them has lots of customers who want to stay on that platform where the other is limited to hard core fanbois like you." And there it is! So, Mr Enterprise Guru, which bit of "some of the largest financial networks employ Itanium-based servers in one form or another, as do most of the Fortune 100" was too much for you to comprehend? Do you need it broken down into monosyllables?

The really funny bit is this pathetic troll probably thinks he's helping Jesper! With him, Jesper and Alli, it's a bit like the Marx Brothers! Jesper's dim-witted and evasive answers certainly make him like Chico, and this AC troll is definitely Harpo Marx. Oh dear, that makes Alli the Groucho - that moustache!

                                        1. Jesper Frimann
                                          Unhappy

                                          Re: Well.....

And it's amazing that you still try to spin the Oracle lawsuit into something positive for HP's Itanium products.

                                          "The really funny bit is this pathetic troll probably thinks he's helping Jesper! With him, Jesper and Alli, it's a bit like the Marx Brothers! Jesper's dim-witted and evasive answers certainly make him like Chico, and this AC troll is definately Harpo Marx. Oh dear, that makes Alli the Groucho - that moustache!"

                                          So.. you have now failed so many times in your arguments that the only thing you have left is personal attacks.

                                          Pathetic ...

                                          // Jesper

                                          1. Matt Bryant Silver badge
                                            Happy

                                            Re: Re: Well.....

                                            "..... you still try to spin the Oracle lawsuit into something positive for HP's Itanium products....." What, you don't think it's a positive that Integrity is the only platform in the market with guaranteed availability of Oracle software for the rest of the range's life? Gee, I'm pretty sure the IBM trolls were saying in these forums, right at the start of the hp-Oracle affair, that systems running Oracle software was the majority of the market, and that NOT having Oracle software availability meant Itanium was dead, so surely being the ONLY platform with assured availability makes Itanium a much better choice, n'est ce pas? I'm sure IBM themselves thought so, after all they did build a whole attack campaign around it, so are you saying IBM were wrong and were misleading their customers (even more than normal)?

                                            "......the only thing you have left is personal attacks.... Pathetic ..." Sorry, what was that about personal attacks, Mr Pot? I thought it was more humorous than vindictive, but proving you wrong seems to have dented your sense of humour. BTW, did you know that Jesper derives from the Greek word "iaspis" which is the feminine noun for "small sparkling stone"? How cute! So maybe I shouldn't call you Groucho Marx but Mrs Potter instead (young uns better get thee over to IMDB and look up Margaret Dumont). You learn something new and cultural every day here on The Reg forums!

      3. Anonymous Coward
        Anonymous Coward

        Re: Well.....

        Wow, how can you even compare Poulson to P7+?

Power 7+ will have 3-4x the L3 cache, run at twice the clock speed, and likely have more threads and threading options, as Power 7 runs native 4-threaded SMT while Poulson's main enhancement is that it will run "up to 4 threads"... whatever that means. This is better than Tukwila, but not comparable with Power 7+ on any level. It will probably have very similar performance to the Xeon E7 that it is based on.

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: Well.....

          "Power 7+ will have 3-4x the L3 cache...." Did you even read the spec of Poulson or Power7+ before you wrote that? If you did then you have the mathematical ability of a goldfish. Poulson has 4MB of faster SRAM L3 cache per core, whereas Power7+ has 5MB per dore of the slower DRAM L3 cache. Last time I checked, five was not "3-4x" four..... I won't even bother trying to explain the advantage the Poulson has with faster SRAM cache as that would require some technical knowledge to understand, and you obviosuly don't qualify.

          ".....run at twice the clock speed...." Top bin Poulson 2.53GHz for all eight cores. Top bin Power7+ 3.7GHz for all eight, with possible bursts of up to 4.1GHz for individual cores when using Intelligent Energy fudge mode. Even when they switch off half the cores they max out at 4.4GHz. Now, concentrate, it's the maths bit - what is two times 2.53GHz? Want some help? It's 5.06GHz, not 3.7GHz or 4.1GHz or even 4.4GHz. I know IBM marketing prattled on about 5GHz+ chips at teh Power7+ launch but they are nowhere near that in reality. I suggest next time you try reading more than just the marketing material.

          "....7 runs native 4 threaded SMT and Poulson's main enhancement is that it will run "up to 4 threads"... whatever that means....." It means both can run four threads per core. Your inabaility to comprehend even that basic statement is - frankly - worrying. I can't believe you actually work in computing so please tell me you are not in a position to operate machinery as you seem a danger to yourself and others.

          "....Probably have very similar performance to the Xeon E7 that it is based on." You really don't have a clue, do you? Please go do a lot more reading on the development history of the three CPUs mentioned before attempting a post, and make sure you do some remedial maths.

          1. Jesper Frimann
            Gimp

            Re: Well.....

            "Poulson has 4MB of faster SRAM L3 cache per core, whereas Power7+ has 5MB per dore of the slower DRAM L3 cache. Poulson has 4MB of faster SRAM L3 cache per core, whereas Power7+ has 5MB per dore of the slower DRAM L3 cache. Last time I checked, five was not "3-4x" four..... I won't even bother trying to explain the advantage the Poulson has with faster SRAM cache as that would require some technical knowledge to understand, and you obviosuly don't qualify."

Apropos math: you do know that it's 10MB per POWER7+ core, right? 80MB of L3 cache divided by 8 cores gives you 10MB of L3 cache per core.

            "Top bin Power7+ 3.7GHz for all eight, with possible bursts of up to 4.1GHz for individual cores when using Intelligent Energy fudge mode."

BZZZZZZZZ Wrong. It's 4.1 GHz without any boost. It won't become more true just because you repeat it.

And we haven't seen the top bin yet; that usually goes in the POWER 795. So who knows... 2x the frequency (not that it matters) might end up being right.

You haven't been doing anything but posting false numbers about processor frequencies and TurboCore, and... has Kebbabert perhaps hacked Matt's account?

            // Jesper

            1. Matt Bryant Silver badge
              Facepalm

              Re: Well.....

              "....You do know that it's 10MB for per POWER7+ core right ?...." Apologies, I was looking at Power7, not Power7+. It's still not "3-4x" as claimed by the other IBM troll, though. And it is still slow DRAM cache, not faster SRAM cache. Please do try and deny that, I'd love to see how you want to make out that DRAM cache will outperfrom SRAM cache.

              "....BZZZZZZZZ Wrong. IT's 4.1 GHz without any boost....." Really? So IBM gurantee you will have all eight cores spining at 4.1GHz? Want to ask them? I already have. Check the brochures, they coyly state "up to 4.1GHz" because they cannot all spin at 4.1GHz at the same time. Once again, IBM cannot supply enough electrical power and cooling to do that, they have to keep them throttled back. It's only when they can restrict it to four cores per socket, by turning off half the cores per socket (but still requiring eight core licences per socket, i.e. doubling licensing costs), that they can suddenly spin the cores up to 4.4GHz. Now, please do explain how they can spin four cores to 4.4GHz but eight only "up to 4.1GHz) if there is no issue with power and cooling and no need to throttle the cores back?

              "....And we haven't seen top bin yet..." So you want to introduce vapourware into the discussion? I know you like to play fast and lose with the facts but that's stretching it even by your low standards.

              "....You haven't been doing anything but posting false numbers about Processors frequencies and TurboCore number ....." Go look in the IBM guide I posted a link to. You also earlier admitted to the TurboCore issue and already tried to ignore it by calling it "useless", despite it being used by IBM for their benchmarks that you also referred to. What, anything you find too hard to agree with isn't allowed? Good luck trying to sell to us customers, we kinda like asking questions rather than just accepting whatever marketing slides you throw at us.

              1. Jesper Frimann
                Holmes

                Re: Well.....

                "Cache"

                Actually it's not as simple as fast versus slow. Size also matters a lot. And the cache in Itanium is shrinking on a per-core level, whereas POWER's is increasing.

                "Frequency"

                Again you are just quoting HP marketing material. Kind of pathetic really. If you had bothered to read and understand how EnergyScale works, you would have realized, or I assume you would have, that it is the frequency boost beyond 100% that is not guaranteed. Here is what the manual says:

                Support Notes

                1 Note that CPU frequencies in excess of 100% are not guaranteed.

                Again you don't get it and start with the TurboCore mode again, and wild speculations that have no hold in reality. The way the POWER7 processor acts is highly configurable, hence you can tell it to maximize performance or try to save energy, or you can simply tell it to cap the power usage or not to boost frequency beyond 100% (that is the 4.1 GHz). And you can schedule this behaviour. Hence your rather outdated expectations of how a modern processor works come up short. And funnily enough the Intel Xeons behave in much the same way.

                // jesper

                1. Matt Bryant Silver badge
                  Happy

                  Re: Well.....

                  "Actually it's not that simple, as fast versus faster..." Of course not, you're trying to defend IBM's decision to use cheap and slow DRAM so they could squeeze it onto the same die package, why would I expect you to admit it's slower than the SRAM Intel have on Poulson?

                  "....And the cache in itanium is shrinking on a Per core level...." Yes but it's still faster SRAM, meaning it will handle those requests from the cores faster, meaning lower latencies. And IBM does not have good cache hit ratios as Intel so the faster SRAM is even better in practice.

                  "....Again you are just quoting HP marketing material...." No, all frequencies quoted for IBM CPUs are from IBM brochures.

                  ".....that it is the frequecy boost beyond 100%, that is not guaranteed....." Well, using the phrase "up to 4.1GHz" doesn't sound very guaranteed! And then there's the obvious question - if 4.1GHz is supposedly flat out, how come the cores run at 4.4GHz when in TurboCore mode? Last time I checked, 4.1GHz was slower than 4.4GHz, or are you now going to tell us it's not as simple as 4.1GHz being slower than 4.4GHz, that IBM has special cycles that make 4.1GHz actually NOT slower than 4.4GHz?

                  "....and start with the turbo core mode again, and wild speculations,that have no hold in reality...." What, now you're saying I made up TurboCore? OK, can you deny the cores run at 4.4GHz in TurboCore mode but only 4.1GHz in eight-core mode (I'll be genarous and forget the "up to" as it pains you so much)? That should be quite easy for you. Then I want you to look at the two numbers and admit 4.4GHz is faster than 4.1GHz. Then I want you to explain how the cores can run faster in four-core mode than eight-core mode. If IBM could they would run all eight cores at 4.4GHz, which means they either can't supply enough power through the socket to do so, or they can't deal with the heat of eight cores running at 4.4GHz. That is called a design limitation. I know it's hard for you to accept IBM also have design limitations, they always try to sell them to us customers as "features", but they do. And this is a "feature" where you get a tiny jump in frequency but still have to pay for licences for all eight cores. That is called expensive.

                  1. Jesper Frimann
                    Holmes

                    Re: Well.....

                    "L3 cache."

                    You do understand why the Itanium implementation of an EPIC architecture needs large caches to perform well in general, right? Caches that are much larger than, for example, those of x86 and RISC chips.

                    You do understand that the fact that you try to claim hilarious things like "And IBM does not have as good cache hit ratios as Intel, so the faster SRAM is even better in practice" (in the context of Itanium) shows that you don't really understand the whole idea behind Itanium?

                    RTFM Matt, my first compile and test on Itanium was in January 2001, on an Intel 'whitebox', and I've been using the architecture on and off since then.

                    Again

                    Merced: 1 core, 800 MHz, no L3 cache (0 MB L3 per core)
                    McKinley: 1 core, 900 MHz, 1.5 MB L3 cache (1.5 MB L3 per core)
                    McKinley: 1 core, 1 GHz, 3 MB L3 cache (3 MB L3 per core)
                    Madison: 1 core, 1.5 GHz, 6 MB L3 cache (6 MB L3 per core)
                    Madison: 1 core, 1.67 GHz, 9 MB L3 cache (9 MB L3 per core)
                    Montecito/Montvale: 2 cores, 1.66 GHz, 24 MB L3 cache (12 MB L3 per core)
                    Tukwila: 4 cores, 1.73 GHz, 24 MB L3 cache (6 MB L3 per core)
                    Poulson: 8 cores, 2.53 GHz, 32 MB L3 cache (4 MB L3 per core)

                    So basically the amount of L3 cache in Poulson brings us back to the days of McKinley in terms of the L3-per-core ratio. And furthermore Poulson uses HW multithreading, which makes it even worse in comparison with, for example, Madison.

                    For comparison, POWER7+ has 80MB of L3 cache, which is actually very cleverly divided into two parts, a local and a 'not so local' region, which speeds up access to 10MB of the L3 cache per core.
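
                    (Purely to make the per-core arithmetic above easy to check, here is a minimal Python sketch; the inputs are the figures quoted in this thread, not vendor-verified datasheet values.)

                    # Minimal sketch: L3 cache per core for the parts listed above.
                    # Inputs are the figures quoted in this thread, not datasheet-verified.
                    chips = [
                        ("Merced", 1, 0.0),
                        ("McKinley 900MHz", 1, 1.5),
                        ("McKinley 1GHz", 1, 3.0),
                        ("Madison 1.5GHz", 1, 6.0),
                        ("Madison 1.67GHz", 1, 9.0),
                        ("Montecito/Montvale", 2, 24.0),
                        ("Tukwila", 4, 24.0),
                        ("Poulson", 8, 32.0),
                        ("POWER7+", 8, 80.0),
                    ]
                    for name, cores, l3_mb in chips:
                        print(f"{name:20s} {l3_mb / cores:5.1f} MB L3 per core")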

                    And lastly, rumour has it that Haswell will be using eDRAM.

                    "No, all frequencies quoted for IBM CPUs are from IBM brochures."

                    Nice attempt at ducking.

                    "Well, using the phrase "up to 4.1GHz" doesn't sound very guaranteed!"

                    You are simply not making sense. You are mixing things up, purposely misinterpreting numbers and drawing conclusions that have nothing to do with reality.

                    It's very simple. An 8-core 4.1GHz POWER7+ processor will run flat out at 4.1GHz on all cores if you provide load for all cores and provide the cooling and airflow specified in the manuals.

                    If you specify in the energy policy that the system should optimize energy usage over performance, it will do so, including for the cores: it will put cores, and processors for that matter, into states that use less energy if there isn't enough load to provide work for those cores, processors, IO slots, whatever.

                    If you specify in the energy policy that the system should prioritize performance over energy usage, it will try to boost the frequency of the cores when needed.

                    If you are using a POWER7 system booted up in TurboCore mode, the cores that are activated will run at a boosted frequency rather than the normal frequency all the time; there is no additional frequency boost available, according to the manual.

                    Actually it's much like what Intel is doing with its Xeon processors. It's actually pretty simple... or at least for everyone other than you.

                    // Jesper

                    1. Matt Bryant Silver badge
                      Happy

                      Re: Well.....

                      ".....So basically the amount of L3 cache that is in Poulson brings us back to the day of McKinley....." Neatly ignoring the fact that the Poulson cache is faster and has wider buses, and plugs into faster a memory system. See, you're back to your IBM smoke-and-mirrors routine - "look at the numbers, just at the numbers, don't think about what is behind them".

                      "....For comparision POWER7+ has 80MB L3 cache...." Yeah, you're still dodging the bit about it being slower DRAM cache. Your evasions are becoming boring.

                      "....And last rumours has it that haswell will using eDRAM....." Power7 already uses eDRAM, all the term means is "embedded DRAM", ie DRAM cut on the same silicon as the CPU cores. It's still slower than SRAM and actually needs a separate controller just to refresh the DRAM periodically (which can mess with performance).

                      ".....Nice attempt at ducking....." Hmmm, so quoting a vendor's own manual about their own kit is "ducking". I guess you just want to call anything that show you up as "ducking", right?

                      "......You are simply not making sense...." No, it's more like you don't want it to make sense. IBM's own manual says "up to 4.1GHz", not "4.1GHz". Go argue with IBM if you don't believe them.

                      "......If you specify in the energy policy for the system that it should prioritize performance over energy usage it will try to boost the frequency of the cores when it's needed....." Actually, that's WHEN it can, because it can't provide enough energy to run all the cores at 4.1GHz with max memory and peripherals. Just like with the old IBM blades it's another case of IBM's flakey PSU designs clashing with power-hungry cores. It's a simple fact that hp's Inetgrity servers with Poulsons will not have that issue - when they say 2.53GHz for all eight cores it's what you get, no fudges or compromises.

                      "..... it's much like what Intel is doing with it's Xeon processors...." Yes, it is similar except for one very key point - Intel state up front the normal speed of the Xeon cores and then explain the boost is a temporary one, whereas IBM try and mislead customers into thinking it is available 100% of the time for all cores, regardless of the system configuration. The Intel approach is honest, whereas the IBM one is... well, IBM's.

                      1. Jesper Frimann
                        Headmaster

                        Re: Well.....

                        "Neatly ignoring the fact that the Poulson cache is faster and has wider buses, and plugs into faster a memory system."

                        Yes, the L3 cache has been reworked quite a bit. But what you don't mention is that the number of cores has gone up by a factor of 2, the frequency has gone up by almost 40%, and more emphasis has been put on HW threads. That is quite a lot more demand on the whole memory subsystem.

                        Now, to serve this increase on a per-chip basis, the memory subsystem has only increased its bandwidth and ability to do memory transactions by 33%, the amount of L2 cache per processor is the same, and lastly the size of the L3 cache has also only increased by 33%.

                        So I am not ignoring the enhancements, I'm simply saying that it's not like the increase in execution potential on the chip is backed up by a proportional increase in cache sizes and bandwidth.

                        And for a chip that by design sometimes executes, for example, both paths of a branch and throws away the one that isn't used, LOTS of fast cache is a must. And again, with 16 threads sharing 32MB of L3 cache... it's a whole different story than, for example, Madison with its 9MB per thread.
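
                        (As a rough back-of-the-envelope check of that balance argument, here is a small sketch using only the figures quoted above: cores 4 to 8, clock 1.73GHz to 2.53GHz, L3 24MB to 32MB, plus the 33% memory bandwidth claim. Treat the inputs as this thread's numbers, not datasheet values.)

                        # Back-of-the-envelope: Tukwila -> Poulson per-chip scaling, figures as quoted above.
                        tukwila = {"cores": 4, "ghz": 1.73, "l3_mb": 24.0}
                        poulson = {"cores": 8, "ghz": 2.53, "l3_mb": 32.0}

                        core_factor = poulson["cores"] / tukwila["cores"]     # 2.0x
                        clock_factor = poulson["ghz"] / tukwila["ghz"]        # ~1.46x
                        raw_compute = core_factor * clock_factor              # crude cores*GHz proxy, ~2.9x
                        l3_factor = poulson["l3_mb"] / tukwila["l3_mb"]       # ~1.33x
                        bandwidth_factor = 1.33                               # the 33% claim quoted above

                        print(f"compute potential ~{raw_compute:.1f}x, L3 {l3_factor:.2f}x, "
                              f"memory bandwidth {bandwidth_factor:.2f}x")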

                        Again, try reading a technical whitepaper on how EPIC works or something, rather than just being a troll.

                        "Yeah, you're still dodging the bit about it being slower DRAM cache."

                        No, I haven't been denying that an eDRAM cache like the one on POWER7 is slower than it would be if it had been made of SRAM, but again size does matter, and 4 HW threads per core also help hide memory latency. But perhaps the best point is that POWER7 has the performance to back up its claim of having made the right choices.

                        "No, it's more like you don't want it to make sense. IBM's own manual says "up to 4.1GHz", not "4.1GHz". Go argue with IBM if you don't believe them."

                        OK, if you search the whole .ibm.com domain for that phrase, the only reference you get to "up to 4.1 GHz" is from a technote where there is a clumsy formulation that can be interpreted the way you would like. All the whitepapers, redpapers, manuals, system journals etc. that deal with this say something different:

                        Here is the technote; it's on page 6 of this paper:

                        http://www.redbooks.ibm.com/technotes/tips0880.pdf

                        And if you look at the section where it says "up to 4.1GHz", it's clear that it's most likely a cut-and-paste error.

                        Again if we look at other sources:

                        http://www.spec.org/cpu2006/results/res2010q1/cpu2006-20100208-09578.html where it states:

                        CPU Name: POWER7

                        CPU Characteristics: Intelligent Energy Optimization enabled, up to 3.41 GHz

                        CPU MHz: 3100

                        http://www.spec.org/cpu2006/results/res2011q2/cpu2006-20110411-15608.html

                        CPU Name: POWER7

                        CPU Characteristics: Intelligent Energy Optimization enabled,up to 3.86 GHz

                        CPU MHz: 3612

                        Or the whitepaper on EnergyScale, where the maximum boost frequency for every POWER7 processor in all POWER7 models is clearly listed as a percentage of the "normal" frequency.

                        You are exhibiting serious Troll behaviour.

                        // jesper

                        1. Matt Bryant Silver badge
                          Facepalm

                          Re: Re: Well.....

                          "......That is quite a lot of more demand on the whole memory ......" So you admit hp have made changes to accommodate more and faster cores, but then try to pretend the changes make no difference. You also forgot to mention the doubling of L2 cache per core, the increase in QPI bandwidth, all to keep the path to memory flowing nicely, the increase in memory space that will come with the larger memory modules. Oh, hold on a sec - after insisting that clock frequency is the ONLY factor in performance, just like all the other IBM trolls, now you want to talk about the rest of the system? What a hypocrite!

                          "....Again if we look at other sources......" So I link to IBM's own manuals and you want to look elsewhere? LOL!

                          ".....Intelligent Energy Optimization enabled, up to 3.41 GHz......" So what your own example shows is IBM use EXACTLY the "up to < x>GHz" term when Inelegant (sic) Energy Optimization is in use. Which begs the question - if IBM are actually using the IEO maximum figure, i.e., not the average frequency but the HIGHEST achieved on a core whilst the others were being throttled back to avoid breaching the poor power and cooling capabilities of the system, but misleading customers by implying that ALL cores were running at that highest speed of 4.1GHz, what is the ACTUAL average frequency being achieved? And if Power is constantly tuning up and throttling down cores, how predictable is the system performance? It's no good just knowing the performance with just one core under load for a short period (which is what Jesper's SPEC benchmark actually is), we need to know how the system will handle heavy loads over long periods. You really wouldn't want to wait until your year end run is stalling to have to tell your CIO "we'll, IBM did say 'up to'"!

                          "....You are exhibiting serious Troll behaviour...." So when your fellow IBM trolls started the thread by immediately FUDing Poulson before the systems are even available to test, and when you joined in with unrepresentative benchmarks and flat denial of the licensing issues with TurboCore, somehow that wasn't trolling? Hello, call for Mr Kettle from Mr Pot.....

                          1. Jesper Frimann

                            Re: Well.....

                            "So you admit hp have made changes to accommodate more and faster cores, but then try to pretend the changes make no difference"

                            No, I did not. I simply stated that with 2x the number of cores, a 40% higher clock speed and better HW threading, a 33% increase in memory bandwidth per chip, together with less L3 cache per core, did not really suggest that the upgrade was a balanced one.

                            "You also forgot to mention the doubling of L2 cache per core"

                            Eh what ?

                            All the sources that I've been able to dig up say that the L2 cache stays the same size.

                            http://en.wikipedia.org/wiki/Itanium#Poulson

                            http://www.realworldtech.com/poulson/7/

                            But again, I don't work for HP or Intel; perhaps you have some more info you want to share?

                            "Oh, hold on a sec - after insisting that clock frequency is the ONLY factor in performance, just like all the other IBM trolls, now you want to talk about the rest of the system? What a hypocrite!"

                            I have never claimed that frequency is the ONLY factor. Now that is quite an infantile attempt at a strawman, if I ever saw one. I won't even say nice try. Others might have said so, but not me.

                            "So I link to IBM's own manuals and you want to look elsewhere? LOL!"

                            No, what you link to is a technote with a clear cut-and-paste error. Again, for everyone else: try doing a Google search like this one, "up to 4.1GHz" site:ibm.com - it returns 4 hits. And make up your own mind.

                            With regard to me linking to SPEC: if there were any place on earth where IBM would lie, if your whole "IBM is a benchmarketing-only company" fetish were true, it would be there of all places. Hence the link.

                            But you want different links? Sure... let's roll. Let's just take one machine for starters: the POWER 770-MMB.

                            The original announcement letter for the POWER 770-MMB:

                            http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS110-025

                            "POWER7 processor card per CEC enclosure: 16-core at 3.1 GHz or 12-core at 3.5 GHz."

                            Please tell me where it claims that the processor runs at the 3.41GHz boosted frequency ?

                            Here is the redpaper on the POWER 770:

                            http://www.redbooks.ibm.com/redpapers/pdfs/redp4639.pdf

                            Please show me where it says that the processors run at the boosted frequency of 3.41 GHz?

                            It doesn't. It does, on the other hand, say:

                            "IBM Power 770 server

                            For the Power 770, each POWER7 processor SCM is available at frequencies of 3.1 GHz

                            with eight cores and 3.5 GHz with six cores."

                            Your whole rant about trotting out processor speeds is hilarious. You are seriously FUDding now... no wonder people get tired of listening to you. You do know the phrase "don't throw stones when you live in a glass house"?

                            http://www.spec.org/cpu2006/results/res2010q2/cpu2006-20100426-10756.html

                            CPU Name: Intel Itanium 9350

                            CPU Characteristics: Intel Turbo Boost Technology up to 1.86 GHz

                            CPU MHz: 1730

                            Try taking your whole argument about IBM and POWER and then substituting HP for IBM and Itanium for POWER. You are getting really, really hilarious.

                            Why don't you just admit that you messed up the whole TurboCore thing and misunderstood it?

                            "So when your fellow IBM trolls started the thread by immediately FUDing Poulson before the systems are even available to test"

                            No.. here is the first sentence I wrote before I started on this discussion with you:

                            "And first before I start, I have the outmost respect for the BL8x0c i2/4 blades these are great products, but it is what it is, a blade server."

                            I don't FUD; you, on the other hand, do. I simply state the facts, and when I am proven wrong I have absolutely no problem admitting it (mostly, anyway).

                            And as I also wrote on a Danish board (translated): "I am glad that we can still keep some alternatives to x86 alive. I think it's a solid upgrade, what happened from Tukwila to Poulson. And it is gratifying that there is still life in the good old UNIX systems. Now that HP/Intel have come out with a new Itanium version (Poulson 95xx) and IBM with a new POWER (POWER7+), Oracle will probably come with the M4 (a modified S3) sometime early next year."

                            // jesper

                            1. Matt Bryant Silver badge
                              FAIL

                              Re: Re: Well.....

                              Jesper, most of your frothing response can just be laughed at, but the bit where you try and claim that Intel do exactly the same with Intel Turbo Boost Technology as IBM do is seriously and demonstrably false. If you look at the way Intel market, classify and specify the 9350 you will see that they say it does 1.73GHz, i.e. they claim the MINIMUM clock performance the customer can expect to see 100% of the time, and do NOT claim the maximum the core might spin up to (1.86GHz+), as the customer could only see that under certain circumstances and probably not across all cores at once. The IBM approach is the opposite - claim the highest peak number - 4.1GHz - and hide the real performance, then cover themselves with the little get-out clause "up to...". It is pretty clear which one is the honest approach and which one is, frankly, deceiving.

                              "....Why don't you just admit that you messed up the whole TurboCore thing and misuderstood it...." Let's see - turn of four cores of an eight-core CPU so you can boost the clock per core to 4.4GHz, but still have to pay the licensing for all eight cores. So if you have a requirement for 256 threads, and each Poer7+ core can run four threads, that means you need 64 active core but have to pay 128 core licenses.... Nope, that seems pretty simple to understand to me! And then it is also very easy to see that if the cores can do 4.4GHz in TurboCore then IBM must be throttling them back in ordinary mode. It seems that even Inelegant Energy Optimization can't get any of those eight cores to that 4.4GHz peak, only 4.1GHz, which implies there is a serious power issue with the p-series socket design. Or it could be cooling, another common failure in IBM designs. No, I don't see any particular problems with underatanding that!

                              "....I don't FUD...." Let's see - you claim performance figures for Poulson but haven't even touched one, and then you claim there will be a memory overhead for Poulson with the latest release of IVM but use an old version as a reference. Your immediate and constant response to any article about Itanium is to repeat the same FUD and then make unsubstantiated claims about Power, AIX, or PowerVM. During the Oracle trial you were front and center of the IBM troll parade claiming that Itanium was dead, etc, etc. That's FUD, and for you to deny it is simply hilarious!

                              1. Jesper Frimann
                                Thumb Down

                                Re: Well.....

                                "but the bit where you try and claim that Intel do exactly the same with Intel Turbo Boost Technology as IBM do is seriosuly and demonstrateably false. If you look at the way Intel market, classify and specify the 9350 you would see that they say it does 1.73GHz, i.e., they claim the MINIMUM clock performance the customer can expect to see 100% of the time, and NOT claim the maximum the core might spin up to (1.85GHz+) as the customer could only see that under certain circumstances and probably not accross all cores at once. The IBM approach is the opposite - claim the highest peak number - 4.1GHz - and hide the real performance, then cover themsleves with the little get out clause "up to...". It is pretty clear which one is the honest approach and which one is, frankly, deceiving."

                                I have never claimed that Intel posts the 'up to frequency' as their minimum frequency.

                                It is you who claim that IBM does this, and you haven't posted a single proof. On the other hand, you ignore the fact that all the manuals, announcement letters, redpapers etc. use exactly the same convention as Intel does: stating the frequency without the frequency boost that will be achievable under some conditions.

                                You just can't admit when you are wrong, can you?

                                "Let's see - turn of four cores of an eight-core CPU so you can boost the clock per core to 4.4GHz, but still have to pay the licensing for all eight cores"

                                Again, these are just your fantasies. There is no such product. It's you making stuff up.

                                Please provide a link that supports your claim.

                                "So if you have a requirement for 256 threads, and each Poer7+ core can run four threads, that means you need 64 active core but have to pay 128 core licenses...."

                                The same fantasy again. There are no TurboCore POWER7+ products out there. You are making stuff up.

                                Please provide a link that supports your claim.

                                "Let's see - you claim performance figures for Poulson but haven't even touched one, and then you claim there will be a memory overhead for Poulson with the latest release of IVM but use an old version as a reference."

                                No that is simply not true. I stated that the 6.1 manual refers to the same sizing whitepaper for overhead as the 4.3 version did.

                                Please provide a link that supports your claim.

                                And the performance claims I have made about Poulson are based upon numbers from Intel, not some marketing bull.

                                Sorry Matt.. ...

                                // Jesper

                                1. Matt Bryant Silver badge
                                  Happy

                                  Re: Re: Well.....

                                  "......Again you are just fantasies. There is no such product. It's you making stuff up, Please provide a link that supports your claim...."

                                  http://www-03.ibm.com/systems/resources/systems_i_pwrsysperf_turbocore.pdf

                                  I like the bit on page 3: "....TurboCore is a special processing mode of these systems wherein only four cores per chip are activated. With only four active cores, EASE OF COOLING <my emphasis> allows the active cores to provide a frequency faster (~7.25%) than the nominal rate......" So there you have it. TurboCore means four cores get switched off so the dire cooling capabilities of p-series can be used to give about a 7% clock gain. But it gets better on page 4: "....But it is important to note that in doing so, the processor chip’s core count has decreased from eight cores per chip to four. An 8-core partition formally residing on one processor chip now must reside on two. A system needing sixteen cores and packaged in a single drawer as in the earlier figure requires two drawers when using TurboCore...." So, use TurboCore mode and you have to DOUBLE the hardware - expensive! And seeing as Oracle still charges for all eight cores, DOUBLE the Oracle license costs - really expensive! Disagree? Then best go moan at IBM.
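
                                  (To put that trade-off in rough numbers: a crude GHz-times-cores sketch using the ~7.25% figure from the PDF quoted above and the thread's contested 4.1GHz nominal clock. Both inputs are quoted claims, not verified specs, and GHz-times-cores is only a throughput proxy.)

                                  # Crude capacity proxy: eight cores at the claimed nominal clock vs
                                  # TurboCore's four boosted cores per chip.
                                  nominal_ghz = 4.1                    # the disputed "all eight cores" clock
                                  turbo_ghz = nominal_ghz * 1.0725     # ~7.25% faster per core, per the PDF quote

                                  normal_proxy = 8 * nominal_ghz       # 32.8 GHz-cores per chip
                                  turbo_proxy = 4 * turbo_ghz          # ~17.6 GHz-cores per chip

                                  print(f"normal mode {normal_proxy:.1f} GHz-cores, "
                                        f"TurboCore {turbo_proxy:.1f} GHz-cores per chip")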

                                  Or you can moan at The Reg, seeing as they mention TurboCore in this article (along with the snippet that DRAM is SLOWER than SRAM): http://www.theregister.co.uk/2012/10/03/ibm_power7_plus_server_launch/

                                  Or you could just give up FUDing and moaning. Yeah, I know, unlikely.

                                  1. Jesper Frimann
                                    Angel

                                    Re: Well.....

                                    Ehhhh ? Yes, TurboCore is useless, IMHO. But you didn't answer my question.

                                    Your original claims were:

                                    "Let's see - turn of four cores of an eight-core CPU so you can boost the clock per core to 4.4GHz, but still have to pay the licensing for all eight cores"

                                    Where is the processor that will run 4.4GHz in TurboCore mode ?

                                    "So if you have a requirement for 256 threads, and each Poer7+ core can run four threads, that means you need 64 active core but have to pay 128 core licenses...."

                                    Where is your documentation about TurboCore on POWER7+ ?

                                    "Actually, that's WHEN it can, because it can't provide enough energy to run all the cores at 4.1GHz with max memory and peripherals."

                                    Where is the documentation on this statement ?

                                    "No, it's more like you don't want it to make sense. IBM's own manual says "up to 4.1GHz", not "4.1GHz". Go argue with IBM if you don't believe them."

                                    Where is the documentation for this ?

                                    // Jesper

                                    1. Matt Bryant Silver badge
                                      FAIL

                                      Re: Re: Well.....

                                      "....Yes, TurboCore is useless...." So we're back to you saying Turbocore is useless again? First you deny it exists, then you say it's useless, then you deny it exists again, then it's useless again! Make up your mind. And now you're back to denying it exists yet again!!!!! I already linked to the IBM docs earlier, go do some legwork and read them yourself. Or are you going to deny anyone posted earlier, there was never any posts, etc, etc?

                                      1. Jesper Frimann
                                        Headmaster

                                        Re: Well.....

                                        *SIGH*

                                        Yes, you linked to documentation about TurboCore, and it's something that only existed on the 780-MHB with 3.86GHz processors and the 795 using 4GHz processors.

                                        So your reference to TurboCore on POWER7+ is wrong; it simply does not exist.

                                        Your reference to TurboCore on 4.4GHz processors is wrong.

                                        Your original claims were:

                                        "Let's see - turn of four cores of an eight-core CPU so you can boost the clock per core to 4.4GHz, but still have to pay the licensing for all eight cores"

                                        I ask again where is the processor that will run 4.4GHz in TurboCore mode ?

                                        "So if you have a requirement for 256 threads, and each Poer7+ core can run four threads, that means you need 64 active core but have to pay 128 core licenses...."

                                        I ask again where is your documentation about TurboCore on POWER7+ ?

                                        It's like saying Ford sell a convertible Ford Focus, hence all Ford Focuses have the attributes associated with a convertible.

                                        // Jesper.

                                        1. Matt Bryant Silver badge
                                          Facepalm

                                          Re: Re: Well.....

                                          And when he's not hiding out in the Benchmark Zone, old Jesper is playing hide-the-argument with cherry-picking responses. Not only did I link to the IBM documentation, I also linked to the Reg article on Power7+. I see you have failed to disprove the IBM angle and are avoiding the Reg article, so I guess it's pretty safe to say you're still in your home from home, the Land Of Fail.

              2. Anonymous Coward
                Anonymous Coward

                Re: Well.....

                "It's still not "3-4x" as claimed by the other IBM troll, though"

                Power 7+ has 80 MB across 8 cores. Poulson will have 20-33 MB across 8 cores... or 25% to 41% that of Power 7+. I suppose it is technically 2.42-4x the amount of L3 cache, but, the point being, a ton more than Poulson.
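
                (Checking that "2.42-4x" figure is a one-liner; the 20-33MB range is this poster's figure, while the parts list earlier in the thread puts Poulson at 32MB.)

                # Ratio of POWER7+'s quoted 80MB L3 to the quoted 20-33MB range for Poulson.
                power7plus_l3_mb = 80.0
                poulson_l3_lo_mb, poulson_l3_hi_mb = 20.0, 33.0
                print(power7plus_l3_mb / poulson_l3_hi_mb,   # ~2.42x
                      power7plus_l3_mb / poulson_l3_lo_mb)   # 4.0x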

                "So IBM gurantee you will have all eight cores spining at 4.1GHz?"

                The IBM data sheet has four procs in the 770 CECs at 4.42 GHz. Again though, we are talking miles apart. Call it 4.4, 4.1, 3.8... much faster than anything Poulson can provide is the point. You are splitting hairs to determine whether P7+ is much faster or much, much faster than Poulson. If you determine it is merely much faster, is that some sort of victory for Poulson? The highest clock speed even *claimed* by HP is 2.53 GHz... probably subject to the same thermal caveats you claim to be true for Power. Power 7+ fully clocked down will still be considerably faster than Poulson top bin.

                1. Matt Bryant Silver badge
                  FAIL

                  Re: Well.....

                  "....a ton more than Poulson...." LOL! So you can have four times SLOWER cache (becasue DRAM is slower than SRAM) but only if you turn off half the cores on Power. Hmmmm, let me see, do I really want to double my hardware costs (to get the same number of cores in TurboCore means twice as many CPUs which means twice as many P780 system blocks) and double my licensing costs (you still have to pay core licenses for the cores you have switched off), just to get enough cache to get round the IBM bottlenecks? Sounds a very expensive option to me!

                  "....much faster than anything Poulson can provide is the point...." And this is the crux of the IBM sales schpiel - "it has a faster clock, that's better, because faster is always better!" So Power7+ much be better than Poulson, right? And of course, Power7+ muct be better than Power6, right, otherwise why upgrade? And Power6 must have been much, much better than Power5, right, as Power5 was only 2.2GHz max? If IBM's and their trolls are to be believed, higher clock frequency means better performance, so you think their own benchmark results would show this (and we know Jesper just loves IBM benchmarks). But they don't. If you go look at the TPC-C results for the IBM System P5 570 (Power5 2.2GHz dual-core) it produces a result of 64,073.125 per core, but the 595 (Power6 5GHz dual-core) scored 95080.71875 per core, only a 48% increase. Ignoring that IBM made gaming TPC results an art, this is not as impressive a jump as the IBM trolls would like us to believe seeing as this faster result was with twice as many CPUs, eight times as much memory in the system and the the core frequency actually went up 127%! The test rig went from $4.5m to $17.1m (after discounts) - an increase of 280%!

                  http://c970058.r58.cf2.rackcdn.com/individual_results/IBM/IBM_570_16_20060213_ES.pdf

                  http://c970058.r58.cf2.rackcdn.com/individual_results/IBM/IBM_595_20080610_ES.pdf
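
                  (For anyone who wants to sanity-check the per-core arithmetic in that comparison, here is a short sketch using the figures as quoted above; these are the poster's numbers from the linked disclosure reports, not re-audited results.)

                  # Per-core TPC-C and price arithmetic for the POWER5 570 vs POWER6 595 comparison above.
                  p570_per_core = 64073.125      # quoted POWER5 2.2GHz result per core
                  p595_per_core = 95080.71875    # quoted POWER6 5GHz result per core

                  perf_gain = p595_per_core / p570_per_core - 1    # ~0.48 -> 48% more per core
                  clock_gain = 5.0 / 2.2 - 1                       # ~1.27 -> 127% more clock
                  price_gain = 17.1 / 4.5 - 1                      # ~2.8  -> 280% more expensive

                  print(f"per-core gain {perf_gain:.0%}, clock gain {clock_gain:.0%}, "
                        f"price gain {price_gain:.0%}")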

                  So more than double the clock frequency did not produce double the per-core performance, despite what the IBM trolls like to pretend. But it gets better! Power6 is actually clocked faster than Power7+, or didn't you know that? Power6 runs at 5GHz in Enterprise 595 servers. The best Power7+ is 4.4GHz. Oops, did that just blow a great big hole in the "faster clock is best" bullcr*p? Want to go back to the Pentium4 at 3GHz versus an i7 at 2.8GHz and try claiming the Pentium4 will give better per-core performance just because it has a faster clock? Only idiots swallow the "faster clock means faster system" sales spiel, and you have just been exposed as an idiot.

                  What IBM desperately try to hide is that clock frequency makes little difference, otherwise there would have been a far bigger jump between Power5 and Power6, and Power7+ would actually be SLOWER per core than Power6. The truth is it's what you do with each clock cycle that counts, and not only does Poulson do more with each clock cycle, the hp Integrity servers do a better job of making sure the required data is in cache each time it is needed.

  5. Matt Bryant Silver badge
    Happy

    Oops! Another IBM bottleneck!

    Just been looking at the SPECint_rate results for the p780; it looks like the eight-core result for SLES is better than that for AIX! Looks like AIX has bottlenecks on top of those in the p780 hardware.

  6. Allison Park

    get a life

    Linux has always had a slight edge over AIX on Power because it is a lightweight O/S with minimal reliability compared to AIX.

    I must say you have defended HP well: somewhere along the line the lateness, the slowness, the dropped socket compatibility promise, the lack of hardware virtualization in Itanium, cratering revenue and market share, Oracle's disinterest, Oracle doubling the cost per core with no performance justification, Microsoft/Red Hat/SUSE dumping it, the nasty sky-is-falling HP internal documents outed by Oracle, the deception of customers about Itanium's future... etc... etc... all turned into a TurboCore-or-not question on the 780. Go buy your 2.53GHz Itanic chips while the rest of the world decides if they would like to run Power7+ at 3.8GHz or 4.2GHz.

    toodalooo

    1. Matt Bryant Silver badge
      Happy

      Re: get a life

      "Linux has always had a slight edge on aix on power because it is a light weight O/S with minimal reliability compared to AIX....." SLES is a lightweight OS!?! I'd stay away from Penguinistas for a while if I were you. It's also a rather unconvincing answer as hp-ux has no problems beating SLES on Itanium. Maybe you'd like to explain how SLES is "lightweight" compared to AIX?

      By the way, going back to your original post about Itanium being the "last major chip to get to eight cores", maybe you'd like to change that, as it looks like Power can't do eight cores and performance at the same time, whereas Poulson can!

      Have fun on the loo and don't strain yourself trying to dump out more FUD!

      1. Anonymous Coward
        Anonymous Coward

        Re: get a life

        Choices, choices.

        1: Stay on AIX or migrate to SLES on Power and gain a whopping 0.5% performance increase.

        2: Stay on Power or migrate to Itanium and lose not more than 70% of performance.

        Hard times for Power/AIX users.

        1. Anonymous Coward
          Anonymous Coward

          Re: get a life

          Or get pounded in the @$$ when they announce new $#!t

    2. Anonymous Coward
      Anonymous Coward

      Re: get a life

      "Go buy your 2.53GHZ itanic chips while the rest of the world decides if they would like to run Power7+ at 3.8GHz or 4.2GHz."

      Power has always been way faster on clock speed than x86/Itanium. The big advantage of 7+ is that it absolutely blows everything else out of the water in cache. Poulson is going to have 20-33 MB of L3 cache across 8 cores, Xeon high end is 30 MB across 10 cores.... Power 7+ is 80 MB across 8 cores, 10 MB per core.

      1. Matt Bryant Silver badge
        FAIL

        Re: get a life

        ".....Power has always been way faster on clock speed that x86/Itanium....." Oooh, faster clock! I suppose it must be so much more important to have high clock speeds than a balanced system design without bottlenecks, right? So how do you explain that the clock jump of Power5 (2.3GHz max) to Power6 (5GHz max) did not provide a 100%+ increase in performance? It's double the clock frequency per core, but your reasoning it MUST lead to double the performance, right? In reality, us customers saw on average 10% increases for a fork-lift upgrade. Having very fast cores sitting doing nothing because the system has so many bottlenecks it can't keep those cores supplied with data is just a big waste of electrical power and cooling.

        "....he big advantage of 7+ is that it absolutely blows everything else out of the water in cache....." When they switch to TurboCore mode you mean? And the only way they can squeeze cache onto the Power7+ die is to use the much cheaper and slower DRAM cache, whereas the Poulson uses much faster SRAM cache. I suppose having lots of even slow DRAM cache will help Power7+ to some extent, but the Poulson has the better cache design and in a much better balanced system.

        You could compare it to an old Pentium 4 system with 3GHz cores and 4GB of DDR266 RAM against a modern Core i7 at 2.8GHz with 2GB of DDR3 RAM: do you really think the P4 is better just because it has a faster clock and more RAM? Try understanding the technologies rather than just looking at big numbers.

This topic is closed for new posts.
