Re: POWER9 performance?
The style of rant is very, very unique.
Actually, the cheapest way of generating electricity in DK is windmills. The problem is that the electricity is only there when the wind blows, so you have to pay for traditional power plants as well, because the energy grid is not up to the task of routing the electricity to where it is needed.
So when the wind really picks up in the western part of Denmark... then the windmills put on the brakes, because they can't get rid of the power.
But 75% of your electricity bill is tax anyway.
I don't get it. "Trying to port"? It's there already. AFAIR their Hadoop 'appliance' is Linux on POWER based.
The big mistake that IBM made was not merging the z and i/p hardware.
Although the z13 is a nice processor, it's not POWER8, which IMHO is quite a bit bigger and badder than the z13.
Only data on 1,500 persons? Here in Denmark we do things the hard way.
That is 900,000 social security numbers, out of a population of less than 6 million people.
And did anyone get held responsible? Naahh... nobody got fired...
Well, the problem for the author is that there really isn't a one-to-one mapping between containers and the 'as a service' abbreviations.
There is a rather big gap between the lower parts of the technology stack of PaaS and the higher parts of the solution stack of SaaS.
That is why, internally where I work (I am an enterprise architect at an outsourcing provider), I normally use the terms CaaS (Container as a Service), DBaaS (Database as a Service) and MaaS (Middleware as a Service), and put those on top of PaaS and below SaaS. That kind of makes things a bit easier.
Yup, I can only agree.
When I used to work for IBM, I did, together with another infrastructure architect, the 'local' standard design for multi-tenant Flex infrastructure. By using CN adapters in the nodes - 2 x 4-port CN adapters in the x86 nodes and 2 x 8-port CN (6 ports available) in the POWER nodes - we were actually able to cut the 'capacity as a service' price quite a lot, compared to the stupid design that, well... IBM Server Group wanted us to use.
This was an environment that relied on a huge shared cheap SAN infrastructure for storage, so FC was a must.
So using 2 CN adapters gave us full redundancy on an x240 node - you didn't have to step up to the more expensive x440 node to get this. It also gave us plenty of IO (8 x 10 Gbit), hence we upped the RAM in the nodes, giving us higher virtual machine density. It gave us simpler cabling; for low-performance nodes we could actually do away with 2 switches in the chassis. And for the optional TOR switches, you didn't need 2 for network and 2 for FC - you just needed 2 converged switches where you could then break out your FC.
This was actually quite a big saving while still meeting the design goals of the solution, which included some rather serious security considerations.
And from what I know of HP blades, you could do pretty much a design along the same line using HP products.
So dead ? I hope not.
I think you haven't really got it.
I don't blame you, I didn't for many years either. Back in the days when I was a *NIX consultant I killed off many a big mini or mainframe system. But one thing I learned was that whenever you killed off a mainframe or mini, the scale-out system you replaced it with always exploded in size, and most often was 2-4 times bigger than originally sized.
So as a *NIX guy who today is mostly involved when it's refrigerator-sized machines, I have to say to all the mainframers and mini people out there:
Sorry, you were right: scale up is, for a huge chunk of the workloads out there, the only way to go.
And when it comes to scale up, then IMHO POWER is most likely the best platform out there right now, but I wouldn't buy the boxes that IBM is peddling here. I'd go for the larger, more expensive boxes. I know that in TCA, when just looking at the hardware, these machines are hugely expensive. But IMHO they are worth it, if you have enough need for capacity.
And I must admit I think it is a bit of a joke, when people talk about commodity boxes having RAS that is in the general same range as scale up servers.
And you mention Oracle RAC as a good way of securing RAS. It sure is, but when you buy an extremely expensive piece of software like RAC, you don't want to deploy it on commodity hardware - you want something that is fast and reliable.
One small problem, Keb': people aren't really buying these systems. Check out the latest Q1 market share numbers:
Whereas both HP and IBM are losing market share, Oracle is in a catastrophic downward spiral, and is soon to be overtaken by Cisco.
Well, actually I'd rather call the POWER7+ accelerators coprocessors after reading a bit more about them - they aren't actually inside the core, but on chip.
With regards to POWER7 versus POWER7+, it depends on the workload.
If we look at the SPECjbb2005 results, same machine: POWER7+ at 3.5 GHz versus POWER7 at 3.55 GHz.
Now that is a 46% increase with less clock frequency.
We can also look at the POWER 740: a 3.55 GHz POWER7 from 2010 compared with a POWER 740 with a 4.2 GHz POWER7+ from recently.
Now that is a 53% increase with an 18% increase in frequency.
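As a sanity check, the per-clock improvement implied by those last two figures works out to roughly 30% (the 53% throughput gain and the two clock speeds are from the text above; the division is the only thing added here):

```python
# Per-clock improvement implied by the figures above:
# +53% throughput from only +18% frequency (4.2 GHz vs 3.55 GHz).
throughput_gain = 1.53
frequency_gain = 4.2 / 3.55   # ~1.18

per_clock_gain = throughput_gain / frequency_gain
print(f"per-clock gain: {per_clock_gain:.2f}x")  # ~1.29x
```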
Now according to those benchmarks, POWER7+ is actually a good upgrade. So there is no doubt that there is quite a lot of potential in POWER7+ compared to POWER7. But if you look at something like SAP 2-tier benchmarks, the improvements aren't that great - again most likely because the software stack cannot take advantage of POWER7+ specifics yet. Nothing new in that.
I actually think you have a very valid point. There are very clear and relevant differences between T5/M5 and SPARC64 X servers, when it comes to how your workload will perform.
For example the SPARC64 X has a decimal floating point execution unit, the T5 doesn't. Again that will mean that various code will execute very very differently.
The POWER 780-MHC is a POWER7 based machine; it is not sold anymore. Furthermore, it's a product that is classified as a highend product, with RAS features that put it in a totally different league than the T5-8.
The current POWER 780 machine that is POWER7+ based is called a 780-MHD.
So Larry is taking his best midrange server and constructing a comparison that favours him the most. And whereas this is a commonly used practice, Larry is way beyond pushing the envelope.
IMHO he is just making himself look like a dork to everyone but the most diehard Sunshiners.
You are peddling. Hello - Larry is comparing a last generation (although only a + generation in POWER terms) of his competitor's product to his own brand new sparkling product. And furthermore, a competitor product that is not sold any more.
It would be just as unfair as IBM putting out a brand new product today and comparing it to the T4.
Now if he had peddled a story about replacing, for example, ageing POWER5/6 or 3-year-old generation 1 POWER7 servers, or Itanium servers for that matter, with new T5 equipment, and how this project could pay for itself in savings... and be much cheaper... then sure, ok, no problem. The problem is that one of the major contributors to paying for such a project would be savings in SW licenses... in Larry's brain, ORACLE LICENSES.
Now a vendor like HP, Fujitsu or even IBM (unless you talked to their software division) would have no problems doing something like that, cause they deliver the whole package, and have different sources of revenue. Not Larry.. Software is pretty much it for him.
But he is doing a kind of bang for the cost of the server buck comparison, and that is bullsh*t when using a server from the competitor that is no longer sold.
If he really wanted to compare against a current POWER product why didn't he compare the four socket T5-4 against the four socket POWER 760 ?
The POWER 760 does 2170 specint_rate2006, what does the T5-4 do ? Most likely in the 1900 range, judging from the slides of the T5-8 result. So why didn't he do that comparison ?
Again, even though the T5 series are really good products, he still needs to load his launch up with bullsh*t, and that is IMHO sad, cause the T5s look like good solid products.
Now if you compare the POWER 760 against its predecessor the POWER 750, then it's 2,170 specint_rate2006 versus 1,150 specint_rate2006 for the POWER 750 using POWER7 at 3.6 GHz. So as you can see, there is a big difference between POWER7 and POWER7+ products.
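Just to make that gap explicit (both specint_rate2006 figures are the ones quoted above; the ratio is all that's added):

```python
p760_power7plus = 2170  # POWER 760 (POWER7+), specint_rate2006
p750_power7 = 1150      # POWER 750 (POWER7 @ 3.6 GHz), specint_rate2006

print(f"{p760_power7plus / p750_power7:.2f}x")  # ~1.89x generation over generation
```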
Although again, I think the T5-8 is a bit of a niche product - I'd even go so far as to call it a marketing product.
It lacks the RAS features that are needed of a product with that kind of throughput, which IMHO makes it kind of irrelevant. And that is also what I've put in the presentation I will give to my colleagues in the individual countries in my region. The T5-8 is not a product that we should put in our data centers. The SPARC servers of choice should be the T5-2, T5-4 and, if needed, the M5-32.
Larry is misleading: the server that he is comparing with, the POWER 780-MHC, is not a product that is sold any more, so it's marketing bull.
Actually, IBM is not selling an 8-socket POWER server right now, as the POWER 780 and 770 have both moved to 16 sockets.
Yes, VMware isn't cheap. We can easily agree upon that. I've been doing 'virtualization' for what... 14 years or so. My personal favourite is clearly POWERVM, although I've done quite a bit of zVM, VMware, VirtualBox and KVM. I have absolutely no problem doing huge virtual machines on POWERVM. I think the largest we have here is around 32 cores and half a terabyte of RAM. I have no problem mixing test/production, different clients, different firewall zones, different OS versions etc. etc. on the same physical machine. A machine that runs below 50% average utilization measured on a weekly basis is IMHO not configured right.
And this is not something new. This is how things have been on POWER since before SUN started shipping T2 based systems.
Well I need to write up a presentation on how T5 fits into our strategy. Although I can see that I don't have to change our strategic roadmap for the SPARC platform. But now I think I'll have a beer.
Now there is a big difference between "available on the internet", and then it being listed on the official list that is posted by Oracle. Get over it, it's no biggie. We all make mistakes.
And honestly, Exa-XXX has very little to do with SPARC; it's basically x86 only. The SPARC SuperCluster really isn't branded under the Exa product name. At least IBM appliances are not x86 only. Not that this makes me like IBM's appliances more than Oracle's.
But you do have a point: IBM is pushing appliances, just like Oracle. Appliances surface once in a while, but if you think that Exadata and related products are something new and exciting and a first in the industry, you are wrong. It has been tried and done many times, with different degrees of success. The difference is the amount of marketing muscle that Oracle is putting behind their effort compared to what has been done in the past.
As for overcommitment: are you serious? Why do you think a product like VMware is so popular?
And does the performance of multi-tasking operating systems tank when they run more than one process?
Sure, if you are happy buying 2-4 times the hardware to run your apps, then sure... use LDOMs only. My advice to you is to use it in conjunction with containers where appropriate.
Honestly, even the most dense of our Wintel entry-level tech guys, in offshoring countries whose names I have problems spelling, know the value of overcommitment in a virtualized environment.
I have no problem with people being fanatic about their platform of choice. But there are limits.
Phil, it's kind of hard to figure out your numbers when you mix POWER 780 and POWER 760 and then write 'full disclosures' and point to the TPC-C page. There isn't a TPC-C submission for the POWER 760.
Now as for the disclosure reports, reading the report for the T5-8 was, as usual when reading an Oracle submission, like watching a pickpocket in action.
Distractions that are targeted at drawing your attention while he/she empties your pockets. (Sorry for the harsh language, Oracle, but like so many others I think you are not playing the benchmark game fairly.)
Again, the Oracle licenses you don't buy, you lease, for 3 years, and the support is web-only support.
Now if you really had to pay for the licenses and have 3 years' worth of REAL Oracle support, the cost would be 64 x 47,500 USD + 3 x 64 x 10,450 USD, or 5,046,400 USD minus discounts.
That would lift the $/tpmC to 0.79 USD for the T5-8 - again more expensive than the POWER 780 on a per-transaction basis. (Not that comparing a 1.2 million submission and an 8.5 million one is that fair, but you started it.)
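For anyone wanting to check that license arithmetic (all three input figures are the ones quoted above):

```python
licenses = 64                 # processor licenses for the T5-8 submission
license_price = 47_500        # USD list price per license (figure quoted above)
support_per_license = 10_450  # USD per license per year
years = 3

total = licenses * license_price + years * licenses * support_per_license
print(f"{total:,} USD before discounts")  # 5,046,400 USD
```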
Furthermore, you seem really pleased with the price of the T5-8. The list price in the TPC-C submission is 651,458 USD and 0 USD in maintenance. 0 USD in maintenance? That must mean 3 years of warranty. BUT NO, last in the report there is an
"Oracle Premier Hardware Support" that costs a whopping 1,001,118 USD.
Now, not putting that expense where it should be, up with the server, is IMHO purposely misleading people. I guess that Oracle, as usual, will get a slap on the wrist for that one.
This is typical Larry, we'll charge you for the server, and then TURBO charge you for service.
So basically, in your little comparison with the POWER 760, you can add a million USD for the T5-8 and 0 USD for the POWER 760, as it comes with 3 years of warranty.
Now that kind of screws up your whole argument doesn't it ?
Car magazine economics is something completely different from doing TCO calculations in RL.
Ehh... again, you don't get it. Why run a costly benchmark when you are already number 1?
The rest of your post is just unsubstantiated speculation, without any merit in reality. Why don't you just focus on the good products that Oracle has released and stop the FUDing? You should be glad - Oracle just revitalized the UNIX market with some really nice products.
No, I am just saying that perhaps you should "read the manual" before you start rambling about 0.25 licenses on the T5. It would have taken you around 15 seconds to google the right answer. Again, most serious people who actually work with Oracle have these lists as bookmarks.
And with regards to virtualization versus partitioning: well, in real life workloads fluctuate. Through the day/week/month/year/hour/minute/second, a workload might have very different needs for processor resources.
On a machine like for example a POWER server using POWERVM I can simply reflect this by allocating different amounts of virtual capacity and guaranteed physical capacity.
So if I have a combined workload on a production system, which averages 3 cores of usage and, for example, peaks at 9 cores several times through the day, then I can simply allocate 9 (or 10, to be on the safe side) cores of virtual capacity and 3 cores of guaranteed physical capacity.
On your T5-X machine you would allocate something like a whole chip with 16 cores.
Now 8 of these virtual machines would fill up your T5-8, but for example a POWER 760 would still only be half full when it comes to processor resources.
The difference between physical and virtual capacity you normally call the overcommitment factor. And depending on your workloads, it normally ranges between 2-5 on POWER.
Basically making the machine 2-5 times bigger, you just need to be able to absorb the combined peaks at any time.
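A sketch of that capacity math, using the hypothetical 3-core / 10-core VM profile from the example above (the VM count and core figures are just the illustrative numbers from the text, not a real sizing):

```python
# Hypothetical consolidation plan: each VM averages ~3 cores but may peak near 10,
# so it gets 3 cores of guaranteed physical capacity and 10 of virtual capacity.
vms = 8
guaranteed = 3   # physical cores reserved per VM
virtual = 10     # virtual cores (peak entitlement) per VM

physical_needed = vms * guaranteed   # 24 cores actually reserved
burst_ceiling = vms * virtual        # 80 cores if everything peaked at once
overcommit = burst_ceiling / physical_needed
print(f"overcommitment factor: {overcommit:.1f}")  # ~3.3, inside the 2-5 range
```

The plan only works if the combined peaks at any moment stay within the physical cores actually installed, which is the point made above.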
Now, sure, there is operating system-level virtualization like zones, WPARs etc. But these do have their limits, and do not provide good enough separation IMHO to mix different types of landscapes, and certainly not, if you are a hosting provider, different clients.
"The M5 processor has me curious, what is the use-case for big-cache with fewer cores?"
Real life workloads. Again, the T5-X series of machines are what they are - entry to lower midrange servers - and again I think the T5-8 is a bridge too far. But the M5-32 looks like a proper highend server with the appropriate features etc. etc. A good replacement for the M[9|8]000 and older highend SPARCs out there.
Again, clustered results versus non-clustered results is not really a fair way to compare one server to another.
But there is no doubt that Oracle currently has the highest scoring non clustered and clustered TPC-C results. It's a fact.
The result you really need to compare against is the POWER 595 result, from 2008:
6,085,166 tpmC with 128 threads / 64 cores / 32 chips. Which means that the T5 chip delivers 5.6 times the throughput, but that POWER6 was 5.7 times faster per thread and 1.4 times faster per core.
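Those per-chip/per-core/per-thread ratios fall straight out of the two results (the POWER 595 numbers above, the roughly 8.5 million tpmC T5-8 figure mentioned earlier, and the T5-8's 8 chips / 128 cores / 1024 threads configuration):

```python
# POWER 595 (POWER6, 2008): 32 chips / 64 cores / 128 threads.
# T5-8: 8 chips / 128 cores / 1024 threads, at roughly 8.5M tpmC.
p595_tpmc, p595_chips, p595_cores, p595_threads = 6_085_166, 32, 64, 128
t5_tpmc, t5_chips, t5_cores, t5_threads = 8_500_000, 8, 128, 1024

per_chip = (t5_tpmc / t5_chips) / (p595_tpmc / p595_chips)        # T5 advantage
per_core = (p595_tpmc / p595_cores) / (t5_tpmc / t5_cores)        # POWER6 advantage
per_thread = (p595_tpmc / p595_threads) / (t5_tpmc / t5_threads)  # POWER6 advantage
print(f"T5 per chip: {per_chip:.1f}x, POWER6 per core: {per_core:.1f}x, "
      f"POWER6 per thread: {per_thread:.1f}x")
```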
What would be nice was a POWER 7[7|8]0 result to compare against. I have no doubt that a POWER 780-FHD would beat the T5-8 pretty easily. So the question is how a POWER 770-MMD would fare - it would most likely be rather close. But again, IMHO this forces IBM to upgrade their midrange line again.
Competition is always nice!
Matt has a point. One thing that all our clients that run Oracle are talking about is Oracle license cost reduction. Some of the projects we have done have actually paid for themselves - we are talking hardware, labour etc. etc. - in Oracle license cost reduction, over 2, 3, 4 or 5 years.
Which kind of puts Oracle in the same position as IBM 20-25 years ago. But again, that is also what Larry wanted: to make Oracle into a copy of the old monopoly IBM. He just didn't learn from what happened to the IBM of that period.
And in those PCIe Gen 3 slots you'll put PCIe Gen 2 adapters, which is basically what Oracle is shipping. Nice for future adapters... irrelevant for now.
Furthermore, where do you get your prices? I haven't seen any anywhere, not even in the industry analyst tools we use. Well, you most likely use internal Oracle prices.
And did you include Services in your comparison ?
Again, for a highend server the Oracle warranty is a 1-year hardware warranty with 2-business-day on-site response. You've gotta be kidding - both the SD2 and the POWER 795 have same day, and for the POWER 795 it's even 24x7.
And from my response to your post in another thread, you should now have learned that the Oracle service on their servers costs more than the actual servers themselves. Again, 3 years of support on a T5-4 costs 1.5 times the actual server cost.
Again Oracle sells you the server cheap, and then turbo charges you for the service.
Larry wasn't kidding when he claimed that he wanted to turn Oracle into the IBM of the '70s/'80s. He most likely plans to dump his stocks in good time before Oracle goes the way of the IBM of the '90s.
No, IMHO you are wrong - the M5-32 is the more sensible system. This should really allow clients with legacy M-series and older highend servers to migrate to a platform that makes sense for them. Again, the M5 is 42 percent faster at the same clock than the T5 on the SAP 2-tier app benchmark. That is a lot.
Look at the Oracle UNIX market share... it's been dropping with catastrophic speed these last many years. The M5-32 is a step in the right direction for Oracle, but is it too little, too late?
It's brand spanking new, and still not able to beat the competition's rather ageing POWER7 (2010 technology) based POWER 795.
No, they do not shatter the competition, and you do sound like a fanboi when using such wording :)
One thing that Oracle has fixed, which is good, is putting hotswap adapters back in their servers. That was nice.
But IMHO, the T5-8 might be a bridge too far when seen as a compute node.
IMHO it might be too big for a server that does not support hotswap/upgrades of components other than adapters and power. I know how much time it takes to empty large servers of virtual machines, either through moving running virtual machines, cluster failovers or simply closing down virtual machines.
Thanks for the link.
Well, ignoring the normal Oracle marketing bull, the M5-32 looks like a solid machine, although not really a match for the POWER 795 - even though that is a machine that IMHO needs a speed bump.
The T5s look solid, and it's nice to see that someone is finally taking on IBM in the UNIX market; it's been a pretty one-sided story for the last 3 years.
So hopefully we'll see some answers from the other vendors, although I have my doubts about HP.
When he is proven wrong, he can't acknowledge that he is wrong, but quickly grasps at straws.
"Can you address more than one fibre switch so you can split your backup SAN traffic (which the IBM offering needs as you can't do a SAS tape drive) so that it does not use the same ports and switch modules as your production SAN traffic? No, you can't"
Sure, 2 adapters will address all four switches in the back. Try reading a manual.
And repeating something that is clearly wrong - "IO starved like the IBM designs" - doesn't make it more right. Again, a half-length node has more IO bandwidth, more memory and more processing power than a full-height HP blade.
You aren't fun to discuss things with anymore. You do know that, don't you? That is why people mostly ignore you.
The BL860c i4 -> 3 x PCIe 2.x x8 slots + 4 onboard CNA ports (up to 10 ports).
The x240 -> 2 x PCIe 3.0 x16 slots.
The p260 -> 2 x PCIe 2.0 x16 slots (up to 16 CNA ports).
And oh... yes, the BL860c i4 is a full-length blade, 8 per c7000; the p260 is half-length, hence 14 per chassis. Oh, and then there is memory, where the BL860c i4 only supports 384GB of RAM.
You are rather clueless aren't you ? Let me guess HP marketing drone ?
You are taking a full-length, double-wide BL680c, which surely is a superb blade server that you can plug 192 Gbit worth of IO into. But being a full-height, double-wide blade, you can only have 4 in a c7000. And that you are comparing with the smallest size node in the competitor's solution.
Now a PureFlex node, if we take the toughest one like the p260, will house 1TB of RAM, can have 16 x 10 Gbit = 160 Gbit worth of IO + management, and you can have 14 of those in a chassis.
BLEH. If you want to diss other vendors' products, then please at least do the basic homework, and not just consult the marketing material and take the best from your own company and the worst from company X that you want to compare to.
If you had bothered reading the Oracle presentation from Hot Chips, you would have seen that it uses OLTP workloads when it puts forth its claim that the throughput of the T4 chip is more or less equal to the T3.
The really big feat was that Oracle managed to increase single threaded throughput by almost a factor of five, hence actually taking the T4 out of the niche where the T3 was stuck.
Now, nobody likes using specint/fp; it's not a particularly good benchmark, as you rant about. Historically, for example, POWER servers have normally been much better at OLTP and IO-heavy benchmarks, as this is what the servers have been geared at.
But it still doesn't change that the chip throughput of the T4 is roughly the same as the T3.
Now, for example, if you compare the T4 and the T3, you can actually use the Enterprise2010 benchmark, where you can see that 16 T4 chips give you approx 4x more throughput than 4 T3 chips do.
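Which is exactly the per-chip parity claim: 4x the chips for 4x the throughput means the per-chip number didn't move (both figures from the sentence above):

```python
t4_chips, t3_chips = 16, 4
throughput_ratio = 4.0   # 16 T4 chips ~ 4x the throughput of 4 T3 chips (per the text)

per_chip_ratio = throughput_ratio / (t4_chips / t3_chips)
print(per_chip_ratio)  # 1.0 -> T4 chip throughput roughly equal to T3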
As for all the T4 world records... again... we've been over this several times. The T4 is the fastest chip in the world on Oracle product benchmarks where the only competing chip is the T4. The only industry standard benchmark that has been done on the T4 is SPECjEnterprise2010, where it now gets trashed as other vendors turn to the benchmark and figure out how to tune for it. Just like I predicted.
And then there is the TPC-H benchmark - again a benchmark that Oracle cracked some years ago, and if you look at how the setup is on the Oracle benchmarks, it's very, very different from all the others. Again, great work by Oracle, but it doesn't really say that much about the superiority of the T4. Again, one measurement doesn't really make a trend, now does it?
And funny to see you echoing the newest Oracle marketing message: POWER has bad I/O. It's so Karl Rove - attack your opponent's strongest side with FUD.
Just to counter that: IBM did a real nice benchmark some years ago... an SPC-1 benchmark where they, rather than having a LOT of hardware with a lot of RAM for caching, and IO processors and so on, simply used a virtualized solution with the storage attached to the Virtual IO servers, which then virtualized the storage and shared it to the virtual machine that ran the benchmark. The machine used was a partitioned POWER6 based POWER 595. Again, an almost 5-year-old machine.
Now.... the link to the benchmark is here:
As you can see, the IO setup uses 14 ancient PCI-X adapters and 2 old PCI-X drawers that are cabled for max connectivity and not performance (4 cables, not 8), and it runs a virtualized POWER solution, with VIO servers and all. Furthermore, the setup uses virtual SCSI - old and slow compared to what you could do today with NPIV.
At the time of submission this was the world record for the SPC-1 benchmark. And again... this is a pretty standard setup that isn't even extreme in any way.
Where it gets fun is to compare it to an Oracle Sun ZFS Storage 7420c appliance benchmark, with a setup that isn't that different from what Oracle could do today.
The Sun benchmark uses 2 storage servers with a total of 1TB of RAM and 64 X7550 Xeon processors, and a shitload of IO adapters. The host system driving the benchmark uses 6 PCIe Gen 2 dual-port 8Gbit adapters. And even though the old POWER machine only uses old PCI-X based IO with 14 PCI-X SAS adapters, with the drawers cabled for max connectivity and not performance (4 cables, not 8), running a virtualized POWER solution, it trashes the Sun setup by a factor of 2, both on throughput and on response time.
Now how you can think that a POWER7+ based server, with an IO system that is 3 generations newer than what is in the old POWER6 based p595, can have IO problems compared to a T4 based machine is IMHO a riddle. Again, the T3-1 that is used in the Oracle benchmark above has the same IO system that the T4-1 has, and the T3-1/T4-1 IO system is IMHO better per chip than what the T4-4 has.
To be quite honest... you should try to read a manual. And try to understand what is going on under the covers rather than just echoing marketing material.
Phil, you are starting to sound like Kebbabbert. You really have to do some homework.
Again, as I've told you before, Oracle's own chip development people rate the throughput of the T4 as only slightly ahead of the T3, when comparing the two with regards to throughput.
Slide 10 here.
That puts even the 3.6 GHz version of the POWER 740 ahead of the T4 chip - and to cut the T4 some slack, let's call it even. Even though the published result for the T3-4 is 666 specint_rate2006 at 4 sockets, and a 4.2 GHz POWER7+ POWER 740 does 884 specint_rate2006 at 2 sockets.
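On a per-socket basis the gap is even clearer (both specint_rate2006 figures are from the sentence above; only the division is added):

```python
t3_4_per_socket = 666 / 4   # T3-4: 666 specint_rate2006 over 4 sockets
p740_per_socket = 884 / 2   # POWER 740 (POWER7+ @ 4.2 GHz): 884 over 2 sockets

print(f"{p740_per_socket / t3_4_per_socket:.2f}x per socket")  # ~2.65x
```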
And I can compare what I like, especially when it has merit. Furthermore, you forget that both the POWER and the SPARC submissions have several JEE server instances running - HENCE they are really both clustered submissions. The POWER submission uses 8 instances, the T4 uses 16. So it's not that different.
Now, the T4 submission uses a whopping 2TB of RAM compared to the 256GB of the POWER7+ submission. So the T4 submission actually uses 2.2 times the memory per BOBS, compared to the POWER7+ submission. Again, as this is a benchmark where price is not really an issue, Oracle is throwing a lot of HW after it, where it doesn't really show to the untrained eye.
Now you ask why the POWER7+ submission is so much faster than the POWER7: it's because the + submission uses 8Gbit fibre, 10Gbit Ethernet, and SSD, which the POWER7 benchmark didn't. Again, network performance is a significant factor in this benchmark, or so the FAQ says.
And surely the fact that POWER7+ now has much the same accelerators as the TX processors have had for some time, which does give a good boost on this particular benchmark, also helps. Again, this is nothing new, and as I and many others have said, Oracle has only submitted benchmarks for the T4 where it could exploit its accelerators, or where it was a software stack that Oracle owns totally themselves.
Furthermore, the POWER submission is a virtualized benchmark - this is not bare metal like the T4 submission, but a fully virtualized environment, ok, with dedicated adapters.
And talking about memory... again, Oracle uses 2.2 times the memory per transaction, and the POWER servers do support memory compression in hardware, which btw isn't used in the benchmark. So your whole memory capacity argument is kind of hollow.
And you are looking for benchmarks to compare... the only one that is really there is the SPECjEnterprise2010 benchmark, besides TPC-H. The latter is one that Oracle cracked years ago, before the T4 or T3 was introduced, so that one you really can't attribute to the T4.
And why don't you try with your 14 world records again - that one I just need to link to the last time I debunked it.
And I don't know where you get your pricing numbers from, but from the Oracle website their Large Config on a T4-4 is 297,664.00 USD - again without any maintenance, which this time is US$35,719.68.
Again the yearly maintenance for the Oracle box is more than what you pay for a POWER 740 (HW only) with 64GB of RAM and 16 x 3.6GHz cores again with 3 years of warranty.
It's hilarious. As anybody who has ever been an Oracle customer knows, Oracle support is extremely expensive compared to other vendors like HP and IBM.
Larry wants Oracle to become the new IBM of the '80s, and IMHO that is not something that is good for customers.
Why would you want to compare a POWER server core to core with a Oracle T4 server ?
POWER7+ does run circles around the T4; a POWER 740 seems like a good match. Again, if you take the SPECjEnterprise2010 benchmark, you'll see that a POWER7+ core is approx 2x a T4 core.
That gives you a price of 94,787 USD for a machine that matches the Medium configuration of the T4-4 - which will set you back 96,656 USD.
Now the real difference here is that the POWER 740 will have 3 years of warranty and 4 years of SWMA on POWERVM and AIX in that price, whereas the Oracle solution only has 1 year of HW warranty... nothing else.
And then we haven't even started talking about the soft issues, like the fact that the T4-4 in the medium configuration uses 2,265 Watts at 100% load and the POWER 740 will use 810 Watts at 100% load.
So much for cool threads btw... they kind of turned hot.
And then POWERVM will do overcommitment, whereas LDOMs is basically partitioning at a thread level, and the POWER 740 will do HW memory compression...
Phil, you are betting on the wrong horse.....
And let me save you some time... doing the math with a 750 rather than the 740 adds approx 40K USD to the price, and it'll do 1,267 Watts at 100% load.
But then you'll also have a serious mismatch, and that is in the POWER 750's favor.
Weeehh... easy... you sure have been drinking the Oracle Kool-Aid.
Sure, for example going from POWER 595 -> POWER 795 is a major overhaul, and requires that you move the workload off the machine - which doesn't require downtime if you exploit POWERVM. Again, live partition mobility is something that is BAU on POWER. But you still save a shitload of money on upgrading rather than forklifting.
And I must admit that I can hardly see it as a problem that I can choose lower-clocked processors for a cheaper price, or chips with fewer cores but much higher frequency for optimizing cost on expensive software. Like Oracle's, for example.
No, sure, let's throw money after extra software licenses that give no business value whatsoever.
Again, if I buy a POWER 770 with 48 cores running at 4.42 GHz, I'll get roughly the same throughput as a POWER 770 with 64 cores running at 3.3 GHz. Now, if I run something like an EE Oracle DB with some add-ons, that little move will basically pay for the server over 4-5 years.
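The rough parity is just cores x clock, and the saving is the 16-core licensing difference. A sketch (the 47,500 USD per-license figure is an assumption for illustration; real cost depends on options, core factors and discounts):

```python
# cores x clock as a crude throughput proxy (configs from the text above)
fast_config = 48 * 4.42   # ~212 "core-GHz"
wide_config = 64 * 3.30   # ~211 "core-GHz" -> roughly the same throughput

# Illustrative license delta: 16 fewer cores to license.
# ASSUMPTION: 47,500 USD list per processor license; core factors ignored.
cores_saved = 64 - 48
license_saving = cores_saved * 47_500
print(f"{license_saving:,} USD in list-price licenses alone")  # 760,000 USD
```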
So being able to tailor your server to the workload is... well... surely a bad thing.
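The licensing arithmetic behind that "pays for the server" claim can be sketched roughly; the Oracle EE list price, the 1.0 core factor for POWER7 and the 22% annual support rate are my own assumptions for illustration, not figures from the post, so check Oracle's current price list and core-factor table before relying on them:

```python
# Sketch of the per-core licensing maths for 48 x 4.42GHz vs 64 x 3.3GHz
# at roughly equal throughput. All price figures are assumptions.

ORACLE_EE_PER_PROC = 47_500   # USD list per processor licence -- assumed
CORE_FACTOR = 1.0             # Oracle core factor for POWER7 -- assumed
SUPPORT_RATE = 0.22           # annual support, fraction of licence -- assumed
YEARS = 5

def licence_cost(cores, years=YEARS):
    """Licence plus support cost over the server's lifetime."""
    licences = cores * CORE_FACTOR
    return licences * ORACLE_EE_PER_PROC * (1 + SUPPORT_RATE * years)

saving = licence_cost(64) - licence_cost(48)
print(f"Licence + support saving over {YEARS} years: ~${saving:,.0f}")
```

Sixteen fewer cores to license, at equal throughput, easily exceeds the hardware price difference.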
And as for what Fujitsu/Oracle/Sun have been selling: if you go back 5-6 years, which is the expected lifetime of a server, Fujitsu/Oracle/Sun have been selling servers based upon:
UltraSPARC T1, UltraSPARC T2, UltraSPARC T2+, UltraSPARC T3, UltraSPARC T4, SPARC64 V+, SPARC64 VI, SPARC64 VII, SPARC64 VII+, SPARC64 X, UltraSPARC IV, UltraSPARC IV+, UltraSPARC IIIi and UltraSPARC-III Cu.
Sure, they are binary compatible, but tuning-wise and planning-wise the servers are very different. That is why we have had a jungle of different SPARC systems, with very few models being the same: an administrative nightmare.
As for the performance of the SPARC64 X in the M10-4S: it's funny to see you get all excited over a barely released 1024-core machine being twice as fast as a three-year-old POWER system with 25% of the cores, on an embarrassingly parallel, purely CPU-stressing benchmark.
Let's see how it does on more taxing benchmarks, and let's see the new POWER servers that, according to TPM, are just around the corner.
Again, take a deep breath and try to look at reality.
Now when that is said, the M10-4S looks like a great server.
Come on Phil 4.
The whole SPARC server roadmap has been frustrating for us customers for years.
First it's UltraSPARC, wait for Rock; then it's APL M-series machines with T-series for the low end, with a roadmap for APL2 for the M-series. Then all of a sudden APL2 is out and replaced by M4, a modified T core. But Fujitsu will still sell APL2.
One of the most important things in the high end is continuity, and that hasn't really been the case with high-end SPARC UNIX servers.
At least for HP and IBM Unix servers you've had a steady upgrade path for years.
"Still not seing that magic 18% you claimed! Try again!"
Again, it's not me... it's HP themselves. Why don't you write to the guy who writes the manual?
"But they still sit on top of one great big SPOF, the hypervisor."
Again, the hypervisor is mirrored in memory. And it's not an independent program running beneath all the virtual machines; it doesn't function that way.
" If your hypervisor needs patching then all the VIOS and all the PowerVMs come down. "
Not for all patches. And again, with the VIO part in independent virtual machines running beside the normal virtual machines, you have taken the most frequently changing part out of the equation, and thus greatly reduced the amount of patching needed. Again, PowerVM is included in the server's firmware.
"Yes, you need a service window for ALL the hosted VMs. "
Again, the hypervisor is part of the system's microcode. With the parts that change the most located in the VIO servers, you actually only patch the system's microcode when you have the system down anyway. Which is very seldom; you basically only do it to fix critical errors. Yearly... perhaps. And you do have the option of simply moving the virtual machines to another machine while they run, without any downtime. Works like a charm and has been there for years.
"With hp Integrity there is the much neater option of using nPars, hardware partitions, which can each host their own vPar or IVM system. "
Neater? Why buy a big SMP machine and carve it up into hardware partitions? Why not buy individual smaller servers?
"If I want to patch or reboot or completely power off one nPar it has no effect on the others. That's called hardware partitioning, you may have heard of it? "
Sure, I've heard of it. It was something mainframes did in the '60s and '70s, and that some UNIX vendors started doing 30 years ago or so... and never really moved on from.
"Then again, probably not seeing as you can't do it on IBM p770s or p780s."
No, it has moved on. The idea of putting an abstraction layer between the physical hardware and the virtual machine executing on it actually isolates the virtual machine from hardware failures, whereas hardware partitioning just limits the damage done, and it also adds a hell of a lot of overhead in the form of wasted capacity that cannot be used. Again, if you buy into HW partitioning, you need to do n+1, and do n+1 at several levels.
"What, for not the latest version, again?"
YES, for the latest version. Again, if I am to construct a brand spanking new solution (being an infrastructure architect), I will consult the manual; the amount of memory I buy in my HP BL890c i4 will be based upon the recommendations in the sizing guidelines. Again, the most likely memory overhead if I use 64K pages will be in the 12% range. If I were doing an upgrade of an earlier version of HPVM, I almost certainly wouldn't change a parameter like the VSP page size, so as not to touch too many variables in a migration.
Doing so is pure cowboy IT. And my overhead would be in the 18% range, depending on how big my installation is. If it's a smaller, lighter one, the overhead won't reach 18%, but will be somewhere between 12 and 18 percent. And I actually sent a mail to one of our sysadmins to ask what the overhead was on one of our clients' smaller test HPVM installations: it was 14%, on HPVM 6.0.
But back to the essence: the administrative overhead of IVM is much bigger than that of PowerVM.
"<Sigh> You're just demonstrating how much you DON'T know about IVM - it wouldn't let you over-commit. Try again!"
That is not what I am trying to do. Are you telling me that the sum of the ram_dyn_max values has to be less than the total amount of RAM physically in the machine? This value is normally there to tell your hypervisor how much administrative overhead it should set up for a virtual machine when it's started.
Hence, artificially increasing it to overly high levels can allocate unrealistic amounts of memory for overhead. I've seen it done by external consultants who didn't know sh*t...
"Nope, the 10Gb mezz would be handled by the host OS, not the IVM layer which would simply manage the virtual LAN switch and connections to the hosted VMs and the host OS."
That is actually a valid argument. Again, I wouldn't use a non-hot-swappable network adapter for anything important/useful. But then, there are no hot-swappable cards on a BL890c i4 blade anyway. But if you are using the internal virtual network on the VSP, then I have a point.
".....the specintrate2006 measurement I referred to did not use turbo core...." True, that one didn't, but the majority of them do.
No, that simply isn't correct.
11 of the submitted specint/fp results, rate and non-rate, are TurboCore, out of 95 submitted POWER7 results; for specintrate2006 specifically it's 5 out of 47. So... no, Matt.
"But, seeing as the test just uses one core anyway, and IBM deliberately game it by channelling all the cache and memory to that one core"
No... it's the rate benchmark; it runs a copy on every core. Come on.
"it's just as bad as Turbocore and just as unrepresentative of a real World setup. Please try and pretend anyone would pay $17m to run one core."
Again... no, you are totally off.
"Sorry, was that aimed at me or TPM seeing as he also stated Power7+ in his articel (which you have ignored, again, again, boringly yet again). Maybe you should go have a lie down, Mrs Potter."
No. There is no POWER7+ TurboCore. RTFMSF. And I'll just wait a bit before going to bed; I need to see the Patriots beat the sh*t out of the New York Jets.
"I've ref'd the Administration Guide, go follow the link and learn."
Again, I did, and I understood what it said. You obviously didn't.
"In addition to the VSP memory overhead, individual vPars and VMs have a memory overhead
depending on their size. A rough estimation of the individual guest memory overhead can be done
using the following formula:
Guest memory overhead = cpu_count * (guest_mem * 0.4% + 64M)"
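That quoted formula is easy to sanity-check. A minimal sketch in Python, with an example guest size of my own choosing:

```python
# Sketch of the per-guest overhead formula quoted from the HPVM
# Administration Guide above:
#   overhead = cpu_count * (guest_mem * 0.4% + 64M)
# Units here are megabytes; the example guest is my own invention.

def guest_mem_overhead_mb(cpu_count, guest_mem_mb):
    """Per-guest memory overhead in MB, per the quoted formula."""
    return cpu_count * (guest_mem_mb * 0.004 + 64)

# e.g. a 4-vCPU guest with 32 GB of RAM: roughly 780 MB of overhead
print(f"{guest_mem_overhead_mb(4, 32 * 1024):.1f} MB")
```

Note that the overhead scales with both vCPU count and guest memory size, which is exactly why inflating ram_dyn_max on lots of small guests eats physical RAM for nothing.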
"Oh, by the way, I'm not going to claim the IVM host is "optional" as you did with VIOS, though I don't need to use four of them per host system as you eventually admitted you do!"
Again... you don't get that IVM and the VIO servers are not the same thing. What the VIO servers do is a subset of what IVM does: they serve I/O, if you choose to use them that way.
And you still didn't get the reason for having 4: availability. In our design, 2 for network and 2 for SAN; you can also do 2 if you want to use converged adapters. But 4 for a large machine is IMHO appropriate; not a single point of failure, as your VSP is. 4 VIO servers will, for example, let me do a full memory upgrade of a system without shutting it down: first shut down one VIO server, fence the system unit it had its I/O in, do the memory upgrade, reintegrate the system unit, boot the VIO server back up, etc. etc.
Being able to do this on a midrange system like the POWER 770 is actually pretty cool. And it saves a shitload of money, because you don't have 'pork layer' mid-level managers and the like having to talk to clients and try to find out when they can get a service window and so on and so on...
Again, our industry has way, way too many 'pork layer' middle managers, and way too few technical people who actually know what they are talking about.
"So NOT the latest version, 6.1, as you insisted. "
Again, the sizing information says 18%, which IMHO seems reasonable, given how HPVM works.
Again, you could most likely construct scenarios where 100% of the physical memory on the blade was consumed simply by doing a stupid setup; for example, making a lot of small virtual machines that each have the maximum memory allowed for dynamic allocation set equal to the total physical memory.
I have never claimed that, or even tried to construct such an unrealistic scenario. Again, I simply consulted the manual. If you have a problem with the manual... contact HP and get it changed.
And just to add fuel to the fire: we haven't even started adding the memory needed to handle I/O devices on the VSP. The 10Gbit mezzanine cards you are so fond of... they also require quite a few GB of memory on the VSP.
But again, it's always nice to hear how much more clever and skilled you are than HP themselves.
"and did not represent overall system performance (and also ignored how IBM gamed the benchmark with TurboCore and $17m of discounted kit), you simply insisted it represented how p780 would outperform a BL890c i4 in real World use."
Again, the specintrate2006 measurement I referred to did not use TurboCore. That is all something that goes on inside your head. Please check your facts.
"Erm, no! Again I pointed out that the IBM docs say"
But it doesn't. What you linked to is this:
It has nothing to do with POWER7+; it only talks about the POWER 780-MHB, a product that you haven't been able to buy since August.
You are actually getting kind of boring.
And it's amazing that you still try to spin the Oracle lawsuit into something positive for HP's Itanium products.
"The really funny bit is this pathetic troll probably thinks he's helping Jesper! With him, Jesper and Alli, it's a bit like the Marx Brothers! Jesper's dim-witted and evasive answers certainly make him like Chico, and this AC troll is definately Harpo Marx. Oh dear, that makes Alli the Groucho - that moustache!"
So.. you have now failed so many times in your arguments that the only thing you have left is personal attacks.
Amazing you are not even capable of reading a manual.
"Typically, about 92% of free memory available at the Integrity VM product start time (after HP-UX"
Yes? You forget that each time you start a virtual machine, there is an overhead associated with that virtual machine. Again, I've already referenced the parts of the manual where the formulas are listed. You know... I actually read the manual.
"Default, 12%. Special and unusual requirment, 18%.."
Again, in every version of HPVM prior to 6.1, the default memory page size on the VSP has been 4K. The sizing guidelines say 18%. If I were to state something myself, it would most likely be: minimum overhead ~12%, maximum ~21%; which makes the 18% of the sizing guidelines seem like a good recommendation. But surely the great Matt knows better, and so surely you should use the minimum requirements.
"Yeah, I know, how silly, using an actual system and real apps! Surely hp should have instead done a totally unrealistic benchmark using just one core in a 128-core server, where they could fudge the figures by using all the cores' cache and memory."
specintrate2006 is what it is: an audited industry-standard benchmark. It surely has limited usability for estimating anything but raw processor power, but compared to HP's "UP TO 3x performance"... well, an undocumented claim in some marketing material... it's still a hell of a lot better.
So let's surely go with HP's undocumented marketing claim instead. Right...
"Jesper, you haven't spent the whole thread dodging issues, denying and then having to admit to features (and then denying them again!),"
No, the problem is your lack of technical understanding, and your amazing double standards. You only need a vague indication of something that fits your frame of mind and it's the truth; when everybody else clearly documents things from the vendors' own manuals, it's wrong.
Again, if we take your hilarious TurboCore rant: I've tried to explain how you misunderstood things. You have claimed TurboCore can be used on POWER7+; it can't. You have claimed that there is a 4.4GHz TurboCore solution; there isn't. The only product you have been able to hang your whole twisted argumentation on is the POWER 780-MHB.
Btw, a product that isn't sold anymore (http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS912-109). Hence, using your own words: "So, like I said, you were wrong to simply rabbit on about an old version."
"After your blanket claim of 18%, which turned out to be wrong."
Nope, it was right: that is still what the sizing guideline for HPVM 6.1 states. Clearly documented. Again, RTFMSF.
"".....worst case would have been 20 virtual cores per physical cores...." Oh don't be pathetic, there is simply no real World application for such tiny VMs. As I said before, try and keep at least one foot in reality."
Which is exactly why I didn't do the calculations that way. As I wrote, I could have if I had wanted the worst possible result... but I didn't... so honestly, you are just being childish.
"So, like I said, you were wrong to simply rabbit on about an old version."
Again, the sizing guidelines still say 18%. And if you need or want to run with 4K pages (which has been the standard in every.. single.. version.. of HPVM since the start, except the latest), then the values used by HP themselves in their own sizing documents are really, really valid.
So it's really not me you have a beef with... it's HP. Sorry for just doing an RTFM. But again, you obviously know better than the manuals... or..
"When hp claimed 3x performance that was on REAL SYSTEMS IN THE LAB, not a simulation. I'd try explaining the difference but you seem to have a problem with discerning the real from the imaginary."
"Up to 3x" for some unspecified performance measurement on an unspecified server, in a marketing announcement... compared to very specific numbers for very specific workloads provided by Intel themselves (yes, based upon simulations). Brrrr... it's not really a clear-cut case. If HP had just released some benchmark numbers that people could relate to... but no.
And you are still trying to dodge the fact that the HPVM 6.1 manual clearly refers to a whitepaper stating that the RAM sizing overhead on HPVM should be 18%.
Yes, you linked to documentation about TurboCore, and it's something that only existed on the 780-MHB with 3.86GHz processors and the 795 with 4GHz processors.
So your reference to TurboCore on POWER7+ is wrong; it simply does not exist.
Your reference to TurboCore on 4.4GHz processors is wrong.
Your original claims were:
"Let's see - turn of four cores of an eight-core CPU so you can boost the clock per core to 4.4GHz, but still have to pay the licensing for all eight cores"
I ask again: where is the processor that will run at 4.4GHz in TurboCore mode?
"So if you have a requirement for 256 threads, and each Poer7+ core can run four threads, that means you need 64 active core but have to pay 128 core licenses...."
I ask again: where is your documentation for TurboCore on POWER7+?
It's like saying Ford sells a convertible Ford Focus, hence all Ford Focuses have the attributes associated with a convertible.