* Posts by Jesper Frimann

478 publicly visible posts • joined 8 Oct 2008


IBM touts Power Systems prowess on SAP tests

Jesper Frimann

RE:Compare Power550 against Sun T5440

Well sure they are comparable, sort of.

I mean you would need a 42 inch monitor and a real small font just to be able to see all the threads in top's SMP view on the T5440. I mean 256 threads versus 16 for the power 550. That is a huge difference, and it's not a coincidence that the benchmarked T5440 has 256GB of RAM; all those threads need huge amounts of memory to be able to do some work.

What I don't like about the T5XXX is that you get the complexity of a high end box, with all those threads, without actually getting any of the benefits.

// Jesper

Sun killing 'Rock' Sparc chip?

Jesper Frimann

AC on POWER roadmaps.

You should really try to get your so called facts from other sites than BMSEER.

POWER6+ was 'promised' for 2008, and you know what, they just made it. If you check out the power 560, you will see that it packs a POWER6+ processor.

Now, we can agree that POWER6+ is a bit of a disappointment. But hey, it's not like SUN or HP have anything that can match it.

Well, there is Niagara, but hey, it needs 64 slow moving threads to do it, which is a big enough problem in its own right.

// Jesper

Sun shareholders act to block Oracle deal

Jesper Frimann

RE:AC not just sun

Nahh... the question "How can you lose the whole value of Sun plus $2B in three years and get paid tens of millions of dollars?" is a fair enough question.

SUN always wanted to compare itself against HP and IBM, and if you compare the stock prices of these three companies to their stock prices 3 years ago, you get HP up 10%, IBM up 25% and SUN down 50%, and they were down 80%+ before the takeover plans were announced.

You can also see that Dell has lost almost 50% of its market cap.

So the (almost) pure hardware vendors have lost out, and the broader Service/SW/HW etc. companies have profited. SUN top management has failed to transform SUN into something that could survive on its own, and now that task has landed on Larry's table. Now let's see if he can do it....

// Jesper

Big Blue shipped Power6+ last fall

Jesper Frimann

Interesting article

Just a few points.

You need a 2 drawer power 560 to house 16 cores and 384 GB of RAM; the power 560 is basically a neutered power 570-32.

@Matt Bryant on POWER6 power usage.

Yes, it is irritating that power usage figures aren't reported the same way from one processor to another.

But try going to IBM's system energy estimator: http://www-912.ibm.com/see/EnergyEstimator

And try punching in a 4 way power 570 @4.2GHz (2-16) and then an 8 way power 570 @4.2GHz (4-32).

Now the difference in estimated power is 120 Watts, and that is for 2 additional POWER6 chips and 2 additional L3 modules. That's not bad.

// Jesper

Sun and IBM - What price Bigger Indigo?

Jesper Frimann

RE Link to download AIX (anonymous coward)

Ehh..

Here is why you can't download AIX...

Who was the first to finance SCO's rampage against AIX/Linux? SUN.

So don't use the fact that you cannot download an OpenAIX version as an argument that IBM is against opening up UNIX, and SUN for it. Cause it was SUN who paid SCO to block any opening up of AIX.

But if IBM buys SUN, that might be the thing that saves OpenSolaris, rather than killing it off. Why? Cause SUN bought the right to open up Solaris from SCO, and you know what? SCO did not own that right, so OpenSolaris lives on the grace of Novell.

// Jesper

Virtualization soars on Big Blue Power boxes

Jesper Frimann

Well

"I agree. POWER looks very competitive on the horizon."

Let's correct that to: POWER is very competitive.

"Sun has been spot-on with every release of the CoolThreads processors, releasing early instead of late."

Yup, as far as I know this is correct, but that hasn't been the case with other SUN processors, for example Rock.

"Any of the vendors could have a silicon failure, causing...."

Now that is what I call speculation. You could say that HP has suffered from setbacks on Tukwila; it's still not here, but their UNIX sales numbers are decent. They haven't dropped as much as SUN's have. So you can still have a successful UNIX business without really renewing your products.

With regards to POWER, I would say that IBM has quite a few knobs to turn on POWER6 that could extend its life, and thereby cover for a failure of POWER7. First of all they could enable all memory controllers and all the memory channels on each chip. Or they could simply make a QCM version like they did with POWER5+. The 1.8GHz p560Q is still faster than the POWER6 power 560 on a per socket basis; you need a 5.0GHz power 570 to get better per socket performance.

So for example a 4-6 core QCM/HCM module running at 3GHz would still be pretty competitive. And I must admit I simply don't understand why IBM hasn't made a QCM version of POWER6.

// Jesper

Jesper Frimann

Prices on powervm

"Has IBM shipped it for free, or has IBM been charging people for it since 2001?"

Brrr.. that was a good question. I cannot say for sure. But I have looked at some of the old configurator output I got from IBM Sales for solutions I've done in the past.

And on all the POWER4 solutions, p690's, there is no virtualization charge, and for the POWER5 solutions I have on the HD there isn't either.

Now for the POWER6 there are charges after POWERVM became a product, but on blades you get the Standard Edition of POWERVM for free.

As far as I know, and can see from the SUN and IBM homepages, the features that are part of the hypervisor on the T5XXX series are also free on POWER servers. You can do logical partitioning and virtual networks. So the free part on POWER servers is at least equal to the free part on T5XXX servers. Now WPARs, which are akin to containers, are also included for free in the OS.

But when you want shared pool LPARs, Virtual IO Servers, partition migration, dynamic movement of WPARs etc., then on most of the servers you will have to put money on the table.

It looks to me, from my understanding of things, that IBM has made sure that what you get for free on T5XXXes you also get for free on POWER boxes. And then the fancy stuff you have to pay for; not really surprising. And I have no problem paying for Shared Pool Logical Partitioning, it pays for itself many times over.

"SUN predicted that virtualization was going to hurt their hardware sales, but they also thought it would drive their hardware sales, in the low end. I personally think virtualization hurt Sun's sales more than they expected."

I must admit I have trouble seeing where SUN will make the big bucks on their current strategy.

"If RCK shows up in 2009, great, if not, the 4xT2+ will be enough for my applications, running LDom's and Containers."

'Good enough' is the key phrase here. You have to remember that the other hardware vendors also move forward. If Rock comes out in late 2009 or early 2010, it will be facing POWER7. Which looks a bit like a monster.

// jesper

Jesper Frimann

Well

deadmonst:

Well, we are using it everywhere we can. I mean we are replacing p690's with power 560/550'es where just the savings in maintenance are paying for the damn migration project. And yes, POWERVM is a new marketing name, but the basic stuff has been on the servers since 2001, and the main functionality of the current POWERVM has been around since 2004.

Ron Skoog:

Well, today VMS doesn't even support vPars on Integrity servers AFAIR, which is kind of a bugger. And watch out, a mainframe guy might hear you and start telling you about how they virtualized back in the 60's, before even I was born.

David Halko:

The style of hypervisor that is shipped with Niagara based boxes is a logical partitioning technology, whereby you divide the machine into smaller bits, allocating CPU threads to different domains. That type of hypervisor has been shipping on POWER servers since 2001. Now the hypervisor that started shipping with the POWER5 boxes in... 2004 I think it was, is what Big Fat Blue is calling shared pool logical partitioning (SPLPAR); here you just allocate shares of CPU resources, and when a partition isn't using them, others can. This is the same functionality that you would implement using containers on a Solaris system. In my book virtualization on POWER is years in front of the clear number 2, which is SUN, and HP really has to get things going with regards to virtualization.

I don't really think people realize that people who build POWER and Solaris solutions, by embracing virtualization the way they have, are actually hurting IBM and SUN hardware sales numbers.

Gartner and IDC have called this effect the virtualization effect, and it's also hurting the x86 sales bigtime.

And it really isn't that strange, cause you are pumping up the utilization of the physical hardware when you are using virtualization. Say you had a workload that ran on a 24 core system using partitions, with a 20% average utilization. Now you can get a system with cores of twice the speed, and you can use virtualization to raise the average utilization to 60%, by letting partitions that don't use 'their' CPU resources lend them to other partitions.

Then you have actually reduced your need for compute power to 24/(2x3) = 4 cores, which means that you can jump to a p550, if we use POWER as an example. Now if you had just made a 1-1 migration (LPAR -> LPAR) to a new box with cores that were twice as fast, you would have needed 12 cores, and then you would most likely have bought a power 570. And the price of a 4 core power 550 is something completely different from a 12 core power 570.
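The arithmetic above can be sketched like this (a toy calculation using the numbers from the example, not any real sizing tool):

```python
# Old setup: 24 cores, hard partitioned, averaging 20% utilization.
old_cores, old_util = 24, 0.20

# New setup: cores twice as fast, and shared-pool virtualization
# lets the average utilization rise to 60%.
speedup, new_util = 2.0, 0.60

# Work actually being done, measured in "busy old cores":
work = old_cores * old_util                # 4.8 old cores' worth

# Cores needed on the new box: 24/(2x3) = 4
new_cores = work / (speedup * new_util)

# A naive 1-1 LPAR migration only benefits from the faster cores,
# keeping utilization where it was:
naive_cores = old_cores / speedup          # 12 cores
```

That factor-of-three gap between 4 and 12 cores is exactly the "virtualization effect" on hardware revenue.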

// Jesper

Unisys threatens Itanium with death

Jesper Frimann

Re:David Halko

What nice cherry picking. Why don't you do the same lookup and compare an equal number of threads, or an equal number of cores, or even an equal price?

1 CPU socket performance

Today Niagara is being trashed (like everyone else) by the Intel Core i7. All the Niagara results use 63 threads, which means that unless your app can utilize many threads you won't really get to the throughput in the benchmark. Furthermore, running 63 threads to get the same work done as, for example, 8 will use a fair chunk more memory: anywhere from 1.5 to 8 times.

2 CPU socket performance

This is pretty much the same as for the 1 CPU socket performance, there aren't really Core i7 results, but the hex core Xeons are pretty much equal to Niagara.

4 CPU socket performance

This is pretty much the same as for the 1 CPU socket performance, there aren't really Core i7 results, but the hex core Xeons are pretty much equal to Niagara.

8 CPU socket performance

The M5000 is trashed by a factor of 2 by a power 570 with half the cores.

16 CPU socket performance

The M8000 gets beaten by both power 570 (with half the cores) and a hex Xeon based Unisys by nearly 50%.

32 CPU socket performance

The M9000 gets beaten by the power 595, with half the number of cores, by 70%.

64 CPU socket performance

Well, the M9000 with 256 cores and 64 sockets manages to beat a 64 core, 32 socket p595 by an astonishing 8%. That is 4 times the cores to get 8% extra. And it gets a solid trashing by an Altix with an equal number of cores.

And you should really check out the reality behind the SUN benchmark quotes that you present. I mean, it's hard to take SUN marketing posts seriously when they quote old STREAM benchmarks where they benchmark themselves against old systems. Try having a look at the real facts:

http://www.cs.virginia.edu/stream/top20/Bandwidth.html

// Jesper

Sun christens once and future Supernovas

Jesper Frimann

RE:Matt Bryant

Well, I agree with you on TM. They need something to address one of the main weaknesses of the Niagara based servers, and I would presume ROCK as well: their locking problems.

On your comment to my post. I think that my post was a little unclear. What I meant was that Virtualization is bad for the Server vendors Hardware revenue.

If you take a partitioned physical server running at around 20%, seen from a whole-physical-box perspective, and upgrade it to a new virtualized server where the cores are perhaps twice as fast and run at 60% utilization, due to reuse of unused CPU resources, then you will actually only need 1/6th of the number of cores to run the same workload as before. Hence you cut the number of resources you need, and you will normally also be able to go down one or two sizes in type of server, e.g. from a 32 way (high end) to an 8 way server (entry level).

This will mean that the server vendor that is selling you the server will see a significant drop in revenue, compared to if it had sold you a new box that would run at the same utilization.

And to be fair to SUN, they are using a combination of semi fine grained partitioning on their Niagara boxes and containers to achieve a higher utilization on the servers they sell. So they see some of the virtualization effect, but nowhere near the same degree as, for example, people using VMware or POWERVM do.

// Jesper

Jesper Frimann

RE:They are not full function cores

Well, virtualization is not good for business. Going from partitioning your server, like on an M class machine, to a shared pool of processor resources, like on POWER systems, would mean that the average utilization of the boxes sold would jump quite significantly. Hence you would sell fewer, and possibly smaller, servers to replace old ones.

Sure using containers helps, but that glove doesn't fit all IMHO.

// Jesper

Sun will Rock in 2009

Jesper Frimann

RE: @Jesper re: HANS on SAP

Well, as for the SAP case, I simply think the problem was that the upgrade was supposed to be a simple hardware swap, so there weren't man-hours set aside for serious reconfiguration and retuning.

And what angered the architect was that he would have liked to have known this up front, cause then he would have chosen an MXXXX box. I personally think he should have known that things would perform differently. Getting the user generated part of the workload retuned should be quite simple, and is most likely enough. But you never know if there is some ABAP code somewhere that doesn't scale with more threads.

As for the DBA that I went to university with, it was a locking problem that couldn't be resolved on the Niagara box. And they will now prob' have moved the database to either an MXXXX or, I think, one of the p595's they also have.

// Jesper

Jesper Frimann

@HANS on SAP

You are right, Hans. SAP is crap seen from an IT perspective. But it is still a problem if things start to run MUCH slower when you upgrade to a box that SUN has advised you to use, and the box you are upgrading from is 5 years old.

And it is not only SAP. To quote the comments that a former fellow student, whom I worked with when I was a consultant, had on Niagara based boxes: "It is crap that they sold me such a box and it is clearly not made for this kind of workload." He tried to get a RAC cluster to perform on a handful of T5120's. He is one of the most hardcore Solaris and SPARC fans I know, and also a very skilled DBA.

Now the truth behind these stories is that not all workloads are well suited for the Niagara based boxes.

And migrating a workload that has been written/tuned/set up for running with few threads on a few fast processors is not always that easy. Not without doing a rewrite, reconfiguration and new tuning.

Not to mention locking problems, memory use for all those threads etc.

Niagara is *great++* for what it was designed for, but IMHO SUN is trying to sell it in where it doesn't fit.

// Jesper

Jesper Frimann

Well...

isn't the problem that it's gonna be a little too little and much too late?

I mean, if SUN gets Rock based systems out in 2H, that is about the same time as Tukwila, and AFAIK POWER7 will then be just around the corner, so IBM will most likely already have benchmark results out.

And that will mean a hard time for a system that was originally slated to compete with power5+ and Montecito based systems.

And they will still need single threaded performance; this is the main issue that I hear of/encounter as a problem with the current Niagara based servers.

Just a while ago I heard the lead SAP architect where I work complain that it was the second time he had been involved in migrating a customer from an oldish SUN server to a new Niagara, and that the throughput had sucked. The new system actually ran slower than the old one.

So yes, it is good for SUN if they get Rock out of the door in 2H, buuuutt.. it's still late to the game.

// jesper

So what will happen to Sun?

Jesper Frimann

Oracle on a 12 core chip

Well David.

I think what the anonymous poster meant was that if Rock is 30 times faster than the 1.2GHz UltraSPARC III, then it'll still only be 2.5 times faster per core. Now that isn't very impressive, considering that, for example, the 4.7GHz POWER6 launched in August 2007 is almost 6 times faster than the UltraSPARC III on SPECint 2006. And I would imagine it's even worse on DB OLTP loads.

So when ROCK arrives, if it's in late 2009, it will be facing, for example, POWER7 almost right from the start. And well, I guess that POWER7 will be faster than POWER6 on a per core basis, so you will STILL have to pay 3-4 times the license cost to, for example, Oracle for the same performance as on SUN's fastest competitors.

Which, for a workload that corresponds to one ROCK processor, would be a difference of 235KUSD-282KUSD: 8 versus 2/3 licenses. And on 4 sockets this would exceed 1MUSD.
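A back-of-the-envelope version of that license math (the license counts are taken from the comment above; the roughly 47KUSD list price per Oracle Enterprise Edition processor license is my assumption, since it is what makes the 235K-282K range work out):

```python
LICENSE_PRICE_KUSD = 47  # assumed Oracle EE list price per processor license

rock_licenses = 8             # licenses for one ROCK socket's worth of workload
competitor_licenses = [2, 3]  # 2-3 licenses for the same work on a faster-per-core chip

# Difference in license cost for that one-socket workload:
diffs = [(rock_licenses - c) * LICENSE_PRICE_KUSD for c in competitor_licenses]
print(diffs)            # [282, 235] -> the 235KUSD-282KUSD range

# Scaled to 4 sockets, the gap exceeds 1MUSD:
print(4 * max(diffs))   # 1128
```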

// Jesper

Jesper Frimann

ROCK is coming...

at a time when the other UNIX vendors will be shipping Tukwila and POWER7. But it is hard to know. I mean, the message from SUN was that it would arrive in late 2006 or early 2007, then it became 2008, then mid 2008, then it was 2009, and now it's late 2009.

I wonder how a chip originally scheduled for 2007 will fare against the competition. I mean, there is a big difference between competing against POWER5+ and competing against the last iteration of POWER6 or the first generation of POWER7.

// Jesper

Jesper Frimann

Well, it is not only an economic problem

Well, IMHO their problems are more fundamental than many think. I personally find that SUN is not looking after their customer base, which is very loyal, some to the point of fanaticism. It seems to me that they have raised prices on the boxes they normally sell to that loyal customer base. This will surely generate more revenue for each box they sell, but it will also scare some away.

IMHO SUN's loyal base is the low end SPARC servers that run Solaris. There are A LOT of those out there, and you used to be able to get a decent performing box at a price that was lower than what it would cost you to get one from HP or IBM.

Sure, the HP/IBM box would be faster. But the difference in price in SUN's favour, and the fact that the IBM/HP server wasn't that much faster, meant that there really wasn't any reason to migrate to something else.

That has IMHO changed.

Now if you today have a v215 or v125, or even an oldish v4x0, that you want to replace with another SPARC server, then you basically have 2 choices: buy a Niagara based server/blade or buy an M3000.

The same kind of money that gave you a v215 will now give you a 4 core 1.2GHz T5120.

And the money that gave you a v125 will now get you a 4 core 1.0GHz T1000.

This is not an ideal fit for many SUN customers, as with the Niagara based servers you might end up having worse single threaded performance than on your old server. And for many workloads that will mean worse throughput.

That leaves the M3000 as the other option.

Now, the M3000 is 15K USD and it's not that fast. SUN just released a SAP 2-tier benchmark of 4130 SAPS. That is a good deal less than old submissions from HP and IBM.

The rx2660 using Montecito and the p505Q using POWER5+ both do around 5500 SAPS.

Now, the rx2660 would be a good deal cheaper if you chose lower clocked Montvales rather than the highest clocked ones, and would then most likely cost around 10K, and a p505Q is 5.5KUSD.

This means that low end servers from HP and IBM now aren't just faster but also cheaper than their SUN counterparts.

This is bad for SUN, and we haven't even started to talk about Linux taking market share yet.

And what about their server product lines ?

There is an install base out there with SUN Enterprise and SUN Fire servers who have been waiting for a replacement in the form of Rock based servers. And if you've ever talked to SUN sales people, you know many told their customers to skip the APL servers and go directly to Rock based servers.

And well now it looks like SUN will try to make these people move to MX000 systems:

http://www.sun.com/promotions/campaigns/index.jsp?pid=ssepromo&intcmp=2218

And I would imagine that this means that HP and IBM will unleash the sales dogs of war on the customers that have oldish SUN equipment, and either take business away or force SUN to discount more than it would have liked to.

So what will happen to SUN? Hopefully they will survive in some form, as the IT industry would be a poorer business without them. But they need to change things. I agree with Chris Mellor (the writer of the article) in that I don't think either HP or IBM would pick up SUN. But an alliance between Fujitsu and IBM might, where Fujitsu took over the hardware and IBM some of the software division; but I don't know how much money IBM would actually pay for getting their hands on Java.

// Jesper

HP missionaries paddled over mainframe convert claims

Jesper Frimann

HP wannabe

You don't get it do you ?

Power6 is backwards binary compatible with power5 which is binary compatible with power4...

Hardware isolation my butt. HP is desperately trying to catch up in the virtualization layer with IVM, and sorry, they are years behind. IVM reminds me more of Containers and WPARs than of POWERVM.

A hypervisor virtualization layer that catches hardware faults and isolates them, so that they do not affect the virtual machines that run on top of it, is far superior to hardware isolation where you 'only' lose one partition.

It's like "Sorry, you lost your head, but we managed to save the rest of your limbs".

As for management tools, both HP and IBM IMHO have strong offerings there, but I do think that some of the integration between platforms on the IBM side hasn't been good enough. It has become much better with the new Director though; even the v7 HMC interface is now easy to use.

And your comment about IBM Global Services is just FUD. I mean, I don't know where we would be if we hadn't had access to skilled AIX people to fill in when our skilled people quit/got sick, or when we just got another customer in and couldn't hire and train new ones quickly enough.

And the same goes, to a lesser degree, for HP, but they seem to be more interested in pushing x86 blades than Integrity. SUN is mostly a sales office here.

Sure, they all would also like to sell you a project manager and a... and a.... but hey, that's no different from any other bodyshop.

// Jesper

IBM gets into server transit business

Jesper Frimann

What ?

>I went to Transitive, and speculated either Intel or IBM had their hand in. Its clever stuff, but

> really falls far short of buying new Sun kit (and lets face it, a low end Niagara box is cheaper

>than a QuickTransit license to run some old app on an E4500).

Well, I would clearly rather run things natively on Solaris than have them emulated on some other platform. That should be a no-brainer. But, Solaris and AIX both being POSIX 'nice guys', a port might be easy enough.

And if we are talking about something like running SAP, Oracle, DB2 etc., then the problem shouldn't be that big.

>Run it natively you loser, and if your app isn't available for AIX remind yourself its because you're

>running a second tier OS in the eyes of most vendors! If you desperately wanted off Sun,

>I guess its an option - although the idea of running Solaris apps on IBM's gimp masked AIX

>really makes my skin crawl.

Well calling AIX a second tier OS is bull, and you should know it.

If you want to know why many software vendors will try to sweet talk you into running their app on Solaris, then check out what they will be charging you in license fees. This includes IBM's own software sales people. They all love Solaris, even more than Windows.

I mean, just running Oracle on a T5140 is a bloody 285KUSD for the Enterprise Edition at list price.

>It really really really depends what you are doing and how you measure it, there are lies,

>damned lies and benchmarks.... Power6 @ 4Ghz is certainly not faster than a 3Ghz Xeon and

>is substantially more expensive and power hungry.

What are you talking about? When you point a finger at someone, there are always three fingers pointing back at yourself. Let's take a benchmark where the results are actually used as input to real-life sizing data: SAP.

http://www.sap.com/solutions/benchmark/sd3tier.epx

JS12, 1 chip with 2 POWER6 cores at 3.8GHz: 35160 SAPS per core.

BL680c, 4 chips with 4 cores each, Xeon @2.4GHz: 10638 SAPS per core.

Now, if we correct for going to 6 cores per chip on SAP, we get:

BL680c, 4 chips with 6 cores each, Xeon @2.4GHz: 8963 SAPS per core.

Now, if I want to do 70K SAPS, I need a JS12 with one chip, and I need a two chip BL680c.

Now the BL680c costs somewhere between $10,319 and $11,839:

http://h71016.www7.hp.com/dstore/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&ProductLineId=431&FamilyId=2063&LowBaseId=&LowPrice=&familyviewgroup=1454&viewtype=Matrix&Matrix=

The JS12 in a similar configuration is $4,687:

http://www-03.ibm.com/systems/bladecenter/hardware/servers/js12/browse.html

So it's cheaper. And no matter how you do your calculations, one POWER6 is not producing more heat than 2 Xeon chips.
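The chip-count arithmetic above can be sketched like this (a toy calculation using the per-core SAPS numbers quoted above):

```python
import math

target_saps = 70_000

# Per-core SD numbers quoted above:
js12_per_core = 35_160    # POWER6 @3.8GHz, 2 cores per chip
bl680c_per_core = 8_963   # Xeon @2.4GHz, corrected to 6 cores per chip

# Chips needed to reach the 70K SAPS target:
js12_chips = math.ceil(target_saps / (js12_per_core * 2))     # 1 chip
bl680c_chips = math.ceil(target_saps / (bl680c_per_core * 6)) # 2 chips
```

One POWER6 chip versus two six-core Xeon chips for the same SAPS, which is the basis of the price and heat comparison that follows.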

// Jesper

Jesper Frimann

Re:Re:RE:Re:hRE:Efficiency

Hmm.. Bill you don't get it.

Let me try to explain. It is not SPLPARs versus containers; AIX has its own version of containers called WPARs, which is basically a clever extension of the workload manager. Furthermore, WPARs have the ability to move between machines, much like the vMotion of VMware.

So there is nothing wrong with the 'container/jail/WPAR/IVM' concept of running semi isolated containers inside an OS image, but one size doesn't fit all.

For example, I would never run a SAP test environment inside a container/WPAR inside the same OS image that also held the production environment. It would be madness... not that I haven't seen it done.

Now the overhead, where I btw in my 4 years of working with SPLPARs have never seen anything near 10% (4-5% I've seen), is really not an issue. You have to compare it to LPARs/domains/v/nPars, where you have the same problem as with smaller rack, stack and pack'em machines: if you aren't using the CPU capacity inside a partition, then nobody else can use it and it goes to waste. On a machine running SPLPAR you can reuse the CPU resources if they aren't being used by the virtual machine they are allocated to.

On a machine that runs SPLPAR you can easily reach 60-70% average utilization of the physical machine, compared to perhaps 15-25% on a machine that runs partitioned. And the 4-5% you are paying in overhead you get back tenfold.

Let me give you an example from the real world. I'm currently in the process of analysing what we need to buy to replace 8 p690's with around 192 POWER4+ cores. Some of those machines will still do 30K+ TPMC per core, so they are still fast by today's standards. But I figure I can squeeze those 8 refrigerators into 2x16/3x12/4x8 way power 570's, and still have room left for other consolidations.

And if you really, really don't want to lose those few percent, then run your virtual machine in LPAR mode. Then you have the same limitations as domains and vPars.

But I've never experienced a 2+ socket POWER server, POWER5 or newer, that has run out of CPU resources. They will always run out of memory or IO first, cause people size those resources too low.

I would say, depending on your application, that if you for example are running SAP with an Oracle database, and test, education, prod and development on the same machine, then having around 32-48 GB of RAM per CPU (or core) is not a bad idea on e.g. a power 570.

Now, is the POWERVM hypervisor a single point of failure? Yes, sure it is, but it is a lot, lot, lot more stable than an OS. It's not a program or OS-like thingie that executes under the virtual machines; I would rather compare it to a read only shared library. Sure, an error in the library could bring down a virtual machine, but bringing down the whole machine is very unlikely.

Also, I would have no problem having virtual machines on the same physical machine sit in all four different zones of a firewall, besides having to neuter the HMC network :)

// Jesper

Jesper Frimann

RE:Re:hRE:Efficiency

Well... overhead can be calculated in many ways.

Well, if you take SUN's domains, HP's v/nPars and IBM's LPARs, what they basically do is divide the server into smaller pieces.

This is good if you do not want to pay for all 8 CPUs that your server has when running Oracle and you only need 4: you can make a partition that only runs on 4 CPUs, and on the remaining 4 CPUs you can then install another partition.

Now, the bad thing is that all the CPU time that isn't used is still wasted. So if you have an average utilization of 20% on both of your 4 CPU partitions, then you actually have an overhead of... 80%.

In comes SPLPAR, shared pool LPAR, which you can run on the POWER5(+) and POWER6 hypervisor. Here you truly virtualize your CPUs: CPU resources are seen as a pool from which you can get what you are entitled to, set up by fairly simple rules. This means that I can keep on making virtual machines on the server and tap into that 80% idle CPU time from the example above, simply because if no virtual machine is using the CPU resources, another virtual machine can. Now this is virtualization, unlike LPARs, v/nPars or domains.
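The 80% figure above is just simple arithmetic; a toy sketch:

```python
# Two hard partitions of 4 CPUs each, both averaging 20% utilization.
partitions = [(4, 0.20), (4, 0.20)]

total_cpus = sum(cpus for cpus, _ in partitions)
busy_cpus = sum(cpus * util for cpus, util in partitions)

# In hard partitioning (domains/v/nPars/LPARs) idle cycles in one
# partition cannot be lent to the other, so they are simply wasted:
wasted = 1 - busy_cpus / total_cpus   # -> 0.8, the 80% "overhead"
```

With a shared pool, that idle 80% becomes capacity other virtual machines can actually use.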

// Jesper

IBM smells Sun red ink

Jesper Frimann

No fud here

> Guys please stop commenting about products and features you have no experience with and

> frankly have no clue about them.

I am sorry, but judging from the comments I am going to make, and the comments made by others, I'd rather say it's the other way around.

>To Jesper and Bill: a ZFS clone and a disk clone are 2 different things!!!

Yes, they are. That is why I used the term functionality, and it was in the context of Live Upgrade.

> cloning a ZFS filesystem is imediate (no file duplications involved)

> cloning a ZFS filesystem is extremely space efficient, allowing you to have hundreds of clones on a single disk.

> ZFS has checksums, compression, raid ....

> ZFS IS FREE

ZFS is great, OK perhaps a bit overhyped, but shouldn't we just stop at that? I've never said it wasn't a great product.

>Please also stop spreading the FUD about Oracle licensing on sun systems, you can manage your

>licensing costs using zones with resource caps, so you only need to pay for as much Oracle as you

>need.

That is not FUD, it is fact. If you deal with SUN sales people, you know that as soon as Oracle and licensing pop up, their eyebrows start to twitch. And no matter what you say, it doesn't change the fact that for a SPARC server it is 0.75 licenses per core. Sure, you can limit the number of cores that Oracle can run on by partitioning your server into smaller bits, but it does not change the fact that you'll end up paying the same in license fees for a SPARC core as you do for a POWER6 core.

Now you mentioned that you could limit your license fee by partitioning up your server, and here is another advantage of the virtualization on POWER.

Let me try to explain, by using an example. You have 4 different DB servers. That have a total

maximum peak usage of 700K TPMC, at the same point in time. But as they peak at different points in time they have local peaks at 300K, 400K, 500k and 600K.

Now on a traditional partitioned server like a M8000 you would make one partition that could handle a peak of 300k, one of 400K etc. Hence the capacity you would allocate would be 1800 TPMC.

Now on a POWER server you would simply make a processor pool that could handle the maximum peak load, and a little bit more just to be on the safe side. Lets say 800K. On for example a p570 that would be 8 cores x 4.7/5.0Ghz CPU's. Now this means that you would never have to pay for more than 8 cores -> 6 licenses for Oracle.

Inside this pool you would then allocate resources to your virtual machines, you would then normally give the virtual machines more virtual CPU's than they needed and uncap them, hence allowing them to use more CPU power than they needed. For example 5 virtual cores,6 virtual cores,7 virtual cores and 8 virtual cores, at 100K TPMC per core. Hence the virtual machine that ran the 600K TPMC workload could actually peak up to 800K if need be.

Now on your traditional partitioned M8000 you would end up with something like this:

One partition with 9 cores.

One partition with 12 cores

One partition with 15 cores

and last

One partition with 18 cores

for a sum of 54 cores or a sum of 41 Licenses. If we assume 33K tpmc per core. (or 54 if we assume 25K tpmc per core which btw would mean that we had to use a M9000)

So to sum up 8 core p570 versus 54 Core M8000 and 6 licenses versus 41 licenses.

Now do you understand why I as an architect like IBM pSeries ?

> I have experience with AIX and HPUX, and frankly my favorite is Solaris and Dtrace, ZFS,

>ZONES+ BRANDZ have no equivalent in HPUX and AIX.

As stated otherwise you are wrong with regards to Dtrace.

>And about th IBM power hardware, as with anything non X86 it will linger around, while x86

>systems will take away market from them ....

Yes and the mainframe is dead, and so is Java and..............

> I am no fan of X86 architecture, and nobody with a minimal hardware knowledge can be,

>however economics will always triumph ...

No, that is where you are wrong. There are not many people that understand the economics of computer infrastructure cost. I read a study, think it was either Gartner or IDC, on the buying pattern of heads of IT. And the single most important factor was momentum in the marked place. Or as you would call it in other businesses 'What's in Fashion'. And you would be surprised on how often a Linux/UNIX on Sparc/Itanium/Power turns out to be cheaper than a x86 solution in TCO, if you have to make solutions that can live up to the same specs.

// Jesper

Jesper Frimann

Dear Bill

Dear Bill.

The functionality you talk about there have been available in AIX since AIX 4.3 and is called Alternate disk install. Now let me point out to you that AIX 4.3 is more than 10 years old.

The Original feature let you clone the OS of a running system to a new set of disks and you could then patch do a full os upgrade whatever on the OS image on these disks, and then boot from them and then boot back to the original disks if you wished so. You could also use this feature to clone systems, and that could be useful for systems where you hadn't any network on them yet. Have had to do that quite a few times.

Now the current version of alternate disk install is quite a bit more advanced, and doesn't need disks to clone on.

I must admit I'm getting a bit tired of hearing smart fancy Solaris marketing names like Jumpstart, liveupdate for things that have been on AIX for years. Sure Network Installation Manager (NIM) isn't as fancy a name as 'jumpstart' and liveupdate sounds much much better than alternate disk install. But that is AIX, no catchy names and fluff. It's just a really really good UNIX, and under it you have a hypervisor that's perhaps only triumphed by zSeries, and under that again you have the damn best hardware you can get.

It's like the A10 warthog: ugly, effective and almost impossible to bring down.

It's not that Solaris is a bad OS, I'd just think that AIX is a little better. But if you factor the hypervisor and the POWER hardware into the equation. Then it's really not a race.

// Jesper

Jesper Frimann

First of all

the newest AIX version is AIX 6.1 not 5.3.

ZFS looks nice but many of the features I see pointed out have been in AIX for years Snapshots compressed filesystems or have been added in AIX 6.1 cryped filesystems etc.

Not that ZFS doesn't seem more advanced than JFS2 cause it does.

As for patch management then AIX have always been good at that. I mean I accidental patched some key libs that a running Oracle database used many years ago, now that is bad. But after just doing a rollback of the patches I could actually close down what should be closed down and repatch the system.

I also remember being able to backup a filesystem where there weren't any disks below it, straight from filecache. Saved a whole development departments from some red ears.

But still it's the whole package that counts, and much of the work done on 5.3 and 6.1 seems to have gone into virtualization, or rather to integrate AIX with the virtualization done by POWERVM.

I mean they must have rewritten a lot of code, just the accounting part must have been a nightmare. Imagine you have 100 virtual machines on a machine using 200 virtual processors that run ontop of 32 actual cores, and all those virtual machines all run at more than 100% utilization if you use standard sar commands. Brrr...

// jesper

Jesper Frimann

Solaris and AIX

Well I think that both Solaris and AIX are the 'good guys' when it comes to following POSIX, You might argue that AIX typically uses the POSIX values by default and you sometimes have to specify it with Solaris, not like HPUX *SIGH*.

I do think that Solaris and AIX are a bit different. Solaris being the favourite of the app. developers, and AIX being the favourite of the Sysadmins. I had to debug some IO performance problems on a Solaris 9 box some years ago. And I was amazed at how primitive the tools were compared the the ones I was used to from AIX 5.3.

And when you talk to AIX and Solaris fanatics you usually notice that the AIX fanatics are almost all Sysadmins, but the Solaris fanatics have a very high percentage of DB admins and developers among them.

The reason that I think that 'AIX servers' are the best, is the combination of hardware, virtualization layer (POWERVM) and the OS.

Which means that IMHO there really aren't any clear weaknesses when you compare an AIX solution to one based upon Solaris or HPUX.

With regards to Oracle licensing, If you move from SPARC to POWER6, then you'll end up with your licensing savings paying for your new servers, if you do it right. It takes a little design work to harvest the savings but the POWER hardware, POWERVM and AIX will give you the tools to do it.

Sure you have to fight off Oracle sales people (or some other software company) who will try to trick you, cause they don't like loosing licensing fees.

I once had an Oracle sales guy who wanted to have licensing fee for x3 the number of cores in the system, because we used uncapped micropartitions. But luckily the Oracle licensing are pretty clear.

And remember there is NO EXTRA VALUE in paying for 12 licenses compared to paying for 5 if they are of the same type. But having better hardware does give you value.

And just to close the Oracle bit. Do not.. do not.. not not not run Oracle on the T5X40 servers, they make amazing (but far to expensive) webservers, but license cost will ruin you.

// Jesper

Sun, Fujitsu launches entry quad-core Sparc box

Jesper Frimann

narwhal is a whale with one long tusk.

And being Danish, then Narwhal is translated to Narhval. Which directly translated would be Jester Whale, well it's funny if you are Danish.

This was a product that SUN needed, but where is the 2 socked server ?

And must admit that I think it is to expensive, it is about the same price as a p520 with equal cores, disk RAM. And the p520 will beat the M3000 to a pulp on every benchmark.

// Jesper

IBM doubles Power cores

Jesper Frimann

Nopes

Well we won't know. And isn't this like comparing a hummer to a AEGIS Cruiser ?

But This guy did manage to get a very important windows program running on one of these boxes:

http://download.boulder.ibm.com/ibmdl/pub/systems/power/community/aix/AIX_Movies/PowerVM_Lx86.wmv

Just fun.

Jesper Frimann

A few errors in this article.

First I wouldn't compare the chipset of the x3950 to the power 570 and power 560. The power servers do not have a external chipset in the same sense as the x3950 do.

Try to have a look at this redbook that deals with the big brother of the power 570 the power 595:

http://www.redbooks.ibm.com/abstracts/redp4440.html?Open

Furthermore the new 4.2GHz power 570 with a maximum of 32 Cores is a 4 CEC (building block) system, hence each CEC have 2 CPU boards which each has 2 'sockets', giving it a total of 8 cores per building block.

The only problem I see with these new systems (power 560 and power 570) is that they have can house less memory than the power 550 and the 16 core power 570 per core.

I do think that the power 560 have been overdue for some time. But it is a nice upgrade from the p560Q which could house 16 1.8GHz power5+ cores and only 128GB of RAM.

// Jesper

Page: