* Posts by Jesper Frimann

478 publicly visible posts • joined 8 Oct 2008

IBM punts first z196 mainframes

Jesper Frimann
Thumb Down

BLEH

"These z196 cpus are really slow dinousaur cpus that should never have left the lab. An abomination indeed."

Care to back that claim up with more than just Oracle marketing bullsh*t?

Saying that an x86 CPU is 5-10 times faster than a CPU in the z196 is so wrong that it's ridiculous.

I am no great fan of Mainframes, but I do respect them, because I've done migrate-off-mainframe projects where the dorks that did the sizing didn't understand what the F*** they were dealing with.

So one thing is standard SAP software or Oracle databases or... another thing is when all the different accelerators inside such an old dinosaur get something to tear into. And being such a great fan of the SUN T3 processor, I would have expected you to understand that. But.. I fear you don't really get it..

// Jesper

Larry Ellison's first Sparc chip and server

Jesper Frimann
Coat

wake up.

"with over 10x better performance without any optimizations at all, you also objected as "cherrypicked by Sun" etc etc etc."

What benchmarks are those ?

"Or, maybe the POWER7 does not beat T3? I can not wait until the T3+ with 3-5x better thread performance arrives next year."

Oracle claims up to 3x better thread performance... But Yosemite Falls, as the processor is called, has half the number of cores compared to the T3, and, surprise, it runs at 50% greater frequency. So the chip is not 3-5 times faster; it's more likely around 1.5 times faster, with the rest of the claimed gain coming from each thread getting a bigger slice of the chip.
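As a back-of-the-envelope sketch (my arithmetic, under the naive assumption that per-core, per-clock throughput stays the same), the claimed per-thread gain and the chip-level gain come apart like this:

```python
# Naive model of the T3 -> "Yosemite Falls" change: half the cores,
# 50% higher clock, same per-core throughput per clock (an assumption).

t3_cores = 16          # T3: 16 cores per chip
t3_plus_cores = 8      # half the number of cores
clock_factor = 1.5     # 50% greater frequency

# Each thread's share of the chip doubles when the core/thread count
# halves, and scales with the clock: 2 x 1.5 = 3x "thread performance".
per_thread_gain = (t3_cores / t3_plus_cores) * clock_factor

# Chip-level throughput in this model only picks up the clock factor.
per_chip_gain = clock_factor

print(per_thread_gain, per_chip_gain)
```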

You have been constantly peddling the T3 against all other processors, and now that it's here you start peddling the T3+. Man, you sound like an Itanium sales guy.

// Jesper

Jesper Frimann
Thumb Down

Welcome to the Real World

"IBM is free to respond to these world records, Jesper. Let us see what POWER7 can do. If people claims it is the fastest cpu in the world, then it should have no problems?"

Listen, Keb. Why would anyone spend time and money trying to beat benchmarks that cannot be compared publicly, because they are Oracle's almost-internal benchmarks?

It would be just as pointless as Oracle posting rPerf numbers for the T3, or as IBM or HP trying to figure out whatever obscure benchmark Oracle/SUN has now chosen to publish.

The only ones who listen are people like you. "Decision makers" listen to industry-standard benchmarks, and sorry, Oracle hasn't published one yet for the T3 systems.

// jesper

Jesper Frimann
FAIL

Get real.

8 World records ?

*CACKLE*

Let's see... from Oracle's benchmark site.

1) SPARC T3-4 Server with Oracle Fusion Middleware 11g Achieves World Record Single-Node Result on SPECjEnterprise2010 Benchmark

Yet another obscure SPEC benchmark that nobody has heard about, where a stunning 13 entries have been submitted. And the only way they could manage to get the record (since there is no separate clustered category) is to divide the T3-4 up into 8 domains and run it like a single-box cluster. *cough* *cough*

2) SPARC T3 Servers Offer an Attractive Platform for Virtualization and Consolidation

So we ran some tests in the lab with some old x86 hardware at low utilization, and then did the same on our biggest T3 box.. showing that we are better. Yeah, World Record!!!

Get real.

3) SPARC T3-1 Server Delivers New Record Score on Oracle's JDEdwards EnterpriseOne Benchmark

Again, one of Oracle's own benchmarks. Kind of hard not to give them this one, as there are no official lists or tables to compare against anything else. Again, an obscure benchmark made by Oracle themselves... What's that smell...

4) Oracle Publishes New Result on Consumer E-Commerce Site Benchmark Using SPARC T3-1 Server.

Yes, again a benchmark made by Oracle themselves that hasn't got an official site where you can compare results against each other.

5) SPARC T3-1 Server Posts a High Score on New Siebel CRM 8.1.1 Benchmark

Yes, again a benchmark made by Oracle themselves that hasn't got an official site where you can compare results against each other.

6) Oracle's SPARC T3-2 Server Shines on Industry-Standard, General-Purpose Java Benchmark

Now, this is SPECjvm2008, a benchmark for which a stunning 6 results have ever been posted.

http://www.spec.org/jvm2008/results/jvm2008.html

These results include 3 SUN x86 results and 3 iMac results....

And with 2 processors and 32 cores, the T3-2's PEAK value of 323 surpassed another SUN BASE entry of 317 made with an x6270 with 2 Nehalem-EP processors.

Yeah right. Good job, Oracle, you found yet another obscure benchmark to feed to your followers. Come on.. honestly.

7) SPARC T3-1 Server Shines on PeopleSoft Enterprise Financials 9.0 (Day in the Life) Benchmark

"This is the first publication of this benchmark by any hardware vendor world-wide."

Yes, we make the benchmark ourselves and make a new version that we can be the first to publish on, and claim a world record.

8) Oracle's Newest SPARC T3 Servers Deliver Outstanding Results with Oracle Communications Order and Service Management

Yes, again a benchmark made by Oracle themselves that hasn't got an official site where you can compare results against each other.

9) Oracle Publishes First Result Using Online Component of Extra-Large Workload on E-Business Suite R12.1.2 Benchmark

Yes, we make the benchmark ourselves, and just created a new category with nobody else in it, so we can call it a World Record. Get real, man...

Sorry it's pathetic.

// Jesper

Oracle on Unix biz: We can rebuild it

Jesper Frimann
Badgers

hmm

"Maybe you missed the newly released world records that the "slow" T3 has?"

You are always sure to win when you join a race where you are the only participant.

"How can the "T3 be much slower than x86" if T3 is fastest in the world today? "

Hmm.. Let's look at this benchmark:

http://www.spec.org/jvm2008/results/jvm2008.html

2 x Nehalem-EPs do 317.3 base.

http://www.oracle.com/us/solutions/performance-scalability/t3-2-specjvm-92010-bmark-172820.html

2 x T3s do 323 peak.

So peak values barely beat base values for a 1½-year-old Nehalem-EP. Wonder how it would match up to a Westmere-EP or a Nehalem-EX...

"I suggest you check up things before you talk nonsense."

The one talking nonsense is you. Oracle's marketing messages are your law.

Oracle spins own Linux for mega hardware

Jesper Frimann
Headmaster

Bleh.

"What are you trying to say with those benchmarks? Could you explain?"

Very simple: tom 99 said there weren't any benchmarks released, you said extrapolate, I said why not look at some real benchmark data, and I listed some for you.

Just trying to be helpful, sorry if they were too good.... It's not my fault that the POWER 795 eats M9000s for breakfast.

"Here are IBM executives saying AIX will be killed. Can I have more credible links?"

*CACKLE* Here is another DORK IBM link for you:

http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote

We only need 5 computers in the world :=)

Actually the point that Steve Mills makes is a valid one, merging Linux with AIX. But SCO, funded by SUN and Microsoft, quickly put a stop to that.

"I think that 35 million USD is a bit more than 1 million USD for an ExaLogic machine."

Again you continue.. comparing a whole non-discounted solution to the discounted price of a server. Man... man.. man.. You know that this was what Larry was fined by the TPC for doing.

Do you also compare the price of an engine to the price of a whole car when you go shopping for a new car ?

"because next year there will be T3+ with new improved cores "

Ohhh yes.. just wait for the next product.. just like we waited for Rock and.. and.. and..

// Jesper

Jesper Frimann
Troll

Failure... again..

"It isnt too difficult to extrapolate some results from smaller IBM machines."

http://www.ideasinternational.com/benchmark/ben020.aspx?b=d54ab8c2-4e36-4395-986a-e1ae5f29aacc

SPECint2006rate

256 Core POWER 795 -> 11,200

256 Core M9000 -> 2,590

64 Core T3-4 -> ???

Damn that is a factor of 4...

SPECfp2006rate

256 Core POWER 795 -> 10,500

256 Core M9000 -> 2,100

64 Core T3-4 -> ???

Damn that is a factor of 5.

SpecJBB2005

256 Core POWER 795 -> 21,058,767

256 Core M9000 -> 1,757,035 *

64 Core T3-4 -> ???

*This is with only 1 JVM. So it's a pretty cool result.
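The factors above fall straight out of the quoted numbers (both boxes at 256 cores, so whole-machine and per-core ratios are the same):

```python
# Ratios from the benchmark results quoted above: (POWER 795, M9000).
results = {
    "SPECint_rate2006": (11200, 2590),
    "SPECfp_rate2006":  (10500, 2100),
    "SPECjbb2005":      (21058767, 1757035),
}

factors = {bench: p795 / m9000 for bench, (p795, m9000) in results.items()}
for bench, f in factors.items():
    print(f"{bench}: factor {f:.1f}")
```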

Now, the M9000 is not even in the same class as the POWER 795. And sure, you can cluster a few T3-4s together in a rack, give it a sexy name, and try to sell it as a single system image, but it's still just a bunch of servers in a cluster.

"But if you are stuck on AIX, and when AIX is killed by IBM (IBM has offically has said that they will kill AIX in favour of Linux on x86) it can be difficult to migrate away. And let us not talk about dog slow IBM Mainframe lock-in. IBM loves vendor lockin, everyone knows that. You are hilarious."

Funny, the last NDA AIX roadmap I saw had current AIX versions running with support into 202X-something. And the whole "IBM is killing off AIX" thing is about as serious as claiming to have been abducted by sexy Venus vixen aliens. I've heard it from guys like you for 15 years.

And SUN and IBM on the UNIX side have always been the nice POSIX guys IMHO, so drop the vendor lock-in crap. Both Solaris and AIX have been fairly easy to port to and from.

"I promise you, if Oracle invented some new cool tech that let them always win the TPC-C benchmark, IBM would very soon declare that "TPC-C is a meaningless benchmark, out of reality" and stop."

So Oracle doesn't have cool new tech, and that is why they aren't running TPC-C benchmarks and are claiming it's irrelevant? *CACKLE*

"Why are you talking about high prices? ExaLogic costs 1 million dollar, far below IBMs POWER 795. Or, if you want to talk about big price tags, IBM's power 595 used for the former TPC-C record, costed 35 million USD. That is hilarious of you to talk about high prices, and not consider IBMs overcharged prices."

You are amazing.. don't you have any decency? You compare some imaginary price tag on an ExaLogic to the total _list_ price of all the components, from the switches to the DB software, used to run a whole benchmark. Man, did you flunk second-grade math or something?

Machines like the M9000, SD and POWER 595/795 are all hugely expensive, but they are formidable tools for those who need them, and buy them. Currently a machine like the 795 is too big for our clients, so they go for the 770/780.

And I am getting tired of hearing about strange Oracle benchmarks that have no public listings and... BLEH

// Jesper

Oracle promises fresh approach to storage

Jesper Frimann
Thumb Down

*CACKLE*

Ehhh.. Exalogic is a cluster of machines. You can put net stockings, a dress and lipstick on a pig to make it look smart, but it's still a pig.

Cheap commodity, yeah.. ask Morgan Chase about that:

http://www.theregister.co.uk/2010/09/20/chase_oracle/

The only reason Larry likes clusters is that they give him money for almost nothing.

You don't get any value from having to buy 10x the software licenses to run a workload, and with the cheap commodity server approach, you even get crap hardware.

I simply don't get why people want to pay more money for a product that basically consists of expensive cluster-license versions of Oracle products that cost a fortune in maintenance, and then only get cheap commodity hardware on top. It simply does not make sense.

No thanx. A Big Iron box any day over that. It's cheaper in TCO and much, much more stable. If you really want a big SPARC box then buy an M9K.

Now, an Exalogic is about as much in the same class as a POWER 795 as your home PC is to an M9000.

// Jesper

Morgan Chase blames Oracle for online bank crash

Jesper Frimann
Headmaster

Well one thing is what the article says..

Well, it's a cluster of 8 T5420s running Oracle, which means it's most likely a RAC cluster. Or perhaps even one of those new Exadata boxes.

// Jesper

OpenSolaris spork ready for Oracle challenge

Jesper Frimann
Headmaster

Linux and Scalability.

Linux scales just nicely.

http://www.ideasinternational.com/benchmark/ben020.aspx?b=d54ab8c2-4e36-4395-986a-e1ae5f29aacc

Numbers 2, 3, 4 and 5 are Linux submissions. And that is on 3 different processor architectures. So saying Linux doesn't scale is simply not correct; I actually think it's pretty impressive to be at the top on so many different processor architectures.

Also notice that the fastest Linux submission has almost 4 times the throughput of the fastest Solaris submission.

Or what about Java, a place where Oracle owns the whole stack.

http://www.ideasinternational.com/benchmark/ben020.aspx?b=40939518-fdf1-4ba8-855c-d8974a41c323

The best Linux submission is 20,499,538 at 256 cores; the best SPARC result is 1,757,035 at 256 cores (http://www.spec.org/osg/jbb2005/results/res2008q4/jbb2005-20081027-00552.html).

Now that is almost a factor of 12 in throughput.. read my lips: NO MORE TAX.. I mean, sorry.. It's still early and I haven't finished the first cup of Mokka.

And please don't start with "it is the number of threads that counts".. The Linux JBB2005 submission uses 1024 threads vs the 512 of the Solaris-on-SPARC submission.

// Jesper

IBM completes Power7 server arsenal

Jesper Frimann
Headmaster

There is a number two, but it's a long way down there; it's my wake that is cooling them off.

"And when we slice up Power servers we do not get 100% of the CPU power available to the instances, so please tell me where it's going if not in the virtualisation?"

You still don't get it, do you? Let me try with a little picture/example.

We have an older machine. It could be an E25K, an SD or a p690; it really doesn't matter. On all these machines we can statically carve the machine (or parts of it) up into 8 chunks of, let's say, 4 processor cores each.

We then have 8 applications that happily crunch along inside each little partition with their normal lousy, let's say 20%, average utilization.

Now we replace the machines with an SD-2 with one 32-core IVM nPar with 8 guests, each with 4 processors, and a p780 with 8 virtual machines, each with 4 virtual processors. But hey, why not exploit the fact that we can overcommit the machine? On IVM we quickly follow HP best practice (or so my HPUX guys call it, and they might be wrong) and do a 50% overcommitment, adding 4 more 4-core guests, raising the average utilization of the machine to 30%.

On the POWER 780 server I quickly add 10 more virtual machines with 4 virtual cores each (that is my standard overcommitment factor for that machine), raising the utilization to 44%.
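The utilization figures can be sanity-checked with some rough arithmetic. The assumptions here are mine, made to reproduce the quoted figures: each guest carries the load of one 20%-utilized 4-core partition, and both the nPar and the POWER 780 slice have 32 physical cores.

```python
# Rough utilization arithmetic for the consolidation example above.
cores = 32
load_per_guest = 4 * 0.20   # cores' worth of work per 20%-utilized 4-core guest

def utilization(n_guests):
    """Average machine utilization with n identical guests sharing the pool."""
    return n_guests * load_per_guest / cores

print(utilization(8))    # the original 8 static partitions -> 20%
print(utilization(12))   # IVM with 50% overcommit -> 30%
print(utilization(18))   # PowerVM with 10 extra guests -> 45%, about the 44% quoted
```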

And there was much rejoicing in both the HPUX group and the AIX group, and they all went to drink the Wintel guys under the table, cause that is what they did on a Friday afternoon.

And you ask what is the overhead? Ehh.. it's huge in both cases.. negative overhead, that is, as I get much more work done.

Is there a penalty? Sure there is, just as there is a kernel penalty for running more than one process on a multiprocessing kernel. But hey, you do sound like that punch-card Mainframe guy from the '60s who insists on running a single task on a single machine. Wake up, dude.

"Oh, you mean 8192 shared in one OS instance - nice isolation of any software fault there! So, one memory error and you lose 8192 applications at once - great design! "

"Memory errors have always been a problem on HP Unix machines", as one of my friends who used to work in HP's support org puts it. I don't agree. But since you keep talking about it, perhaps there is something to it?

WPARs are pretty good isolation. Sure, it's not OS software-stack isolation, but it is pretty good isolation. The isolation stack we normally work with is like this:

Same OS, rsets isolation, WPARs, virtual machine, physical machine. The further down the isolation road you go, the better the isolation, but the price goes up too.

And still, HPVM is an HPUX instance with guests running inside it. Kind of like VMware in the old days, right?

And overhead, let's see... on a 2TB POWER 780 the memory overhead will be 41-77 GB for a fully loaded machine. The latter number is with VIO and all, and with all partitions able to grow to 2x their memory capacity (max_mem = 2 x des_mem).

For an SD-2 with 2TB inside one IVM, with max memory used, the memory overhead is... 321 GB. Wooohh.. man, I understand why you want to talk about overhead. First 8% overhead, then 8.3% again.. man.. sure is a good solution. So a factor of 4-8 more overhead... sure.. IVM rulez. *CACKLE*
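For what it's worth, here are those overhead figures as fractions of the 2TB machine, computed from the numbers quoted above (my arithmetic; the GB figures are the post's):

```python
total_gb = 2 * 1024            # the 2TB machine

powervm = (41, 77)             # POWER 780: bare, and with VIO + max_mem=2x
ivm = 321                      # SD-2, one IVM, max memory configured

low, high = (g / total_gb for g in powervm)
ivm_frac = ivm / total_gb

print(f"PowerVM: {low:.1%} - {high:.1%}")   # roughly 2% - 4% of the machine
print(f"IVM:     {ivm_frac:.1%}")           # roughly 16% of the machine
print(f"factor:  {ivm/powervm[1]:.1f} - {ivm/powervm[0]:.1f}")  # the 4-8x
```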

"Well, seeing as Pseries partitioning has just about caught up with where the Integrity range were eight years ago (and still hasn't matched Integrity on true hardware partitioning), "

No, they haven't caught up on the overhead thing. And you don't get it.. we don't want hardware partitioning. We have no need for it.. it's a waste of resources. Why would we want to carve a server up into what could just as well be cheaper machines?

"And then hp's new Integrity designs fit into those hp blade chassis that have been caning the IBM blades for years"

Yeeesss.. let's order a high-end server that uses the same components as the cheapest blade system around.. Yeah right.. *cough* *cough* hopefully customers aren't that stupid.

"when will IBM catch up with hp and offer the advantages of embedded switches and tools like Virtual Connect for anything above the bottom end of the pSeries range?"

Eh, an embedded switch? What for? I use virtual networks (not to be read as VLANs) inside the machines. You know, LAN-in-a-can style. If I want to go outside I'll use a SEA adapter (a software virtual switch) or a HEA (a hardware virtual switch).

"No, I don't want to use a punch card reader.. I have a removable hard drive".. "What, are hard drives not secure, cause you cannot read the bits manually?"......

"do you think they'll be able to do it with a better CPU than the crippled one they had to put in the P7 blades becasue they can't make the current blade chassis handle the cooling and power required for the real P7 chips?"

*CACKLE*

It's like shooting a fish in a barrel.. at point blank, with a shotgun.

Lets see...

An IBM PS702 with 2 sockets and 16 cores takes up 2 slots and does 520 SPECint_rate2006.

An HP BL860c i2 with 2 sockets and 8 cores takes up 1 slot and does 134 SPECint_rate2006.

An HP BL870c i2 with 4 sockets and 16 cores takes up 2 slots and does 269 SPECint_rate2006.

An HP BL890c i2 with 8 sockets and 32 cores takes up 4 slots and does 531 SPECint_rate2006.

Yeah.. the BL890c i2 wins! But wait... you can only have 2 of those in a 10U c7000 chassis, while you can have 7 PS702s in a 9U BladeCenter H. That is a compute density of 106 SPECint_rate2006 per U for the BL890c i2 versus 404 per U for the PS702... ohh.. ohh... ...
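The density arithmetic above can be sketched directly from the quoted scores and chassis capacities (all numbers as stated in the post):

```python
# SPECint_rate2006 per rack unit for the two blade setups above.

def per_u(score_per_blade, blades_per_chassis, chassis_u):
    """Aggregate benchmark score per rack unit for a full chassis."""
    return score_per_blade * blades_per_chassis / chassis_u

ps702 = per_u(520, 7, 9)     # 7 x PS702 in a 9U BladeCenter H
bl890 = per_u(531, 2, 10)    # 2 x BL890c i2 in a 10U c7000

print(round(ps702), round(bl890))   # roughly 404 vs 106 per U
```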

Power usage then.. HP is good at that, right? Let's see, the BL890c i2 uses... 3184 watts max power.. the PS702 only 700 watts.. ARGH.. what about the BL870c i2 then, 1592 watts? And the BL860c i2, 796 watts? How can this be?

Price then .. HP blade products are cheap !!!!!

Yeah.. a PS702 with 16 cores, AIX, 32 GB RAM and 2 disks is 196K Dkkr. Woo, that is expensive..

Let's see.. hmm, there it is: a BL890c i2 with HPUX, 32 GB RAM, 2 disks and 32 1.73GHz cores is 809K Dkkr. WHAT? Wait, let's take cheaper cores... 1.33GHz, that's gotta be cheaper... what, 527K Dkkr? ... Basically you need the BL860c i2 with 8 cores to beat the PS702 price, by 30K Dkkr. But that is 2 Tukwilas versus 2 POWER7s, and we all know who is the faster there.

Though you'll just cook up some witches' brew about benchmarks to cloud the issue.

Again, when your competition is so far in front of you, you can't see what is going on...

// Jesper says have a nice weekend

Jesper Frimann
Welcome

Welcome to the POWER world.

Eh.. you still have no clue whatsoever how the Hypervisor works.. No clue whatsoever.

"Not much of a consolidation option when the smallest you can split your whole p795 down equally into is an eight-way CPU (assuming you only lose two complete CPUs to the PowerVM Hypervisor partition). "

There is no PowerVM Hypervisor partition... it's not IVM, you know.

Well, you are using two levels of virtualization, nPars and IVM. No problem with me. But let's do the same on POWER. Let's make a 0.1-CPU virtual machine. And inside that we can then make up to 8192 Workload Partitions, with a CPU granularity of 1/65535 of the virtual machine's processor allocation.

So actually we can have 8192 x 254 = 2M+ virtual partitions, with a minimum CPU granularity of a couple of millionths of a CPU.
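The arithmetic behind those figures (the 254 LPAR and 8192 WPAR limits are as stated above):

```python
max_lpars = 254               # virtual machines on one machine
wpars_per_lpar = 8192         # workload partitions per AIX instance

total_partitions = max_lpars * wpars_per_lpar
print(total_partitions)       # 2080768, the "2M+" in the post

# Smallest CPU slice: 1/65535 of a 0.1-CPU virtual machine's allocation.
granularity = 0.1 / 65535
print(granularity)            # about 1.5e-06 of a CPU
```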

And we can move the virtual machines from one physical machine to another. Or we can just move the WPAR if we want. All done while the applications and virtual machines are running.

That's the problem with being behind in the race: you cannot see what is going on at the front of the race.

// Jesper

IBM whips out its TPC-C...cluster

Jesper Frimann
Headmaster

Keb...

"....eans T2 is faster. But not to you IBM FUDers I suspect."

Now wipe your eyes and have a nice cup of tea and a piece of cake, my boy. Sorry you haven't been attending one of my classes here where I work, "Designing Solutions on UNIX"; I use 2-3 hours just explaining what terms the different vendors use and what they mean with regard to threads, CPUs, processors etc etc. You've been mixing them up since day one.

As for the T3 crushing anything, well, it'll be the desks of Solaris sysadmins, when they have to have 500-inch monitors to be able to see all the threads. The T3 is just more threads: 8 threads on each of 16 cores on one chip. That is 128 threads per CHIP.

A POWER 750 does 2,410,483 specJBB2005 on 4 sockets and 32 cores.

An Oracle T5440 does 841,380 specJBB2005 on 4 sockets and 32 cores.

So as for crushing.. what is that, 2x the throughput? Or just 1.5x?

So you are basically saying that a T3-based T5440 with 4 sockets and 32 cores will do around 3.5M-5M specJBB2005? So doubling the number of threads will give you a factor of 4-6 in throughput? You should really cut down on the psychedelic drugs.

"Ramblings about IBM is Microsoft and SUN is apple"

Listen. IBM isn't really copying anything from Niagara. The whole premise for your ramblings is wrong. Niagara and POWER7 are like night and day. The idea behind Niagara is many simpler cores with many threads, for great efficiency. It's a good concept for the types of workloads it was designed for.

POWER7 is totally different: it's 8 really, really fat cores, each core with 12 execution units.

Sure, IBM uses 4 threads on a POWER7 core and Niagara uses 8 (soon to be 16), but just because they are both using >1 thread doesn't make them the same. Niagara uses statically round-robin-scheduled fine-grained multithreading. It's terrific when used for many light independent threads, and very efficient, for that type of workload. But it sacrifices single-threaded performance, and you need to have a lot of threads that want to execute for it to be efficient.

POWER7 uses SMT, with quite a few bells and whistles. For example, the OS can fold threads (virtual processors) together, so you actually don't have to have 4 threads executing at the same time; you can also have one thread that takes up the whole processor. The processor adapts to the workload that runs on it. So at 1-8 threads per chip it's a single-threaded beast, and then as the number of threads increases up to 32 it becomes more and more of a throughput chip. Going from perhaps 800 units of work in throughput at 8 threads to 1440 units of work at 32 threads.

For the Niagara T2+ chip running the same workload, it might do something like 85 units of work at 8 threads and 425 at 64 threads. With the T3 this might change to 100 units of work at 8 threads and something like 900 at 64 threads.

This little calculation is based on the 97 SPECint_rate2006 for one T2+ chip and 330 SPECint_rate2006 for one POWER7 chip, and on various papers that suggest that going from 1 thread per core to 8 threads gives you something like a factor of 5 in throughput, not a factor of 8.
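Taking the illustrative "units of work" figures above at face value (they are the post's sketch, not measurements), the per-thread picture comes out like this:

```python
# Throughput per thread from the illustrative figures above:
# {chip: [(threads, total units of work), ...]}
points = {
    "POWER7": [(8, 800), (32, 1440)],
    "T2+":    [(8, 85), (64, 425)],
    "T3":     [(8, 100), (64, 900)],
}

# Divide total throughput by thread count to see how fat each thread is.
per_thread = {
    chip: [(t, total / t) for t, total in data]
    for chip, data in points.items()
}

for chip, data in per_thread.items():
    for threads, upt in data:
        print(f"{chip}: {upt:.1f} units/thread at {threads} threads")
```

The POWER7 thread stays roughly an order of magnitude fatter at low thread counts, which is the single-threaded-beast point being made.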

This is a huge difference, and if you cannot see that then well... your loss.

"And all this false IBM marketing: one Mainframe can consolidate 1500 x86 servers - if they all idle. "

The problem with servers is that most of them are idle... With a click of a mouse I have access to utilization data for thousands upon thousands of servers... And guess what: stand-alone Wintel servers do have lousy utilization, often between 1-5%. Now, how you then choose to consolidate, and where, is your own business. The premises for their calculations are IMHO valid. I've personally run consolidation projects where we consolidated hundreds of Wintel servers running Oracle DBs onto a single POWER server. HUGE savings.. HUGE savings.. And the fun part is when you talk with Wintel sysadmins: when they talk about utilization, it's peak utilization... so "My server is 50% utilized" normally means that at peak times it uses 50%.

// jesper

Jesper Frimann
Thumb Down

Fail

This has no merit whatsoever; he failed to read what was actually going on.

// jesper

Jesper Frimann
Linux

BLEH

Keb. There is a big difference between being fast and delivering throughput.

"just sick reasoning. It reminds of when Jesper Frimann explained to me, that even though you need four POWER6 at 5GHz to match two Nehalem at 2.93GHz in TPC-C, the POWER6 is the faster cpu. Because "the POWER6 core is faster". Just sick reasoning."

That is because you have never understood what I said, either deliberately or due to lack of... well.. understanding. The problem is that people use the term "fast" as if it were a one-dimensional label. It isn't.

On the other hand, a term like "total chip throughput" is a relatively well defined metric. And there you are quite right: the T2(+) is "faster" than POWER6.

"Like when IBM talked about few high clocked cores is the way forward and downtalked many lower clocked cores as Niagara has, and now with POWER7 the way forward is many lower clocked cores. Ridiculous. But I dont expect you to see that."

Eh? A POWER7 core is faster than a POWER6 core (it delivers more single-threaded throughput), a POWER7 core also delivers more throughput than a POWER6 core, and finally a POWER7 chip also delivers more throughput than a POWER6 chip. So what is your problem? IBM managed to do all three things: increase speed, increase core throughput, and increase chip throughput.

And as for your wet dreams about Great SPARC Machines.. well, there is no SUN anymore, there is only Oracle. It isn't even Oracle who makes the machines, it's Fujitsu. They don't even fabricate the CPUs anymore; that is outsourced to TSMC.

// Jesper says BLEH.

Jesper Frimann
Headmaster

Hmm..

First, Keb: POWER now has the fastest TPC-C benchmarks in both the clustered and non-clustered categories. Period, so stop your ramblings.

Now, this benchmark is IMHO terrible, and I cannot understand why it was even made. It doesn't make POWER look good IMHO, only in the eyes of the unenlightened, because they beat an equally pointless benchmark made by Oracle. That is my take on this.

Now, the T3 will get its butt kicked by POWER7. The T3 boxes will still be more expensive, both in procurement and in price/performance. IMHO adding even more slow threads will only get you so far.

Now, your "who won" is absolute bull. The only Oracle Niagara T2 TPC-C benchmark made is a clustered TPC-C benchmark. For single system image, the POWER6 power 595 benchmark is still king.

Now, for a 64 TB machine with 16,384 threads.. yeah.. right.. sure, Oracle might make a cluster, slap a single serial number on it and call it a machine. But hey, that still makes it a cluster.

// jesper

AIX 7.1 moves forward to Power7 iron

Jesper Frimann
Headmaster

Well

Hi Billl

A few minor errors/things I don't agree with, and that haven't been addressed by others.

6) Yup, the Starfire was a great machine. You might say it was too successful, as the SPARC boxes of today are too much like the Starfire; they haven't evolved as fast as other designs perhaps have.

I know that many today think that the real nice SMP POWER box was the p690. But the real beauty was the S80/S85, which were purebred SMP boxes. The latest versions had 24 cores and 48 threads running at the same time, so they got closer to the 64 of the 10K than people remember.

8) The partitioning on the p690 wasn't hardware partitioning, it was logical partitioning in firmware; it is basically what has evolved into the hypervisor of the POWER7 hardware of today. But the real big jump came with POWER5 and its hypervisor in 2004; that was quite a jump. And all too many people today still don't understand how to use it properly.

9) The first POWER machine to implement threading was the RS64-IV, which had an implementation of coarse-grained multithreading called HMT. Much like the later SPARC64 and Itanium Montecito.

POWER4 didn't have any multithreading, unless you choose to use the SUN term CMT, Chip MultiThreading, which basically means that you have more than one core per chip.

The first POWER processor to implement SMT was POWER5; in POWER6 multithreading was enhanced, allowing the core to issue instructions to two different threads at the same time; in POWER7, 4-way SMT was added.

10) Well, they didn't get dynamic hardware allocation quite right? Sure, it's hard to wrestle memory away from a machine that wants to use it :)= But otherwise it is IMHO the most dynamic platform when it comes to virtualization. You can actually turn SMT on and off on the fly, and adding and removing virtual processors, adapters etc. takes seconds.

Now, I think the whole "virtualization adds overhead" thing is a misunderstanding. If I carve my machine (be it a T5440 or a p690 or an SD, using domains, LPARs or n/vPars) up into static slices, I will lose all the idle capacity in all the partitions. When I use virtualization I get access to it; hence this often means I get access to the 60-80% idle machine time that a partitioned UNIX machine often has.
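That carve-vs-share point can be sketched with numbers. These figures are hypothetical (mine), chosen to be in line with the 60-80% idle time mentioned:

```python
# Static slicing vs. shared-processor virtualization on one machine.
cores = 32
partitions = 8
avg_busy = 0.25            # each static slice busy 25% of the time (75% idle)

# Carved up: a workload is hard-capped at its own slice, no matter
# how idle the neighbouring slices are.
slice_cap = cores / partitions

# Shared pool: any partition can soak up the cycles the others leave idle,
# up to the whole machine.
idle_capacity = cores * (1 - avg_busy)

print(slice_cap)           # 4.0 cores, the hard cap per workload
print(idle_capacity)       # 24.0 cores' worth of otherwise-unreachable capacity
```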

11) The G and especially the J series were terrible. I remember once, at a place where I worked, we had a High Node (J50) with 4 cores running as a DB server; the failover machine was a Thin Node P2SC 160MHz. In a cluster test one of the UNIX guys forgot to fail back to the High Node, and the next day the people who used the machine complained that their queries were broken because they finished too fast. The Thin Node was just much faster at single-threaded workloads.

12) AIX has always been good at compatibility; the only real issue was the 4.3 -> 5.1 64-bit code. But that really was a non-issue, as nobody really used 64-bit executables. Well, besides Oracle.. and it was actually more an Oracle issue, as they didn't provide a 64-bit version of Oracle 8.1.7 that could run on AIX 5.1.

// jesper

Jesper Frimann
Troll

AIX and Oracle

".. and it meant for example, you could not move your Oracle 8i database running on an p680 on AIX 4.3 over to a p690 on AIX 5.1 without also moving from Oracle 8i to Oracle 9i. "

Yes, there was a change in the 64-bit executable format from 4.3.3 to 5.1. But hey, that was 10 years ago, and to be honest the number of 64-bit executables back then was... well... basically Oracle. So it's not like it was a big problem.

Now, the problem with Oracle version 8 was that Oracle never released a 64-bit 8.1.x or even 9.0.1 for AIX 5.1, only 9.2 and newer. But it was no problem taking your 32-bit Oracle 8.1.7 and moving it from AIX 4.3 to 5.1, as long as it was the 32-bit version of 8.1.7. I've seen lots of 8.1.7 installs run on AIX 5.3; I've even seen some 7.3.4 ones. It seemed that Oracle wanted to use this to force customers that ran Oracle on POWER to do an Oracle upgrade. It's always smart to make others make money for you.

With regards to Containers, Solaris simply stole the idea before AIX did :)= I do think the AIX implementation, where it's more or less an extension of the workload manager, is nice.

Now, the good thing about AIX is that all the layers in the solution stack are among the best on the market. One component might not be the best, but at least then it's second. And it's also an honest OS. It doesn't try to be a lot of things; it's simply a UNIX made to run on big RISC servers.

And that is perhaps Solaris's biggest disadvantage, that there are serious weaknesses in the stack, perhaps the biggest being Oracle. And that is a damn shame.

// jesper

Details spill on IBM's big iron Power7 servers

Jesper Frimann
Big Brother

yeah right

Yawn.. and on those you would run ? What OpenSolaris ?

Funnily enough, 2x 8-chip 'cheap' 8-way Nehalem-EX machines like for example the DL980 cost about the same as 1x 8-chip very expensive 8-way power 770.

And then there is discount, and software licenses and and.. Nahh.. POWER isn't that expensive anymore.. Why don't you try to compare the new machines against your former champion the T5XXX series from Oracle.

http://www-03.ibm.com/systems/power/hardware/710/browse_aix.html

versus

https://shop.sun.com/store/product/c7288201-e2ff-11db-8c3c-080020a9ed93?intcmp=soc

https://shop.sun.com/store/product/6bfd9efc-e300-11db-8c3c-080020a9ed93?intcmp=soc

versus

http://www-03.ibm.com/systems/power/hardware/720/browse_aix.html

https://shop.sun.com/store/product/b057c0d5-f740-11db-8c3c-080020a9ed93?intcmp=soc

versus

http://www-03.ibm.com/systems/power/hardware/740/browse_aix.html

And this is at similar socket and core count, not similar performance.

// Jesper

OpenSolaris axed by Ellison

Jesper Frimann

Only Keb can make a positive spin on this story :)

As always it takes Keb to make a positive spin on this rather sad story for the OpenSolaris community.

// Jesper

InfiniBand to outpace Ethernet's unstoppable force

Jesper Frimann

yes.. but..

Well, IMHO there are some potential pitfalls on POWER6 if you want to plug an InfiniBand adapter into the GX+ slots and use it for network traffic. On a machine like the 570, in each system unit the first GX+ bus also connects up to the P5IOC2 chip that drives the IVE and the internal PCI slots. The second slot, on the other hand, should be able to run flat out.

In the new POWER7 boxes like the 770/780, each GX++ slot is not sharing anything with the (now two) P5IOC2 chips. So each slot should be able to run flat out with a 12X DDR adapter.

Generally, the IO of the 770/780 has had a good upgrade, IMHO.

Not that the 570 was bad in any way.

// jesper

IBM preps AIX 7.1 for autumn Power7 harvest

Jesper Frimann
Unhappy

Bleh

You forget that the 32nm follow-on to Nehalem-EP, Westmere-EP, is already here. And it just about keeps up with Nehalem-EP in per-core throughput.

And if you compare Nehalem-EP with Nehalem-EX it's not like it's a huge improvement.

Let's take for example the latest SAP 2-tier benchmark.

Nehalem-EX best 4 socket result:

10450 Users@4 Chips and 32 Cores@2.26GHz

Nehalem-EP best 2 socket result

3800 Users@2 Chips and 8 Cores@2.93GHz

Westmere-EP best 2 socket result

5100 Users@2 Chips and 12 Cores@3.33GHz

To compare

Nehalem-EP -> 475 Users per core

Nehalem-EX -> 327 Users per core

Westmere-EP-> 425 Users per core

So basically The core performance of Nehalem-EX is 69% of that of Nehalem-EP

Now that isn't good.

And the Westmere-EP core performance is 89% of that of Nehalem-EP.

And that isn't good either, with 14% more GHz.

Nehalem-EP -> 1900 Users per Chip

Nehalem-EX -> 2612 Users per Chip

Westmere-EP-> 2550 Users per Chip

So a Nehalem-EX chip is 1.38 times faster than a Nehalem-EP chip.

That is an impressive 38% extra Throughput on x2 the number of cores.

And a Westmere-EP chip is 1.3 times faster than a Nehalem-EP chip.

That is an impressive 30% faster with 50% more cores and 14% more frequency.

Nehalem-EP -> 162 Users per GHz/core

Nehalem-EX -> 145 Users per GHz/core

Westmere-EP-> 128 Users per GHz/core

So basically Westmere only manages 79% of the users per GHz/core. That's not impressive; that's... well.. a Westmere-EX doesn't really look that impressive in this light.

And POWER is slowing down in development? Ease off the mushrooms, Keb.

If you compare POWER6 to POWER7, then the per-core users have increased by 23%.

The per-chip number of users has increased by 393%, and the users per GHz/core have increased by 60%.
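The arithmetic above is easy to re-derive. A small sketch (the user counts, chip/core counts and frequencies are taken from the quoted benchmark results; everything else is plain division):

```python
# Re-deriving the per-core / per-chip / per-GHz figures quoted above from the
# raw SAP 2-tier user counts. Only the (users, chips, cores, GHz) tuples come
# from the post; the derived numbers are simple ratios.
results = {
    "Nehalem-EP":  (3800,  2,  8, 2.93),
    "Nehalem-EX":  (10450, 4, 32, 2.26),
    "Westmere-EP": (5100,  2, 12, 3.33),
}

def per_core(name):
    users, _chips, cores, _ghz = results[name]
    return users / cores

def per_chip(name):
    users, chips, _cores, _ghz = results[name]
    return users / chips

def per_ghz_core(name):
    _users, _chips, _cores, ghz = results[name]
    return per_core(name) / ghz

for name in results:
    print(f"{name}: {per_core(name):.0f} users/core, "
          f"{per_chip(name):.0f} users/chip, "
          f"{per_ghz_core(name):.0f} users per GHz/core")

# Relative core performance, as used in the post (e.g. Nehalem-EX vs -EP):
print(f"EX/EP per-core ratio: {per_core('Nehalem-EX') / per_core('Nehalem-EP'):.2f}")
```

Running this reproduces the rounded figures in the post (475/327/425 users per core, and so on).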

So basically you are just putting out FUD without knowing what you are talking about.

And sorry for my harsh words but it has been a really bad day.

// Jesper

Physical vs virtual: What's your poison?

Jesper Frimann
Pirate

Yes...

Did a migration project once where the stuff was so old that we predicted a failure rate of 50% on the boot disks inside the servers; the project was, strangely enough, put on hold :)=

// Jesper

BI benchmark outs HP Superdome 2 details

Jesper Frimann

Ehh...

Actually they, or at least some of them, are pretty close in price/performance:

http://www.tpc.org/results/individual_results/IBM/IBM_780_TPCC_20100412_es.pdf

http://www.tpc.org/results/individual_results/HP/hp_DL380_TPCC_051110_ES.pdf

Comparison on price/performance lowest on this page:

http://www.ideasinternational.com/benchmark/ben020.aspx?b=eb4a0fa9-0344-487d-85ef-49539f0da8f0&f=Clust%27d%3dN

Now that is pretty close.

// Jesper

HP dons blades to scale Superdome 2

Jesper Frimann
Thumb Down

This is getting tedious 1 of 2

" yeah, happiest if you leave before the project finishes more likely!"

Well, I often do that, because I normally do a lot of putting out fires in projects, so I often leave when the problems are fixed and the project is back on schedule. But I see you have yet again brought yourself down to a level where all you can do is name-calling and insults. Again, that says more about your character, or rather lack thereof.

"Well, if you actually knew anything about hp-ux ..."

*YAWN*

"Yeah, cleaning up IBM AIX servers - what a surprise!"

The customers we have have more or less it all, so it can be everything from Windows/Linux over HPUX to Solaris, or even platforms that can no longer be bought or that you can't even get support for anywhere. Hence we fix it for the customer. Sure, make cheap shots; it is basically what you do. Take cheap shots.

"You admitted you had to send out for hp-ux resource so you obviously don't have the skills."

Again, your lack of memory and understanding of IT at a higher level is, well, not a credit to you. As I wrote, we don't have the resources, and never will, to absorb peaks in demand for people with particular skills. Hence we use subcontractors, as it's cheaper and often better than having too much excess staff. It's a balance.

"... European ones, and the biggest project I worked on was government related so probably above your paygrade."

Sure, Denmark doesn't revolve around me; it revolves around the right-wing loonies that are keeping our current government in power. And the only biggish government projects of some substance I can think of that run/ran on largish HP Itanium servers are today run by the last company I 'worked' for. And I know the guy who did those personally. I can ask him if he knows you; he is perhaps the best HPUX guy I have ever met. He btw now does AIX, mostly. Now the really big government SD projects have been migrated to AIX, and then there are the ones left that you don't have the security clearance to have worked on. So it must have been a small, insignificant project.

And as for paygrade, here in Denmark we normally measure people after skill, not some rigid obsolete paygrade system.

"As to you having met many cowboys, all I can say is you work in outsourcing, so I can't say I'm surprised you find your field so poorly skilled."

Again, your manners are rude, and you have just labeled everyone who works in the outsourcing business as poorly skilled. Again, this says more about you and your poor attitude towards other people than about your rather pathetic point, which btw isn't even true.

"I have looked, actually, seeing as I have actually used the hp blades kit, unlike you. And I know the hp designs have redundant power paths, use only discrete companents on the backplane, and can survive power issues. "

*CACKLE* That is why all the half-height blades only have one midplane connector? And have you ever in real life seen a real c-Class power backplane? Not just looking inside an empty c3000 chassis, but the actual part. It's not like there are two of them. Redundant my butt. Try an RTFM for a change. I mean, not even the midplane is redundant.. Sure, it's only a passive piece of eq., and that is often good enough, but you can have mechanical failures. Hence it's not redundant.

Have a look at figure 12 in this piece of HP documentation:

http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c00810839/c00810839.pdf

and figure 11 shows you the single connector for the half-height blades.

"You obviously just don't know anything about blades. Not surprised an IBMer and Power fanboi would want to diss them seeing as hp are the number one blades vendor."

Sure, HP is the number one blade vendor. They, or rather Compaq, have been very successful in selling people 8U x86 servers, then 6U, 4U, 2U, 1U, full-height blades and lately half-height blades... *CACKLE* I have great respect for the HPQ selling machine, and as I wrote above it has contributed to a lot of revenue streams in the outsourcing business: when companies have nearly choked on their x86 server sprawl, we have been able to come in and rescue them. Great! We owe a great deal of revenue to our business partner HP, but that doesn't mean that I myself buy their Kool-Aid. I can see it for what it is.

"So why do you have to send out for resources then? How come I was doing work hp-ux in Denmark if you're all so skilled and clever?"

Again, because sometimes we are so clever that outsourcing is actually an export business in Denmark, as we can do it smarter and cheaper than many other places. So sometimes we have so much work going on that we need outsiders to come and help.

"Your fixation with saying that the hypervisor cures all issues is just stupid, how is going to cure a power issue? "

By having a single redundant main power supply and redundant DCAs? It's called electrical isolation. And power on a POWER 595 isn't just 6 more or less stupid power supplies. No, it's actually one power supply that is fully redundant. It even has two small built-in computers that talk to the redundant power converters on each book, through a redundant network. It also talks to the redundant service processors and even the redundant HMC. So you have this whole little grid of computers that controls and recovers from power problems. It's called SPCN, the System Power Control Network.

It's a totally different approach than, for example, the SD has. It has 6 power supplies with n+1 redundancy. That is something completely different from basically having only 2 fully redundant power supplies that are redundant all the way out to the books.

The reason you are trying to make board power issues a problem on a POWER server is that it's a problem on the board-based Integrity machines you love so much. Why is it a problem? Because the OCPB on an SD board is a non-hotswap single point of failure, getting its power from 'fairly stupid' n+1 redundant power supplies.

Again, welcome to 2010. Traditional board machines are so last century.

"If the power subsystem goes on one part of an IBM system and it is not electrically isolated then it can affect other parts of the server, which means that your clever hypervisor simply dies along with all the instances running on it. "

Again, see above. And you don't get it.. the POWERVM hypervisor is not something you lose. It is not a kernel extension to a Linux kernel like VMware, or an HPUX OS instance like IVM. It's more like a shared library that you can call. So actually it is much more robust than both VMware and IVM, but again you have to RTFM to get that. Sure, it can be brought down; when you look at it on slideware it is just as non-redundant as IVM and VMware. But again it's only a 'shared library', a well protected shared library, that on POWER7 will be.. how not to break the NDA.. even more well protected.

"One power issue, you lose all the servers, got that? You can talk male bovine manure to the cows come home and no-one is going to believe that the hypervisor will survive a loss of power!"

Again.. read the above.

// Jesper

Jesper Frimann
Thumb Down

This is getting tedious 2 of 2

"What, does it run in some extra dimension where IBM magic beans keep it running? Rediculous!"

Jup, magic smoke, that is what makes it run.

But if you had actually ever bothered to read a manual or other documentation, rather than just sticking to what you know... hmm.. well, ok.. then you would know that the books in a power 595 are electrically isolated, just as the CEC drawers in the power 570/770 are, or the IO drawers. You are echoing 5-10 year old HP marketing FUD. Now go back to an old p690 and you have a point, and even more so on a machine like the S80 and the whole RS64 series.

"This is a key high-availability feature of the cell-board designs in hp Integrity, that they can lose a cell completely and it doesn't affect the others."

Again read above.

"With Power, if you have electrical problems with one processorbook then it can affect all of them."

Nope. You should stop reading all that old HP marketing FUD. It's just as non up to date as the stuff that SUN and IBM is putting out..

"And if you're sharing I/O as you often do with a hypervisor, then the loss of one I/O component can affect all your OS instances. "

*CACKLE* Now you are using the knowledge you have from IVM or VMware to project onto the POWERVM hypervisor. Just because it is like that on IVM and VMware doesn't mean it's like that on POWERVM. On POWERVM you can have physical devices as well as virtual devices on your virtual machine, no problem. You can even do multipathing where you have one physical and one virtual device. Or mirror across physical and virtual devices. You can let your physical devices be owned by a VIO server, or you can do virtualization directly in HW, or you can simply just have physical devices like on a board machine.

It's quite clear from your statements that you have no skills on POWERVM; it's so bad that you refer to it as magic beans.. come on, welcome to the year 2010.

"With hp Integrity the I/O is owned by each electrically isolated cell or npar, so a problem in one npar doesn't affect the others."

If you use nPars, yes. But that basically means that you have to buy a bloody SD just to run 8 partitions, well 7, because you have to have a spare board if something fails.. at least that's what I've seen done in RL. Now there is the real secret behind HP's money machine.

"Yeah, please do list them, otherwise we might conclude you were just talking more male bovine manure as usual.""

Books, CEC drawers, IO drawers, system clock cards etc.. I mean, the rest are just subcomponents. So no problem there.

"Apart from the fact some of it comes bundled in with the OS, it also seems to be very popular with us customers. The superior management capabilities of hp-ux is just one of the reasons it is taking the high-end deals from AIX. We seem to value an integrated stack rather than the colection of individual and often incompatible IBM tools. The first thing we do with new IBM servers is set them up to run with hp monitoring software as it just works better."

*CACKLE* Eh.. I get it.. You work in HP sales.... "taking the high-end deals from AIX": what is taking the high-end deals from AIX is the fact that you only need midrange 'AIX' to beat high-end HP and SUN, or else it's Linux on x86, and even there POWER is competitive.

And your whole integrated sw stack with all your many expensive HP products, can be done by AIX+POWERVM+Director. All just standard system components. No fancy expensive software company software.

"This coming from the outsourcer that specialises in IBM Power...."

Better that than what, to be quite frank, sounds like a HP sales drone, well perhaps even a junior technical sales support or sumthing.


"I doubt if you have worked with a company the size of ours for a start."

I couldn't care less. If there is one thing I have learned, it is that the size of the company has absolutely nothing to do with the quality of their IT infrastructure. I would actually say on the contrary. I've been involved in the IT infrastructure of quite a few of the WW Fortune 100 companies, and their IT infrastructure sucked majorly. And much of their staff were underskilled and overpaid. And they had a megalomaniac air to them: 'Cause we work for company XXX which is in the top 10 on the Fortune 100, what we are doing is right.' Nahh.. IT is still 'just' there to support the real business of the companies. That is something that one should remember and be humble about.

"let alone that I can comfortably predict you never will. For a start, I do mission critical work, so out of your league already. And then, when I do look for people to work with me on projects, they actually have to know something about the technology, and not just repeat IBM FUD and quote IBM labs benchmarks."

*YAWN*, you have constantly been proven wrong. Your documentation links are to Marketing sites. Honestly... try looking in the mirror. First it's the POWER5 versus POWER6 where you quote TheRealStory.

Next it's in-order POWER6 versus OOO POWER7 and POWER5.. Have you ever in your life read a book on Itanium tuning? Try looking up what the "+DS<architecture>" option on the HP compilers means; it's not like Poulson/Montecito and the classic I2 are exactly the same. Man, you even called POWER7 a shrink of POWER5. I mean, how little knowledge can one have on the subject?

And you haven't really demonstrated any real deep mission critical knowledge.

"And if I did come work in Denmark it would probably be on an hp-ux project, which means you also wouldn't be involved as you not only don't know squat about hp tech but you already admitted you have to send out for hp-ux skills."

Well, there aren't many of those projects anymore.. EDS killed the biggest one, you know. They thought it was too costly, and they weren't able to compete if they had to use it :)= So the largest HP SuperDome customer just shifted from being an SD customer to being an x86 blade customer. Actually I met the best HP hardware technician, who used to do SDs at the last place I worked, the other day.. guess what.. he now works at IBM. *CACKLE* True story... true story..

"Put yourself back on the shelf, then, as so far you've not only been proven wrong but also avoided most of the questions."

Not really. Proven wrong is not you saying that I am wrong, with the only documentation being a HP marketing site. Only link you have come up with yet, have been to a FUD website. Honestly...

"Yeah, in the tiny little Danish IT market, where you have to send out abroad for skilled people. Nothing like being a big fish in a tiny puddle. And I often meet consultants from Denmark and other Scandinavian countries here in London, who say they moved to London not just to get jobs but because they wanted to work in challenging roles just not available back home. I guess it was just easier for you to stay at home and work on what there was in Denmark."

Well, again you can't hold your ground in the technical debate, so you just go after the man, his country, or other things that can boost your ego. I actually pity you.

// Jesper

Jesper Frimann
Thumb Down

Ehhh ?

Your statement makes no sense what so ever.

"As I already pointed out, I only referred to The Real Story website as an example of FUD and how hp have seen so much IBM FUD they created a whole website to debunk it!"

So the HP site The Real Story is FUD.

But it's FUD that is used to debunk IBM FUD ?

You are not making sense.

// Jesper

Jesper Frimann
Paris Hilton

Mister TheRealStory strikes again

"But if you insist on including them it makes Jesper's trying to compare the 1.9GHz P5 with the top-end P6 even funnier!"

Man, you are full of it. You keep insisting that a FUD/misquote on HP's TheRealStory of an IBM press release is more true than the actual press release.

No matter how you twist it, no matter how much you try to make fun of others. Your source of wisdom has been exposed as being HP's TheRealStory, which says a lot about the true level of your IT skills.

// Jesper

Jesper Frimann
Headmaster

Welcome to the year 2010.

"No, I think that some salespeople (and outsourcers) would be a lot happier .... Hey, I wonder if EDS do POCs....!"

Customers are normally very happy when I leave a project. I did a project here for a small customer, which just finished the other day, where they ended up with x2-x3 the capacity without paying a single red dime more per month; all they had to do was buy more RAM. Very simple: I just redesigned the solution, exploiting the capabilities of the system.

All well documented, with design document changes etc. etc. Sure, they wanted to see a test first, so we made the changes to their development environment first. And when they then rolled out their whole new SAP release and increased the number of users by a factor of x3, then one change without any downtime and they were able to run x3 the users, simply by increasing the utilization of the hardware, by using overprovisioning.

"Yeah, I know, I've done contract work in Denmark ... I expect EDS is going to be happilly meeting and beating you in many deals to come!"

Well, I don't recall having met you, but then again I've bumped into so many 'IT cowboys' in my time that only the real sharp ones have made an impression.

As for EDS, they have tried to pick me up several times; sure, the money offered was good. But they don't have an operation here in Denmark, and I don't want to have to go to Germany or India to do projects. I work where I work because I can go and talk to people in person, the people we have here are highly skilled, and because I want to be with my family.

"Worked on hp-ux long before AIX, and CUOD support was not abvailable with HACMP until November 2008 (well, announced then, I'm told it wasn't actually working until a lot later). Your feature sell just failed, try again!"

Again.. you simply don't get it: the hypervisor will mask that for you. HACMP will never know; CUoD will be done to the shared pool. Man.. it's like explaining colors to a guy who sees everything in black and white.

Sure, if you ran AIX on bare metal on POWER back when that was done, you had the problem you are talking about. But honestly, who in their right mind has done that for years? Come on, keep up with the technological development. You sound like the old mainframers here who keep talking about punch cards.

"Less than five minutes on swinstall, and then can be run by single line commands or via SAM or the web interface SMH. You can even do the swinstall work via SMH."

*CACKLE* Again the 'IT cowboy' favorite remark, "Less than five minutes on ..". Again, you don't get it. Now that is also one of the things we do, and actually make quite good money on: cleaning up people's systems after they have been run by 'less than five minutes' consultant cowboys. You would be surprised how many customers come to us with their systems, saying please help us clean up. Usually after they have had a crash, and have serious trouble recovering because nothing is documented, because too many "less than five minutes" cowboys have been on the system making their expert recommendations.

Do you know what ITIL is? Change management perhaps? CMDB?

"No, you completely failed to explain any ... the IBM method, one power issue affects everything in the server. "

I have explained, it is you who don't get it.

Let me try again .. the hypervisor makes an abstraction layer between the physical machine and the virtual machines, so you don't know what physical processors your virtual machine is running on, and you don't know what memory modules it uses.

All the nasty hardware stuff is hidden by the hypervisor and the IO by the VIO servers.

So a hardware failure, like for example a processor failure, will not have an impact on my virtual machines at all, if I set things up right. Now if I, on the other hand, do max out the sum of entitled capacity, or for example have to deallocate a whole processor chip, so the hypervisor cannot fulfil its entitled capacity guarantees, then the hypervisor will take the least important virtual machine and give it the hammer, hence keeping production systems running without any danger to SLAs or customers' data.

And what you don't get is that I run perhaps 10-30 virtual machines using somewhere between 40-50 processors on a machine with 16 physical cores, like the power 570. On our old POWER5/5+ we are perhaps running 30-60 virtual machines, using 120 processors' worth of CPU power.

And you seem to forget that POWER hardware has even more hot-swappable and redundant parts than your favorite HP Itanium servers; I mean, a POWER7 box like the power 780 can even hot-swap the system clock card.
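A toy model of the failure handling described above, in pseudocode-ish Python (the names, numbers and the idea of an explicit "importance" field are my own illustration of the behaviour, not the actual PowerVM internals):

```python
# Toy model of the behaviour described above: each virtual machine has an
# entitled capacity (a guaranteed share of physical cores) and an importance.
# If a processor failure leaves the frame unable to honour the sum of the
# guarantees, the least important virtual machine "gets the hammer".
# All names and numbers here are illustrative, not real PowerVM internals.

def rebalance(vms, usable_cores):
    """vms: list of (name, entitlement, importance); higher importance wins.
    Returns the VMs still running once the guarantees fit in usable_cores."""
    running = sorted(vms, key=lambda v: v[2])      # least important first
    while running and sum(e for _, e, _ in running) > usable_cores:
        name, _, _ = running.pop(0)                # stop the least important VM
        print(f"stopping {name} so the remaining guarantees can be honoured")
    return running

vms = [("test", 4.0, 1), ("dev", 4.0, 5), ("prod-db", 8.0, 10)]
# 16-core frame, entitlements summing to 16: fine until a core is deallocated.
survivors = rebalance(vms, usable_cores=15)        # one failed core -> 15 usable
```

With one core gone, only the "test" VM is stopped; the production guarantees are untouched, which is the whole point of the mechanism.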

"And seeing as IBM love chucking non-redundant and non-discrete components all over their designs (remember those old IBM blade backplanes?), it is a problem just waiting to strike."

Ehh.. have you looked inside at HP blade chassis backplane recently ? Blades are generally crap, no matter the vendor.

"Not surprised you didn't know that. After all, you live in Denmark and have to send out for consultants with hp knowledge. And Sun, and IBM. Actually, do you have any skills in Denmark?"

Actually I am so fortunate that most of my Unix sysadmins have at least a bachelor's degree in CS, and not one of those that you buy over the internet. Again, one of the reasons I like to work here. People actually have skills and know what they are talking about. It's not all hot air.

"Not the same as what the combination of IVM, PRM and WLM can offer, which allows you to avoid having to move workloads between servers. And they all work and integrate together with hp's monitoring and reporting software, not like the hodgepodge of IBM tools."

*CACKLE* Yeah, right. Just buy the whole HP infrastructure solution pack.. yeah.. Let's see.. we need to architect, set up and manage.. vPars, nPars, TiCAP, IVM, PRM and WLM, to accomplish what the hypervisor does out of the box. Sure, you also have to architect, set up and manage the hypervisor, and to be honest the hypervisor doesn't yet do cross-hardware workload management.

Ok, I get it now.. you like the HP Itanium solution because it is your meal ticket; if it wasn't this complex you would have less work. Now that is what you don't like about an AIX/POWERVM/POWER solution: it's simple and easy to manage.

With regards to the whole quote from HP's real story.

The POWER releases, counting from POWER4 and forward, work like the Intel x86 tick-tock release strategy. So I couldn't care less what you say. You are just plain... wrong.

Amazing.. You really will go to great lengths to defend an HP marketing site.. amazing.. no wonder people don't like you here. But again your arguments are flawed and, to be honest, not in connection with reality. I am actually a bit shocked. It's very simple: the HP marketing site is wrong. Nobody besides you and HP.. well, SUN marketing.. counts POWERx+ processors as a new generation. Even Wikipedia got it right: http://en.wikipedia.org/wiki/IBM_POWER

And simply just repeating something that is wrong doesn't make it right.

"LOL!! You even try and compare the oldest and slowest P5 chips to the fastest P6 to try and make the P6 look good! That's just dishonest."

No, it is what is mentioned in the IBM press release. The first released POWER6 was the 4.7GHz in the power 570, the fastest POWER5 was the 1.9GHz POWER5. And I even picked the same physical machine. You can twist and turn. But you are wrong, but hey you have also clearly demonstrated that you will continue to push something you clearly can see is wrong just for the .. well.. whatever makes you tick.

"So far you haven't debunked anything, just shown us all why outsourcers and other vendor puppets shouldn't be trusted."

Well we all know who is the vendor puppet here. The only quotes you can make is from HP marketing sites. It is all you have besides denying clear facts. It is in fact rather pathetic. Matt.. people are laughing at you, not with you.

"I predict an upturn in the demand for POCs in Denmark if you carry on posting your "debunkings". I can spare a few weeks this year if you need someone with actual tech knowledge to come over to do the POCs for you."

Well, I love POCs; I've done a lot of those these last 20 years, in the role of everything from tech guy over architect to technical project manager. But I've cut down on those. Being a family man now doesn't really allow me to spend 1-2 months somewhere in the US, Ireland or even in other parts of Denmark 3-4 times a year.

And no thanx, I don't like using 'cowboys' for POCs. I know your type, worked with many of them; you might even say I was one myself 15-20 years ago. But today I am the wiser man. And I have no use for people who can't/won't admit they are wrong. Nothing wrong with making errors, we all do. But I ran a POC as the technically responsible person some 7 years ago, where all of the approx. 20 people I got to execute the project were cowboys, from all over the world. 80% of them had nowhere near the skills they should have had, and had been brought in for, and half of them would not tell the whole story when they had a delivery, and would deliver undocumented. So I and a colleague ended up having to do most of the work ourselves; I ended up having to be an Oracle, SAP Basis, C++ developer, AIX, Alpha/POWER, EMC DMX specialist, and had to write the whole POC report myself. After that I have never used cowboys, unless I either know them or can verify their skills.

I like to win, and do 89-90% of the time, cause I am very good at what I do. But also cause I pick the right tools and the right people to work with.

// Jesper

Jesper Frimann
Headmaster

So the proof is here: you get your claims from hp's The Real Story.

"Really? So you missed the bit where hp has sold five times as many Integrity servers with Linux as IBM has iSeries then?"

Eh ? links please ?

"And you benched both bits of kit.... Oh, you haven't! So, who's talking sh*t?"

Oh I forgot you won't believe any sizing tools, benchmarks, it's all a conspiracy :)=

"Then again, actually thinking about it might do you some harm, best if you just keep those IBM blinkers on and assume everyone needs the equivalent of a year of IBM GS to get a solution implemented."

*CACKLE* You forget where I work. I work in the outsourcing business in a little country; we don't always have the manpower to handle the peak requirements for AIX skills in my department. Nor do we have enough Oracle or DB2 skills to handle peak requirements. For me it's great that I can call IBM ITS and get a few skilled AIX consultants. It is something that makes it easier for me to rely on POWER, rather than having to go and find Solaris or HPUX consultants on the grey market. Sure, this might be a purely small-country problem, that I can't get people from HP or SUN here. But it is a problem for me.

"Oh I do, but I also understand the limitations of sharing hardware between OS instances, especially in mission critical roles."

Eh, what are you talking about? There are virtually no limits on POWER for this.

"...TiCAP you don't even notice as the server simply deactivates the bad core or CPU and replaces it with an inactive one." and

"Fantastic! And how many years after hp-ux on Integrity had that capability? Three? Four? "

So you are talking about traditional deactivation of CPUs that suffer from soft errors, something that both HP and IBM servers have been able to do for many years, 10+ AFAIR. Where they have deallocated the failing CPU (based on thresholds of LPMC/soft errors) and then configured in a TiCAP or CUoD CPU. Works quite well on both platforms.

But again, you don't understand what a hypervisor can do for you. There are serious limitations on using TiCap to replace CPUs with soft errors, and the same goes for CUoD processors on POWER, if used in the traditional way.

You do have to have spare TiCap capacity for each partition on your machine, as you are surely using electrical isolation; so at worst that's a socket on each board, if you are using nPars?

Furthermore TiCap is only rx7640 or larger.

What about the partitions that have only one CPU (only a Monarch/CPU 0/boot processor)?

What about stuff that is not LPMC (in practice, detectable parity errors) but HPMC errors? A current POWER6 or POWER7 processor is able not only to retry a failed instruction but also to move it to another processor, where it can then execute without errors. Sure, this is not lockstepping, but on the other hand it doesn't cost you half the processors in your system, and it is available on UNIX. Not only on Tandem.

And you also have to install and manage the TiCap software.

And what if you don't have TiCap capacity, and you have a physical core that fails in a production system......

Now, on a POWER system, what you would do is simply make sure that the sum of entitled capacity for all virtual machines was the number of physical processors minus one. This would have virtually no impact on your capacity, because the one processor could still be used; the system just wouldn't guarantee it to you. If you then have a core that fails, you wouldn't even notice it. No action would have to be taken besides a hot hardware replacement, if your system is a power 770 or higher.

And if you've been smart on, for example, a power 770, then you have a VIO server in each CEC drawer. Hence, when you have to replace a CPU card in a CEC, and the sum of your entitled capacity is less than 75% of the machine, you simply power down the VIO server in question, quiesce the CEC drawer, replace the card, and power things up again.

Now, if the sum of your entitled capacity equals the sum of your processors, or say a whole CEC drawer needs to be deallocated, then the hypervisor is clever enough to look at the priority of your systems: it will kill off your test, development, whatever systems to try to keep your production running. Now that is pretty clever.

"In fact, seeing as hp-ux can handle it much better through tools like PRM and GWLM, which have also been available for years, I'd have to say AIX7.x and P7 still won't have caught up."

Again, it seems like you haven't really looked at what has happened with POWER since POWER4. I can move my workloads between physical machines on POWER6/7 without actual downtime.

I don't need the whole Tivoli package from IBM to do that. At the last place I 'worked', I did an analysis of all the POWER gear, and it could actually run inside 2 POWER6 power 595s; if we had migrated all the SUN and HP stuff too, then the whole UNIX workload could have run inside 4-6 physical machines. You don't need GWLM or PRM for such an environment; it's all built into the systems... But as usual, local management panicked, because it would be 'too' effective and they would lose power... but that is how the corporate battlefield works. And now I do the same for the competition... *grin*

So sure, if you want to pay HP or IBM for PRM/GWLM/Tivoli/whatever, feel free. And where did you work? I would love to give your company an outsourcing offer... you seem to be spending a LOT of money on IT.

"Remember how IBM was telling customers that they would see "more than twice the performance" because the clock speed had gone up by 2.2 times? Yeah, right! Of course, rPerf let the cat out of the bag when it showed the actual gain was more like 40%! Just for you, Jesper, seeing as you do like your FUD, take a trip over to hp's Elmer site and look at the following:

http://h71028.www7.hp.com/enterprise/us/en/messaging/realstory-ibm-power6.html"

So your great trump card is 'The Real Story' from HP. *CACKLE*

Let's debunk their/your claim by quoting the HP site:

"Fact 2: IBM’s commercial performance metric for POWER (System p) servers (rPerf ) show that IBM’s POWER6 has not delivered anywhere near two-times (2X) performance per core even though the frequency has more than doubled.

Look at the rperf for the 64 core p595 (POWER5+) and 64 core POWER 595 (POWER6). The frequency of the processor increased by 2.2X but rPerf increased just 41 percent"

Now let's see what the actual IBM press release that they are quoting says.

(http://www-03.ibm.com/press/us/en/pressrelease/21580.wss):

"At 4.7 GHz, the dual-core POWER6™ processor doubles the speed of the previous generation POWER5™ while using nearly the same amount of electricity to run and cool it. This means customers can use the new processor to either increase their performance by 100 percent or cut their power consumption virtually in half."

OK, so you've blindly quoted an HP marketing site without actually checking the facts. You do know that POWER5™ and POWER5+™ are two different things, right? The IBM announcement letter clearly states that it is POWER6 versus POWER5, NOT POWER5+. So either HP can't really read, or it's FUD. Take your pick. It's hilarious.

And just to make the point that the terms POWER5™ and POWER5+™ mean different things, and that IBM uses POWER5+™ when talking about POWER5+™, look at this announcement letter:

http://www-01.ibm.com/common/ssi/rep_ca/8/897/ENUS107-288/ENUS107288.PDF

And if you then look at the rPerf of POWER5 versus POWER6:

The POWER5 p570 with 16 processors at 1.9GHz does 59.57 rPerf, and a POWER6 power 570 with 16 processors at 4.7GHz does 134.35 rPerf; 235.54 for the p595 versus 553.01 for the power 595, both at 64 cores, etc.
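Just to spell out the arithmetic, here is a quick sanity check of those ratios; a throwaway sketch using only the published rPerf figures quoted above:

```python
# Quick check of the POWER5 -> POWER6 rPerf ratios quoted above.
# Figures are the published rPerf numbers for the 16-core p570/power 570
# and the 64-core p595/power 595.
p570_power5, p570_power6 = 59.57, 134.35   # 16 cores each
p595_power5, p595_power6 = 235.54, 553.01  # 64 cores each

print(round(p570_power6 / p570_power5, 2))  # 2.26
print(round(p595_power6 / p595_power5, 2))  # 2.35
```

Both well above x2, which is exactly what IBM's "doubles the speed of the previous generation POWER5" claim promised.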

*CACKLE* And funny enough, almost all your favorite preachings of bad things about POWER are listed there. So this is where you get your 'facts'.

What more do you need debunked ?

// Jesper

Jesper Frimann
Terminator

*Cackle*

"<Yawn> Please go check whom is the leading Linux server vendor and I think you'll still find it is hp by a country mile, as it has been for many years."

Ehh... Sure, HP sells a sh*tload of Linux servers on x86, but that isn't really relevant with regard to Itanium and Integrity. You are being evasive.

"Interesting that the IBMers are FUDing the 8-socket blade and Superdome2 so hard. By the way, when it IBM going to release an 8-socket Power7 blade? "

I think it's quite a feat that HP has put 8 sockets in a blade. But it still doesn't change the fact that Tukwila is slow compared to the competition. IMHO an 8-socket blade is an irrelevant product, starved of IO and memory. I mean, 384GB for an 8-socket blade server, compared to, for example, 512GB for a 4-socket power 750, says it all.

"And will run at full frequency in the new Integrity blades, not 20% speed crippled as in the IBM Power7 blades, which will have 30% speed crippled cores anyway. "

Again, who cares if the Tukwila chips run at full speed in the blades? A "20% speed crippled" POWER7 processor will still beat the living daylights out of a Tukwila processor, and have juice to spare. Your arguments are hollow, and you know it.

"All that crippling is so IBM can keep the heat and power within the low limits that the H-chassis can cope with. If you have an alternative explanation I'm sure we'd all love to hear it."

Please document, and please do not just post links to HP's the real story.

"Not surprised you IBMers want something "cool" or "pretty" seeing as you have to spend so much more time with IBM Global Screwups fixing your "solutions"."

That is pretty low. I hardly see the fact that you can buy consultants who can help you as something bad; I mean, it's not like HPUX consultants are easy to get your hands on.

".....=> Yes....Partitioning is old technology anyways...." Yes, please do go there, we'd all like to laugh if you try and compare the IBM tech to hp's Partitioning Continuum. Face it - even without vpars, hp-ux still has a massively better partitioning and virtualising solution than IBM. Please refer back to my query regarding whether Power7 will actually have real hardware partitioning with electrical isolation, I beleive you IBMers have avoided answering that one for quite a while.

You've gotta be kidding... *CACKLE*

Electrical isolation, my butt. So using only nPars, you mean that you need two whole boards to run just a single partition? You don't really understand what benefit the hypervisor brings you, do you?

When a CPU fries on an HPUX box, the OS has to handle it, which quickly turns it into an incident. On a POWER box the hypervisor handles it for you, and if you have set things up right, none of your virtual machines will be the wiser.

And HPVM is basically HPUX running inside HPUX. Sure, HP tries to pitch it against the hypervisors from SUN and IBM, but in fact it has more in common with WPARs and containers.

HP is years behind IBM and SUN.

"....=> Yes but you can also buy what you need now..." Well, you can if you do your sizing right rather than blindly believing IBM's benchmarks. Strange that rPerf doesn't reflect IBM's own benchmark FUD.

Ehhh? Can we get an example, please? You obviously have no idea what you are talking about.

// Jesper

Jesper Frimann
Terminator

SD2 versus p795 isn't really a fair match now, is it?

"By the way, since you're so upset about spec sheets, where is the IBM datasheet for the P795, the supposed competitor to Superdome2? I can't find it on the IBM webby, so please do something unusual for you and actually be useful and find it for us all. Don't worry, we won't miss you whilst you're gone."

Why would you try to compare the 16-socket SD2 with the p795? I mean, the performance figures (TPC-H) we have heard about for the SD2 look more like it needs to be compared with machines like the power 780 and the (now Nehalem-EX based) PrimeQuest 1800E.

Surely the SD2 in a 16-socket config would be no match whatsoever for the POWER 780 on performance.

// Jesper

Feeds and speeds on HP's Tukwila blades

Jesper Frimann
Headmaster

Try some history lessons

"Why is it the Danes have to always bring up the Vikings?..."

Well, I see that your knowledge of Nordic history, and thus of a big chunk of Western Europe's, is about as big as your knowledge of UNIX solutions. It clearly reflects a world seen from only one perspective, by someone sitting on a little island.

"We do a POC for every major project. I wouldn't care if you'd been supplying solutions to us for ten years, I still wouldn't go with vendor benchmarks as a guide, I'd expect you to prove your solution. Maybe we're just less guillible in the UK."

Yawn. You sure live in a perfect world. In the world I live in, customers don't want to do POCs. It is almost always me who has to insist on a 'verification test', pilot, or POC, call it what you want. Sure, I am honest enough to put the real cost to the customer, and if there is excess capacity they can usually borrow it for free. But I can't give them the man-hours for free. So they get what they pay for, but then they can also be assured that there are no hidden costs.

"I never pay for any POC, it is always at the vendor's expense. "

OK, now you are just plain naive. Do you really think that the cost of the POC isn't added to your final price, either up front or later in some way? You have never been involved in pricing contracts, I can see, or in negotiating prices with vendors. Again: welcome to the REAL WORLD (tm). Your naivety is IMHO shocking.

"If they are not willing to put their kit up on at least try-before-you-buy terms we don't buy."

Now that seems sensible. As mentioned above, I will normally lend excess capacity to our customers for tests etc., either for a symbolic price or for free. Often it's simply a matter of extending the overprovisioning factor on their current test system, which is a simple little change for me but would be quite costly for the customer. Hence it's good business for both of us.

And that is what my business is all about.. making sure that both parties make good business.

I have never understood the 'we get the vendor to bleed' attitude, and if we can get him/her to lose money then HAHAHAHAHAHAHA! You are undermining your own investment. IMHO just plain stupid.

"I see now why EDS are beating you hollow."

EDS haven't got a presence in the country. And what do you think it will cost HP in Itanium business that they themselves start to underbid their own platform? Now every time we have to go up against EDS, we have to use either POWER or x86, and try to talk the customer away from Itanium.

I just talked to a former colleague of mine, and it seems that the place where I used to work has now booted HP as the services vendor on their own hardware. The rumor is that EDS is trying to take share, and HP is paying the price, becoming a box mover only where I used to work. But I suspect that perhaps the real reason is that the last place I worked, we picked up all the 'good ones' from when EDS closed down their operation in Denmark, so there is a lot of personal bad blood in the business against EDS. Is EDS still on strike in the UK, btw?

"If you asked me to pay for a POC I wouldn't even bother telling you where to go, I'd just reject your proposal and go with the competitiors that were willing to fund a POC."

Damn, you remind me of a place I was hired into many, many years ago, true story. The business system owners were p*ssed at the IT department, because the IT department, which was responsible for selecting the infrastructure, seemed to come up with the most expensive solution every time, x5 more expensive than the competition. And when it was then implemented, they never reached the number of transactions that they had in the POCs that were run. Turned out that one vendor was very well liked by IT. They always did the POCs at the vendor's site in Southern France; good food, good wine, and the vendor was allowed to RAM the base and generally sweet-talk IT.

In comes freelancer cowboy Jesper. No POC in Southern France this time, but a serious analysis of the RFP, elimination of the 'we want this vendor' parts, and an adjustment of the political availability demands to something in line with the business needs, which changed the attitude in the project from "get the vendors to work for us" to "we need to know ourselves". Then a TCO analysis of the incoming proposals, performance calculations, and X visits to reference customers later, without the vendors present. Result: the 'other' vendor was selected, £5M in savings later. IT was furious, but the business was happy. An upgraded version of that system still runs today, but there are still people in IT there who hold a grudge, because I took away their expensive toys.

So POCs where you pitch one vendor against another are not such a golden solution as you think.

"Especially not ones that poke holes in IBM FUD - wouldn't want the customers to see those, would you!"

What a rant. And btw, you have only been able to find one benchmark that partially pokes a tiny hole, so what's up with the plural form?

"You did. In many posts here covering everything from Xeon through SPARC to Itanium. "

Let me teach you one thing, young man. Nothing is certain; there are only probabilities, says the former student of the Niels Bohr Institute. You do know that quantum mechanics comes from Denmark, right? I actually have a friend who is the grand-something of Niels Bohr; he btw works for IBM, in the US :)=

But it's pretty simple: when you have a vendor that sweeps all the benchmarks, be it SAP, SPECint, or Oracle app performance, then the probability is that it's also faster in the REAL WORLD. But keep up the POCs... must do great things for your project plans.

"In fact, you always are the first to start pushing selected benchmarks and claiming that they somehow PROVE that Power has better real World performance. "

Ehh... selected benchmarks? Again: POWER7 is cleaning house; it wins every damn benchmark it participates in. Sure, this is just some grand Illuminati IBM conspiracy....

Again, you don't understand that your rude manners, lack of perspective, and "it's the world against me" attitude put people at odds with you. I have no problem discussing things like Poulson release dates and digging up evidence that it will actually be a shipping product, actually supporting the person I am perhaps disagreeing with, and disproving people who claim that a particular product is foilware. Why? Because if discussions are conducted in a polite manner and a civilized tone, with the goal of enhancing the knowledge of both participants, then the road, and the knowledge gained, is more important than the end result. But hey, give it 10 years and you might come to the same conclusion, though I have my doubts that you will see the light.

"Read my posts. I work with Integrity and pSeries, mainframe, Wintel and Lintel and some SPARC still. Don't try and pretend you have a better view of the World than anyone else because I am sure there are readers here with twice as much experience as both of us put together, and they and I are working in live environments..."

Well, I haven't really seen you display any skills whatsoever on POWER, or on what a solution stack on that platform is able to do. Again, all you can do when it's not HPUX or Itanium is link to HP's TheRealStory. Hardly a sign of skills. I've had UK cowboy body-shop consultants in who claimed to have 10 years of Oracle experience and couldn't even start up a database, but funny enough, when I get people in from Norway, Sweden or Denmark, they can actually deliver what their CV claims they can do. And I am pretty sure that I have a longer track record with actual Itanium hardware than you have :)

But there are always people better than you. I've learned my lesson, back when I thought I was the master of the world; then again, I've been debating on the internet for 20+ years, and my "I know everything and flame everyone" period was over 18 years ago. But I also know that I've tried more than most. I've been in enough tough, critical real-life IT situations. I've seen project managers running to the toilet to throw up. I've seen DBAs who had to rush to the toilet so as not to soil themselves. I've even had people having to pop nitro due to chest pains, whom we had to ship to the hospital because they couldn't handle the situation. Not where I work now, because things are more structured here, but other places, I have.

All the facts on IBM software .. bla bla second-hand car salesman.. bla bla..

Damn. You sound just like Barnes in Platoon. "Now, I got no fight... with any man who does what he's told. But when he don't, the POC breaks down. And when the POC breaks down, we break down. And I ain't gonna allow that... in any of you.".. Cause the POC is my meal ticket, and I like it.

And I don't do sales; well, I keep my clients happy so that they will buy more... hey, wait... And the last second-hand car I bought was actually from a nice guy, my wife's sister's husband: a good old design classic, a Citroen XM 2.4 TD estate.

// Jesper

Jesper Frimann
Thumb Down

Matt Bryant now with insults to the Vikings.

"I suspect it's actually that you don't WANT to understand, or that you don't want any readers to understand. After all, it would be so much easier for you if no-one asked for POCs but just took your word on the back of some IBM labs benchmarks...."

Well, we do POCs big time when customers are new to us. It's only fair that they want to be assured of our choice of platform and that we can execute it. And the cost of the POC lands on their shoulders, one way or another. So actually, POCs make us more money.

"Well, I suppose IBM can at least hope there really is one born every minute - probably more than that in Denmark."

OK, so now you are so far out that you need to call people in Denmark idiots. That says a great deal more about you than about people in Denmark :) You are actually not being a very nice person.

"And then you waste a whole lot of airtime getting all hot under the collar over one vendor benchmark I posted to show you P6 or P7 don't win every time. Chillax! OK, I did find it very amusing, but for your own good you really need to calm down. "

No, it is you who is off the mark, just because you finally found a benchmark that gives you a tiny bit of leverage. And because you don't read it, but rather just quote HP marketing sites, you don't discover that what the benchmark also shows is that a Montecito-based SD needed 128 cores and 512GB RAM on the DB side to deliver 1.5 times that of a 40-core, 192GB RAM POWER5+ server. It's not my fault that you shoot yourself in the foot. Go blame yourself, and actually look through the benchmark configuration and disclosures, rather than just quoting HP's The Real Story.

"You forgot, I also stated quite clearly that vendor benchmaks HAVE NO BEARING ON REAL WORLD PERFORMANCE! I have run POCs in Montecito Itanium kit with Java apps and got very good results, but not ones as good as hp labs did because mine were real world apps. Your total fixation with trying to disprove anything that threatens your worldview of IBM Power as the unchallenged performance king is much more revealling than any benchmark."

Whoever said that benchmark results equal real-world performance? If you had bothered reading what I have written, you would have seen that I have always said that sizing tools and methods are _based_upon_ benchmark results. Those are two wholly different things. Listen, I live in the real world; I have to work with everything from M9000/SD/p595 to old DS20s, V210s, or old SP2 nodes. You, on the other hand, seem to know only HPUX, PA-RISC and Itanium.

"Here's a big hint for your next salespitch - don't diss the opposition, just stick to explaining your proposal. Us customers will think much more of you for it. Sure, if your customer says vendor X has said some FUD then defend your kit, but don't try and do it by FUDing theirs - the customer is usually not stupid and will know there is actually very little chance of you having evs sly enough to have predicted your canned response and already fed the customer a neat counter. "He will say this" and you do! - the customer thinks your competitor is a genius and that you can only repeat IBM soundbites! Guess who wins the deal."

I never diss the competition; you never know when you might need a new job. And HP's hardware and software business, not to mention their big gadget department, are not my competitors. They are my business partners, just as Oracle is a very big business partner. And IBM's HW and SW businesses are my partners too.

And as for experience of UNIX: I grew up, UNIX-wise, on BSD, SunOS and HPUX v2-8. We had a 340 with an M68040 @ 40MHz as the machine that could go on the internet. But the real darling was Embla, a 735 with a whopping 64MB of RAM, featuring a PA-RISC processor. WOOO! And then also a farm of 320s. So I was a big fan of HP hardware for many years.

For me personally, it changed with Itanium. I am tired of having to explain to my customers why the HP/Intel roadmaps we presented to them, as part of the agreed development plans for their infrastructure, don't hold water.

".....Software sales people is what some of the inner circles of hell is reserved for...." I'm not surprised an IBMer would say that. After all, for years it's been a fact that IBM software sell more software on hp hardware than IBM's own, something that really p*sses off the IBM hardware guys! Many a time IBM Software resisted calls from IBM Hardware to bump up the license costs on PA-RISC and Itanium compared to Power, and many times IBM Software refused because it made more market sense to keep them even. Nothing like your own software division admitting their marketshare is dependent on a competitor's kit!"

Have you ever even met a software sales guy? He will try to convince you to run your apps on scale-out SPARC64 Solaris boxes. Why? Because he'll make more money that way. I've heard that story from both Oracle and IBM software sales people, time and time again. I've been in on deals, at the last place I worked, where the whole outsourcing deal paid for itself in savings on software licenses.

// Jesper

Jesper Frimann
Headmaster

More stuff from Matt's place of wisdom.

Again, I don't get it: you don't want to look at vendor benchmarks... unless they come from HP's 'The Real Story'?

You do know what this comparison is all about: an IBM Software Group "how to use a lot of cheap hardware to generate software revenue for us" setup, versus a "get the most you can out of the hardware by any means, because prices aren't listed in the benchmark submission" setup by HP.

Software sales people is what some of the inner circles of hell is reserved for.

If you look at the actual benchmark disclosure reports (rather than just quoting marketing bull)

http://www.spec.org/osg/jAppServer2004/results/res2008q4/jAppServer2004-20081202-00125.html

and

http://www.spec.org/osg/jAppServer2004/results/res2008q1/jAppServer2004-20080115-00098.html

Each of the js22 blades costs what... $5,779 + $559 + $1,200 for the RAM and AIX; that is $7,538 per blade. Check it out yourself on the web.

Now, the BL870c used in the HP submission costs $52,397 (from the price list I have, because it's not like HP wants you to know what their products cost).

So what is that... a factor of 2.34 price/performance advantage to the js22. So if you want to throw a lot of money at an HP solution, feel free... I don't care.

Note how HP has focused only on the blades that drive the app software, not on the database, which is also part of the benchmarked configuration (SUT hardware).

They haven't mentioned at all the fact that on the DB side of the benchmark, HP is using a 128-core Superdome with 512GB of memory to drive the database, whereas the IBM solution uses 40 cores in a p595, a 2.1GHz POWER5+ based p595 at that, with 192GB of memory.

That means that on the database side, you need 1 Itanium core to drive 169 JOPS on the app side.

Whereas you need 1 POWER5+ 2.1GHz core to drive 350 JOPS on the app side.

To put it in other words, you've just quoted a benchmark that shows that POWER5+ (notice: POWER5+, not POWER5) is more than a factor of 2 faster than Itanium on the database side.
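If you want the arithmetic spelled out, here is a small sketch using only the numbers quoted above from the two disclosure reports (the variable names are mine):

```python
# DB-tier comparison from the two jAppServer2004 disclosures quoted above:
# app-side operations driven per database-tier core.
itanium_ops_per_db_core = 169   # 128-core Montecito Superdome DB tier
power5p_ops_per_db_core = 350   # 40-core POWER5+ p595 DB tier

print(round(power5p_ops_per_db_core / itanium_ops_per_db_core, 2))  # 2.07

# And the app-tier blade prices quoted above:
js22_price = 5779 + 559 + 1200   # blade + RAM + AIX = $7,538
bl870c_price = 52397             # BL870c list price quoted above
```

So per database core, POWER5+ drives a bit more than twice what Montecito does, which is the "more than a factor of 2" above.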

I wonder how much faster it would have been if they had used a POWER6 based power 595?

Was that what you wanted to show, that Itanium can only do half of what POWER5+ does on DB loads?

Again: check your facts and do your math, rather than just quoting marketing bull.

// Jesper

IBM flashes 1.2 million TPC-C result

Jesper Frimann
Pint

and then some..

If, as rumored, it scales to 256 cores, then a fully loaded system would do 25M+ tpmC at 100K tpmC per core. But let's see how it scales.

// Jesper

Jesper Frimann
Megaphone

no

"Ok. I went to the TPC website. IBM is doing a one-server, 8 socket, 4 core/socket "turbo" 4.14 GHz server. Total is 1 server, 32 cores, 128 threads, score 1.2 million."

No, the system is a drawer system, hence there is only one drawer. Read the config.

8 processor activations, that is feature code 5469, doesn't mean 8 processors in the Intel/SPARC sense but 8 cores. So this system is an 8-core, 512GB system.

"That is a max-configured version of 780, with half of its cores turned off and raising the clock-rate."

No, you can also see it on the number of software licenses.

And honestly, a 16-core POWER6 system made 1.6 million tpmC; do you really think a 64-core POWER7 system would only do 1.2 million?

// Jesper

Jesper Frimann
Pint

two sockets..

Well perhaps it's easier just to look at the disclosure report :

http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=110041301

It's an 8-socket box with 2 sockets populated, each having a 4-core 4.14GHz POWER7.

// Jesper

Jesper Frimann
Headmaster

Hmm..

"But we will surely see systems with much more flash breaking a 10 million TPC-C barrier by the end of 2011"

Well, most likely we'll see an upgrade of the POWER6 power 595 this year, I would presume. And if it just keeps its socket count, then getting x4 the number of faster POWER7 cores would take it well beyond the 10 million mark, as the power 595 does 6 million tpmC.

So IMHO the 2011 number is a bit pessimistic.

// Jesper

HP: last Itanium man standing

Jesper Frimann
Pint

Big Ink.

HP makes a sh*tload of money on INK. That is why HP is normally referred to as BIG Ink :)=

And no, I don't buy HP ink cartridges; I buy refurbished ones, as I have an HP scanner, HP printer, HP scientific calculator....

Well who cares anyway, there is football on the TV and beer in the fridge.

// Jesper

Jesper Frimann
Jobs Halo

Those were the days

Yup, I remember having access to an HP 9000 735 back in the day; damn, it was fast. Things were different then: 30-40 X terminals running X Windows off a little HP9000 340 with 64MB RAM was no problem.

Those were the days...

// Jesper

Jesper Frimann
Pint

10 years you mean

"and I'll bet more than half will tell you it's the short support lifecycles of AIX.."

You've gotta be kidding. It's always a balance: you want your OS to move forward, but not so fast that you can't keep up. AIX versions have had a lifespan of around 10-12 years, which is IMHO OK.

And funny enough, that is exactly the same lifespan as HPUX's:

http://h20338.www2.hp.com/hpux11i/downloads/public_hp-ux_systems_support.pdf?jumpid=go/hpuxservermatrix

But IMHO AIX is better at versioning.

I mean, the number of times in my last job I had to deal with stupid project managers and architects who panicked because their software stack didn't work, because "the support matrix said HPUX 11i", and you had to dig through manuals and forums just to find out that the supported versions were 11v1 and...

And while you might complain that AIX is evolving too fast, then yes, it is IMHO perhaps going a bit too fast, but that is also due to the hardware and the virtualization.

I mean, back in 2H 2001, a 2-socket Dell Itanium workstation with an 800MHz Itanium processor had a base score of 7.15 on SPECint2000_rate, versus 7.2 for a 2-socket 450MHz POWER3 IBM workstation. That was pretty close, back then. Today the brand new Tukwila will do 134 SPECint_rate2006 on 2 sockets, and POWER7 will do 652, also on 2 sockets. That is a factor of almost 5 in 9 years that POWER has taken the lead by... so...
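For what it's worth, the "factor of almost 5" follows directly from those four scores. A quick sketch using only the numbers above; note that SPECint2000_rate and SPECint_rate2006 are different suites, so only the Itanium-to-POWER ratio within each year is being compared:

```python
# Relative drift between Itanium and POWER on 2-socket SPECint rate scores.
# 2001 scores are SPECint2000_rate; 2010 scores are SPECint_rate2006,
# so we compare the POWER-to-Itanium ratio within each year only.
ratio_2001 = 7.2 / 7.15    # POWER3 vs Itanium: ~1.01, a dead heat
ratio_2010 = 652 / 134     # POWER7 vs Tukwila: ~4.87

print(round(ratio_2010 / ratio_2001, 1))  # 4.8
```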

Where I work we are slowly starting to move from 5.3 -> 6.1

But I still, once in a while, struggle with customers who are running 4.3 or 5.1, and then it's migration time.

// Jesper

IBM sharpens Power7 blades

Jesper Frimann
Alien

Boxer is a Swedish TV company, and they are also too expensive..

Cool it Keb.

As for what is fair game to compare with regards to systems.

IMHO, if you are shipping a product (and it is not a legacy product), or have announced prices/benchmarks etc. for a product/system, then it is fair game to compare against it.

And what do I mean by a legacy product? Well, IMHO it wouldn't be fair to compare Nehalem-EX, Westmere-EP, or POWER7 based products to, for example, the SUN Netra T2000, which is still a product you can buy from Oracle. Then there is the T5120; that wouldn't really be fair either. You could argue that on price/performance it might be fair, but otherwise no.

But it is IMHO fair game to compare with T2+ products, and your insistence on comparing with a product (T3) which isn't here, and about which we have only very little information, is not serious.

And you have NO problem WHATSOEVER with comparing to T2+ products when it suits you, like the SUN T5440 cluster TPC-C benchmark. Sooo...

Now, comparing against legacy products and mismatched systems, that is what the FUD machines of the big server vendors do: cherry-picking, trying to find weaknesses, etc. That is why you should always check the context of a comparison on a vendor site. And this goes for all vendors; that is why I constantly say to you that when you pick case studies, or link to comparisons on vendor sites, this is not the way to do it.

I have no problems with vendor sites that list their own benchmarks, no problem, but when they start to compare and glorify their own products I say no. And because you have in the past mainly picked from sources like that, I use terms like 'cherrypicked by SUN' or the like. If you had done the same from an IBM or HP site I would have used just the same terms.

Don't blame it on me that you can't tell marketing from facts.

But comparing T2+ systems against current x86 systems and POWER and Itanium is totally fair. You, on the other hand, with your hand waving about T3, and that it's another generation than the T2+, and that it is only fair to compare equal generations.. bla bla..

T2+, SPARC64 VII, Nehalem-EX, Westmere-EP, Itanium Montvale, Tukwila and POWER7/POWER6 are shipping products. And fair game.

Sure, when we start to get benchmarks on T3 then it's also fair enough to compare with that. But we haven't yet. And with the whole ROCK history of SUN, I think it's only fair that we get some semi-hard facts on the table before you start to go into a religious frenzy of worship.

I mean it's just like hearing the people who really like Itanium, who are already preaching Poulson.. I mean come on, Tukwila isn't even shipping yet.

And as for your whole "compare POWER6 to Nehalem-EP and claiming that the POWER6 chip is faster than Nehalem-EP", I'd like a link to where I've said that.

It seems to me you often forget that things are different seen in different contexts.

For example which CPU is the fastest Database platform ?

Well, it sure depends on the database. Neither T2+ nor POWER will run SQL Server, so it will be a match between Itanium and x86.

Which CPU is the fastest ?

It depends. Is that seen in the context of a fully loaded Enterprise class system ?

Or is it in a cheap whitebox 1-2 socket server ?

Or Or ...

For me it is quite simple: I need single threaded throughput, core throughput and scalability, cause that is what we need where I work.

And that is the basis for my look upon the world...

// Jesper

Jesper Frimann
Headmaster

even more of the same

"Jesper, you really need to take a chill pill!"

No, I don't do drugs. But some homebrewed beer will be needed later on; it helps with the allergies.

".....Wake up.. smell the new world order....." Yeah, I think you'll find that's called x64, not Power. You've overdosed on the IBM marketing and need to really sit down and think before saying such laughable nonsense as Power7 being "the new world order".

No I am not overdosed on IBM Marketing, they are just as big dorks as SUN and HP marketing.. perhaps even a bit dorkier.

As for the x86 new world order, well, I've been hearing that for the last 10 years. It still hasn't happened. There has never been sold as much UNIX capacity as there is now. It's kind of like "the Mainframe is Dead"; in my last job we broke the company's record for buying mainframes 2 years ago, in the midst of the recession.

"Yes, but you won't get the best out of Power7 without AIX7.x."

Sure it won't. But that doesn't change the fact that what you are getting right now on AIX 6.1 is more than good. And your argument is stupid: don't buy this product cause it'll get better with time? I don't think I need to say more.

"Benchmarks and sizing"

Matt. Sizing isn't an exact science. I've been doing it for 10+ years. The problem with sizing is that you have to know what you are doing to get a pretty decent result. And normally no platform will deliver what a benchmark promises. But you can't run vendor benchmarks all the time; or perhaps your company can. Fine with me, if it makes you happy then it makes you happy.

I've done a shitload of benchmarks as a consultant. I've done EMC benchmarks in Cork to show the 'flight envelope' of the DMX 2000, I've benchmarked some of the very first x86-64 machines versus SGI versus p595. I have even benchmarked code on some of the whitebox Merced beta machines from Intel.

And sure, as an IT manager/CIO you get a CMA report that you can use to say "I did everything I could, it isn't my fault". But that said, I've never been in a situation where we/I didn't hit the expected result within +/-10-15%. Which was based upon what? A sizing.

But the rest of us don't have that luxury. We don't have 3 months for benchmarking, so we actually have to sit down and try to understand how things work; this is how it is at most companies. And besides, 80% of all sizing is upgrades anyway, hence you have a source system and a target system, which makes things much easier. But real new-workload sizing hasn't become easier. Today some of the processor architectures have what the IT press has called accelerators. If your application can utilize those then you get the 'benchmark' performance; if it can't, you don't.

Does POWER7 have accelerators that you need to be aware of ?

Well according to you, No. Cause POWER7 is just a shrink of POWER5 Quote "Please also try and deny that P7 pics look just like lots of P5 cores squeezed onto a die. Why "please"? Because I need a good laugh."

Well sure it has. I can understand why you need to do benchmarks, cause you don't know what you are talking about. POWER7 has several of what the IT press calls "accelerators": it has a vector execution unit that is now fused into the FPU. In POWER6 it was a separate unit. Furthermore POWER7 has a DPU, a unit that does decimal floating point math, which will make your financial software go like a rocket if the software uses it.

Now the same goes for x86, and for the Venus SPARC processor.

And btw, POWER5 has neither of those, nor does it have a recovery unit like POWER6 has, which is now worked into the individual execution units in POWER7. Now from the looks of it (haven't really seen any hard confirmation), POWER7 can issue two instruction groups per clock cycle, yes, instruction groups just like on Itanium, the same as POWER6, compared to one group on POWER5.

Statements like the above only get you put in the same box as Kebbabert.

And I must admit that it scares me that you can't accept a request on your system for additional capacity without doing a vendor benchmark. But hey, looks like your IT department is populated by BOFHs. Which is nice++; the only problem is that departments like that usually get outsourced at some point in time, cause Management wants to get rid of them. And that usually has a tragic ending; seen that way too many times.

"As I said before, you should be worried because that took five minutes work by an amateur, and..."

Yes you are right it took an amateur 5 minutes to find those, hence the validity of the claims.

And as for other vendors finding FUD, you know the problem with FUD is that if it is desperate enough it backfires. And that goes for every vendor IBM,HP,Fujitsu or whatever.

// Jesper

Jesper Frimann

The same and the same and the same

No matter how many times you keep saying the same mantras over and over again, it won't make them true. Wake up.. smell the new world order. You repeat the same and the same and the same and the same and the same things again and again. It's like you are trying to convince yourself against your better judgement.

"You need AIX version 7 that is not here to get good performance out of POWER7"

POWER7 doesn't need AIX version 7 to run fast. Sure, AIX 5.3 is not going to give you the performance that AIX 6.1 has, but it's not like that is a secret. And with the kick-butt benchmarks it does right now with AIX 6.1, where it is absolutely cleaning house, I am really looking forward to AIX version 7. If you are right then.. MAN is it going to run fast with AIX version 7... Maaannn..

"Benchmarks are irrelevant "

No matter how much you keep disbelieving benchmarks, the facts are that this is what the sizing data used to sell solutions is made upon. This is how it's done almost everywhere. But I guess where you work is different.. I can just imagine.....

HR: Hello is this IT ?

IT: Yes it is IT, what can we do for you?

HR: I would like to have 4000 Extra SAPS allocated for our HR system.

IT: Ok, it will take 3 months, then we will be finished.

HR: WHAT ? 3 Months.. but we need it in a week.

IT: Yes, but we have to get the vendors in and do benchmarks; we don't trust sizing tools or vendor benchmarks.

HR: You, gotta be kidding.

IT: No, 3 months minus one day to benchmark, and then one day to set up the system on the winning platform.

HR: Well if you say so. <CLICK>

And what is fun is that when benchmarks suit your purpose you have no problem quoting them.

Like here:

http://forums.theregister.co.uk/forum/1/2009/10/12/ellison_mcnealy_openworld/#c_600336

But wait this is even better, you really like to link to HP's case study website:

http://forums.theregister.co.uk/forum/1/2010/02/08/itanium_9300_rollout/#c_692150

http://forums.theregister.co.uk/forum/1/2010/04/05/microsoft_pulls_plug_itanium/#c_734918

Which surely is much much better than benchmarks. Yeaahh.. right. In Denmark we have an expression that goes like this:

You shouldn't throw stones when you live in a glass house.

*CACKLE* Damn, if you weren't so desperate you would be funny. Let's see what you found.

1) Single Memory controller.

That is actually kind of bull, as the blades have better per-core/per-GHz throughput than the POWER 780, which has both enabled.

2) That the memory modules were angled for better space utilization, and are special and will be more expensive...

Yeaaaahhh.. right. Again.. the POWER7 blades are much cheaper than Itanium ones so what's the problem.

3) Lack of hotswap disks.

Now that is a valid point, if you want to boot from internal disks. But hardly something worth paying many times the price/performance for. But then again, most people don't boot from internal disks today. Or they use partition migration and move the workloads to another blade when they want to service the blade.

And then some strange quotes from someone saying that POWER blades are hot.

Let me tell you, young man, that blade systems are hot. Damn hot, be it HP or IBM. You don't put lots of blade chassis in a single rack unless you have a cooling system that can deal with the heat flux. That is why the nice people at HP and IBM make really nice manuals, which you should read, about airflow and cooling capacity.

// jesper

Oracle tunes Solaris for Intel's big Xeons

Jesper Frimann
Grenade

Funny.

Have the exact opposite experience where I work.

But well Keb' you do like your AC posts that ditch POWER.

But sure, you can have performance problems.. like the one I am looking at right now.. UUUHHH.. SAP BW runs slow!? Jesper looks.. why the F*** do you only use 11 GB for SAP and Oracle when there is 60 GB of RAM in the machine?

Lesson 1 on POWER: DAMN, YOU NEED MORE RAM. Cause if you use your normal 2-4 GB per CPU core rule of thumb from other platforms, you will have CPUs idling.
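A minimal sketch of that rule of thumb, with purely illustrative numbers (the core count and the per-core GB figures below are my assumptions, not vendor guidance):

```python
# Hypothetical sizing sketch: rule-of-thumb RAM so the cores don't idle.
# The per-core figures are illustrative assumptions, not vendor numbers.
def ram_needed_gb(cores: int, gb_per_core: int) -> int:
    """Rule-of-thumb memory sizing: cores times GB per core."""
    return cores * gb_per_core

cores = 16
print(ram_needed_gb(cores, 4))   # x86-style 4 GB/core rule: 64 GB
print(ram_needed_gb(cores, 8))   # a fatter 8 GB/core allocation: 128 GB
```

The point being: size memory from the cores you intend to keep busy, not from what other platforms taught you.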

// Jesper

Jesper Frimann
Heart

yes...

Now Starfire that was a cool name for a server. The E10K rocked.

// jesper

Jesper Frimann
Jobs Halo

8 Sockets ?

You mean 4 socket Nehalem-EX, right ?

I mean Nehalem scaling isn't really that nice going above 4 sockets. Not from the data that has been released up until now.

// Jesper

IBM's Power 780 pushes the value envelope

Jesper Frimann
Pint

Jup..

I agree. As much as we love to see Oracle and IBM slug it out, and hopefully lower prices, we need benchmarks with multiple DBs on the hardware.

So lets have some Oracle on POWER7.

But if you look at it, POWER is more or less the only UNIX iron that has multiple-DB (DB2/Oracle/Sybase) submissions on TPC-C. HP and SUN are all Oracle.

// Jesper

Jesper Frimann
Thumb Up

Well what I think is nice..

is that 'one' of the UNIX vendors finally made a benchmark that shows that doing things right, and using some big *ss UNIX iron, is only slightly more expensive than buying cheap x86 iron with restricted software licenses.

// Jesper

Page: