* Posts by Jesper Frimann

478 publicly visible posts • joined 8 Oct 2008

Assange vows to drop 'insurance' files on Rupert Murdoch

Jesper Frimann

It is making a difference here in Denmark.

Actually some of the stuff released about Denmark is creating quite a stir.

The government was mandated by parliament to investigate the CIA flights going through Denmark, but communication from the American ambassador shows that the investigation wasn't actually done, on what seem to be orders from the Danish foreign minister. This could end in his impeachment. Not to mention that the current Danish government is made to look like fools in most of the telegrams. So at least the cables will play a role in toppling the current administration.

It's good that it comes out into the light when the government is breaking the law, because we have way too little transparency here. So honestly, I don't give a f-word about this being an embarrassment.

// Jesper

Oracle revisits Sparc T processor roadmap

Jesper Frimann
Coat

Cause it takes 5 years.

No FUD, Bill. I think you are being unfair. It's very easy to cry FUD rather than trying to counter arguments. And if I were spreading FUD, I wouldn't call Niagara 'genial' and 'brilliant' for the workloads it was designed for, now would I?

Bill, that is because it takes many years from when design work starts on a processor until it hits the market, and your argument just strengthens mine: when the T1 was actually used for real workloads, SUN quickly realized its shortcomings, and thus started taking the processor in another direction, the same as everyone else.

// Jesper

Jesper Frimann
Coat

Well

In what way are Tukwila, POWER7, Nehalem-EX, Magny-Cours or SPARC64 VII not CMT processors?

In what way do the same processors not implement multi-threading (if you count Magny-Cours out)?

I don't really think you understand why people say Niagara is cache starved. If used for what it was originally designed for, running many threads that all execute the same code on data with a relatively small memory footprint, then Niagara will really excel.

The whole concept of using the Niagara CoolThreads approach for, for example, running web servers is a very good idea. This is the niche that Niagara-style processors fit into.

BUT running 8 different programs, one on each of the HW threads of a single core, especially if that code uses data with a big footprint, is a terrible idea.

There is no magic here that allows Niagara to disregard the laws of physics, although a lot of posters here seem to think so.

// jesper

Jesper Frimann
Pint

RE:Bill

Jup, we sit and plan the downfall of Oracle....

Intel and IBM have been pretty consistent for many years now: beefy cores with good single-threaded throughput.

And yes, the next logical evolutionary step for the Tx processors is beefier cores, OoO execution, etc., just like Xeon and POWER. But in doing so you are basically abandoning the whole CoolThreads concept.

And if you disagree, then perhaps it's because you never really understood the CoolThreads concept.

// Jesper

Jesper Frimann
Troll

CoolThreads

Let's see... the T1, T2, T2+ and T3 all implement simple in-order SPARC cores that rely on fine-grained multi-threading, among other techniques, to hide memory latencies. Another technique is actually having more L1 and L2 cache per thread per throughput unit than, for example, POWER7.

From what I've been able to dig up on the T4, it's going to be a more complex core that implements out-of-order execution and perhaps even SMT. That sounds a lot more like Nehalem-EX and POWER7 than the original CoolThreads concept.

So you haven't even technically understood the whole CoolThreads concept, nor the beauty of the design. Because Niagara is a brilliant concept for what it was designed for. But again, the problem is that it is now peddled for workloads that it wasn't designed for.

And the power usage of the T2+ and T3 is in the same range as, for example, Nehalem-EX.

// Jesper

Jesper Frimann
Coffee/keyboard

Cache.

"Some processor architectures are impacted more than others with a small cache." Unless "other processor architectures" only refers to AMD's x86, your statement is wrong.

POWER implements SMT, with POWER7 implementing 4 way SMT.

Intel x86 implements 2 way SMT.

SPARC64 implements 2 way SMT.

Niagara-style processors have up until now implemented fine-grained multi-threading, which of course helps greatly on throughput when hardware threads encounter a cache miss.

Now, a T-series machine running the workloads it was designed for, like a single application with a fairly small code footprint and possibly many instances, or the app using many threads, will not be as affected by its small cache as it will be when running, for example, a myriad of different applications with a lot of updates.

// jesper

Jesper Frimann
Grenade

So ...

Basically admitting that the CoolThreads concept was wrong.

// Jesper

Oracle defies HP and IBM with 47% revenue leap

Jesper Frimann
Pint

Yeahh..

You seem to forget that it's HP and IBM that are cutting their prices, not Oracle. And it's SUN that has traditionally had the biggest UNIX market share.

And I don't think your comparison of obsolete quarter- or half-filled 8-socket POWER servers versus fully filled obsolete 2/4-socket Oracle servers is particularly relevant.

Try to compare the T3 versus POWER7; these are the servers that people are buying today. People are not buying POWER6; why should they, when they can get 5x the price/performance by buying a POWER7 machine?

And you still don't get the difference between a cluster and a single SMP machine.

// Jesper

Jesper Frimann
Troll

hmm

"Maybe Niagara sales are lower than Xeon, because of the same reason Itanium sales are lower? "

What HP has done is drastically reduce the price of SuperDomes, by a factor of 4-5 for the same performance. I tell you, that is something that can be seen in the sales numbers.

IBM has done much the same, as it is selling POWER7 systems that are 4-5 times faster than their POWER6 counterparts at the same price or less.

Oracle is basically selling servers at the same price, so for the M series it's... well... 20% more bang for the same buck. IMHO the M series just isn't competitive any more with the HP and IBM offerings.

The T series is basically sold at the same price as before, so it's a factor of 2 better in price/performance.

"Also, recently, the TPC-C world record of 30 million tmpc was done on Niagara running "Slow-laris"."

It's a cluster... CLUSTER. It says nothing about Solaris scalability or performance. Yes, each server in the cluster does around 1M tpmC for a 64-core server. That is about 25% of what the nodes IBM or HP could have used... 3 1/2 years ago... delivered. The only really good story about the submission is that the software could scale to 27 nodes, which I find really terrific. But let's substitute those 27 nodes with POWER 780s or SuperDome2s... then you'll see throughput. Vroom vroom.

Now, if you could administer 27 T3-4s as a single system image, you'd have to use the same tools as used on BlueGene or SGI's Altixes, because there would be 13,824 HW threads to keep track of. *BOGGLE*
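A quick back-of-envelope check of that thread count, assuming a T3-4 is 4 sockets of T3 chips with 16 cores and 8 hardware threads per core (the configuration the cluster discussion above is based on):

```python
# Hardware-thread count of the 27-node T3-4 cluster, from the per-chip specs.
sockets, cores_per_chip, threads_per_core = 4, 16, 8
threads_per_node = sockets * cores_per_chip * threads_per_core  # 512 per T3-4
nodes = 27
print(nodes * threads_per_node)  # 13824
```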

// jesper

Jesper Frimann
Boffin

It's not just..

ZFS is not a shareable filesystem.

See under Limitations # 3.

http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases

// jesper

Jesper Frimann

No it hasn't begun, it has stabilized itself.

Well, as the AC stated above, it's not like Oracle hasn't put a lot of ships to sea to make more money off its SPARC product line, and it seems to be paying off. BUT, according to the IDC numbers I've seen, Oracle's UNIX revenue has only risen 1.4% from Q3 2009 to Q3 2010.

And 1.4% is not in the same league as 46% on the software side. So sure, it's good for Oracle, and for UNIX customers, that it's still kind of a three-horse race.

BUT it's not a momentum change IMHO.

// jesper

IBM tosses in freebie Linux with Power servers

Jesper Frimann
Gates Halo

Get real.

Sure Linux is stable, especially running in a virtualized environment where the device-driver part is pretty constant.

I'll call it stable, and I've got 15 years as a UNIX sysadmin/technical expert under my belt.

// jesper

IBM's blue bigness: A heftier systems bulge than you'd expect

Jesper Frimann
Big Brother

Well.

Although your conclusions are dead wrong, there is, as with many myths, a grain of truth inside the fairy tale you are trying to push.

The executive in question is IBM Software Group's Steve Mills, who in January 2003 (8 years ago) said that he saw Linux as the natural successor to UNIX, and thus also to AIX, simply because the sheer number of developers behind Linux meant that it would in time overtake the UNIXes. At that time IBM was involved in porting, for example, the OS/2 logical volume manager to Linux. Now, I don't think there were many AIX admins back then who didn't see Linux enhanced with AIX technology as a really, really good thing. I did.

What then happened was that a company called SCO, funded by SUN money and later also by Microsoft, launched an attack on Linux by dragging IBM to court, causing a potential marriage between UNIX and Linux to be put on ice.

The Steve Mills statements and the whole SCO court case have been a story that SUN marketing and sales have been brewing on for the last 8 years. And to be honest, as an old AIX/Linux admin, I've been tired of that story for 8 years.

AIX version 7 has just been launched, and as usual there is a 10+ year lifespan for an AIX version. So at least 20 years after Steve Mills' statement there will still be an AIX version. Now, would I today think that a unification of UNIX and Linux would be a great idea?

Hell yes, and I imagine the majority of UNIX and Linux admins would agree with me. It would make my job a lot easier; I could simply choose the best HW to run the software stack upon.

Now, as regards the story about the death of AIX and the attack on Linux, just follow the money. SUN and Microsoft in an unholy marriage.

// Jesper

IBM Power Systems deals get stingier

Jesper Frimann
Coat

Old hw.

Yup, we also have a sh*t lot of... well, old POWER crap here. I don't think there is any POWER2 SP stuff left, but I've seen a single PPC604e-based machine in the HW inventory. And I am helping our datacenter department get rid of the last RS64 machines. That then leaves all too many POWER4s and a lot of POWER5 stuff.

The problem with the POWER5s is that there is no upgrade path to POWER7, so it's basically replacement there. Which is stupid, as the POWER5->POWER6 upgrade really was a good one; there was a lot of stuff that could be reused. Going from POWER6->POWER7, it's mostly software licenses, cards and chassis that are reusable, so the savings aren't that big. But on the other hand, a new POWER7 770 fully loaded costs the same as a fully loaded POWER6 570.

// Jesper

Jesper Frimann

If you are doing trade-ins of POWER HW for new POWER HW, you aren't doing it right

Having done some analysis myself on hardware lifecycle management... I have to say, if you are doing trade-ins of POWER hardware for POWER hardware, you aren't doing things right.

A good example: if you have a POWER6-based 4.2GHz POWER 570, rather than trading such a machine in, upgrade it. The upgrade path of a POWER6-based 4.2GHz POWER 570 is a POWER 770 with the 3.5GHz 6-core POWER7 chip.

It's much cheaper to upgrade, and much easier too.

// Jesper

Oracle slashes software prices on own iron

Jesper Frimann
Megaphone

RE SPLITBRAIN.

"If Oracle Sell you an completely integrated stack, tested at every level, which outperforms anything else for a given price point both in terms of cap-ex or support costs, "

Well, the problem is that the product Oracle is benchmarking and the products they are selling are not the same. The recent clustered TPC-C benchmark, for example, is not an Exadata solution, and it isn't an Exalogic solution either. It's a RAC cluster of standalone machines, using Solaris as an intelligent disk system. It's a brilliantly executed benchmark, but it has nothing to do with the products that Oracle is peddling to their high-end clients. Although they will, just as you are doing right now, put an equals sign between the price of the solution in this benchmark and an Exadata/Exalogic solution. The problem is just:

1) You aren't buying the software in the benchmark; you are leasing it for 3 years. If you were to buy it and pay 3 years of support, the _list_ price of the Oracle software would be 59 MUSD rather than 24 MUSD.

2) Furthermore, judging from the price Oracle is charging for the Exadata x86 hardware compared to the actual price of a rack of SUN x86 servers, the price of an Exadata solution based upon the T3-4 would be significantly higher than the price listed.

3) You would also have to add the cost of the 10,000 USD list price per disk for the Exadata Storage Server software. (That is 12 MUSD (list price) over 3 years, if the benchmarked solution were an Exadata solution.)

So your claim about cap-ex and support costs being low might very well be right for the solution that Oracle has benchmarked. Although I doubt that it's an easy/cheap solution to support, with 27 nodes and 97 "storage systems", but that is another story.

But the price of the benchmarked solution has NOTHING to do with the price of the "completely integrated stack, tested at every level" that Oracle is selling under the Exalogic and Exadata names.

So your claim is IMHO not right.

"who the hell care's if the thing is is using a larger number of weaker single thread cpu's than an IBM/HP box which has a smaller number of meatier cpu's???"

A TPC-C benchmark is one thing. The problem is that real-life code is often serial. We have been hearing that parallelization is the holy grail for 25 years now, and time and time again in the real world we face single-CPU throughput problems.

And I am not only talking about code here; running an IT infrastructure on fat cores is different from running it on thin ones. For example, installation of software needs to be multi-threaded too. One of my wife's good friends, who is an Oracle DBA, actually quit her job out of frustration with her UNIX department's inability to understand the differences between SPARC64 and T systems.

// Jesper

Ellison: Sparc T4 due next year

Jesper Frimann
Headmaster

Matt has a point.

Just because the changes to the T processors have been on the drawing board for quite some time doesn't mean that the overall observation Matt has made isn't right.

It looks like the T4 is moving towards fewer, fatter cores with better single-threaded throughput. You might even say that the T4 core is moving in the same direction as the POWER7 and Tukwila cores, with Tukwila being the pure single-threaded monster it is and POWER7 the throughput/single-threaded hybrid.

And the fact that the T core is going that way, rather than to 24/32 or whatever cores on a chip, might very well mean that SPARC64 is going to be replaced by the T5/T6.

I don't think that Venus looks like a replacement for the current 4-core SPARC64 VII+, which has SMT and runs at 3GHz on a 65nm process. Venus is 8 cores with no SMT, and it only clocks in at 2GHz on a 45nm process. Which basically means that for commercial throughput workloads it would more or less tie with the current VII+ on throughput.

Again it only strengthens Matt's point.

// Jesper

Jesper Frimann
Thumb Down

Perspective.

"Your point? So? It makes sense to protect your customers investments. I think Oracle should be applauded."

Sure, I am not complaining. I am simply sharing an observation with the readers here that might save them a lot of money; one that I am going to use in my role as an architect doing Oracle solutions.

Hence, if you are buying a new Oracle license and are going to put it on an Itanium machine, upgrade one of your existing machines to Tukwila rather than buying a new one. The savings can easily be in the hundreds of thousands of USD.

"Seriously? You don't understand? Only a marketing droid would have trouble understanding this..."

Well, software license usage is meant to reflect how much capacity, or clicks, or whatever term you want to use, you are using the software for.

What doesn't make sense to me, as an Oracle customer/partner, is that there hasn't been a consistent mapping between T-processor throughput/capacity and the number of licenses you'd have to pay for, in the five years that the T-series machines have existed.

Sure, if I were a 'marketing drone' I'd say it's because they want to sell new hardware, but that is not my focus; I couldn't care less. But this constant changing of licenses takes up too much of my time, discussing with our clients how to protect their Oracle SW license investment and minimize their cost. And sure, that means that I am sold as a consultant for quite a few hours, but I should be using my time on making solutions.

// Jesper

Jesper Frimann
Coat

Well you forgot that T3

Yes, basically they made it cheaper to use their own hardware.

Although there is a loophole in the Itanium per-core licensing scheme. It clearly reads:

(For servers purchased prior to Dec 1st, 2010)

and

(For servers purchased on or after Dec 1st,2010)

So basically this price rise is only valid for new _machines_, bought after Dec 1st 2010.

Which basically means that if you have a current Montvale/Montecito-based machine that can be 'upgraded' to use Tukwila, the actual machine was bought before the 1st of December.

So the wording suggests that processor upgrades go free, as you are not buying a new machine.

Furthermore, the licensing scheme doesn't make much sense; just look at how the per-core license factors for the T-series processors go up and down:

T1 -> 0.25

T1(+) -> 0.50

T2 -> 0.75

T2(+) -> 0.50

T3 -> 0.25
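The factor table above can be turned into a quick sanity check on license counts. A minimal sketch in Python; the `licenses_required` helper and the round-up rule are illustrative assumptions, not Oracle's official calculator:

```python
import math

# The per-core license factors listed above (illustrative lookup table).
CORE_FACTORS = {"T1": 0.25, "T1+": 0.50, "T2": 0.75, "T2+": 0.50, "T3": 0.25}

def licenses_required(processor: str, cores: int) -> int:
    """Hypothetical helper: licenses needed for `cores` cores of a given chip,
    assuming the summed factor is rounded up to a whole license."""
    return math.ceil(cores * CORE_FACTORS[processor])

# The same 16 cores cost 4 licenses on a T3 but 12 on a T2,
# which is exactly the inconsistency complained about above.
print(licenses_required("T3", 16))  # 4
print(licenses_required("T2", 16))  # 12
```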

// jesper

Jesper Frimann
Megaphone

Well I think that Timothy Prickett Morgan got it wrong..

I don't think that Timothy Prickett Morgan really nailed it this time.

If you compare the Old SUN roadmap:

http://regmedia.co.uk/2009/09/11/sun_sparc_roadmap.jpg

with the newer Oracle roadmap for SPARC:

http://regmedia.co.uk/2010/12/03/oracle_sparc_roadmap.jpg

T3/Rainbow Falls, with 2x the cores of Victoria Falls, is clearly the "T-Series 2x Throughput" entry on the Oracle roadmap in 2010.

T4/Yosemite Falls, with half the number of cores of Rainbow Falls (possibly 2x faster cores) and 1.5 times the frequency, has got to be the T-Series entry in 2011 with 3x the per-thread throughput.

Then on the Oracle map there is an M-Series with 64 sockets listed in 2012. This corresponds very well to Yellowstone Falls on the old SUN map.

And finally, on the Oracle map there is a T-Series in 2013 with 1-8 sockets, which has got to be Cascade Falls, which is also listed in 2013 on the SUN map.

The real surprise here, if my deductions are right, is that the M-Series is going to CMT processors also.

But otherwise it seems that Oracle is just executing on the existing "SUN roadmap".

// Jesper

Oracle stuffs Mongolian clusters with Sparc T3s

Jesper Frimann
Headmaster

It is a good benchmark, but... it's more about the solution than the T3-4

"If we talk about pricing. IBM need 77.2 million USD to match this TPC-C record, if POWER7 scaled well enough (which I doubt it does)."

Well, you could just reuse the Oracle setup and replace the T3-4 nodes with POWER servers, no problem.

IMHO the solution of the 3x780 is terrible. IBM will have to bring out DB2 pureScale to beat this excellent Oracle result on the clustered TPC-C benchmark. But there are also other options. If you look at the rPerf numbers for a fully configured POWER 795 and compare them to the non-clustered TPC-C results that have been published for POWER7, then a fully configured POWER 795 should hit around 29 million tpmC. (This calculation has also been done by Oracle, you can be sure of that; that is why they made it a 30M tpmC result.) Personally I think the POWER 795 has too little RAM to hit those numbers. With 16TB of RAM, on the other hand, it might even be able to edge past the 'SuperCluster', especially if they went a little closer to the response-time limits.

"It seems that both IBM and Oracle used SSD drives."

Well, yes, and there are big differences. If you compare the two POWER 780 TPC-C benchmarks that have been done, there is a big difference with regard to IO. The new Oracle clustered benchmark uses Solaris COMSTAR as disk servers, which is absolutely brilliant.

You basically get much of what an intelligent disk array does without having to pay for one. I do, on the other hand, doubt that this is something you'd actually connect to a machine like this in real life. But then again, the Oracle benchmark uses 11,040 flash modules mounted on 97 storage servers with 388 Nehalem-EP processors and close to 800GB of RAM, whereas the IBM clustered 780 benchmark uses 224 direct-attached SSDs and the non-clustered one uses 60 SSD modules. So for the Oracle benchmark we are kind of back to the tens of thousands of 'storage' devices; now it's just SSDs rather than spinning disks, as on the SD and p595 non-clustered benchmarks.

This is most likely also the explanation for the good response times: lots and lots of storage devices.

Furthermore, the way Oracle has built up this storage system is very price-efficient. But then again, the whole benchmark is. And I have no doubt that it'll be used to leverage Exadata products based upon the T3-4.

The thing we as Oracle customers have to look out for is that the Exadata products are different from the solution benchmarked here, even though the building blocks are much alike. And the prices are also quite different.

http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf

Jesper Frimann

And so the benchmark war continues

First, congrats to Oracle on a well done benchmark; they have retaken the clustered TPC-C benchmark throne!

http://www.tpc.org/results/individual_results/Oracle/Oracle_SPARC_SuperCluster_with_T3-4s_TPC-C_ES_120210.pdf

And it's actually quite a feat to get RAC to scale to 27 nodes; I look forward to seeing what they did.

But it looks like Oracle is up to their usual software license tricks. Again it's only software leased for 3 years, with web support only; you do not actually buy the licenses.

Now the list prices are (per-license price / yearly support, USD):

Oracle Database: 47,500 / 10,450

RAC: 23,000 / 5,060

Partitioning: 11,500 / 2,530

With 1,728 cores, each triggering a 0.25 license factor. (The copy of the Oracle licensing document that Firefox had cached on my HD actually didn't have an entry for the T3, so I kind of went "hmm"... but I found the updated one.)

So the real prices would be 35,424,000 USD for licenses and 23,379,840 USD for 3 years of support, for a total of 58,803,840 USD. Now that is quite a bit more than the 24 MUSD that is used when you lease the machines.
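That arithmetic can be reconstructed directly from the list prices quoted above: 1,728 cores at a 0.25 factor is 432 licenses of each product. A minimal sketch:

```python
# Rebuild the buy-instead-of-lease list price from the figures quoted above.
cores = 27 * 64          # 27 nodes x 64 cores per T3-4
factor = 0.25            # per-core license factor for the T3
licenses = int(cores * factor)                       # 432

license_price = 47_500 + 23_000 + 11_500             # DB + RAC + Partitioning
yearly_support = 10_450 + 5_060 + 2_530

buy_cost = licenses * license_price                  # 35,424,000
support_3y = licenses * yearly_support * 3           # 23,379,840
print(buy_cost, support_3y, buy_cost + support_3y)   # 35424000 23379840 58803840
```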

Also, this time Oracle seems to be able to give some fat discounts: 50% versus the 15% used in the last submission.

Machine: T3-4 (T3) vs T5440 (T2+)

# machines: 27 vs 12

tpmC/machine: 1,120,359 vs 637,207

tpmC/core: 17,506 vs 19,913

tpmC/thread: 2,188 vs 2,489

Let the battle begin, because IBM has got to respond to this one :)=

// jesper

IBM gloats over HP, Oracle takeouts

Jesper Frimann
Linux

FUD it is...

"Now I want to see posts where you quote me. Go ahead. Prove that I lie and FUD as frequently as you claim. Go on. I am waiting."

Woooo... that was an invitation I just couldn't resist. The problem is that it's hard to know if you are just ignorant sometimes or just a really, really bad FUDster.

http://forums.theregister.co.uk/post/676357

"One Mainframe z10 CPU gives you 437 MIPS in native code. Software emulation is a factor 5-10x slower. One large Mainframe with 64 cpus give you 28.000MIPS:

http://en.wikipedia.org/wiki/Hercules_emulator#Performance

You need less than 16 Nehalem-EX CPUs to match 64 Mainframe CPUs. "

First of all, a z10 Mainframe has 4 MCM modules, each with 4 CPUs, and each CPU has 4 CPU cores. That gives a total of 64 CPU CORES (there are more, but those aren't used for processing, so it's 64 for actual workloads), NOT CPUs. Hence you are, either deliberately or out of sheer ignorance, doing the whole math wrong, comparing CPU cores with CPUs. A Mainframe CPU does 1750 MIPS, not 437; 437 is the core... CORE.

So either you are really spreading FUD or you simply just don't know what you are talking about.

Lets see what else we can find:

http://forums.theregister.co.uk/post/672926:

"Also, I have heard that POWER7 is basically a couple of stripped down POWER6. But you claim it is more similar to a...."

Now, calling a fat out-of-order 8-core CPU "a couple of stripped down POWER6es" is... well, just throwing mud at the competition. Especially when the POWER7 CHIP delivers something like 4-6 times the throughput of a POWER6 CHIP.

http://forums.theregister.co.uk/post/570575:

"And you know that a high clocked CPU as the Power6 uses lots of power. Maybe 400watt? 500watt? "

Now that is a good piece of FUD. A POWER 560 with 8 chips burns, with MAX RAM, MAX disks, MAX adapters, MAX CPUs, 2,246 watts at 100% utilization.

Now, according to your calculations, the CPUs alone should burn 3,200-4,000 watts.

Try it out yourself here: http://www-912.ibm.com/see/EnergyEstimator. The right word for such statements is either ignorance or FUD.

Should I continue, or have you had enough?

// Jesper

Jesper Frimann
Headmaster

Try to understand this then...

"When T2 got a speed bump from 1.4GHz to 1.6GHz, it can be considered as a next gen cpu - and should be pitted against next gen cpu from IBM: POWER7".

Please don't use pseudo-quotation marks on a statement that I have clearly not made. If you are not familiar with the use of quotation marks, don't use them.

With regards to the T2 versus the T2+: some call it a speed bump, others actually list it as a separate processor. But if I am sooooo wrong, why then does the SPARC wiki list the T2 and the T2+ as separate processors? Have a look:

http://en.wikipedia.org/wiki/SPARC

Now, is that wiki written by SUN/Oracle-hostile bad guys who want to bring down the world?

Not likely.

POWER6+ was also just a speed bump, but it's still, to use an Intel term, a *tick*/*tock* generation. Just as Westmere-EP is to Nehalem-EP.

"There are numerous other weird claims from him."

Perhaps that is because you don't understand them. Yes, I am known for thinking outside the box and coming up with creative solutions to traditional problems.

"I think it is quite fun that his statements are so extraordinary remarkable that people question the truthness in his statements! That is clearly a sign of how strange his statements are."

Since when did you become plural? My claims are normally always backed up by facts.

Not that I'm never wrong, because sometimes I am.

"When he claims that POWER6 is faster cpu than Niagara cpu because POWER6 has a faster core? Yes, it is true. He said that! No matter how many cores Niagara has - if the core is slow, then the entire cpu must be slow. According to Jesper."

Try to read what I wrote:

http://forums.theregister.co.uk/post/672443

I list the performance difference between the T2+ and POWER6 at equal thread count, equal core count and equal chip count. You don't get more honest than that.

And my conclusion was:

"So basically Niagara has a slim lead on POWER6 only when comparing chips to chips, and at ithe best result is a 60% lead and that is only against lower clocked POWER6 on SPECint_2006rate. On specfp_2006rate (16 way) each 8 core, 32 threaded T2+ chip is only 20% faster than 2 core 4 threaded POWER6+ chip."

If you had bothered looking for a post where I had had enough of your rant and lashed out at your arguments, then you should have quoted this one (from http://forums.theregister.co.uk/post/673128):

"The whole point is that it doesn't suck, power6 is damn fast. The project I am responsible for has no less that 16 power 570'es and a few p595's. And damn they are fast, and actually quite forgiving due to their high Ghz. Again if there is a CPU core that sucks then it is the T2. Now both Itanium and POWER has managed to keep One of the reasons why Itanium still sells fairly well is that it's single threaded performance actually is pretty good. And it will, IMHO, get better with Tukwila. And you still can't get it into your thick head that the one machine with the myriad of CPU cores is the T5440. It has bloody 32 cores with no less than 256 threads. Man that is half the threads of a maxed out M9000."

Again, notice that I use "CPU core", not "CPU", not "CHIP". And I still stand by my point.

Jesper Frimann
Thumb Down

Ok it is now official...

You are mad.. simply mad.

Has it ever occurred to you that the reason you cannot find any link to me saying that the POWER6 CHIP will always do more throughput than a Nehalem-EP or a T2+ Niagara is that I have consistently written "CPU core" or "core"? Admit it: you have misunderstood me, so get over it. Look... I have constantly tried to list the facts. Good example here:

http://forums.theregister.co.uk/post/672443

I list the performance difference between the T2+ and POWER6 at equal thread count, equal core count and equal chip count. You don't get more honest than that.

So perhaps you should duck out of the way of the Orbital Mind Control Laser next time it passes.

// Jesper

Jesper Frimann
Thumb Down

Yeah Yeah Yeah..

"Who the heck cares about scaling?"

Well, I do. Scaling is very important, which is why it should be looked at when comparing systems and processors. Especially if you are comparing Itanium/POWER/SPARC64 versus x86, because Itanium/POWER/SPARC64 will do from 4 to 256 cores with the same chip, while Intel x86-EP will only natively go to 2 chips and 8-12 cores. Hence, when you pull out your Nehalem-EP numbers, they are only valid for workloads that do not require more scalability than a Nehalem-EP chip can deliver. For larger workloads you have to look at Nehalem-EX. It's actually pretty simple.

If I need 300 SPECint_rate2006, then it's kind of stupid to be looking at a Nehalem-EP X5570 processor, because it'll only take you to about 266. Whereas a Nehalem-EX X7560 will take you to... well... 1,400 or so.

So all that scalability comes at a price, and the price is per-core throughput. Again, the two processors above in 2-socket configs:

http://www.spec.org/cpu2006/results/res2010q3/cpu2006-20100621-11923.html

and

http://www.spec.org/cpu2006/results/res2010q1/cpu2006-20100315-09857.html

That is 266 in total and 33.25 per core SPECint_rate2006 for the Nehalem-EP, and 385 and 24.06 per core for the Nehalem-EX. So the 45% increase in chip throughput comes at the price of a roughly 28% drop in per-core throughput (the EP is about 38% faster per core).
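Those per-core figures can be recomputed from the two published 2-socket results quoted above (266 for 8 EP cores, 385 for 16 EX cores). A quick check:

```python
# Per-core SPECint_rate2006 from the 2-socket results cited above:
# X5570 (Nehalem-EP, 2 chips, 8 cores) and X7560 (Nehalem-EX, 2 chips, 16 cores).
ep_total, ep_cores = 266, 8
ex_total, ex_cores = 385, 16

ep_per_core = ep_total / ep_cores           # 33.25
ex_per_core = ex_total / ex_cores           # ~24.06
chip_gain = ex_total / ep_total - 1         # ~0.45 -> 45% more chip throughput
core_drop = 1 - ex_per_core / ep_per_core   # ~0.28 -> 28% less per-core throughput
print(round(ep_per_core, 2), round(ex_per_core, 2))   # 33.25 24.06
print(round(chip_gain * 100), round(core_drop * 100)) # 45 28
```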

And you talked about TPC-C

Then

Nehalem-EP (submitted 04/08/10): 631,766 tpmC and 78,971 tpmC/core:

http://www.tpc.org/results/individual_results/HP/HP_DL370_G6_OEL_TPCC_ES.pdf

And

Nehalem-EX (submitted Aug 27, 2010): 1,807,347 tpmC and 56,480 tpmC/core:

http://www.tpc.org/results/individual_results/HP/HP_ProLiant_DL580G7_2.26GHz_es_100830_Energy_v2.pdf

And finally

POWER6 (submitted May 21, 2007): 1,616,162 tpmC and 101,010 tpmC/core:

http://www.tpc.org/results/individual_results/IBM/IBM_570_20070522_ES.pdf

Again, the Nehalem-EP is about 40% faster per core than the -EX. And the picture is clear: POWER6 is faster per core (on this benchmark) than Nehalem.

And also notice that the POWER6 submission is from 2007, not the fastest POWER6 made, and the Nehalems are using SSD drives (which IMHO is a pretty big factor).

Now a current POWER7 system like this one:

http://www.tpc.org/results/individual_results/IBM/IBM_780_TPCC_20100719_es.pdf

does 1,200,011 tpmC and 150,001 per core. That is almost 3x the per-core performance of Nehalem-EX and almost twice that of Nehalem-EP.
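For reference, all the tpmC-per-core figures quoted above can be recomputed from the published totals and core counts:

```python
# tpmC per core, from the totals and core counts of the four submissions cited above.
results = {
    "Nehalem-EP (DL370 G6, 8 cores)":  (631_766, 8),
    "Nehalem-EX (DL580 G7, 32 cores)": (1_807_347, 32),
    "POWER6 (570, 16 cores)":          (1_616_162, 16),
    "POWER7 (780, 8 cores)":           (1_200_011, 8),
}
for name, (tpmc, cores) in results.items():
    print(f"{name}: {round(tpmc / cores)} tpmC/core")
```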

And the sheer fact that you can come up with such a statement only shows that you still have much to learn.

"No, it is you that dont get it. Let me ask you, if you needed the highest performance in the world, which company would you have to go to, IBM or Oracle? Oracle! How the heck can IBM and IBMers claim they are still fastest in the world, because "IBM cores gave 4.7 times more tpmC per core"?? That is an outright lie, that technically ignorant executives might believe. FUD and lies, again. This is a lie, Jesper. Dont you see the lie?"

I wouldn't go to Oracle, that's for sure. It's not the fastest machine.. it's a cluster.. CLUSTER. And if you read the pricing information on their TPC-C benchmark you'll see that you don't even buy the software... you lease it.. for the exact number of years that the benchmark has to do TCO on. You have to pay the listed amount of money every 3 years... And there is NO upgrade protection.

So who is trying to Bull who? On the POWER 595 benchmark you at least buy the software and then only have to pay software maintenance and support (which can be expensive enough, btw.).

So buying 2 years of extra support for DB2 (going from a 3-year to a 5-year TCO) would cost you... 203,827 USD minus discount. On the Oracle solution, using the pricing scheme they use on the benchmark, it would be... 7,872,000 USD for 3 more years.
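To put the two extension costs side by side (the figures come from the pricing discussion above; treating the Oracle extension as one more 3-year lease term is my reading of their scheme):

```python
# Cost of stretching the TCO window from 3 to 5 years on each side.
db2_extra_support = 203_827    # two more years of DB2 support, before discount
oracle_extra_term = 7_872_000  # one more 3-year lease term under Oracle's scheme
ratio = oracle_extra_term / db2_extra_support
print(f"the Oracle extension costs about {ratio:.0f}x the DB2 one")
```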

Now who is b*llsh*tting who ?

You are so drunk on the Oracle/SUN Kool-Aid that you don't care to read the fine print.

"In short, when I show links or benchmarks, you immediately dismiss them as FUD, lies, and amateurs. When you show links or benchmarks, I accept them. What does that tell you, about Jesper Frimann?"

That I'm much much better at coming up with links than you are ?

It's not my fault that IBM doesn't release any benchmarks on Mainframes. Go complain to them. I just know the numbers that we use. Would an 8 core Nehalem-EX chip be faster than a 4 core Mainframe chip on a benchmark like specINT_rate2006? Yup, sure, no argument from me.

But 8 times faster, is so far off the target that it can only be described as FUD, or well.. stupidity.

Also here the type of workload plays a big part. Mainframe cores have never been known for their ability to crunch numbers. Moving data, on the other hand, they are pretty good at, and that is also one of the secrets of why they can run at such high utilization. Which is also a factor that you have to look at in real life.

"That is hilarious. And proves the z196 is an abomination that should have been killed off and never left the laboratory."

You really really don't get it. Your fanaticism is scary.

"You make it difficult for me. Sometimes you say one thing, the other time you say the opposite.."

That is most likely cause you don't understand what I say.

Now as for response time. I've only pointed out differences in that on the benchmark and said that you can trade response time for throughput.

On the TPC-C benchmark you are quite right: there is a big difference in response times between the POWER 595 and the T5440 clustered benchmark.

On that particular benchmark one of the key factors is surely the use of SSDs on the T5440, which weren't available back when the POWER 595 benchmark was made. It's a pretty big difference.

But the T5440 clustered benchmark does seem to have idle processing power that could have been used to increase the throughput at the cost of response time. How much is hard to say. Why? Well, according to a friend of mine who's a certified Oracle RAC dude, it's cause they are pushing the limits of scalability on the benchmark, hence they aren't hitting the optimal per core throughput.

// Jesper

Jesper Frimann
Troll

*BING* You just used the word FUD for the 1000th time.

And your prize will be ... nothing.

Perhaps a reading lesson would be in order?

Some quick googling.

http://forums.theregister.co.uk/post/916032 this post in this thread:

http://forums.theregister.co.uk/forum/1/2010/11/10/amd_opteron_server_roadmap/

"Now I wouldn't argue against that the Nehalem-EP chip at 2.93GHz with 4 cores is faster than a POWER6 Chip with 2 cores. The POWER6 core is still faster than Nehalem. And POWER7..."

Again.. your only defence left is crying FUD FUD FUD.. when your arguments don't hold up.

// jesper

Jesper Frimann
WTF?

Damn you are scary...

"Just as you denied that Intel Nehalem was faster ...."

Yak Yak Yak... You are like a record stuck in the same loop. You never understood a word I said, that is clear, and you just keep repeating the same thing again and again... I've never denied that Nehalem-EP was a faster CHIP than POWER6, but I've denied that it was a faster core.

Furthermore the scaling of Nehalem-EP was.. well.. EP like :)=

"That is maybe why you claimed that POWER6 is super fast - when in fact, it sucked badly."

Again, out of the myriad of benchmarks, you have managed to find one multi-tiered Oracle-controlled benchmark where you can claim a victory.

Now on industry standard benchmarks like SAP 2-Tier, specINT_2006rate, specFP_2006rate, specJBB2005... the POWER 570 wins with a good margin. And even on your clustered submission, each T5440 is 2.5 times slower than a stand-alone POWER 570.

Wake up, your level of fanaticism is scary... Oracle is Great!!! All hail SUN, I mean... Oracle!!

"If you needed the highest performance in the world, you had no choice but to use Oracle,"

Again here you don't get it... real IT people who do sizing work and architect solutions know that if IBM submitted an 80-node POWER 750 DB2 PureScale clustered submission, this wouldn't make the POWER 750 the fastest machine in the World. It's just a node in a cluster.

" You forget that I have a math degree, I would never think like you do."

Yup, and you wrote your thesis in Bistromathics, which seems to be the only math you can use to prove your points.

"In short, IBM = Master of FUD. Jesper, you have been fooled by IBM. You DO believe the IBM Mainframes have fast cpus. You DO believe the POWER6 is fast."

Keb, I bear no illusions whatsoever about how fast mainframe CPUs are, both on a per core level and on a per CHIP level. As part of my Architect job at a major CSI player I have access to NDA IBM sizing information. I've sized mainframe systems using that data and seen it hold water.

So, the CHIP isn't great IMHO when it comes to throughput, but they are nowhere near as bad as you claim. And I couldn't give a damn how many amateur wannabe hackers you link to, it doesn't make it more real. Just cause it's on the internet doesn't make it real.. I hope you don't believe this guy is a great hacker just cause he's on the Internet:

http://www.youtube.com/user/NextGenHacker101#p/a/u/2/SXmv8quf_xM

Cause watching him hack, reminds me of someone...

// Jesper

Jesper Frimann
Happy

I think you need to check...

Your blood pressure, Keb.. All that Marketing B*LL is full of salt, bad cholesterol and female-hormone-like substances. It's not good for U.

You can call me a liar or a fudder as much as you want, it doesn't make it more right.

// Jesper

Jesper Frimann
Thumb Down

Yeah right.. again again

".....evenue went down 13%"

Hey, I simply said you were right, Oracle does not have high margins on hardware; they are losing money IMHO. Get over it. Don't go there if you are not prepared to face the findings.

"....Me, I dont lie and FUD like you do, I accept that IBM has higher TPC-C. I dont deny that, like you do.".

What sensible people like me are trying to hammer into the heads of people like you is that a cluster submission on the TPC-C benchmark is not a system.. it's a CLUSTER. Nobody ever denied that Oracle had the best clustered TPC-C benchmark submission, and that it was also the submission with the biggest number of transactions...

But as it is a clustered submission, every sane semi-skilled IT person should know that it's easily beatable by simply clustering together a few largish machines. Like IBM did with three POWER 780s.

"...If you need six IBM POWER servers to match one Sun T5440 - how can that be FUD? Just read the white papers you linked to! I never make up things....."

Man ... you are dense and your argumentation is so flawed that it stinks.

Lets try to use your argumentation..

Yeah, the M9000 is twice as fast as the POWER 795.. and that is a fact just read the benchmarks:

POWER 795 specInt_2006rate -> 1440

M9000 specInt_2006rate -> 2590

http://www.spec.org/cpu2006/results/res2009q4/cpu2006-20091012-08891.html

So you need 2 POWER 795 machines to match one M9000, and that is a fact.

The only problem with this argument in the above example is that the POWER 795 submission used 4 chips out of 64, while the M9000 submission uses all 64 chips.

The real POWER 795 submission with 64 chips gives specInt_2006rate -> 11200

http://www.spec.org/cpu2006/results/res2010q3/cpu2006-20100817-12974.html

Kebbabert, people are not laughing with you when you do math like that.. they are laughing at you.
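Normalizing those same SPECint_rate2006 results by the chips actually used, which is the step that comparison skips:

```python
# SPECint_rate2006 per chip, using the chip counts from the submissions above.
p795_4chip  = 1440 / 4     # POWER 795 run that used only 4 of its 64 chips
p795_64chip = 11200 / 64   # full 64-chip POWER 795 run
m9000       = 2590 / 64    # M9000 run using all 64 chips
print(f"POWER 795: {p795_4chip:.0f}/chip (4-chip run), "
      f"{p795_64chip:.0f}/chip (full run); M9000: {m9000:.1f}/chip")
```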

And the whole mainframe CPU versus x86. When discussing this with you it's like discussing sex techniques with a virgin. You have absolutely no clue.

// Jesper

Jesper Frimann
Headmaster

Talking about FUD

"Oracle have high margins..."

Yes, they have very high margins.. on software. On the hardware I don't think they are high enough. I don't think that Oracle is actually making money on their server sales. And besides, direct hardware revenue only accounts for 4% of Oracle's revenue.

http://www.oracle.com/us/corporate/investor-relations/financials/q3fy10-detailed-financials-080347.pdf

On page 3:

Oracle made 458 Million USD on direct hardware revenue + support. And let's give them 10% of the services revenue, that is 10% of 2,797 MUSD, for a total of... 738 Million USD.

But then come the expenses related to making that revenue.. and that is where the bad story begins. Again, if we look at the above link there are direct expenses and a share of the other expenses (let's be real nice and use 4%; in reality it's much higher; and 10% of services).

Expenses:

Hardware systems products 206 MUSD

Hardware systems support 116 MUSD

Sales and marketing 133 MUSD

Research and development 88 MUSD

General and administrative 25 MUSD

Amortization of intangible assets 55 MUSD

Services (10%) 243 MUSD

-------------------------------------------------------------------------------------

Expenses 866 MUSD

Revenue 738 MUSD

-------------------------------------------------------------------------------------

Loss 128 MUSD

===========

And then comes the restructuring cost and acquisition costs relating to the buy of SUN.

Which for Oracle as a total was a total of 517 MUSD.

Let's just say that 25% of that was related to SUN's HW business; that would then put the Oracle loss on the old SUN HW business at 257 MUSD.

And IRL the actual number is much bigger, cause it's a much more labour intensive job selling hardware than software, or rather, in Oracle's case, software maintenance, which is a stunning 56% of all Oracle revenue. So the 4% I used is most likely much higher, at least 8%, which would put the loss over the 500 MUSD mark.
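The back-of-envelope P&L above can be laid out in a few lines (the 10% services share and 25% restructuring allocation are the same assumptions made in the text):

```python
# Rough P&L for Oracle's hardware business, Q3 FY10 figures in MUSD.
revenue = 458 + round(0.10 * 2_797)   # hw revenue + support, plus 10% of services
expenses = {
    "hardware systems products": 206,
    "hardware systems support": 116,
    "sales and marketing": 133,
    "research and development": 88,
    "general and administrative": 25,
    "amortization of intangibles": 55,
    "services share (10%)": 243,
}
loss = sum(expenses.values()) - revenue          # 866 - 738 = 128
loss_incl_sun_deal = loss + round(0.25 * 517)    # plus 25% of restructuring costs
print(f"revenue {revenue}, expenses {sum(expenses.values())}, "
      f"loss {loss}, incl. SUN deal costs {loss_incl_sun_deal} MUSD")
```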

Now such a big loss is not something that Larry will put up with in the long run.

This reminds me of when I was a freelance consultant 13 years ago and advised my client against making Digital servers the strategic platform for the next 5 years, cause the company was in trouble. I wasn't popular, but I was right.

This is most likely the story that IBM sales is telling consulting companies like Accenture, McKinsey, KPMG, CapG etc. And it will hurt Oracle if this 'independent' advice starts to put the thumb down on Oracle HW.

"lso talking about "questionable value proposition" - you needed 6 (six) IBM Power P570 servers to match ONE Sun T5440 server in SIEBEL v8 benchmarks. "

Again you keep repeating that story again and again.. it's a 2/3 filled-up POWER 570 and a 1/8th filled-up POWER6 p570 versus one T5440. Hence your claims are... false.. and stink of FUD.

And rather than just quoting the now gone BMSEER you should try to look at the facts.

http://www.oracle.com/apps_benchmark/doc/sun-siebel-8-14000-pspp-on-solaris-benchmark-white-paper.pdf

and

http://www.oracle.com/apps_benchmark/doc/IBM_Siebel8_7000_PSPP_On_AIX_POWER6%20Final.pdf

Now the reason why the T5440 is on top here is the response times, which are 4 times faster for the majority of uses on the POWER6/POWER5 solution. Which someone who has actually worked with real-life IT systems should know.

"is really dog slow. You need 5-10 of the z196 cpus to match one modern x86 cpu."

Now that is just pure FUD; you have no facts whatsoever to back that statement. And IMHO it is clearly untrue, and shows that you don't know what you are talking about.

"Earlier, when Oracle had the TPC-C world record, IBM said that because IBM has faster cores, the TPC-C world record still belonged to IBM. If you looked at no 1 on the TPC-C list, you saw Oracle. But still IBM said that they had the record, because they "used faster cores"."

BZZZZZ.. Wrong again.. Oracle had the 'clustered' TPC-C record, not the single-machine one. Now they don't have the clustered record either; it is also a POWER score.

"Or, the POWER6. You need four POWER6 cpus to...."

You do know that POWER7 has been shipping for a long time, don't you?

Or is the only thing you can do to whine FUD? Come on, grow up, learn to research your arguments and do some logical reasoning. It's pathetic.

Oracle erects mystery Sparc SuperCluster

Jesper Frimann
Troll

Same old....

Again nothing new from the Kebab front. Same old dressing same old meat and same old wrapping.

Bla bla bla LINK to SUN/Oracle marketing blogs Bla bla bla POWER6 bla bla bla.

Not that the Oracle Enterprise2010 benchmark isn't a really well executed benchmark, which hands down beat the POWER7 submission. But IMHO that has more to do with the poorly executed IBM submission than with what the hardware is actually capable of. But that is just my opinion.

// Jesper

Jesper Frimann
Pint

What records ?

Care to enlighten us ?

And show some comparisons to other platforms ?

// Jesper

Oracle ships Solaris 11 Express

Jesper Frimann

Ehh.. what did you miss ?

I think this post pretty much cuts it out in cardboard:

http://mail.opensolaris.org/pipermail/ogb-discuss/2010-August/008010.html

OpenSolaris in the form that is... well Open.. is dead.

// Jesper

Jesper Frimann
Big Brother

Which will mean that ...

If it's not yet decided whether Venus will go in a server, then there is at least half a year of testing and whatnot ahead, which means that the current Venus will be coming out at the same time as POWER7+.

And seeing how an 8 CPU POWER 780 is faster than a 64 CPU M9000 on every single benchmark where these two machines have made submissions, Venus has to be many, many times faster than SPARC64 VII just to catch up.

Not going to happen.. Sorry.

// Jesper

Jesper Frimann
Happy

Ok, Venus can be found in which Oracle server ?

Just out of curiosity...

// Jesper

My lost Cobol years: Integrating legacy management

Jesper Frimann
Troll

worthy of admiration....

APL is... yikes. I learned Miranda (the name Miranda means 'worthy of admiration') many years ago at university. I think that syntax is much nicer.

Not that I've used any of that for the last 20 years :^(

As for Cobol.. then Cobols are only good for killing in a dungeon.. or are there some things that I've misunderstood here?

// jesper

Why you can't move a mainframe with a cloud

Jesper Frimann
Troll

...you claim too much with too little data and definitions to back it up

By CPUs do you mean chips? And what kind of chips? Or are you talking about processors, or cores? You happily refer to Magny-Cours as a processor.. by that terminology the z10 processor holds a whopping 20 cores.

And if those 20 cores are 5 times slower than, for example, a 6 core Westmere-EP, which must be the most modern x86 CPU on the market right now, then a Mainframe core is 15+ times slower than an x86 core.

Yeah right.

// Jesper

Jesper Frimann
Headmaster

It's not that simple.

"There are VERY few work loads where you MUST have a Mainframe."

I don't think there is such a thing as "must". But Again having been involved in many 'porting off legacy platform X' projects, I would say that things aren't as simple as you try to make them look.

Now a mainframe running CICS with PL/1/Cobol exits which might even have a little assembler thrown in for fun, which have been running for 20 years, is not just something you replace like this *SNAP*.

I've seen projects where people have tried to port off "mainframes" (of various types), and where they ended up with 5% of the expected throughput at 5-10 times the response time. Why ?

1) Cause there is a big difference between running interpreted Java code and natively compiled code.

2) 'Mainframes' might be slowish on certain tasks, but it's not always that part that counts.

3) Don't underestimate 10+ years of tuning and customisation.

4) Often hardware vendors trying to replace a competitor, kind of use benchmark values for their own solution stack and worst case for the opposition.

5) Software stacks tend to bloat up and become slower, as software companies cut cost.

6) Usually the solution that you have to migrate to is some scale out solution that isn't nearly as efficient as a centralized solution.

So be very careful about migrating off 'mainframes', and by mainframes I am not only talking about IBM zSeries; there are also a lot of other platforms in that category.

To some Wintel guys UNIX machines fall in the same category :)=

// Jesper

AMD: Opterons to hit 20 cores by 2012

Jesper Frimann
Paris Hilton

Yeah right.

You need to learn several things my young friend.

1) Irony

2) Something about computer history.

3) Checking your facts.

Nope, I would say that the fastest CPU is POWER7. I would even say that the fastest chip is POWER7. Although I do think that the T3 might be a little faster doing something like RSA en/decryption, due to its special accelerators for that particular workload.

Now I wouldn't argue against the Nehalem-EP chip at 2.93GHz with 4 cores being faster than a POWER6 chip with 2 cores. The POWER6 core is still faster than a Nehalem core. And POWER7, which has shown up to 150K tpmc per core, is of course in a league of its own.

Not that there are any relevant TPC-C benchmarks to compare against at 5GHz, as you try to insinuate. You really should check your facts.. but hey..

// jesper

Jesper Frimann
Thumb Down

Thank you to the SUN marketing department.

And we will ever only need one computer per continent right ?

// Jesper

RHEL 6: serious Linux built for growth

Jesper Frimann
Troll

Keb

"I think it is funny that you doubt there will be a 16.384 thread Solaris box in 2015."

Keb, I have no doubt that there will be a gazillion-threaded Solaris box in 2015. I guess you can get one right now: an Exadata with 32 four-socket nodes, that is 32 x 4 x 128 = 16.384 threads.

"You know, when Sun did the 8-core Niagara that was shocking."

Yup, it had shockingly bad single threaded throughput...

"But Sun has always been a leader, and others have followed. "

Oh they have ? In what way ? Please enlighten me ?

"Now everybody (in particular IBM) has stopped the GHz race and turned to many lower clocked cores - just like Sun did ages ago."

Nahh.. that is not the whole story. POWER7 still clocks in between 3GHz and 4.25GHz, and that is with increased per core throughput. A POWER 595 with 5.0 GHz cores does 33.75 specint_rate2006 per core, whereas a POWER 795 with 4.25 GHz cores does 48.05 specint_rate2006 per core. Now that is a 15% drop in frequency for a 42% increase in throughput.
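Checking that frequency-versus-throughput claim with the numbers just given:

```python
# POWER6 595 -> POWER7 795: clock drop vs per-core SPECint_rate2006 gain.
ghz_595, ghz_795 = 5.0, 4.25
per_core_595, per_core_795 = 33.75, 48.05
freq_drop = 1 - ghz_795 / ghz_595            # 15% lower clock
core_gain = per_core_795 / per_core_595 - 1  # ~42% more per-core throughput
print(f"clock drop {freq_drop:.0%}, per-core gain {core_gain:.0%}")
```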

With regards to Oracle's Tx it's just more threads and more threads... so get real. Sure, the increased work done per socket is good for throughput, but it still requires workloads that can be scaled horizontally. And here real life problems like locking become a serious issue on many workloads.

"And Solaris scales well. Which AIX does not, they had to reprogram AIX to be handle to handle as few as 256 thread machines."

First, you are wrong about the thread count; is counting a problem?

I think you have misunderstood the concept of scalability. It's not an advantage to have many threads if they don't do much work.

http://www.spec.org/osg/jbb2005/results/res2010q3/jbb2005-20100814-00910.html

versus

http://www.spec.org/osg/jbb2005/results/res2008q4/jbb2005-20081027-00552.html

or

http://download.sap.com/download.epd?context=40E2D9D5E00EEF7C6E5DBEC399C343678D96258915E03D8ED4B72641FEFF634A

versus

http://download.sap.com/download.epd?context=40E2D9D5E00EEF7C83E16881038C2A048628580F1C646F72AC52B9A6844171CA

versus

http://download.sap.com/download.epd?context=40E2D9D5E00EEF7C562D3DE849CA7106CE7F2A50688495B391C1C8EDD670D317

And that's with almost 3x better response time.

It's the work done that matters not the number of light threads.

"Solaris is the correct tool for a 16.384 threaded monster server. The performance will be shocking."

Well, you are talking about something that is 3-6 generations into the future. Again, in the future...

Jesper Frimann
Megaphone

Not a practical problem.

Yes, I guess the T3 finally puts Linux on SPARC in the grave as a serious platform.

And 128 threads is more than enough for the sweet spot of Linux. I mean, that is an 8 socket Nehalem-EX box with Hyper-Threading enabled, a 16 socket Itanium box, or a 4 socket POWER7 box. Sure, with today's virtualization, what this actually means is that you have a max virtual machine size of 64 cores on x86, 64 cores on Itanium and 32 cores on POWER7. Which is kind of enough.

And in 2015 we'll all have flying cars and live forever.... yeah... right..

// Jesper

Big iron makers test their metal on SAP

Jesper Frimann
Linux

Well...

"As Matt identifies, benchmarks have little relevance to the real world - I would suggest that they are the IT vendors equivalent of a comfort blanket. "

Tony S., if you work with SAP you should also know that the SAPS sizing numbers you get from the vendors are actually based upon the SAP standard 2-tier benchmarks that are run.

Sure, in the 'we wanna sell you something' phase they'll use benchmark values, and in the 'real sizing' phase they'll use benchmark values minus some percentage. But it still builds upon these benchmark numbers.

// Jesper

Why is IBM declaring war on Cisco?

Jesper Frimann
Linux

Playing with the big guys.

Yes, I think that Cisco has underestimated the 'pissed off' effect their entry into the server market has had on HP, Oracle, IBM and Dell.

And to be honest, I don't know if it's worth it for them.

// Jesper

Larry Ellison's first Sparc chip and server

Jesper Frimann
Big Brother

Hmm.. Well

Well you sure sound like BMSeer.

"I fail to mention that, because I didnt knew it. Again, do you have links about that? I didnt knew that."

http://regmedia.co.uk/2009/09/11/sun_sparc_roadmap.jpg

T3+ aka Yosemite Falls: 8 threads and 8 cores.

"Just because the thread performance of T3+ has been increased, does not mean that the whole cpu is 3-5x faster. I know that."

Let's set aside your usual rant about POWER6 and what it means to have a fast processor core versus the throughput of a whole chip, which basically just showcases that you didn't really get what that discussion was all about.

I think that a good increase in single threaded speed is just what the doctor ordered for the Niagara style processors. But one has to understand that this increase in single threaded speed does not mean that the processor can have flour in its mouth and whistle at the same time. The whole trick is to have both single threaded throughput AND good chip throughput at the same time.

Personally my guess is that Oracle will go from fine-grained multi-threading to SMT and then implement out-of-order execution, with perhaps an extra execution unit, basically addressing the main problem with the Niagara style processors. That should pretty much account for the proposed speedups.

This would also make the Niagara family a more useful processor for the types of workloads that are used today, basically slowly turning a niche product around and pointing it in the same direction as POWER7 and Nehalem-EX.

But in doing that they'll basically sacrifice the whole idea behind the processor, IMHO. And it remains to be seen whether the T3+ will do more throughput on a per chip level than the T3.

// Jesper

Jesper Frimann
Linux

BMSEER = Kebbabert ?

"Interesting, I didnt knew that about T3+. Can you show links on that?"

Ok, what are you drinking? It's just a few posts above:

Kebbert wrote this:

"And 2013 there will be 8-socket T3+ servers. Jesper, those new upgraded T3+ servers will kill everything that IBM can throw at them. I know even now, you will reject all those benchmarks and results, "

Funny.. Is Kebbert a new Oracle substitute for BMSEER, an alias used by slightly technical marketing people to spread the Gospel?

The key word here is "up to". For example, there is code where POWER6 is perhaps "up to" 20 times faster than POWER5 on a core-to-core basis. This can be code that uses decimal floating point numbers, where POWER6 has a decimal floating point unit, or code where POWER6 can use its VMX (vector) execution unit.

But again you fail to mention that the T3+, as you call it, only has half the cores of the T3. So.. well.. up to 3-5 times better single threaded performance doesn't translate into 5 times better throughput.. You are dreaming.

// jesper

HP gooses Integrity server virt with PA-RISC emulation

Jesper Frimann
Headmaster

As requested.

"countdown to very predictable Jesper IBM benchmark squealing in three... two... one.....)"

Zero.. Been busy, got a new job, moved from operational to strategic. Actually, what I found out in some of the work I've done internally to cut cost is that we make very very little money on x86 offerings: their short lifespan, combined with the cost of setting them up and pulling them down. Also the software cost per capacity unit is terrible.

The platform we have the best client satisfaction with, and also make a good profit on, is UNIX, as we can leverage the fact that we have a lot of smart people (for example, I think we have one of the best teams of Solaris sysadmins I've ever seen). So we like UNIX. Clients are happy, we are happy.

But we don't like the small UNIX machines; they have to have a critical size, and then use a shitload of virtualization. The cheapest UNIX box in TCO, for us, is still the POWER 770 if used right. Now if the client will be content with containers on Solaris, then we can also make a bit of a sweet deal there, although the HW kind of sucks. But let's see how "T3+" turns out. If it has decent single threaded performance then that will be a good thing for our Solaris offering.

"(he even had a hosting company in Denmark that was looking for a complete PA-RISC SD32 - any ideas, Jesper?)."

Actually that could be us; we have no PA SDs on the 'spare list'. We do on the other hand have quite a lot of large last-generation SUN boxes like E25Ks and older, even an old Starfire. They need to go to a broker. Otherwise it's mostly massive amounts of HP/IBM/SUN/DELL blades and 1-4U servers. (And I was actually asked by a client team about the performance of the PA-8900 versus Tukwila, and what it would cost to do a migration project from PA to Itanium.)

I was also asked to give my 10 cents on the new SD2. Let's just say that our client architect was quite shocked that it was based on a re-engineered c7000 BladeSystem chassis. Not that I was trying to put it down or anything, but I'll put it in the same class as the POWER 7[7/8]0, with which it actually shares quite a few technological similarities. So we'll have the reverse story now: IBM used to peddle the 570s against SDs, and now HP will peddle SD2s against the 795.

But IMHO it's not in the power 795 class. Sorry.

A good thing about the SD2 is that it is much cheaper in TCA than the old SD. But still, when we do proposals POWER normally just comes out on top, and with POWER7 it's really a no-brainer. HP needs to get their virtualization layer up to speed, or at least what their sales teams are ready to do with it.

But with regards to the whole x86 versus big UNIX iron thing: our UNIX servers normally, looking at the serial numbers, have a life of at least 5-10 years, going through several generations of upgrades. One of the new POWER 770s in our 'cloud' actually started out as a 1.9GHz p570 back in late 2004, and had almost 3 years of life as a POWER6 machine. And we also have HP UNIX gear that I can say the same about. But x86 is a buy-and-throw-away thing, and the TCO is just terrible, cause client teams are only interested in TCA. They push the cost in front of them, trying to push it to the guy that is going to replace them when the cost hits.

// Jesper

Jesper Frimann
Pint

8900

Well, the PA-8900 was a great processor, IMHO better than the Itanium offering at the time. So no wonder that Aries cannot catch up on all workloads.

// Jesper

IBM punts first z196 mainframes

Jesper Frimann
Happy

Doing all the wrong calculations, but perhaps getting to a fairly ok result?

Let's see... a management report from Microsoft, paid for by Microsoft, from 2003?

You can't be serious.

And another 2003 link from some dude using the MHz of the processors at the time, which you then try to relate to current Nehalem processors. In 2003, I guess you were in, what, kindergarten?

It makes no sense what so ever.

And then there is the Nehalem-EX thing with TurboHercules. Bzzzzz.. The only problem is that the dude assumes that an 8 core 2.26GHz Nehalem-EX chip is 2x faster than a 4 core 2.93 GHz Nehalem-EP chip. Now that isn't really the case, is it?

268 specINT_rate for 2 Nehalem-EP chips with a total of 8 cores; the number is 376 for the same blade using Nehalem-EX. Now that is a 40% increase in chip speed, not 2x, OK? You know, math and checking facts, we have talked about this before. So his calculation more likely says around 2300 MIPS, not 3200.
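Redoing the TurboHercules scaling with the measured ratio instead of the assumed 2x (benchmark numbers as quoted above):

```python
# Correcting a MIPS estimate that assumed Nehalem-EX = 2x Nehalem-EP per chip.
assumed_ratio  = 2.0         # ratio the original estimate assumed
measured_ratio = 376 / 268   # SPECint_rate, same blade, EX vs EP (~1.40)
claimed_mips   = 3200
corrected_mips = claimed_mips * measured_ratio / assumed_ratio
print(f"measured ratio {measured_ratio:.2f}, corrected ~{corrected_mips:.0f} MIPS")
```

That lands in the low 2000s, which is where the "not 3200" point comes from.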

And no, you are not using credible sources; they are old, and you are doing strange pseudo-math too. Now the worst part is that for some workloads I would say that 2x 64 core big iron Intel boxes could most likely match a zSeries on native x86 workloads. So I actually agree with you there.

BUT...

I would say, with my experience in porting things off the mainframe, that this number would be something totally different if you were trying to port a mainframe-native stack to a native Windows stack on x86. Then you might as well go buy 10 machines. And then the mainframe all of a sudden seems like a cheap alternative.

It's all about TCO.. TCO.. not TCA.

// Jesper
