Is this the start of the next big battle in computing hardware?
Larry Ellison has launched the first mainframe-class machine that he can correctly say he made sure came to market, and now he is going to take a run at IBM's mainframe and Unix server businesses. What's more, it looks like he will be able to make some credible arguments as to why customers running Oracle software – and …
I don't think many corporations buy mainframes for the SPECint benchmark.
Real-world TPC-* benchmarks would be more interesting.
Also, people pay huge premiums for mainframes because of the advantages mentioned, such as security, reliability, and the hope that the seller will still be there in 20 years. These are super risk-averse people and corporations.
I would not bet on SPARC in 20 years. It's nice that Oracle will put price pressure on IBM, but I'm not sure it would make sense for IBM/HP customers to go to Oracle. They will renegotiate with IBM/HP and get better pricing and lower maintenance/upgrade fees.
Let's see in a year whether Oracle has grabbed mainframe market share.
I don't think Larry actually intends this to take any share away from System z, in the sense that someone would wholesale replace System z with this Unix server. It seems to be more of a marketing tactic. People know that mainframe = really reliable, really high throughput (or just performance in general), really high security... generally just top of the line across the board. This is not a mainframe; it is a standard 32-socket Unix server. By calling it a mainframe, I think he hopes that people will equate it with "generally really good server"... HP has been doing that forever with Superdome, calling it "mainframe class."
It's quite funny, really:
"Most of the claims are Oracle’s own benchmarks that are not published and audited. There are a small number of industry standard benchmarks — and of course these are ones where it is extremely difficult, if not impossible, to compare to other relevant results."
i.e. "we don't like Oracle's own tests, and it's 'too hard' to disprove the industry standard numbers, so we won't try"
"i.e. "we don't like Oracle's own tests, and it's 'too hard' to disprove the industry standard numbers, so we won't try""
That is not what the IBM engineer is explaining at all in the unofficial blog when she writes, "and of course these are ones where it is extremely difficult, if not impossible, to compare to other relevant results."
What she means is that if IBM runs a SPEC or a TPC benchmark with a Power 7 system and Oracle then runs the same test but decides to use 4x the cores, 4x the memory, and 4x the disk arms with more I/O, you cannot then say that T5/M5 is faster, slower, or anything else than Power 7. It is not a valid test for comparison purposes: apples and oranges, with the overall system performance probably having nothing to do with the processors, but with more memory being faster than less, or more disk arms being faster than fewer. This is the problem with industry standard benchmarks: SPEC, TPC, etc. don't control the external variables at all, so they are next to meaningless for comparing one system to the next.
It's clear from your comments that you don't follow SPARC: this year SPARC celebrated 25 years, so clearly SPARC has survived longer than any other processor out there. Even Solaris has been around for 20+ years. If there is any processor (and OS) that has "staying power" in the enterprise, it's SPARC/Solaris. http://www.youtube.com/watch?v=IKB9zV8TXuQ
Its clear from your comments that you don't follow SPARC, as this year, SPARC celebrated 25 years, so clearly SPARC has survived longer than any other processor out there.
The MIPS architecture is two years older than SPARC (if we're going by first shipping hardware), and it's still used for new CPUs, such as China's Godson.
IBM's zArchitecture is binary backward-compatible all the way to S/360, which started to ship in 1965. That's 48 years.
One could argue that today's x86 chips are just successive improvements on the 8086 - certainly they still carry some baggage from that architecture - which would make that architecture about 35 years old. I wouldn't make that argument, but it could be made.
To phil 4...
Um, let's see: x86 dates back to 1978 or so. IBM System z has its roots in, and is still 100% binary compatible with, System/360, which dates back to 1964. The IBM POWER series went 64-bit in the mid-'90s (around the time SPARC did with v9), and evolved from the 801 RISC architecture designed in the mid-'70s.
maybe you should step out of that hall of mirrors that Larry created. There's a whole world out here.
I don't dislike SPARC, but I don't think Oracle has the same skills the old Sun had: engineer on site in 4 hours, hotfix by the end of the day.
I get the impression that Oracle still costs a lot but isn't as good.
The big SPARC boxes are Fujitsu made.
I don't like some of the choices in Solaris 11, but there are other things, such as the ZFS boot support being a binary in the boot block rather than implemented properly in OpenFirmware, that just show me they cannot be bothered to do things properly.
I wouldn't mind working for a big Solaris shop but there are no good SPARC workstations and there won't be either.
I used to use a mono Classix SPARC X terminal that I liked; it was so easy on the eyes and nice to work on. (A bit noisy having a SPARC server under your desk, though.)
I don't like the Sun C++ standard library either. I used OpenSolaris when you reasonably could, and it was a pain. (I built most of what I wanted against the Apache C++ standard library, but building GNOME is not a one-man job and I never got it totally right; too many hacks. Fortunately I don't really need much.)
I preferred the look of Bitstream to FreeType and still do.
> Real-world tpc-* benchmarks would be more interesting.
Oracle's SPARC T5-8 server equipped with eight 3.6 GHz SPARC T5 processors achieved a world record result of 8,552,523 tpmC for a single system on the TPC-C benchmark.
The SPARC T5-8 server is 2.4 times faster per chip compared to IBM Power 780 three-node cluster results for TPC-C tpmC and is 2.5 times less expensive per $/tpmC. (1)
Again, clustered results versus non-clustered results is not really a fair way to compare one server to another.
But there is no doubt that Oracle currently has the highest scoring non clustered and clustered TPC-C results. It's a fact.
The result you really need to compare against is the POWER 595 result from 2008:
6,085,166 tpmC with 128 threads / 64 cores / 32 chips. That means the T5 chip delivers 5.6 times the throughput per chip, but that POWER6 was 5.7 times faster per thread and 1.4 times faster per core.
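A quick back-of-the-envelope check of those ratios, sketched in Python. The tpmC figures and chip/core/thread counts are the ones quoted in this thread; nothing here is an independent measurement:

```python
# Reproduce the per-chip / per-core / per-thread ratios from the published
# tpmC figures quoted in the thread (T5-8 from 2013, POWER 595 from 2008).
t5 = {"tpmc": 8_552_523, "chips": 8, "cores": 128, "threads": 1024}
p595 = {"tpmc": 6_085_166, "chips": 32, "cores": 64, "threads": 128}

def per(system, unit):
    """Throughput per chip, core, or thread."""
    return system["tpmc"] / system[unit]

print(f"T5 per-chip advantage:       {per(t5, 'chips') / per(p595, 'chips'):.1f}x")      # ~5.6x
print(f"POWER6 per-core advantage:   {per(p595, 'cores') / per(t5, 'cores'):.1f}x")      # ~1.4x
print(f"POWER6 per-thread advantage: {per(p595, 'threads') / per(t5, 'threads'):.1f}x")  # ~5.7x
```

Which is exactly why "fastest system" and "fastest core" are such different claims: the T5 result comes from throwing twice the cores and eight times the threads at the problem.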
What would be nice is a POWER 7[7|8]0 result to compare against. I have no doubt that a POWER 780-FHD would beat the T5-8 pretty easily. So the question is how a POWER 770-MMD would fare; it would most likely be rather close. But again, IMHO this forces IBM to upgrade their midrange line again.
Competition is always nice!
And why hasn't IBM benchmarked a fully loaded non-clustered Power 770/780/795 or even a Power7+ system? Because their per-core performance would all be worse than the lowly 2-socket TurboCore Power 780 config they ran a few years ago. IBM wants everyone to size from the smallest configs using rPerf, which, as we all know, oversizes on workloads that have heavy I/O.
"The SPARC T5-8 server is 2.4 times faster per chip compared to IBM Power 780 three-node cluster results for TPC-C tpmC and is 2.5 times less expensive per $/tpmC. (1)"
Yes, but Oracle used 4x the cores, 4x the memory and many times the amount of storage arms as the IBM result, i.e. Oracle's benchmark result has nothing to do with CPU.
According to El Reg, "...T5 and M5 chips run at 3.6GHz."
Is that really a better performer than the zEC12 chip, a 5.5 GHz hexa-core processor?
Beating Power7 is great, but I saw a marketing claim that Larry was about to announce the fastest processor in the world. Am I to understand this was marketing BS as usual?
It could be; there is not always a direct relationship between MHz or core count and performance when comparing different 'breeds' of processors.
I would be interested to see the power consumption comparison here as well. A single benchmark figure suggests there is more to the situation than we can see right now.
SPARC T5 not only runs at 3.6GHz, but has 16 cores/CPU and 8 threads/core, more than any CPU in the world. So there's more to performance than just GHz. Both SPARC T5 and M5 run circles around the 5.5 GHz zEC12 chip. SPARC T5 leapfrogged IBM's latest Power7+ and Intel's latest Sandy Bridge Xeons on everything from integer and floating point performance to SAP, OLTP and Java.
Just look at the 17 world record benchmarks just published! https://blogs.oracle.com/BestPerf/
"When you're comparing to a System z (mainframe), then clock speed is where it's at."
Although it is true that the zEC12 has the fastest clock speed, on CISC cycles no less, the mainframe's real advantage over every other system is I/O throughput. It wasn't built for scientific computing, where clock speed is important; it was built for handling incredible amounts of incoming and outgoing bank, census, logistics, etc. transactions.
So now I need 16 f*****g licences to run Oracle on the server? No wonder it's only a quarter of a million dollars for a T5 server and nearly 2 million for the IBM product. Cheaper one-off hardware costs but higher licence costs. Time to kick the Oracle licence addiction.
We abandoned a plan to virtualise our servers when it emerged that moving Oracle from the dual-core servers to the quad-core virtual servers would have doubled the licensing costs for no advantage, even though Oracle was going to be configured to run on a single CPU. A side effect is that it has extended the life of the big blue box.
Don't remind me of the stark raving bonkers price-setting by Oracle, list price vs. actual price or not. Do they have any other customer than the financial "industry"?
If sold at < USD 500 per node + yearly maintenance: count me in.
Actual price is USD 15'000 per CPU (whatever that is): LOLNO!
Everywhere I have seen someone say "oh, virtualization raised my Oracle licensing," it is usually the result of standards being based on foolish consistencies, which everyone knows are the hobgoblin of small minds.
...if you don't know how or are using solutions that don't allow you to squeeze the license all the way down to a single core for Oracle DB, then you are doing it wrong.
Then you need to negotiate harder with Oracle. Their position on hardware cores and VMs that are not based on Oracle VM is untenable when challenged. And in my experience they will back down every time to avoid providing ammunition to the likes of VMware, who would cry restraint of trade if they could only get the evidence.
I moved large amounts of databases to bigger hardware using VMs and paid Oracle no extra money.
"Then you need to negotiate harder with Oracle." No, they need to deliver what customers want on the menu. I moved nearly everything to SQL Server clusters over the last two years due to Oracle refusing to acknowledge virtualisation. Much lower TCO and far fewer issues; new databases created instantly rather than in minutes. God knows how Oracle sell anything these days...
"I moved large amounts of databases to bigger hardware using VMs and paid Oracle no extra money."
Agree with the premise, but I would be cautious with Oracle and VMware. Did Oracle actually give you something in writing that states that VMware counts as a hard partition? I have never seen that done. If you don't have the documentation, Oracle might decide not to charge you now... but if there is ever an audit that could be a problem.
Oracle's core factor table uses a .25 CPU licence for 1 core of the 16-core T3 processor, and .5 for 1 core of the 8-core T4 processor. I have a feeling they will use a .25 CPU licence for the 16-core T5 (of course, they might get greedy).
At .25 licences/core, a 2-socket T5 would effectively be 8 CPU licences. At .5 licences/core, a 2-socket T5 would be 16 licences.
Oracle also recognizes LDOMs with whole-core constraints as valid CPU boundaries. So if you don't want to use all 32 cores in a 2-socket T5, use LDOMs to control how many cores are deployed...
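The licence arithmetic above is simple enough to sketch in Python. The core factors here are the guesses from this comment, not Oracle's official table (a later reply disputes the T5 value), and `licences_needed` is just an illustrative helper:

```python
import math

# Illustrative sketch of Oracle's per-processor licence arithmetic:
# required licences = ceil(total cores x core factor).
def licences_needed(sockets, cores_per_socket, core_factor):
    return math.ceil(sockets * cores_per_socket * core_factor)

# A 2-socket, 16-cores-per-socket T5 box under the two factors discussed:
print(licences_needed(2, 16, 0.25))  # 8 licences if T5 gets the T3's 0.25 factor
print(licences_needed(2, 16, 0.5))   # 16 licences if it gets the T4's 0.5 factor
```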
So what's your point? Do you concur with what I posted?
And what is wrong with partitioning? I find the entire time-sliced virtual machine paradigm silly, especially for big workloads that need lots of CPUs (crunching a few TB of data as fast as possible, for instance). We built a 100TB data warehouse; try running that on something like VMware.
I'd prefer "partitioning" over "Classic x86 VMs" any day.
No, I am just saying that perhaps you should "read the manual" before rambling about 0.25 licenses on the T5. It would have taken you around 15 seconds to Google the right answer. Again, most serious people who actually work with Oracle have these lists as bookmarks.
And with regard to virtualization versus partitioning: well, in real life workloads fluctuate; through the day/week/month/year/hour/minute/second a workload might have very different needs for processor resources.
On a machine like, for example, a POWER server using POWERVM, I can simply reflect this by allocating different amounts of virtual capacity and guaranteed physical capacity.
So say I have a combined workload on a production system which averages 3 cores of usage and peaks at, for example, 9 cores several times through the day. Then I can simply allocate 9 (or 10, to be on the safe side) cores of virtual capacity and 3 cores of guaranteed physical capacity.
On your T5-X machine you would allocate something like a whole chip with 16 cores.
Now 8 of these virtual machines would fill up your T5-8, but for example a power 760 would still only be half full, when it comes to processor resources.
The difference between physical and virtual capacity is normally called the overcommitment factor, and depending on your workloads it normally ranges between 2 and 5 on POWER.
Basically, the machine becomes 2-5 times bigger; you just need to be able to absorb the combined peaks at any time.
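A rough sketch of that arithmetic in Python. The per-LPAR numbers are the illustrative 10 virtual / 3 guaranteed cores from the example above, not measurements:

```python
# Overcommitment arithmetic for the example above: eight LPARs, each with
# 10 cores of virtual (peak) capacity and 3 cores of guaranteed physical
# capacity, packed onto one machine.
workloads = [(10, 3)] * 8  # (virtual cores, guaranteed physical cores) per LPAR

virtual = sum(v for v, _ in workloads)
physical = sum(p for _, p in workloads)
print(f"virtual capacity:  {virtual} cores")              # 80
print(f"guaranteed:        {physical} cores")             # 24
print(f"overcommit factor: {virtual / physical:.1f}")     # ~3.3, inside the 2-5 range quoted
```

The whole bet is that the eight workloads don't all peak at the same instant; the guaranteed capacity is what you fall back on if they do.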
Now surely there is operating-system-level virtualization like Zones, WPARs etc. But these do have their limits and do not provide good enough separation IMHO to mix different types of landscapes, and certainly not, if you are a hosting provider, different clients.
Not sure if you know this, but because this information is available on the internet, it seems moot to have to RTFM to post in a b!tch!ng session on an online forum.
And I had the IBM folks come down to my office and do the dog & pony...song & dance. Needless to say, I was unimpressed. A lot of what IBM is doing today seems to be in direct response to Oracle's Exa-****.
Also, anyone who has designed systems knows that you need to have capacity for peak loads. In clustered scenarios it is even more important to have 1/nth of the capacity available on each node of an n+1 cluster.
Overcommitment/oversubscription is nothing new. All multi-tasking operating systems have been doing something similar for decades. Usually when that happens on a host, its performance tanks.
The virtualization model in LDOMs is better because it bypasses the oversubscription issue of other virtualization solutions, and it's WYSIWYG: everything down to the I/O slots can be partitioned.
Now there is a big difference between "available on the internet", and then it being listed on the official list that is posted by Oracle. Get over it, it's no biggie. We all make mistakes.
And honestly, Exa-XXX has very little to do with SPARC; it's basically x86 only. The SPARC SuperCluster isn't really branded under the Exa product name. At least IBM's appliances are not x86 only; not that this makes me like IBM's appliances more than Oracle's.
But you do have a point: IBM is pushing appliances, just like Oracle. Appliances surface once in a while, but if you think that Exadata and related products are something new and exciting and a first in the industry, you are wrong. It has been tried and done many times, with different degrees of success. The difference is the amount of marketing muscle that Oracle is putting behind their effort compared to what has been done in the past.
As for overcommitment: are you serious? Why do you think a product like VMware is so popular?
And does the performance of multi-tasking operating systems tank when they run more than one process?
Sure, if you are happy buying 2-4x the hardware to run your apps, then by all means use LDOMs only. My advice to you is to use them in conjunction with containers where appropriate.
Honestly, even the densest of our Wintel entry-level tech guys, in offshoring countries whose names I have problems spelling, know the value of overcommitment in a virtualized environment.
I have no problem with people being fanatic about their platform of choice. But there are limits.
I think you are missing the point. The sizes of the workloads I'm referring to prevent them from running on VMware-type platforms. Albeit, once we go into the realm of 8-socket, 10-core Intel-based servers, the price differential between them and a T4-4 disappears quite rapidly.
I have deployed hundreds of containers in Solaris and as many LDOMs, although I was leery of using LDOMs until the T4s came along. What's more, I've also run a shop with VMware heavily leveraged. VMware isn't bad for small virtual machines; it becomes unwieldy when you start getting into larger machines.
Also, I was tasked with identifying cost of ownership of a VMware-based as well as a pure Oracle/Sun solution (just for comparison). I was amazed at how close they were once we factored in costs for VMware enterprise licenses and support, guest OS support (RHEL, Windows, etc.) and the hardware costs (not to mention multiple vendors complicating the support model).
It boils down to how efficiently you design your solution and how skilled your engineers are at managing the infrastructure.
Yes, VMware isn't cheap; we can easily agree on that. I've been doing 'virtualization' for, what, 14 years or so. My personal favourite is clearly POWERVM, although I've done quite a bit of z/VM, VMware, VirtualBox and KVM. I have absolutely no problem doing huge virtual machines on POWERVM; I think the largest we have here is around 32 cores and half a terabyte of RAM. I have no problem mixing test/production, different clients, different firewall zones, different OS versions etc. on the same physical machine. A machine that runs below 50% average utilization measured on a weekly basis is IMHO not configured right.
And this is not something new; this is how things have been on POWER since before Sun started shipping T2-based systems.
Well I need to write up a presentation on how T5 fits into our strategy. Although I can see that I don't have to change our strategic roadmap for the SPARC platform. But now I think I'll have a beer.
At the end of the day, a big factor in deciding what you implement is the incumbent (technology) in your shop and how the experience with it has been.
Too many times I've seen borderline (in)competent engineers flub implementing and managing a simple design just because they didn't have the dedication to learn a new way of doing things (or because the solution didn't fit into what they considered to be "cool").
The T5 is interesting, and if my micro-benchmarks of the T4 are any yardstick, the T5 (if it does provide the 20% boost in single-thread performance) will outperform most Intel-based gear out there. Add the ability to multi-thread across 128-256 strands of silicon and it's a potential winner. Unfortunately, it seems that real admins are a rare commodity these days; too many kiddies driving vCenter think they know infrastructure, if you know what I mean...
"A lot of what IBM is doing today seems to be in direct response to Oracle's Exa-****."
Appliances have been around forever. How long has Teradata been around? Although it wasn't owned by IBM at the time, Exa's architecture looks very similar to what Netezza developed years earlier.
Also, IBM was the original integrated systems company. Look at System i and its predecessors. That is a true appliance, where the OS/VM/DB/AS layers were custom engineered for the hardware down to the silicon and vice versa. All of this integration, appliances, etc talk are a validation of the IBM systems' architecture model, the centralized model vs. the distributed model.
"Oracle's core multiplication factor uses a .25 cpu license for 1 core of a 16-core T3 processor"
If Oracle gives SPARC a .25 or .5 core factor when Power systems have a 1.0 core factor, that means Oracle thinks a core of Power will run 4x or 2x the workload. If Oracle gave SPARC a .25 core factor when it could actually handle twice the workload of Power and four times the workload of x86 (whatever their latest benchmark claim is), then every Oracle software customer in the world would move to SPARC to cut their Oracle licensing cost to 1/8th of the current level... Oracle is a software company. They would never let low-margin hardware sales dictate their high-margin software support revenue. Oracle tries very hard to ensure that those core factors are accurate, because they don't want people reducing their software licensing by moving from one CPU to another.
Oracle's official position, when their money is at stake, is that SPARC can handle 1/2 or 1/4, depending on the CPU, of the workload per core that Power 7 can handle.
No, they want to push SPARC sales. Larry's wet dream is to see you all move to Oracle DB on SPARC. In chess you'd call that a gambit. SPARC needs to regain traction in the industry; Oracle's takeover meant that many Sun customers moved off SPARC, and they want them back.
Oracle provides the only real enterprise-worthy database; that is why they are so big in software. They wanna use that to leverage more hardware sales. They do not care if they cut their margins slightly on hardware, because the main goal is to inflict pain on the competition.
Traction, traction, traction ....
To optimize Oracle license cost, hunt for the best AMD/Intel chips with the fewest cores.
I still see very good dual-cores out there. It's too sad Oracle deliberately limits the virtualization options so that there is no viable option beyond x86. Running Oracle DB on anything but x86, though, is hard to justify: not only does x86 have the best per-core speed, you pay only a 0.5 processor license per x86 core.
Biting the hand that feeds IT © 1998–2019