Unix systems may not be all the rage that they were two decades ago, but in nearly eight out of 10 data centers based on them, their use is either holding steady or increasing. That's the assessment of a recent survey of the HP, IBM, and Oracle Unix customer bases by Gabriel Consulting Group, which has just finished up its …
So: Unix users use Unix (shock!)
And the ones that (still) use Unix are using more Unix.
Okaaaaaay. We generally recognise that a given number of techies can handle more Unix systems than Microsoft systems and that generally those systems are less prone to "problems" and can probably be run at higher average load factors, too.
But this survey doesn't really give any information about the number of jobs in the Unix market. It makes it sound as if the increased number of installations in Unix shops _should_ give rise to more Unix techies, but if the number of places where Unix is still spoken has decreased (albeit with a rise in its usage where the flame is still alight) then that doesn't bode well for the lovers of forward-slashes.
I would have gone to the original report to seek the answer, but all it did was cough up some error about "TCPDF error: Font file not found". I wonder if the report was produced on a Unix box.
... is written in PHP, with all that entails.
Possibly developed on linux, though I expect it to run on just about any php-capable httpd regardless of OS.
Paris, because I've been smrt today.
Didn't look at employment...
The job market for Unix gurus is something we didn't go into with the survey. What we did see was that a decent number of shops that have commercial Unix plan to use more of it (about 45% of them said that). That doesn't necessarily mean a significant increase in employment, though, given that these systems are designed to enable fewer techies to manage more gear and workloads.
The report is just a pdf document embedded on the blog on our main site. You can also get it in our recent research section. And, no, it wasn't produced on a Unix box....lol....
Despite the similarities between Unix and various commercial-grade Linuxes (meaning they have third-party tech support from reputable companies), Linux has not become a replacement for Unix.
Well, where I work we are getting rid of our Slowaris systems ASAP. They are being replaced with RHEL. Why? Oracle have hiked the maintenance charges by a frankly silly amount. Add to this the fact that we run Oracle on HP/Sux, and you can see that the writing is on the wall for both Oracle and HP kit.
HP might get the Blades that are going to replace them but the ball is in the air on that one. The betting in my team is that we buy from 3 or possibly 4 different vendors. That way we don't get shafted after 2 years when Maker A ramps up the costs.
Anon coz we don't want a bevy (or should that be a phalanx?) of Oracle salesdroids coming a-calling.
Misunderstood final data point and request for further research
The last chart of your article does not pose the question "what percentage of your mission-critical apps are on Unix", to which your penultimate paragraph speaks. The chart asks "how many of your UNIX apps are mission critical".
Great article, but I think it's horribly biased if you didn't poll SuSE and Red Hat customers, as many of them may have very small Unix install bases that aren't tracked by the Unix vendors.
What would be interesting is a follow-up as to the size of the install bases of UNIX vs. Linux systems in those same companies and the percentage of their apps that run on UNIX. Half of the time my company selects UNIX over Linux is because a software vendor doesn't support Linux.
Good to know UNIX is not dead and people are still buying big iron, but there are too many variables left unexplored to get a decent picture of the Unix-to-Linux exodus that I wish this article had explored.
So "a recent survey of the HP, IBM, and Oracle Unix customer bases" found that they mostly use Unix.
Isn't that a bit like finding that owners of Renaults and Citroens mostly drive French cars?
"Linux has not become a replacement for Unix"
There is a gigantic penguin in this room. It may be hard to spot behind all the salesdroids at first...
linux all around
at least in the semiconductor / chip designing industry in which i work, linux has replaced solaris everywhere and i mean everywhere.
you just cannot compare the savings in run time on linux machines as opposed to solaris machines.
I remember some CEO of a big firm that replaced Solaris with Linux, and claimed that the Linux solution was so much faster. That was great, until I found out that he had replaced 800 old SPARC servers with 4,000 new x86 servers with dual Intel Core Duo CPUs running Linux.
Do you think it would be fair if I compared Linux on an 800MHz SPARC server to Solaris running on an x86 server with dual Intel Core Duos at 2.4GHz?
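As a back-of-the-envelope check, the replacement described above multiplied the server count alone by five, before even counting cores or clock speed, so "Linux was faster" says little about the OS itself. A minimal Python sketch, using only the figures quoted above:

```python
# Rough normalization of the "Linux was so much faster" claim above:
# 800 old SPARC servers were replaced by 4,000 dual-CPU x86 servers.
old_servers = 800
new_servers = 4_000

server_ratio = new_servers / old_servers
print(f"{server_ratio:.0f}x as many servers")  # 5x the boxes before
                                               # counting cores or GHz
```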
There are many cases where Solaris gives higher performance than Linux, on the same hardware. For instance, official SAP benchmarks.
'Unix for mission critical roles, Linux not so much' is misleading
While certain types of workload seem to be popularly run on Unix, often things like corporate databases, it would be misleading to characterize the Linux workloads as non-mission-critical.
Unix seems to be often favoured for database servers, applications servers, and often web servers.
Linux, on the other hand, is very heavily used in firewalls, filter proxies, DNS servers, DHCP servers, IP administration systems, intrusion detection, intrusion prevention, network monitoring, server monitoring, packet capture and analysis systems, FTP, syslog servers and similar infrastructure components. Obviously failures in some of these will be vastly disruptive to delivery of services. These days, scratch an appliance, and you'll find Linux under the hood.
If I had to classify the situation, it would be that Unix is currently strong at the applications level, while Linux is coming to dominate the infrastructure level.
Linux, Sun & Oracle
Sun was the reference platform for Oracle for the longest time. Then it became Linux. The idea that you can't run your "mission critical" database on Linux is silly. So is the idea that corporations generally don't do this. It's the top Oracle platform for crying out loud.
Linux isn't just for appliances and hasn't been for quite a long time now.
Companies are averse to change, though, and this is especially true of large companies. But even these are, and have been, moving to Linux for "real work". The whole shenanigan with Sun and Oracle has probably even accelerated this. Then there's this whole nonsense with Itanium.
Yeah, I agree...that's what we see too
What we're seeing is that commercial Unix platforms, over time, have moved into sort of a quasi-mainframe role in many enterprises. They tend to run the apps that need a high level of vertical scalability, single-system high availability, etc. etc.
This isn't to say that Linux (or Windows or mainframes) don't have mission critical roles too - they absolutely do. And, as you point out, infrastructure is mission critical and Linux has become the choice platform for infrastructure.
What we were trying to find out with those particular survey questions was if customers view Linux as a direct substitute for commercial Unix and if they believe that the two are equal when it comes to playing in the current commercial Unix sweet spot.
"...Sun was the reference platform for Oracle for the longest time. Then it became Linux...It's the top Oracle platform for crying out loud."
No, this is not really correct. Solaris is the most common platform that Oracle runs on, according to Larry Ellison, and he should know. Also, the "reference platform" to run Oracle DB on is Solaris. It was for a long time before Oracle bought Sun, and continues to be even today.
Unix users mostly use Unix
Stunning revelation that one.
I wonder what Windows users mostly use? OS X maybe?
"I wonder what Windows users mostly use?"
The reset button?
UNIX or Linux
Truth is, all you need to do is flick the power switch to ON and "Fuggedaboutit". Either is better than the alternatives.
Linux *is* way bigger than Unix installs, though. Otherwise, I've been living in a parallel universe for a looooong time. Me and dozens of other sysadmins.
There *are* a lot of BSD installs, but not much else these days. Linux is huge.
RE: "Careful on that last graph" and "Misunderstood final data point..."
Hi, I'm Dan Olds and my company did the survey. What we asked in that "mission critical" question was the proportion of mission critical workloads on users' current commercial Unix platforms. What we wanted to see was whether these platforms were primarily hosting highly important apps vs. hosting some highly important apps plus being general purpose systems.
There are lots of mission critical apps on mainframes, Linux and even Windows platforms, of course. But that question was ONLY asking about the proportion of mission critical apps on commercial Unix hosts. It wasn't asking about the distribution of mission critical apps on, for example, Unix vs. mainframe vs. Linux vs. Windows...
The problem is....
that even though Linux provides a UNIX-like programming and application environment, when it comes to enterprise features, even the best Linux distro is not as easy to keep running as the best of the UNIX platforms.
I'm biased, I admit. I earn my living supporting AIX. But if there is a problem on one of 'my' AIX systems, it reports it to me, gathers the debug information, and on the ones so configured will even call the problem in to IBM. Often, if it is a duplexed part like a power supply, fan or disk, the part can be replaced without taking the service down, and even PCI cards can be hot-swapped on many models. CPU and memory failure can even happen and the system can continue running. It's not quite Non-Stop but...
If mission criticality is an issue, it is possible to configure a system such that the partition can be migrated on the fly to another suitable system. AIX has been able to do live partition migration for a few years now.
It is just easier using AIX than trying to patch together something similar with ESX or other virtualisation technology. This may change over time, but it has not yet, and I cannot see any real evidence that any of the large distro providers are doing anything about it.
The standard complaint I hear is that some people regard UNIX as 'backward' compared to Linux, but that is the price of stability, and I'm sure that BSD users will say the same. I would say that Linux runs the risk of stumbling while it is running forward.
I do also support SuSE systems, and run Ubuntu on my own systems, and there is no doubt in my mind that if asked (and there was no real financial hurdle), I would recommend an AIX system over a Linux one (but, of course, Linux over Windows).
When I talk to people who have grown up with Linux without having used UNIX, it is clear that without that perspective, they just cannot realize the difference, and just regard Linux as UNIX on the cheap.
The AIX admin from the alternate universe.
I wish AIX was like this. Really I do. That would mean that the AIX machines that I have been forced to use in the past would have been much more robust than they turned out in practice. Your description of AIX sounds like something out of an IBM ad rather than actual field experience.
The problem with "UNIX" is primarily that it is expensive and largely unnecessary.
The added expense of the hardware you're describing (above and beyond the mere garden variety Unix RISC servers) is why clustering has become popular in certain applications in recent years.
Big expensive box or cluster of x86-64 using app-based clustering or ESX with VMotion? I know which option most shops are choosing.
Agreed. Proprietary Unix differs from Linux in the bespoke hardware it runs on. Manufacturers go to great lengths to offer high availability, performance and scalability features with Unix, while Linux is offered more as a commodity, running on commodity hardware.
Unix is tailored to the enterprise, offering built-in support for fibre, SANs etc., and various "business continuity" features and fault management frameworks.
Linux on the other hand, though it is fit for enterprise stuff, is more commonly found in LAMP stacks and other commodity roles, usually running on off-the-shelf hardware rather than purpose built kit.
Regarding Unix jobs, Linux offers a very handy way forward for unix bods feeling the squeeze.
My 'alternative' universe. What's yours like?
I said up front that I make a living supporting AIX. As it happens, I am currently contracting for IBM on a customer site, and have in the past been an IBM employee for a number of years.
But with my 20+ years of AIX (mostly outside of IBM) and over 30 years of other UNIX experience including 10 years of Linux in fields such as banking, utility, engineering, education and government, on systems running from micro-processors through departmental minis to Amdahl mainframes, AIX really has been this easy, at least if sensible design (i.e. like the manuals say plus a bit of common sense) has been followed. And it is still improving! (no, this is not a sales pitch, merely my observations).
I will stand my UNIX experience up against anybody else's. When I started working with UNIX in 1978, there were about half-a-dozen UNIX systems in the UK, and the total number of people with any experience in the UK probably did not exceed 100. And I have worked almost continuously with UNIX ever since.
Back to AIX: no platform is without warts, and as good as I perceive it to be, sometimes you have problems. But where I am currently, we have in the area of my responsibility 300+ AIX systems, being thrashed (literally) 24 hours a day, with tens of TB of data changing on a daily basis, managed by a team of 5 people, some of whom have other responsibilities. On the same site, we have large Linux and Windows deployments, and there is also a mainframe doing critical work.
Our current uptime on the AIX systems is low at around 60 days (having had some global power work done in the last two months), but normally runs into the hundreds of days. In those 60 days, we have had about 8 disk failures out of an estate of about 4000, all of which were handled without any outage (including system disks). In the past, we have had memory failures, with the systems continuing to run until a convenient time to move the workload, and CPUs taken out of service in the same manner. We've also replaced complete RAID adapters (in an HA RAID environment), power supplies and cooling components without losing service. This is, BTW, a clustered environment.
We are just about to embark on replacing 100s of RAID adapter cache batteries, and we do not expect to take *any* service impact at all during the work.
I would suggest that if the systems you 'have been forced' to use have been a bad experience, either you are not giving the whole picture (like if you think that you need the latest and greatest Open Source products - which would really be an application problem, not a deficiency of AIX or POWER platform), or there has not been due diligence in setting them up. Get someone who knows what they are doing in on the installation!
I have often found that sites tend to be partisan. Solaris or HP/UX sites often do not embrace AIX enough to understand how to run it properly, and vice-versa. But I do try to keep an open mind, and I do appreciate that I am not as knowledgeable of more recent Solaris or HP/UX systems as I am AIX. But in recent years, I have perceived them to be less innovative than the IBM offering, and when I last had serious work to do on them they just felt like they had been left in the last century when it comes to RAS and sysadmin tasks. But that's my opinion. I'm sure there are other opinions out there.
But I would say that AIX looks destined to be the last Genetic UNIX standing, given HP and Oracle's current attitude towards their products, and Linux still has a way to go in enterprise environments to replace it. I hope so, anyway, as I would like to get to retirement age without losing my career!
Oh.. sure.. you just take your 40 core and 700GB RAM virtual machine and Vmotion it from one machine to another .. over your 4x10GBit link aggregated management network.
Now there are people who are really driving a Ford F150/F250/F350 when they say that they drive a truck, and then there are people that are driving an eighteen wheeler.
Now that doesn't mean that they aren't really both right, and that the guy who drives the eighteen wheeler doesn't have a F150 at home. But there is a difference.
And btw Linux runs just fine on POWER :)=
// Jesper writing from a Linux box.
AIX the "last Genetic UNIX"??
Mister Gathercole ... I know who you are, and I'm fairly certain you know who I am. So pardon me while I pooh-pooh your premise.
The only true remaining "genetic" UNIX is BSD ... Slackware comes close, in the Linux camp.
Solaris is free and runs on PCs. How cheap do you want to go?
RE: Solaris is free ...
Umm ... no it ain't any more, not since Larry bought up Sun.
unless you want to download it every 90 days and do a complete install, and even then you're probably on dodgy ground.
We currently use a copy of Solaris from before the license changed (and when we signed up for a large number of copies every time we downloaded), but the people we've spoken to are unsure if we can install the old version on newer Sun/Dell/Oracle kit.
The problem with regarding BSD as a Genetic UNIX is that there is no AT&T code in it after the huge brouhaha over removing any code that was covered by the UNIX V7 educational licence that BSD relied on in the 1980s!
A UNIX educational license specifically prohibits the use of Bell Labs/AT&T UNIX code in a commercial OS offering (I actually was a Bell Labs V6 and AT&T V7 UNIX license holder for a number of years) or even for teaching purposes, and UNIX System Laboratories took the Regents of the University of California, Berkeley to court to enforce this when they (UCB) started commercialising BSD. BSD did not take out a System III or System V license to cover any code, they just replaced it, leading to BSD/Lite and FreeBSD.
My view is at odds with what Wikipedia says about BSD in the main article. I regard there to be a requirement for actual code, not just design ideas, in a UNIX for it to be considered a 'Genetic' UNIX.
Also, in order to use the UNIX trademark, it is necessary for a UNIX-like OS to be subjected to, and pass, the Single UNIX Specification (SUS) verification suite. AIX does, as do Solaris, HP/UX, Tru64 UNIX and SCO UnixWare. Linux and BSD do not, so cannot legally be called UNIX.
Darwin/Mac OSX falls into the same "not Genetic UNIX" category, even though it qualifies for the UNIX 03 branding (a point I did not realise until I researched it just now).
And Slackware is definitely not derived from any Bell Labs/AT&T code (it's Linux, with GNU's Not UNIX code running on top like any other Linux).
See http://www.levenez.com/unix, and try to find any feed from an AT&T UNIX into Linux. There are a couple from IRIX, and a few feeds from Plan 9, but I think that these were filesystems, GL and utilities rather than principal parts of the OS.
Don't get me wrong. I have nothing against BSD as it is a family of fine OS's. But it really is UNIX-like rather than UNIX or a Genetic UNIX.
I know. I was there.
I got to Berkeley at roughly the same time as ken ... The first UNIX I used was UNIX TSS (4? 5? Lotta water under the ol' bridge ...) and I worked on BSD through Tahoe & Reno. In my mind, the current various BSDs are more UNIX-like than the commercial variations allowed to use the term "UNIX[tm]" ... except (surprise!) Apple's OSX.
Slackware is the only fairly mainstream distribution of Linux that I, personally, consider being close to UNIX-like.
And please note that while the various BSDs don't use any commercial code, ALL of the commercial UNIX[tm]s contain BSD code.
Maybe "genetic UNIX" is the wrong term ... how about "spiritual UNIX" ;-)
I think we can agree on this
I like 'Spiritual UNIX'.
On the subject of commercial UNIXes using BSD code, if you publish under a permissive license, people will use it. But that's the plan, isn't it? :-)
Thanks for the interesting dialogue.
@ Peter Gathercole
I cannot boast 30 years of unix experience, only 22, and little of that with AIX. I would say that the most advanced unix is probably Solaris, with which Sun have introduced many innovations over the years. AIX is successful because it is more proprietary in nature and integrates closely with IBM's dedicated hardware, allowing, for example, the feats of business continuity outlined in your post.
" ...Linux still has a way to go in enterprise environments to replace [AIX]. I hope so, anyway, as I would like to get to retirement age without losing my career!"
You could be a Linux admin guru by this time next month if you want. In admin terms, AIX and Linux share some important elements, e.g. LVM.
Indeed, that is, and was, the plan ...
Just don't tell Apple ... It hurts their marketroids' feelings when you remind them that, sad to say, St. Steve didn't actually invent it ;-)
Next round's on me, if you're ever in the San Francisco Bay Area's North Bay. I suspect we could bore our .sig others with war stories for a few hours^Wdays ...
"...But in recent years, I have perceived [Solaris/HP-UX] to be less innovative than the IBM offering..."
I certainly don't agree with you. I mean, IBM's offerings have scaling problems and do not scale as well as Solaris (TPC-C; AIX scaling was rewritten to handle the P795 with a measly 256 cores).
IBM AIX is copying Solaris DTrace and renaming it ProbeVue.
IBM for many years trash-talked Sun's Niagara and said that 1-2 cores at high clock speed, such as 5-6GHz, were the future, because databases like strong cores. To use many cores at lower speeds was just a bad idea, said IBM. One strong core was the future. And yet today POWER7 does not have 1 core at 8-9GHz; instead it has many lower-clocked cores, just like Niagara. Sun realized that the GHz race would shift to a many-core race, but IBM did not understand that until POWER7. POWER6 was 5GHz and 2 cores. POWER7 is not 6-7GHz and 1-2 cores. So the future is not 1-2 strong cores. Back in the Sun days, 8 cores in a CPU was just crazy; no one had that except Sun. Today Oracle aims for 512 threads in one CPU, which is crazy today. But tomorrow everyone will have it. IBM will copy that many threads too.
And take, for instance, Solaris ZFS: I don't know of any IBM storage solution that protects your data as well as ZFS does. It would not surprise me if IBM copies ZFS too, soon.
Sun first released their Container full of servers, called Black Box. And some time later, IBM also started to offer a container full of servers.
etc etc etc.
What techniques has IBM created that Solaris copied? You talk about "recent years". Can you give an example?
"...But I would say that AIX looks destined to the the last Genetic UNIX standing, given HP and Oracle's current attitude towards their products, and Linux still has a way to go in enterprise environments to replace it...."
So what do you say about official IBM statements to media and Linux conferences, that AIX will be killed off and replaced with Linux, some time in the future? I mean, HP-UX will still support Itanium 10 years from now even if Itanium is killed today. So in the enterprise setting we talk about 10 year time horizons. Not next year.
What do you say about the trend of x86 CPUs catching up on POWER performance? I mean, POWER6 servers were several times faster than x86, and cost 5-10x more.
POWER7 is 10% faster than Intel Westmere-EX and costs 3x more.
Next year, Ivy Bridge will be 40% faster than Westmere-EX, according to official Intel statements. x86 is gaining performance at a higher rate than POWER is.
Does this mean that POWER8 will be slower than x86? In that case, POWER8 needs to be really cheap. And we all know that IBM only does high-margin business. IBM will kill off POWER if it is too cheap, and will replace everything with fast and cheap x86 running Linux. Coincidentally, IBM has officially confirmed this: "AIX to be replaced with Linux". See the post above. This is not something I make up; it is true. IBM has officially said this. I am not spreading false rumours; this is true.
I'm not intending to start an OS war, nor criticise Solaris (although I must admit that some statements I made could have been considered contentious). The original intention of my comments was to indicate where Linux lacks the enterprise features other UNIXes have, and I was using AIX as the example, possibly in a rather blunt manner.
Doing a bit of digging on Solaris features, I find that Solaris and AIX both have an extensive set, and many of them are comparable on a like-for-like basis. I do not intend to do a comparison, nor do I wish to compare when things were introduced, because there were novel innovations that were copied by the other in both OS's.
I think that if we were actually to compare notes, we may find that the capabilities of both OS's are comparable, with Solaris having an edge on things like NFS implementation, ZFS and DTrace, and AIX with GPFS, some of the partitioning capabilities and possibly compiler technology.
So it is probably not possible to objectively crown a 'Most Advanced UNIX', and any distinction is likely to be subjective and open to debate. Let's agree that proprietary UNIXes continue to have a place in the datacentre, and encourage our Linux developer colleagues to continue to aspire to produce features that really will make Linux a suitable alternative platform for enterprise workloads.
In terms of becoming a Linux admin guru, I suspect that it is easier to go from either AIX or Solaris to Linux, rather than the other way round.
"I certainly dont agree with you. I mean, IBM offerings has scaling problems and does not scale as well as Solaris (TPC-C, AIX scaling was rewritten to handle P795 with a measly 256 cores)."
Ehh? What the f-word are you talking about? The POWER server platform actually has good scaling. Well, at least compared to anything Oracle can muster.
Let's have a look at perhaps the most easily scaled benchmark, SPECint_rate2006.
Here, going from 128 cores to 256 cores, the new SPARC64+ based M9000 has a scaling factor of 91.6% against an ideal 2x, while the POWER 795, going all the way from 32 cores to 256 cores, has a scaling factor of 98.7%.
But we can of course also look at raw performance numbers: on SAP 2-tier, the 256-core M9000 with the 2.88GHz SPARC64 VII does 175,600 SAPS, while the POWER 795 does 688,630 SAPS. I mean, even the 64-core POWER 780 does 202,180 SAPS.
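The scaling factors and the per-core gap are easy to recompute. A quick Python sketch; the SPECint_rate inputs below are placeholder values chosen only to illustrate the formula (the real submissions are on spec.org), while the SAPS figures are the ones quoted above:

```python
# Scaling efficiency: measured throughput of the large configuration
# divided by the ideal linear scale-up of the small configuration.
def scaling_efficiency(perf_small, cores_small, perf_large, cores_large):
    ideal = perf_small * (cores_large / cores_small)
    return perf_large / ideal

# Placeholder inputs picked to reproduce the quoted factors:
print(scaling_efficiency(1000, 128, 1832, 256))  # 0.916 (M9000, 128 -> 256 cores)
print(scaling_efficiency(1000, 32, 7896, 256))   # 0.987 (POWER 795, 32 -> 256 cores)

# Per-core SAPS from the SAP 2-tier results quoted above:
print(175_600 / 256)  # M9000:     ~686 SAPS per core
print(688_630 / 256)  # POWER 795: ~2690 SAPS per core
print(202_180 / 64)   # POWER 780: ~3159 SAPS per core
```

Note that a scaling factor says nothing about absolute speed, which is why the raw SAPS comparison is listed separately.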
Come on... Don't throw stones when you live in a glass house filled with china.
"IBM AIX is copying Solaris DTrace and renaming it Probevue."
Well, just cause others are doing neat stuff doesn't mean that you shouldn't do it yourself.
"IBM for many years trash talked Sun Niagara and said that 1-2 cores at high clock speed such as 5-6GHz is the future, because data bases like strong cores. To use many cores at lower speeds is just a bad idea, said IBM. One strong core is the future. "
You do know that POWER has managed to do all three things: put more cores on a chip AND increase the per-core throughput AND the per-socket throughput.
SPECjbb2005, for example:
POWER 780 with 8 chips and 64 POWER7 cores does 5,087,469 BOPS.
POWER 595 with 32 chips and 64 POWER6 cores does 3,435,485 BOPS.
That is a difference of 48% in per-core throughput.
And let's have a look at SPECint_rate2006:
POWER 780 with 8 chips and 64 POWER7 cores does 2740.
POWER 595 with 32 chips and 64 POWER6 cores does 2160.
That is a difference of 27% in per-core throughput.
And that's not even with the highest-clocked POWER7.
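The two percentages above can be checked directly; since both configurations have 64 cores, the per-core ratio reduces to the ratio of the total results. A quick Python sketch:

```python
# Per-core throughput gain between two systems with equal core counts:
# (new_total / cores) / (old_total / cores) - 1, which simplifies to
# the ratio of the totals minus one.
def per_core_gain(new_total, old_total, cores=64):
    return (new_total / cores) / (old_total / cores) - 1

print(f"{per_core_gain(5_087_469, 3_435_485):.0%}")  # SPECjbb2005: 48%
print(f"{per_core_gain(2740, 2160):.0%}")            # SPECint_rate2006: 27%
```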
Again, you have absolutely no clue whatsoever.
"And now today POWER7 does not have 1 core at 8-9GHz, but instead it has many lower clocked cores, just like Niagara. Sun realized that GHz race will shift to many core race, but IBM did not understand that until POWER7. POWER6 was 5GHz and 2 cores. POWER7 is not 6-7GHz and 1-2 cores. So, the future is not 1-2 strong cores. Back in the Sun days, 8 cores in a cpu was just crazy, no one had that, except Sun. Today Oracle aim for 512 threads in one cpu, which is crazy today. But tomorrow everyone will have it. IBM will copy that many threads too."
Eh.. again with your GHz.. you sound like an IBM Mainframe sales guy.
And you don't get it, again: in servers it's about having as good a combination as possible of per-thread, per-core and per-socket throughput.
And to be quite blunt, Sun/Oracle have in recent years failed to be in the top pack on more than one of these at the same time. POWER, on the other hand, has managed to increase all of them - the per-thread, per-core, per-chip and per-socket throughput - from generation to generation.
And you seriously need a history lesson, 'cause you do seem to totally ignore facts.
Now comparing SPARC and POWER history wise.
More than one core per chip.
2001 POWER4 2 cores per chip
2005 Xenon 3 cores per chip.
2006 Cell 1+8 cores per chip
2007 BlueGene/P chip 4 cores per chip
2010 POWER7 chip 8 cores per chip.
2004 UltraSPARC IV 2 cores per chip.
2005 UltraSPARC T1 8 cores per chip.
2007 SPARC64 VI 2 cores per chip.
2008 SPARC64 VII 4 cores per chip
2009 SPARC64 VIIIfx 8 cores per chip
2010 SPARC T3 16 cores per chip
More than one chip per socket.
2001 POWER4 MCM modules for the p690 resulting in 4 chips/8 cores/8 threads per socket.
2004 POWER5 MCM modules for the p595 resulting in 4 chips/8 cores/16 threads per socket.
2005 POWER5+ QCM modules for the p520/550/560 resulting in 2 chips/4 cores/8 threads per socket
2011 POWER7 MCM modules for the POWER 775 resulting in 4 chips/32 cores/128 threads per socket
SUN/Oracle doesn't have/use this technology.
2000 RS64 IV implemented 2 way Coarse Grained MultiThreading.
2004 POWER5 implemented 2 way Simultaneous MultiThreading.
2010 POWER7 implemented 4 way Simultaneous MultiThreading.
2005 UltraSPARC T1 implemented 4 way Fine Grained MultiThreading.
2007 UltraSPARC T2 implemented 8 way Fine Grained MultiThreading.
2007 SPARC64 VI implemented 2 way Coarse Grained MultiThreading.
2008 SPARC64 VII implemented 2 way Simultaneous MultiThreading.
Do I really need to say more?
"And for instance Solaris ZFS, I dont know of any IBM storage solution that protects your data as well as ZFS does. It would not surprise me if IBM copy ZFS too, soon."
I think the guy that put this best was Linus, when he asked why a filesystem has to do that.
And cool as it is.. I have to say I agree with Linus; this is perhaps taking the role of the filesystem one step too far.
As for putting computers into a container... well... I would hardly call that innovation. We have them.. it's stupid in most cases IMHO, but as a hack to have variable capacity that you can move from location to location it's ok.
"etc etc etc."
So that is basically your argument? You couldn't
"What techniques has IBM created, that Solaris copied? You talk about "recent years". Can you give an example?"
You have to be kidding? You forget that IBM is a hundred-year-old company that pretty much invented the computer industry together with companies like NCR, Xerox, Bell Labs ... and later DEC and ...
And to be honest Oracle (SUN) is kind of new in that perspective. So what about.....
The RISC processor ?
The hard disk ?
The tape drive ?
Logical Volume Manager.
Software package management system.
Solaris Jumpstart (AIX NIM)
And as far as I understand, Solaris 11 will kind of start to use an ODM system like the one found in AIX.
The Hypervisor used on the T series ?
All due respect to Oracle (Sun), but they are not one of the inventors of the computer business; they will always have to build on stuff made by others, simply because they are a much younger company.
And the rest is just your normal B******, I think people are pretty tired of hearing it. I mean, even 'serious' IT people who really dig Solaris and SPARC (and there are a lot of those) are distancing themselves from you. Perhaps you need to find a new tune to play.
Don't want to have a flame match, but much of Sun's more recent innovation happened between 2000 and 2005, with the exception of LDOMs, which look as much like a copy of IBM's LPARs as WPARs were a copy of Containers.
IBM keep adding new features in the virtualization area, as well as RAS, parallelization (which if you don't work with MPI programs, will be completely invisible to you) and large system integration and clustering. See the AIX 6.1 and AIX 7.1 release notes, which summarise the new features quite well.
I was not commenting on Power vs. SPARC vs. x86_64, as that is a discussion for a completely different news story. You definitely made some good points, although what makes customers continue to buy a platform is the combination of hardware, OS and applications, not just the best of one. We'll see what happens over the next few years, I guess.
"...Ehh ? What the f word are you talking about. The POWER server platform actually has good scaling. Well at least compared to anything Oracle can muster..."
Sure, you talk about one benchmark: SPECint. And you prove that POWER scales well on SPECint, which is an easily parallelisable benchmark. Does this mean you have proved that POWER scales well in general? I don't agree with you:
I read here on this site that AIX needed to be rewritten to handle the P795, with 256 cores. So IBM could not handle 256 cores until just recently? Isn't that bad scaling?
I also heard that IBM cannot scale its TPC-C clusters to counter Oracle's TPC-C world record. Isn't that bad scaling? Will IBM ever be able to break Oracle's TPC-C record?
I also heard that IBM's biggest mainframes only have 24 CPUs. Why not bigger? Problem with scaling? I'm just asking.
If we talk about Solaris 11, it has been rewritten to handle big Oracle servers with 16,384 threads. Even the old Solaris 10 handled 256 threads. Sun sold old SPARC servers with up to 144 CPUs.
"...You do know that POWER have managed to do all three things. Put more cores on a chip AND increase the per core throughput and socket throughput..."
No, that is not what I am talking about. I am not talking about whether IBM increased throughput and needed to lower the GHz to stay within a reasonable wattage.
I am referring to when IBM explained that the future is in 1-2 super-fast cores at 5GHz and higher, because databases prefer strong cores with good single-thread performance. When I look at POWER7, I don't see 1-2 cores clocked higher than POWER6, at 6GHz or 7GHz. Instead, I see many cores, clocked lower than POWER6, going under 5GHz. I wonder if POWER8 will have more cores than POWER7, and even lower GHz, straying even further from the "1 super-fast core at 6-7GHz" idea? Don't you agree that IBM has abandoned the "1-2 super-fast cores" approach and followed Sun's "many lower-clocked cores" instead?
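The cores-versus-clock tradeoff being argued here can be sketched with some back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not measured vendor numbers:

```python
# Back-of-the-envelope comparison of "few fast cores" vs "many slower cores".
# The numbers are hypothetical, chosen only to illustrate the tradeoff.

def aggregate_throughput(cores: int, clock_ghz: float, ipc: float) -> float:
    """Crude throughput model: cores x clock x instructions-per-cycle."""
    return cores * clock_ghz * ipc

# A hypothetical 2-core chip at 5 GHz (strong single-thread performance)
few_fast = aggregate_throughput(cores=2, clock_ghz=5.0, ipc=4)

# A hypothetical 8-core chip at 4 GHz (more, lower-clocked cores)
many_slow = aggregate_throughput(cores=8, clock_ghz=4.0, ipc=4)

print(few_fast)   # 40.0 "units" of aggregate throughput
print(many_slow)  # 128.0 -- much more total throughput, but no single
                  # thread runs faster than the 4 GHz clock allows
```

This is exactly the tension in the debate: the many-core chip wins on aggregate throughput, while a single database thread only sees the per-core clock.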
"...Again you have absolutely no clue what so ever..." - does this mean you think that POWER7 is more similar to a single core cpu at 6-7GHz, than a cpu with many cores at 3-4GHz?
"...I think the guy that put this best was Linus, when he asked why does a filesystem have to do that?... And cool as it is.. I have to say I agree with Linus, this is perhaps taking the role of the filesystem one step too far..."
I certainly don't agree with you. As ZFS creator Jeff Bonwick explained:
"The job of any filesystem boils down to this: when asked to read a block, it should return the same data that was previously written to that block. If it can't do that -- because the disk is offline or the data has been damaged or tampered with -- it should detect this and return an error...Incredibly, most filesystems fail this test. They depend on the underlying hardware to detect and report errors. If a disk simply returns bad data, the average filesystem won't even detect it."
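The end-to-end checksumming Bonwick describes can be sketched in a few lines. This is a toy model only: real ZFS stores checksums in parent blocks, uses Fletcher or SHA-256, and can self-heal from redundant copies, none of which is modelled here.

```python
# Toy sketch of end-to-end checksumming: store a checksum with each
# block on write, verify it on every read. A "dumb" filesystem would
# simply trust whatever bytes the disk returns.
import hashlib

storage = {}  # block_id -> (data, checksum); stands in for the disk

def write_block(block_id: int, data: bytes) -> None:
    storage[block_id] = (data, hashlib.sha256(data).hexdigest())

def read_block(block_id: int) -> bytes:
    data, stored_sum = storage[block_id]
    if hashlib.sha256(data).hexdigest() != stored_sum:
        raise IOError(f"checksum mismatch on block {block_id}: "
                      "silent corruption detected")
    return data

write_block(0, b"experiment data")
assert read_block(0) == b"experiment data"  # clean read passes

# Simulate the disk (or a flaky FC switch) silently flipping a byte
# underneath the filesystem, while the stored checksum stays stale:
_, chk = storage[0]
storage[0] = (b"experimXnt data", chk)
try:
    read_block(0)
except IOError as e:
    print("caught:", e)  # a non-checksumming filesystem would happily
                         # return the corrupted data instead
```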
I also know that several large institutions, such as the physics centre CERN (which stores large amounts of data from its Large Hadron Collider), are very concerned with this. If CERN stores experiment data and the data is silently corrupted, maybe CERN will not detect the Higgs boson. You know, there are thousands of researchers spending years on this project. That is the reason CERN is very concerned with silent corruption:
Or suppose you encounter silent corruption in your database. When did the corruption take place? How far back in your backups do you have to go? Half a year? One year? A database admin talks about silent corruption:
Or in this case: a flaky switch was corrupting the data. ZFS was the first to notice, because ZFS protects its data.
"As it turns out our trusted SAN was silently corrupting data due to a bad/flaky FC port in the switch. DMX3500 faithfully wrote the bad data and returned normal ACKs back to the server, thus all our servers reported no storage problems."
If you and Linus don't agree that stored data should stay intact, then you cannot really trust your data; I hope you realise this? ZFS does what ECC RAM does: protects your data against power spikes, hardware problems, etc. I really do hope you have ECC RAM in your servers, but maybe you think ECC RAM is not necessary, just as you think that protective filesystems are not necessary?
Modern enterprise SAS disks have 1 irrecoverable error in every 10^16 bits; just look at the spec sheet. Protection such as ZFS provides is necessary, in my opinion. But of course, you and Linus may have differing opinions, and that is fine with me. But I would be careful, and suggest you read more about ECC RAM and silent corruption. The study by CERN above is a good place to start. If you want, I have plenty of research papers on this; just ask me if you want to start worrying about protecting your data. Here is one link on ECC RAM, in case you are not familiar with it:
To illustrate its importance: Microsoft found that many Windows crashes were caused by non-ECC RAM, which is why MS wanted everyone to use ECC RAM when running Windows. So yes, ECC RAM is important. Read the above link.
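That spec-sheet figure of one unrecoverable error per 10^16 bits can be put in perspective with some quick arithmetic. This sketch assumes errors are independent and uniformly distributed, which is a simplification:

```python
# How likely is at least one unrecoverable read error, given the
# spec-sheet rate of 1 error per 10^16 bits read?
import math

URE_RATE = 1 / 1e16      # unrecoverable errors per bit read (spec figure)
BITS_PER_TB = 8 * 1e12   # bits in a decimal terabyte

def p_at_least_one_error(terabytes_read: float) -> float:
    expected_errors = terabytes_read * BITS_PER_TB * URE_RATE
    return 1 - math.exp(-expected_errors)  # Poisson approximation

print(f"{p_at_least_one_error(10):.3%}")    # ~0.8% per full 10 TB read
print(f"{p_at_least_one_error(1250):.1%}")  # ~63% after ~1.25 PB read
```

In other words, scrub a petabyte-scale pool a few times and you should expect to hit unrecoverable errors, which is exactly the case for checksums and redundancy at the filesystem level.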
"...As for putting computers into a container... well... I would hardly call that innovation. We have them.. it's stupid in most cases IMHO, but as a hack to have variable capacity that you can move from location to location it's ok..."
I am just pointing out yet another case where IBM copied Sun/Oracle. And the Blackbox has its uses; for those cases it is perfect.
Kebabbert: "What techniques has IBM created, that Solaris copied? You talk about "recent years". Can you give an example?"
Jesper Frimann: "...You have to be kidding ?..."
No, I am not kidding. Let me repeat my question: in RECENT years, what has Solaris copied? I know that IBM did great things in the 1960s etc. But in recent years? To me it seems that IBM is copying from others, but maybe you have some counter-examples?
but ... teh cloud!!!1
wait a minute, I thought everything was moving to "the cloud" now? How can this "unicks" thing not be in the cloud??
... back to the future !
The cloud ? Is that like, "The Network is the Computer" ?
Reminds me of jooh-niggs. Somehow. Must've been half a lifetime ago ...
(looking for late 1980's marketing material ... which pocket was it in again ...)
simply responding to the topic
From an IT standpoint, one has to make a decision: should we spend more on our experienced professional staff, or on licensing? With that said, you can have highly skilled Unix gurus, or you can have Microsoft boxes sitting there costing you money and asking you to reboot them every 4 hours.
Commercial *nix never has and never will take off. We have FreeBSD, CentOS, and some legacy Sun networks really handling the workload.
I laugh when I enter facilities that run on Windows. You can't miss it.
Management doesn't see the point of hiring guys at decent pay to run things, so Windows is expected to run itself. *nix, on the other hand, takes that expense out of the picture and allows you to hire real staff.
Unix will always be #1. It will never be successfully commercialized.
UNIX boxes more reliable.
I wonder if that appearance is due in part to UNIX admins as a demographic being older and more experienced than Linux admins?
BSD on the servers, Slackware Linux on the desktops.
Makes for an all around nicer computing experience, user & admin alike :-)