@kebabburt re: FUD
If Jesper and Matt are spreading FUD, then they are doing it far less rabidly than you are. I would be surprised if you are not foaming at the fingertips when you type some of the things you do.
A case in point: how much re-writing do you think is necessary to increase the number of processors an OS can manage? According to you, "it had to be *rewritten* last year, because it could not handle 256".
Well, all that is really required is to change a couple of numbers in the kernel header files and re-compile the kernel, along with any tools that reference those headers. In fact, the support was delivered in a PTF fix for AIX 6.1, not even a new release level. Definitely not a re-write; more like a tweak.
Like other shortcomings, I guess that you have never worked on kernel-level source for a UNIX, and I would also hazard that you never had to play with sysgen-ing an older SunOS release. Honestly, the more you say, the less relevant it becomes.
When it comes to new OS features, what do you think Oracle are adding to Solaris 11? Both DTrace and ZFS are old news. How often can you count them as new (both have been around in previous versions of Solaris), and neither of them is really an extension to the OS. They are what IBM would call 'layered products'.
Unlike Linux and Windows, the remaining UNIXes, and especially AIX IMHO, have a mature set of APIs, RAS features and other management processes. I will concede that a lack of change may indicate stagnation, but excess change may also indicate immaturity and feature bloat driven by marketing hype. There is a middle ground. What new features would you like to see in a UNIX?
On the filesystem front, ZFS moves block checksumming up into the filesystem layer, and provides protection at the file or other disk-object level. Reed-Solomon encoding of data at the filesystem block level does effectively the same thing in the GPFS declustered RAID system. Big deal. And apart from Sun themselves, not everybody believes ZFS is safe. See the paper www.usenix.org/events/fast10/tech/full_papers/zhang.pdf, presented at USENIX FAST '10, which concludes that ZFS may be more tolerant of disk errors, but is not invulnerable to data corruption.
There is a fundamental design difference between the T-series SPARC processors and the POWER series of processors, one that is closing from both ends, with both converging on the middle ground. The announcement of, what was it, 'heavy thread'? just shows that the SPARC design is being changed. One of the real problems with the T1 and T2 processors is that they were committed to the lightweight thread model, which made them excellent for many small processes but very poor for smaller numbers of large processes. Why is this change an innovation, while IBM putting in a larger number of slower cores is a realisation of a deficiency? I believe that Matt described this far better than I can, elsewhere in this thread.
I think I agree with Jesper's analysis of Larry's announcement claims. They look good, but do not actually stand up to any real scrutiny, as they claim things that other vendors do not bother to benchmark, or cannot benchmark with the same code levels. It's like you saying that you are the fastest person on earth at running from your front door to your living room, while never allowing anybody else into your house to try and beat your time. Surely you can see this?
So, please calm down, and actually read stuff that comes from people other than Oracle's marketing team.