20 years already?
Time flies.
I shall drink more tonight.
Linus Torvalds has put penguins out of their misery by revealing that the next version number for the Linux kernel will be ***drumroll*** - 3.0. A week after Torvalds publicly deliberated over what the next iteration of the Linux kernel should be, he said he bit "the bullet" and simply numbered the thing. He pointed out that the …
I have seen two kernel panics in my decade of using Linux. The first was my own dumb fault. Let's just leave it at that and forget the embarrassing details (I was very new to Linux at the time). The other was caused by a hardware issue. Neither could reasonably be blamed on the kernel itself.
Ah, well there's the rub. As it happens, I've been using NT since the early 90s and can make a similar claim. I've seen perhaps half a dozen BSODs that were caused by dodgy third-party drivers (fairly easily verified) and one caused by a failure of the C: drive. Since I don't particularly want Microsoft to ban third-party drivers at ring 0, and I can understand that it is quite hard to recover from the loss of your page file and system directory, I'm not going to blame MS for either.
On the other hand ... I can think of several people I trust on these matters who assure me that there most certainly were problems with all those versions of NT that were so faultless on my own machine. Therefore, forgive me if I don't extrapolate wildly from my one data point, like you did.
First time the major version number has been incremented in 15 years, and, er, nothing is actually changing?
"traditional '.0' problems" - yes, the kind you get when a new major version number actually means a major new version! If you just bump the version arbitrarily, its quite easy to avoid those kind of problems!
What does version mean again? Something that is different from another version? And a major version is different from another major version in a major way, presumably?
But hey, if Mr Torvalds wants to call apples oranges, then we all have to munch on nice crunchy green oranges.
Yay for Linus, a man with common sense and a head on his shoulders. When so many in the industry are obsessed with ruining perfectly good software or hardware in pursuit of constant "new features" - required or not - the MAN holds his nerve and keeps the ship steady, one small step at a time. I salute you!
In many spheres of software endeavour, major version changes *are* significant, because they represent a compatibility discontinuity. I have a lot of experience with Solaris: the SunOS 5.10 kernel released in 2005 is compatible with the 5.10 kernel in the current production Solaris 10. Stuff which linked into the kernel in 2005 is guaranteed to still link in now (though a load of kernel API has been added since, with extra feature goodness, while the original kernel API is retained unchanged).
By contrast, the Linux kernel API changes every six months or so. So there is no need to reserve a major version rollover for something which breaks compatibility: such a break happens frequently. Therefore a major version change can be made according to whim, rather than marking a discontinuity.
That is exactly why glibc has its own headers and has changed compatibility only a couple of times in the last 10 years. I recently tried to run a couple of binaries from the original Loki disks released in 2000 (yes, I tried to run Civilisation, guilty as charged). A lot of the stuff still worked 10 years later. I did not have the time to get it fully working, but if I needed to I suspect I would have gotten it up and running in a chrooted environment.
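For the curious, the chroot dance I had in mind would look roughly like this - strictly a sketch, since I never finished the job, and /srv/oldroot plus the civctp binary name are stand-ins:

$ mkdir -p /srv/oldroot/lib
$ cp period-libs/ld-linux.so.2 period-libs/libc.so.6 /srv/oldroot/lib/   # circa-2000 glibc
$ cp -a /mnt/loki-cd/civctp /srv/oldroot/
$ sudo chroot /srv/oldroot /civctp

(A game would also want the X socket bind-mounted inside the chroot, but you get the idea.)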
That is exactly why in Linux, when you want to talk to the kernel, you use /proc, sysfs, netlink and, if worst comes to worst, ioctls - not kvm_read() like on Solaris 5.10.
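To illustrate: these interfaces stay stable from the shell even while the in-kernel API churns (the interface name and MAC address below are just examples from my own box):

$ cat /proc/sys/kernel/osrelease        # kernel version, via procfs
3.0.0-rc1
$ cat /sys/class/net/eth0/address       # a per-device attribute, via sysfs
00:11:22:33:44:55
$ sysctl -w net.ipv4.ip_forward=1       # the same procfs tree, via sysctl(8)
net.ipv4.ip_forward = 1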
There is little or no need to link into the kernel in Linux unless you are writing drivers. So the fact that Linus is doing "Perhaps it is a good day to die, prepare for ramming speed" with the kernel ABI is usually not visible to most applications.
I totally agree: stuff should use the user-level API wherever possible.
But in many contexts there *will* be stuff linked into the kernel: it may be a specialist bit of hardware with a compiled driver, or a proprietary shim twixt system and some bit of storage without which the rig is "unsupported", or an HSM which presents as a filesystem - in all these cases you have to specify an exact kernel version; you can't say "Solaris 10 running at patch level x or later".
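You can see that binding in the module itself: its vermagic string has to match the running kernel exactly, or insmod refuses to load it. The .ko name here is made up for illustration:

$ modinfo -F vermagic ./vendor_shim.ko
2.6.18-194.el5 SMP mod_unload 686

Which is exactly why vendor support matrices list specific kernel builds rather than "kernel x or later".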
We can have a discussion about whether we should be linking compiled proprietary software into the Linux kernel, but GNU/Linux has been positioned as an enterprise-class datacentre operating environment. The unices which it is displacing have much more stable kernel APIs.
I'm a Solaris user and fan, and have been for too many years to count ... (and I did upvote you, but I think this isn't just a one-sided issue)
Solaris does have a static API (which is a wonderful thing), 32-bit and 64-bit at the same time, in the same package, and yes, we have apps that run from 2.0 all the way up to the latest kernel ... but all that comes at a price.
Because almost all objects are built statically linked, even though dynamic linking is well supported ... the current Solaris kernel needs at least 256MB of base memory to boot. The newer Solaris is built to support much more advanced chipsets, but at the same time anything below a P4 system will not boot unless you modify /etc/system or use the kernel debugger to set flags ...
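For anyone who hasn't met /etc/system: it is a plain-text list of kernel tunables applied at boot. A couple of illustrative entries (these particular values are just examples, not the actual flags needed to get an older chipset booting):

* /etc/system - comment lines start with an asterisk
set maxusers=64
set zfs:zfs_arc_max=0x20000000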
The X system on my OpenSolaris box is currently using 811MB of memory running GNOME ... compare that to the 64MB of my colleague's Ubuntu ...
Our Linux print servers and routers still sit happily in 128MB with room to spare.
I will never move to putting Linux on the file servers - I can't do without ZFS now that we're using it - but it is huge.
stuartl@vk4mslp2 ~ $ uname -a
Linux vk4mslp2 3.0.0-rc1-vk4mslp2 #1 Mon May 30 22:01:19 EST 2011 i686 Mobile Intel(R) Pentium(R) 4 - M CPU 2.00GHz GenuineIntel GNU/Linux
stuartl@vk4mslp2 ~ $ uptime
22:52:04 up 13:57, 9 users, load average: 0.17, 0.09, 0.11
I realise this is just a release candidate. Haven't struck any issues thus far though... I remember kernel 2.6.0-test1 being a much bumpier ride, but I tolerated it anyway, as it was the first time I had working sound in Linux on my PII 300MHz laptop. (This, on a machine I was using for my daily university studies back in the day.)
I do recompile frequently. One of my Linux boxes is a PS3, and we have... err... memory constraints, amongst other issues.
My point still stands: I wish for a smaller, simpler kernel. I know the real world *now* is a lot more complicated than that, and I'm not going to get into the whole kernel design debate, but remember the time when 'nix kernels were really small?
Well, when Linux was really unstable it was still the 90s, and the Redmond alternative was Windows 9x - and we all remember how stable *that* was...
You still hit oopses or panics fairly often if you use early releases (2.6.37.x, x <= 4), but then those are very much late-beta-quality software...
The fact that fixing kernel bugs, on the sysadmin side, more often than not boils down to an aptitude update and pulling in the new kernel package says a lot.
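In other words, something along these lines on a Debian-ish box (the linux-image-686 metapackage name is just an example; it varies by architecture and release):

$ sudo aptitude update
$ sudo aptitude install linux-image-686   # pulls in the latest kernel build
$ sudo reboot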