Linus Torvalds has put penguins out of their misery by revealing that the next version number for the Linux kernel will be ***drumroll*** - 3.0. A week after Torvalds publicly deliberated what the next iteration of the Linux kernel should be, he said he bit "the bullet" and simply numbered the thing. He pointed out that the …
20 years already?
I shall drink more tonight.
No magical new features? Of course not.
There haven't been any for the last 20 years, why start now?
Just glad they didn't introduce a magical new "blue screen" feature like other OSes have. An OS shouldn't get in the way of usage; it should "just work".
It's a kernel
It talks to the hardware, manages memory and processor scheduling, and other related stuff.
What else would you want it to do? Grow a GUI?
Do you mean KERNEL PANIC!
followed by a kernel dump? I've seen a number of those on Linux boxes in my time.
Seen any ....
in the last 10 years?
then it's time to get a modern-ish kernel
KERNEL PANIC ??
I've been using Linux since 1.2 and I've not seen a kernel panic for years, certainly not since before SuSE 5.0.
I have seen two kernel panics in my decade of using Linux. The first was my own dumb fault. Let's just leave it at that and forget the embarrassing details (I was very new to Linux at the time). The other was caused by a hardware issue. Neither could be reasonably blamed on the kernel itself.
Re: reasonably blamed on the kernel itself
Ah, well there's the rub. As it happens, I've been using NT since the early 90s and can make a similar claim. I've seen perhaps half a dozen BSODs that were caused by dodgy third-party drivers (fairly easily verified) and one caused by a failure of the C: drive. Since I don't particularly want Microsoft to ban third-party drivers at ring 0, and I can understand it is quite hard to recover from the loss of your page file and system directory, I'm not going to blame MS for either.
On the other hand ... I can think of several people I trust on these matters who assure me that there most certainly were problems with all those versions of NT that were so faultless on my own machine. Therefore, forgive me if I don't extrapolate wildly from my one data point, like you did.
...love me. I need a kernel with absolute devotion and unconditional love for me and me alone.
They might if Canonical were in charge of the Linux kernel.
I mean, those guys try to make Ubuntu act more like Windows at every /other/ opportunity...
Yay for Linus, the hairy chested he-man of the OS world. He could thumb-wrestle Steve J and Steve B into submission without breaking a sweat while doing three other things at the same time.
Which scheduler would he be using?
Good Lord, 20 years...
And to think that I've been using Linux in one form or another for 16 of those years. Blimey!
(Got my start with Slackware 3.0, running kernel 1.2.5, in April 1995.)
1.2 is recent...
1.2? Yikes, I feel old. I used 0.99.<various> for a couple years before 1.0 came out.
I still remember feeding floppy disks into my old Escom tower installing 0.99.??.
There must have been about 30 or 40 of them. Crikey.
Traditional point zero problems.
Yes, because everyone else puts those in *intentionally*, don't they?
It's difficult enough to get it right without going to the trouble of tempting Fate sufficiently for it to get off its arse and take a personal interest.
First time the major version number has been incremented in 15 years, and, er, nothing is actually changing?
"traditional '.0' problems" - yes, the kind you get when a new major version number actually means a major new version! If you just bump the version arbitrarily, it's quite easy to avoid those kinds of problems!
It's reached its third decade. I think that deserves a new number, although, you must all remember, "It is not a Number, it's a Free OS".
Linux has entered the boutique.
If you want to commemorate an anniversary, have a party.
What does version mean again? Something that is different from another version? And a major version is different from another major version in a major way, presumably?
But hey, if Mr Torvalds wants to call apples oranges, then we all have to munch on nice crunchy green oranges.
Got my first taste of Penguin in late 1996, with RedHat 4 (running kernel 2.0.18 I believe)
Been using it in one way or another ever since.
RedHat 4 was not only running kernel version 1, it was also running glibc version 1. Version 5 introduced glibc version 2, and version 6 introduced kernel 2.0.
4.0 (Colgate), October 3, 1996 (Linux 2.0.18) - first release supporting
I stand corrected
I apologise; my memory is playing tricks. I thought kernel 2.0 came with version 6, but it was actually 2.2.
Hurray for common sense
Yay for Linus, the man with common sense and a head on his shoulders. When so many in the industry are obsessed with ruining perfectly good software or hardware out of an obsession with constant "new features" - required or not - the MAN holds his nerve and keeps the ship steady, one small step at a time. I salute you!
Underlines Linux Kernel API Instability
In many spheres of software endeavour, major version changes *are* significant, because they represent a compatibility discontinuity. I have a lot of experience with Solaris: the SunOS 5.10 kernel released in 2005 is compatible with the 5.10 kernel in the current production Solaris 10. Stuff which linked into the kernel in 2005 is guaranteed to still link in now (though a load of additional kernel API has been added with additional feature goodness, while retaining the original kernel API unchanged).
By contrast, the Linux kernel API changes every six months or so. So there is no need to reserve a major version rollover for something which breaks compatibility: such a break happens frequently. Therefore a major version change can be made according to whim, rather than marking a discontinuity.
RE: Underlines Linux Kernel API Instability
I'm a Solaris user and Fan, and have been for too many years to count ... (and I did upvote you, but I think that this isn't just a one sided issue)
Solaris does have a static API (which is a wonderful thing), 32bit and 64bit at the same time, in the same package, and yes we have apps that run from 2.0 all the way up to the latest kernel ... but all that comes at a price.
Because almost all objects are built statically linked, even though dynamic linking is well supported ... the current Solaris kernel needs at least 256MB of base memory to boot. The newer Solaris is made to support much more advanced chipsets ... but at the same time anything below a P4 system will not boot unless you modify /etc/system or use the kernel debugger to set flags ...
The X system on my OpenSolaris box is currently using 811MB of memory running Gnome ... compare that to the 64MB of my colleague's Ubuntu ...
Our Linux print servers and routers still sit happily in 128MB with room to spare.
I will never move to putting Linux on the file servers; I can't do without ZFS now that we're using it. But it is huge.
That is exactly why you should not build against the kernel
That is exactly why glibc has its own headers and has changed compatibility only a couple of times in the last 10 years. I recently tried to run a couple of binaries from the original Loki disks released in 2000 (yes, I tried to run Civilisation, guilty as charged). A lot of the stuff still worked 10 years later. I did not have the time to get it fully working, but if I needed to, I suspect I could have got it up and running in a chrooted environment.
That is exactly why in Linux, when you want to talk to the kernel, you use /proc, sysfs, netlink and, if worst comes to worst, ioctls - not kvm_read like on Solaris 5.10.
There is little or no need to link to the kernel in Linux unless you are writing drivers. So the fact that Linus is doing "Perhaps it is a good day to die, prepare for ramming speed" with the kernel ABI is usually not visible to most applications.
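To illustrate the point about stable user-level interfaces, here is a minimal sketch (Python; the function names are my own, and the sample string is modelled on typical /proc/version output): an application can learn the running kernel's version through the /proc text interface without ever linking against kernel headers.

```python
import re

def parse_kernel_version(proc_version_line):
    """Extract (major, minor, patch) from a /proc/version-style line."""
    m = re.search(r"Linux version (\d+)\.(\d+)\.(\d+)", proc_version_line)
    if m is None:
        raise ValueError("does not look like a /proc/version line")
    return tuple(int(x) for x in m.groups())

def running_kernel_version(path="/proc/version"):
    # On a Linux box this reads the stable text interface the kernel
    # exports; no kernel headers or kernel linking are involved.
    with open(path) as f:
        return parse_kernel_version(f.read())

sample = "Linux version 3.0.0-rc1 (stuartl@vk4mslp2) (gcc 4.5) #1 Mon May 30 2011"
print(parse_kernel_version(sample))  # (3, 0, 0)
```

The same sketch works unchanged across kernel releases precisely because /proc is a userspace-facing contract, unlike the in-kernel module API the thread is discussing.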
"Don't Build against the kernel"
I totally agree: stuff should use user level API wherever possible.
But in many contexts there *will* be stuff linked into the kernel: it may be a specialist bit of hardware with a compiled driver, or a proprietary shim twixt system and some bit of storage without which the rig is "unsupported", or a HSM which presents as a filesystem - in all these cases, you have to specify an exact kernel version, you can't say "Solaris 10 running at patch level x or later".
We can have a discussion about whether we should be linking compiled proprietary software into the Linux kernel, but GNU/Linux has been positioned as an enterprise class datacentre operating environment. The unices which it is displacing have much more stable kernel APIs.
Thou art geekier than thee
I have been using Linux for 21 years.
..but have you done any work yet?
Not even Linus has used it for 21 years. He announced Linux to the world in August 1991.
I lurked on the linux kernel mailing list starting in January, 1992 - out of curiosity about how a project like it would work.
So far so good...
stuartl@vk4mslp2 ~ $ uname -a
Linux vk4mslp2 3.0.0-rc1-vk4mslp2 #1 Mon May 30 22:01:19 EST 2011 i686 Mobile Intel(R) Pentium(R) 4 - M CPU 2.00GHz GenuineIntel GNU/Linux
stuartl@vk4mslp2 ~ $ uptime
22:52:04 up 13:57, 9 users, load average: 0.17, 0.09, 0.11
I realise this is just a release candidate. Haven't struck any issues thus far though... I remember kernel 2.6.0-test1 being a much more bumpy ride, but I tolerated it anyway as it was the first time I had working sound in Linux on my PII 300MHz laptop. (This, on a machine I was using for my daily university studies back in the day.)
No traditional .0 problems?
*Surely* he believes in numerology?
I have not a clue.
Might I enquire as to why, when Ubuntu updates itself, it reports 350K+ files up its suppository?
Yah Yah Yah and whatever.
I thought "3.0" was an aspirational target for Linux market share?
Or Linux 95 to celebrate the market share among supercomputers, perhaps.
how about some debulking?
hasn't it grown too large?
Are you suggesting that an operating system with drivers for thousands of pieces of hardware, in bootable form, taking 12MB is large? That 73MB of source code is big?
Windows drivers for my HP printer need 10 times as much and still don't do their job properly!
You can always
compile it without all the stuff you don't need too (try doing that with Windows/MacOS). I used to but don't bother these days.
In all fairness...
I do recompile frequently. One of my linux boxes is a PS3, and we have... err.. memory constraints, amongst other issues.
My point still stands. I wish for a smaller, simpler kernel. I know the real world *now* is a lot more complicated than that, I guess, and I am not going to go into the whole kernel design debate, but remember the time when 'nix kernels were really small?
Linux 3.11 for Workgroups
So when will Linux 3.11 for Workgroups be released?
I think this is a good decision: it will make it easier to realise that any 2.x is old, and that it's 3.x you want for the latest kernel. 20 years of Linux, nice; only I am 20 years older too. Not much to do about that, of course.
Any 2.x old?
Apart from say, 2.6.39, that's not very old at all :-)
Ollibob, n. usu. as pl. ollibobs.
This is Kelly's bid for Merriam-Webster fame, isn't it? Own up.
Big kernel lock has gone
This won't mean much to many, but the Big Kernel Lock has recently, finally, been removed. It's been a long time coming, so maybe that justifies going to 3.0.
That sounds like...
... a more reasonable ... reason.