If you don't need all the subsystems and drivers, just don't compile them in.
A new version of the Linux kernel has been unleashed, 2.6.30 — dubbed "Man-Eating Seals of Antiquity" — just three months on from Linus Torvalds’s previous release. The latest Linux kernel sped through eight release candidates before it landed yesterday. "I'm sure we've missed something, and I know we have some regressions …
When it started, you could bank on a new release every few weeks; then we started to get smaller "point" releases, and then they came every few months.
Then in 1996 version 2.0 came out with the first SMP support and things started to slow down considerably. Many a month went by without any significant new features, just bug fixes, a driver here or there and some obscure fiddling with anonymous algorithms deep down where real men are scared to venture. 2.6 came along at the end of 2003 and hey, lookey! We're now at the 30th incarnation, five short years later.
At this rate I can't see us making it to 2.8 any time soon and I expect to be dead before 3.0 crawls out of the change control process.
Not that it will make any difference. I long ago stopped upgrading kernels - there just wasn't anything in them that I needed (oooh. apart from AMD64 support) and they just came as part of the new distro that I installed.
To that end, kernel revisions have become largely invisible - as has the whole idea of kernels themselves. So long as the box, as a whole, gets the job done, very few people seem to care any more about what's inside the beige tin.
"""If you don't need all the subsystems and drivers, just don't compile them in."""
Too bad so many people just let their package managers take care of their kernel and modules for them... Oh well, it's their loss.
Given that the "2.6" part of the version number now seems to be set in stone for ever more, can't we just call this Linux 30? Much like the way GNU Emacs went from 1.12 to 13, or Solaris went from 2.6 to 7...
it was a shit pun on the "seal" homonym, although I'm docking points equally for the shitness of the pun and for you not getting it
And the best bit (for me)? The EeePC support module has gained an interface for the “Super Hybrid Engine” (what a silly name that is), so I'm now underclocking my 901 when running on battery.
# echo 2 >/sys/devices/platform/eeepc/cpufv
Nice kernel that you have here, though the numbering thing is getting silly. But hey, it's all a matter of keeping the numbers relevant for the people who actually work on it. Makes the version tracking easier.
The Linux kernel is a bit bloated for my liking though (I know, you can make it slim by compiling only what you really need, but most people don't do it, and the point is, you still have to compile it into the kernel, which is in theory less flexible than the microkernel-plus-servers design of GNU's Hurd). I'll be switching all my machines to Hurd as soon as it's reasonably operable, which should be right about... now. Or *now*. Or /now/. Or sometime in the next 20 years. Hopefully. ;-)
But is it still littered with non-GPL stuff?
Even if your package manager pulls world + dog, most of this is in the form of modules.
Modules only get loaded if they are needed, meaning your desktop is probably only running 100 or so out of the 2000+ modules in the full kernel.
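That ratio is easy to check on a running system: /proc/modules has one line per loaded module, while everything the distro shipped sits under /lib/modules. A quick sketch (paths assume a standard layout; the shipped count will be 0 if the modules directory isn't present):

```shell
# Compare modules actually loaded with modules shipped on disk.
# /proc/modules lists one loaded module per line; /lib/modules/<version>
# holds every module the distro built for this kernel.
echo "loaded:  $(wc -l < /proc/modules)"
echo "shipped: $(find /lib/modules/"$(uname -r)" -name '*.ko*' 2>/dev/null | wc -l)"
```

On a typical desktop the first number is two orders of magnitude smaller than the second, which is the whole point of modules.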
"Too bad so many people just let their package managers take care of their kernel and modules for them... Oh well, it's their loss."
I run an everything-including-kitchen-sink distro (Mandriva 2009.1), and the on-disk size of the kernel plus all the modules (Mandriva packages come with all modules built, plus a handful of 'third party' ones that aren't in the kernel, such as ndiswrapper) weighs in at less than 40MB.
And before anyone says 'yeah but having all those modules loaded is bloated', please look up what a kernel module is - the whole point is it only loads the ones needed for my hardware!
"The Linux kernel is a bit bloated ... you can make it slim by compiling only what you really need"
A couple of folks have said this, but I'm a bit puzzled.
In what way is this bloatedness actually a problem?
E.g. The left hand says "you haven't got enough devices supported, get them supported".
So the Linux Driver Project comes along and courtesy of them and others, loads of devices, some quite obscure, are now supported.
And then the other side comes along and says "why are there so many device drivers these days?"
Because the users wanted them?
What *problem* do these drivers cause when they are unused? They make a distro a larger download and they take up space on disk but they don't use any worthwhile memory if they are kernel modules (which anything sensible is, even the daft one I've been writing).
Help me understand why the kind of "bloat" that is claimed as an "advantage" for Windows (e.g. lots of devices supported) is suddenly a *disadvantage* when Linux gets there too.
If you want a bloat free Linux there are several to choose from. If you want a Linux with lots of supported devices and lots of features there are several to choose from.
Or you can get someone to build you a Linux that fits your exact needs, either as an installable one or a LiveDrive one or whatever.
If you want a Windows customised to suit your needs in a similar way, there's
Could the person who wrote that be taken out and shot, NOW.
I don't care if it is Linus, I don't care if it was Lucy or Charlie Brown, that is a criminal act against language.
Actually the Linux kernel is quite good. What I meant by "bloated" is that it does a lot of things that are not supposed to be the kernel's job. It does a nice job of it, too. But at some point it causes problems. Amongst others, as highlighted by the version-numbering silliness, it makes evolution a hard, arcane process. A microkernel that does its kernel job, and only its kernel job (which is basically assigning CPU time to tasks, or the other way round if you prefer), and lets outside "servers" deal with almost everything else, makes more sense. More compatibility work involved (and, admittedly, a lot of compatibility problems to be expected when you mix servers and kernels of different ages), but it still makes more sense. The module approach is a sort of compromise, but as a result the kernel still ends up doing many things that it shouldn't be doing.
I don't mind rebuilding modules each time I fiddle with the kernel, but, as a famous (though fictional) scrivener put it, I'd prefer not to.
And also, as I said, modules are only a compromise, which means that not only do I have to build most of them by hand, but also I have to install a whopping 300 MB package if I want reasonable basic hardware support (OK, not Linus' fault, my distro is to blame here, but it's part of the "kernel does what kernel shouldn't do" problem). Which means keeping two such packages concomitantly in the relevant partition (albeit for a short time) upon upgrade. And this is a no-no on most of my machines. I could just increase the size of said partitions, but again, I'd prefer not to. Or I could just use a large virtual machine to compile the bare kernel, build the modules, create my package and deploy that. But I'd prefer not to. And it would still be needlessly large. With a microkernel and suitably backward-compatible servers, I would be able to upgrade the whole system, minute amounts at a time. It allows for faster development and easier vuln patching, too. GNU's Hurd seems promising to me. I'll be waiting for it to be ready, reporting problems from time to time... or I could contribute more actively, but of course, as you'll have understood by now, I'd prefer not to! ;-)
There was quite a good interview with Linus some time ago on versioning. I recall that there just wasn't the need to keep odd-numbered development forks; things are pretty modular and have been really robust. Much of the work that used to happen in the odd series appears to be happening in patch sets: you can build 2.6.30 and decide to pull in the bleeding-edge patches from your favourite maintainer for just the bits that you want to be bleeding edge.
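A minimal sketch of that patch-set workflow, using patch(1) on a made-up file rather than a real kernel tree (the file and diff names here are purely illustrative):

```shell
# Sketch: take a "released" source file and layer an experimental
# change on top of it with patch(1). File names are made up.
cd "$(mktemp -d)"
printf 'hello\n' > driver.c            # stand-in for a released source file
cat > bleeding-edge.diff <<'EOF'
--- a/driver.c
+++ b/driver.c
@@ -1 +1,2 @@
 hello
+new feature
EOF
patch -p1 < bleeding-edge.diff         # apply the maintainer's change on top
cat driver.c                           # the file now carries both lines
```

The same shape works against a release tree: check out the tagged version you want, then apply only the maintainer series covering the subsystems you care about.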