Steadily unstoppable snowballing.
And before people think it's becoming bloated: the majority of the increase in code is in the drivers; new and better drivers get added all the time with each new release.
The Linux kernel is growing and changing faster than ever, but its development is increasingly being supported by a select group of companies, rather than by volunteer developers. That's according to the latest survey of Linux kernel development by the Linux Foundation, which it published to coincide with the kickoff of …
Depending on the distro, once you boot, the drivers needed for the running system are added to the initrd (a small filesystem used to boot a Linux system).
If you hotplug a lot of devices, the kernel can then load what it needs, depending on the device.
There is unfortunately a need for some binary blobs to make certain hardware work.
The kernel, however, can be compiled in any way you want, which is why I suspect biz is getting interested.
A considerable amount of time and effort can be saved if your code deployment is economical, and although I know little about M$ server deployment, I am guessing your choices are "black".
>One of the earliest things I do is compile a version of the kernel with unneeded code (almost entirely drivers) stripped out to reduce the attack surface.
Non-monolithic kernels go one further by taking running drivers out of "ring 0" so they can't interfere with the rest of the kernel or each other.
In theory, it's far more robust and secure, but the real-world need for speed tends to override that option.
Short version: it doesn't work like it does in Windows.
Linux sees drivers as loadable modules. Modules can either be built in (useful for environments where your hardware does not change) or made loadable at boot.
When made loadable it is called a loadable kernel module: a file that lives on your HD. Plug in a device and the module gets loaded automatically, as with USB devices, or at boot, as with a NIC.
Drivers in Linux are not "standalone" drivers for "standalone" hardware. Most of the time they belong to what is called a "driver family"; that is, a single driver supports multiple devices.
For example, if two different network cards from two different vendors use the same chipset, the same driver is used for both. This saves a lot of disk space and coding work.
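You can see a driver family directly in the kernel's module alias table: modules.alias maps hardware IDs to modules, and many device IDs from different vendors resolve to one driver. A minimal sketch, using a hypothetical excerpt of that file (the vendor/device IDs below are illustrative, not taken from a real system):

```shell
# A hypothetical excerpt of /lib/modules/$(uname -r)/modules.alias:
# three PCI vendor/device IDs, two different vendors, one driver (r8169).
cat > /tmp/modules.alias.sample <<'EOF'
alias pci:v000010ECd00008136sv*sd*bc*sc*i* r8169
alias pci:v000010ECd00008168sv*sd*bc*sc*i* r8169
alias pci:v00001186d00004300sv*sd*bc*sc*i* r8169
EOF
# Count how many distinct device IDs resolve to the same module:
grep -c 'r8169$' /tmp/modules.alias.sample
```

On a real system, `modinfo <module>` lists the aliases a module claims; that table is how modprobe picks one driver for many cards.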
Usually the kernel and the drivers all come in the same package; because they are modules, they are intrinsically dependent on the kernel's structures and headers.
To get support for a newer piece of hardware you can either get the source of a module and compile it yourself, or wait for the next kernel version to ship with the module already integrated.
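For the "compile it yourself" route, here is a minimal sketch of the standard out-of-tree module build, assuming a hypothetical module source file `hello.c` and that the headers for the running kernel are installed:

```shell
# Kbuild Makefile for a hypothetical out-of-tree module "hello.c".
mkdir -p /tmp/hellomod
cat > /tmp/hellomod/Makefile <<'EOF'
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
EOF
# Then, in /tmp/hellomod (requires the kernel headers package):
#   make              # produces hello.ko against the running kernel
#   sudo insmod hello.ko
#   sudo rmmod hello
```

This is why modules are tied to a kernel version: the build borrows the kernel's own build system and headers via `-C /lib/modules/.../build`.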
And no, compiling stuff in Linux (building a kernel or a module) is not the chore most people with little Linux exposure think it is.
There is a learning curve, like with everything; I do not know anyone who has mastered anything complex in Windows out of thin air.
Depends: some distros will install everything; others may try to detect hardware during install and then download kernel module packages ready for when you reboot into the OS.
Either way it's a huge mess. When Linux was first released it was seen as old hat: a huge monolithic kernel. Yes, it's efficient, and with a lot of polishing it's not a turd by any means. But it's hardly a modern design; much like the x86, it's old and ugly but works in the real world.
>>"Rolling your own kernel used to be fairly easy, but it's a lot of work now."
I was a happy Gentoo user for several years, so I concede my perspective on this may not be that of the average inhabitant of this planet, but what is it you think has made it a lot of work these days, compared to how it used to be?
It's still easy to roll your own; there's just one heckuva lot more choices to wade through. Back in the day I needed a serial driver, a video driver, a parallel driver and a sound-card/SCSI driver for my CD player/sound combo (PAS16). That was it, so I rolled them into the kernel, monolithic style. Then modules came along and you had a choice; I still rolled them in, because to get any decent speed out of my 486 I had to roll my own. Later on came USB, hot plugging, and more specific drivers for video, printing and networking, and I switched to modular then.

All along, I had MY choices to make and I alone lived with the results of my decisions. I still have the choices. Nothing that isn't needed is loaded at boot time. Ergo, you do not have some HUGE kernel with everything loaded, while everything you could possibly need is ready to be loaded on demand. Big difference.
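The built-in-versus-module choice described above is literally a one-letter switch per driver in the kernel's .config. An illustrative fragment (the option names are real kernel config symbols; the particular selection is just an example):

```
# y = compiled into the monolithic image, m = loadable module, unset = omitted
CONFIG_EXT4_FS=y        # root filesystem driver rolled in
CONFIG_E1000E=m         # Intel NIC built as a loadable module
# CONFIG_SOUND is not set
```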
"I doubt I'll ever understand why an OS architecture with hardware drivers built into the kernel has become so popular."
Since no one except Phil has replied (and he got a downvote for merely telling the truth), here goes.
Mostly the drivers are NOT built into the kernel but are loadable/unloadable modules, so if I plug in a USB/serial converter I can see (if I want) the message:
[ 5743.316138] pl2303 2-8:1.0: pl2303 converter detected
[ 5743.349211] usb 2-8: pl2303 converter now attached to ttyUSB1
I didn't have to do anything - the module was found and loaded. If I unplug it, the reverse happens:
[ 5786.751073] usb 2-8: USB disconnect, device number 3
[ 5786.751733] pl2303 ttyUSB1: pl2303 converter now disconnected from ttyUSB1
There are about 75 modules loaded at the moment.
What I don't get is why we are still mucking about with monolithic kernels like Linux in this day and age. (Hint: monolithic does not mean "not made with modules." It means "all drivers run in Ring 0." Linux is ALWAYS monolithic.) Jochen Liedtke proved microkernels could be fast in '88, if they were designed to be.
And which OSes are microkernels? Windows sure isn't. OS X / iOS sure isn't (though it is arguably a bit closer than Windows/Linux). There's QNX, but aside from BlackBerry's recent usage of it, it is almost entirely in the embedded market.
Microkernels are great in theory, but unless you are suggesting scrapping 99.9% of server, desktop, laptop, tablet and phone operating systems and replacing them with something entirely new, complaining about Linux is basically complaining about the whole enterprise, personal and mobile operating system infrastructure.
"... wasn't Minix based on a microkernel? Also GNU Hurd?"
Minix was a macrokernel design, based on Unix principles. But Hurd was/is a microkernel design, and Minix 3 is a microkernel.
@Oninoshiko and @Lusty
I do not remember Linus T misusing the term microkernel. The main reason the GNU community adopted the Linux kernel over the more elegant microkernel designs was efficiency and availability. The relative performance of micro versus macrokernels remains problematic. When Microsoft announced NT, they claimed it would use a Carnegie Mellon-style microkernel, but actually used a macrokernel design. The MS literature and press releases were a source of confusion for much of the less technical press.
The converse of micro is macro, and of monolithic is modular. The very early Linux kernels were monolithic macrokernels: when a kernel was compiled, the required drivers were compiled in. It did not take very long for modules to be introduced, whereupon at build time the essential hardware drivers and filesystems could be selected to be built in, and the merely desirable compiled as autoloading modules.
Minix is debatable, and Hurd is a microkernel. So what? My comment was that operating systems which are actually used for anything aren't microkernels. If you take the small percentage of Linux desktop users and drop it by another four orders of magnitude, that's probably the total number of people who use Hurd even as a hobbyist.
"hint: Monolithic does not mean "not made with modules."
Actually, back in my day that's exactly what it meant. Your comment even made me go and check myself but all of my old Linux printed books, and all the old threads and documentation on the Internet I can find refer to modular vs monolithic when compiling. You're right that Wikipedia and other current sources define it your way, but that has changed in the last 10 years for some reason.
Thanks all for the downvotes though, presumably from people who thought I was wrong because they weren't compiling kernels when the old definition was used.
Monolithic has ALWAYS referred to whether drivers run outside Ring 0. Linus used the term wrongly, and a bunch of bad authors who don't really know kernel design followed his misuse. Look at any serious text on kernel design and monolithic will always be contrasted with microkernel. If it's recent enough to cover Linux, Linux will be listed as a monolithic kernel.
This made me LOL. You're saying that you're right, and that everyone who used Linux for the first two decades, including the person who created it, plus professional authors and editors, were all wrong? I really hope you have some big credentials if you talk like that in the real world. As another poster just said, the point was that Linux doesn't have to have all the drivers compiled into the kernel; this was the original point before certain people started being dicks about semantics...
No, I'm saying me and everyone outside the Linux-centric world use the word correctly.
Here's the famous flamewar from '92 between Andy Tanenbaum and Linus. Let me quote Linus' post:
"True, linux is monolithic, and I agree that microkernels are nicer."
He doesn't dispute Tanenbaum's (correct!) definition at all (although he also needs to learn the difference between loose and lose). So not only did the old Linux docs misuse the term, but Linus KNEW it (although it's possible he didn't write them and wasn't paying attention to the documentation; the Linux world is known for poor documentation).
Any term widely used and understood for two decades is proper usage. Original meaning becomes irrelevant after a certain period; such is the nature of language. For instance, Americans have been misusing the term billion for quite some time, to the point that their nonsense has become the standard. Stop being an ass hat.
Any term widely used and understood for two decades is proper usage. Original meaning becomes irrelevant after a certain period, such is the nature of language.
But in this case it is some elements of one community; everyone else has been using it correctly for the last thirty years. Loadable module or not, drivers are still in kernel space, run in kernel mode, and a single errant driver can and does take down the entire system. Even if that weren't the case, it still wouldn't qualify as a true microkernel, since it still includes many systems that reside in user mode under the true microkernel model, e.g. the process scheduler.
Those elements of the Linux community misusing the term are not alone; this kind of mislabeling is quite common. The classic example is Windows' use of the term "virtual memory" to mean disk paging, which has become so deeply ingrained that people refuse to believe you when you point out the term does not, by definition, refer to hard drives at all. Just like that example, a piece of hyped mislabeling does not alter the accepted definition.
"hint: Monolithic does not mean "not made with modules"
This is all semantics. The point is the Linux kernel does NOT have to have compiled-in drivers; most distros run with loadable modules that are selected at boot or on hot-plug. On the other hand, if you want a custom kernel with a limited set of drivers, optionally compiled in, then you can have that as well.
"In other modern kernels you can load in the driver almost regardless of the kernel version"
I believe this was a choice by Linus early on, since it forces open-source drivers and makes it insanely difficult for vendors to supply binary drivers like they do for Windows. This led to many, many years of poor hardware support, where graphics cards never had all their features, winmodems simply didn't work, wireless cards didn't get support for years, and the list goes on. Now that Linux is popular enough, it's become worthwhile for vendors to open up and start contributing to the drivers, which at the very least means more consistent results. A lot of the instability on Windows is due to poor third-party drivers being loaded regardless of kernel :)
The lack of a stable binary interface (ABI) in the Linux world strongly encourages the presence of source code and a programmer at regular intervals, unlike Windows, which has excellent binary stability, running decades-old binaries without modification, and hence little sign of the source code or programmers. I prefer having source code and programmers around, but then I would, wouldn't I.
What I don't get is why we are still mucking about with monolithic kernels like Linux in this day and age
I don't get statements like that. When was the last time you've thought to yourself "whew, thank fuck my OS doesn't have a monolithic kernel!"?
Well, I've never thought that in all the years I've been using Windows.
>I doubt I'll ever understand why an OS architecture with hardware drivers built into the kernel has become so popular.
Speed. For everything else, there's HURD.
Having said that, we do have a lot of spare CPU power on the desktop, so perhaps it's time to revisit the benefits of the microkernel?
Which is a bit annoying when you have a huge kernel and modules but use only a few per cent of the code.
It really smacks of Microsoft Word, where you have a huge level of complexity and features to support every user on the planet: most people only use 10% of the features, but each user uses a different set.
Live kernel patching? This is technology that has been reinvented in the last six years because it was held hostage by a database company.
Please, stop whining about systemd. I am no big fan, but unless you are writing an alternative, try and be productive. Use it, file bug reports, help the movement to something you WANT to see.
If Linux is going to survive the next 10 years, it seriously needs to man up and become more maintainable.
Oh yeah I know LP can be a bit toxic, but for someone that productive I give him a great deal of respect for what works.
I am no big fan, but unless you are writing an alternative, try and be productive.
The alternative doesn't need to be written, it already exists, works and runs on millions of systems.
Use it, file bug reports, help the movement to something you WANT to see.
I don't know if you realise how ironic you are being, because systemd is already a movement. If you don't fit in with the groupthink, you are ostracised and insulted. No thanks.
Go on AC say SysV init scripts, I dare you...
systemd is treated a bit like pornography. I don't know how to define it but I'll know it when I see it.
I have my own set of reservations about systemd; I tried to make that clear. I know about the dependency arguments raging on the Debian newsgroups.
And again, have you proposed something else? How would we know, AC?
"I know about the dependency arguments raging on the Debian newsgroups."
...and the users have finally had enough, and these people are being shouted down and/or moderated out.

In case you don't know, unlike the major commercial players, Debian has the legal status of a club. As a user you have exactly one right: to join or not to join. So, whatever direction it takes, you have nothing to say about it that is binding. A user is no more than a voluntary member of the Debian community. If you want to have input, then via the meritocracy rule you have to contribute meaningful code/effort in order to actually join Debian. If you don't, then whatever your opinion is is just that: opinion.

Regardless, you have as much say in what the Debian install plants on your computer as you did when Slackware came out. Zip. Nada. Volkerding put on your machine what he thought best; he sure as hell didn't offer up a choice of init systems. He was on top of his heap, and did as he damned well pleased. Nothing has changed. You have the right to do as you please and roll your own, just as he did; nothing stops you. So, to stand on the side of the road and throw rocks is just plain soreheaded.
>nothing stops you.
But you don't just need to write an init system any more. You need to decouple various other essential components and then write an init system. That's the problem for people who don't want to use systemd, and it should never have gotten this way. And what does the religiously pro-systemd crowd do when people come along and start doing the above? More bitching and whining about reinventing the wheel: "why don't you just use systemd?".
In my experience it is the other way around.
It is the anti-systemd crowd who has become hysterical.
In my experience, the majority of the anti crowd are just repeating things they hear, like screaming parrots.
No one is forcing anything on anybody, if you do not like it that much do not use it, or write your own, I'm sure you will do a great job and people will adopt it.
>It is the anti-systemd crowd who has become hysterical.
People really need to stop complaining about ad-hominem attacks on Lennart by using ad-hominem attacks against a massive group of people they don't know.
I don't care about systemd. I have it running on this Debian machine because it was installed after the switchover. It's had its problems, like locking up during boot and shutdown, but I've had similar issues with sysvinit too, so I'm not going to use the "it broke a few times for me so it's bad" argument. Again, I don't care about systemd, but I am anti-the-steamrolling-of-many-fundamental-userland-components-that-everyone-including-systems-that-can't-run-systemd-need.
It's not written in the GPL or whatever, but I think if you take over projects like udev you have a responsibility not to break it for all of the users/use cases that don't fit with your "one daemon to rule them all" philosophy. Greg K-H giving up maintaining udev as an independent project was a massive mistake IMHO. Maybe I should have brought that up when he emailed me to ask how I wanted my commits to the kernel to be counted...
A plausible scenario is that systemd accelerates mainstream adoption of GNU/Linux on desktops and mobiles over the next few years – if only because it enforces a higher degree of conformity among the elements comprising a Linux system than exists now: fragmentation has frequently been cited as a major reason why 'the average user' hasn't come to Linux in droves, and hence why big name software is still very scarce (or, that reasoning but the other way around). Systemd's tighter control over the operating system will eventually have the side effect of eroding the more glaring disparities across mainstream distros, and maybe even attract some commercial software along the way. The thing is, Linux has grown tired of being just a kernel, and whether it realizes this or not, it wants to pop its husk and strive for something bigger. Systemd is another step in Linux evolution. Time will tell if it's an appendage aiding its persistence, or one abetting its own demise. So I recommend all the people sit back, smoke a big one, and think about other things for a little while.
There was a very nice blog article by LP on the future of BTRFS/systemd etc...
The point was made that the proliferation of distros and library/package versions causes dependency issues that affect stability. This is one factor driving systemd; I suspect it is the reason WHY the distros are making it a priority.
I recommend you read LP's words and thoughts about the vision, rather than the ad hominem invective that is invariably in his wake.
The problem is LP is amazingly productive and I think shouting is easier than competing with a better idea.
Not sure really...
Please stop with the FUD
RH is not forcing anything on anybody. Distro developers are adopting it because they see value in it.
Is it perfect? No, it is not. Does it have problems? Yes, it does. Does it solve a problem? Yes, a very complex one that no one else has stepped in to solve.
I bet you're one of those who complain about Linux's lack of coherence or its fragmentation.
Distro developers are adopting it because they see value in it.
Exactly. Think about it: they are the ones who have to maintain these init scripts. It's just the dick-heads who've read something in a forum that systemd is bad because they fucked something up, and have continued the FUD and hysteria.
I wonder about this "company affiliation". Are all those working on the kernel with a company affiliation doing it because they are asked to? Or, surprise surprise, are there also people in IT who just like working on the kernel? If I sing in a chorus and work for, say, IBM, that does not prove IBM gives a shit about what I do in my spare time.
As for Intel, I am not surprised as they bought one or two companies deeply involved in the embedded space and with a lot of knowledge about the kernel.
Anyway, it's very clear that Linux has become very important for many companies, and the tool of choice for many too.
...And that's the type of proposition open source fanatics don't want to see discussed openly, even though it's a reality. The NSA already has significant chunks of code in just about every major Linux distro, going by the name of SELinux, but that fact is unwisely set aside for convenience's sake, as is Ken Thompson's very real little bit of open source malware (circa 1984): http://cm.bell-labs.com/who/ken/trust.html (lest we forget).
In the wake of Kaspersky's recent exposé of GrayFish, I can only conclude that a well-scrutinized open source OS is only half a solution at best. We now need hardware manufacturers to open up their firmware and microcode so that we have a platform reasonably secure from such nefarious exploits. Oh, the NSA, CIA, GCHQ, CSIS, FSB/KGB, China - all are very capable threats, even to open software. In light of recent findings, there is no doubt they have sleepers embedded in Linux kernel development. And then there's a multitude of non-kernel attack vectors such as glibc, systemd and the copious DEs available.
>>"...And that's the type of proposition open source fanatics don't want to see discussed openly, even though it's a reality. The NSA already has significant chunks of code in just about every major Linux distro going by the name of SELinux"
I have plenty of objections to the bundle of ad-hoc fixes that is SELinux, but oddly enough it being a ploy by the NSA is not one of them. And this is from someone who had an extended argument on these forums about Windows vs. Linux security models. All of the SELinux code is Open Source and it is scrutinized by some very smart people who have no affiliation with the NSA (and in some cases are pretty much enemies, such as the Chinese government). When it comes to security against third parties, both Open Source and Proprietary have advantages and disadvantages and neither is inherently more secure, imo. But when it comes to security against a subverted vendor, Open Source has a clear and demonstrable advantage - you can inspect what you're given.
There could be cleverly hidden flaws in GNU/Linux, but I think the main threats to any user are going to be accidental vulnerabilities or (from well-resourced enemies) firmware exploits. Sorry for the long post - I just don't think SELinux is subverted.
You're probably right about SELinux... I'm just afraid it'll eventually get used as a bargaining chip when the US government negotiates contracts with Red Hat, who in turn have a lot of sway over the kernel; if SELinux is a mess, that fact could be leveraged by the NSA (or any such equivalent) to cleverly inject malicious code with the appearance of benignity at first, second, third blush. Or maybe SELinux is not so bad in itself, but a red herring to divert scrutiny from parts of the kernel. I have no reason to believe or disbelieve any of this; I'm just erring on the side of caution.
When something reaches critical mass, it tends to attract attention: Windows, Mac, Android, iOS already have the spies' attention. Linux is important on the server side, and it's destined to take a significant share of that creepy space they call the Internet of Things... So it's just a matter of time before they (governments and other shady types) find a way in there too. (I'm guessing a binary blob might be good starting point for them).
I apologize for the length of my post... I find this cloak and dagger stuff innarresting if not quite unsettling!
I couldn't agree more. Given that quality, not quantity, is the measuring stick, we definitely do need free (as in open) software. I'd like nothing more than to own a computer with open source firmware, BIOS and operating system to run good quality, open goods (and I wouldn't mind paying for that transparency). I may be naive, but I still hope that day comes.
As an aside (and nothing against Linux), but it just isn't my first choice for a free, open source OS. I've got my fingers crossed on PC-BSD, Haiku, or ReactOS. I'm still using Windows to retain my investments and seriously looking to FreeDOS as a platform for programming projects.
I just wanted to bring up ReactOS because that's an open source project (also under the GPL) that barely gets any air time. It doesn't seem to have nearly the manpower of Linux at its disposal, but it does aim to bring a free and complete operating system compatible with Windows to the masses. I just checked at https://www.reactos.org/ and it looks like it's still in the "alpha" stage of development. But I find it interesting nonetheless.
Biting the hand that feeds IT © 1998–2020