Slayed?
I'm not absolutely certain, but that sounds wrong. Shouldn't that be 'slain'? Slayed only sounds right in the context of 70s rockers.
A recently resolved vulnerability in the Linux kernel that had the potential to allow an attacker to gain privilege escalation or cause denial of service went undiscovered for seven years. Positive Technologies expert, Alexander Popov, found a race condition in the n_hdlc driver that leads to double-freeing of kernel memory. …
In the context of the headline, you're right: 'slain' would be more correct, grammatically.
"The knight slayed the dragon" vs "The dragon was slain by the knight".
"The developer slayed the bug" vs "The bug was slain by the developer".
Hmm... to be honest, I agree with @word_merchant -- "fixed" would be a better word choice all round, and doesn't change in spelling between the two contexts.
(going even further off topic: in old English, they did differentiate by putting an accent on the 'e' and pronouncing it differently, so by those rules it would be "the bug was fixéd by the developer". That whole craziness was dropped a long time ago though)
Err...
"The knight slew the dragon", etc.
I sentence you to fifty hours' reading of the Bible.
Your remark about fixed is also wrong. The accent has only ever been used in poetry and hymns, to force the unnatural pronunciation when the metre requires it. Both the past verb and the past participle were often written "fixt" as well as "fixed" in the early days of the word's history (it's a modern import to English, sometime in the 15th century).
in old English, they did differentiate by putting an accent on the 'e' and pronouncing it differently, so by those rules it would be "the bug was fixéd by the developer"
Should be an e-grave. E-acute would sound like "Fix-ay-d", whereas the archaic form would be "Fixèd", pronounced Fix-edd.
"because it is closed source, you don't have a clue what bugs have been fixed"
That's some strange logic there. Closed source software never gets release notes listing bugs? Nonsense!
"let alone how long they have been there."
Usually true. Now tell me what do you gain by learning that your closed or open source software had a bug for years? Do you rage at the closed source software for having yet another bug, or are you satisfied with the open source software that the "many eyes" theory worked perfectly once again?
"because it is closed source, you don't have a clue what bugs have been fixed"
That's some strange logic there. Closed source software never gets release notes listing bugs? Nonsense!
Microsoft release notes?
I chose some random kernel vuln: This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow information disclosure when the Windows kernel improperly handles objects in memory.
Compare that to what you get here:
full code
a diff
a workaround (blacklist module)
Now, a diff is a patch -> you apply the diff, recompile your kernel and are done on the very day it gets disclosed, if you are paranoid. You can also wait for Debian, Red Hat or SUSE, but you do not have to.
In the Windows world, you wait for the patch, then wait some more days because you are scared the patch might bork other things ... because Windows is such a big monolithic mess!
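For reference, the blacklist workaround mentioned above amounts to one line of modprobe configuration that prevents the module from ever loading (the file name below is just a convention, any name under /etc/modprobe.d/ works):

```
# /etc/modprobe.d/disable-n_hdlc.conf
# Tell modprobe to run /bin/true instead of loading the module.
install n_hdlc /bin/true
```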
"Microsoft release notes?"
I took offense at the OP's statement 'because it is closed source, you don't have a clue what bugs have been fixed'. I wasn't concentrating on MS, Linux or any particular product, just the RMS school of dissing anything non-GPL.
You obviously don't get diffs with closed source, but in most cases there are release notes stating what is fixed, and thus "you do have a clue what's been fixed". Similarly, not all open source patches are well documented; many just carry a blanket statement of "several small bug fixes".
"You can also wait for Debian, Red Hat or SUSE, but you do not have to."
In the case of my systems, I waited for Debian until 0739 US Mountain time on 9 March 2017 when unattended upgrades installed the upgraded kernel. I suspect users of other major distributions received similarly timely updates. I waited a couple of days more to reboot, since the vulnerability was local and the affected kernel module was not, in fact, loaded.
"Closed source software never get release notes listing bugs?"
Some closed source stuff may get release notes listing what's been fixed, but in recent decades the mainstream closed source stuff hasn't been bothering with such 'trivia' - Windows being the obvious example, but there are plenty of others. If it's open source, the fix is obvious, by definition.
What would Apple do?
Because someone saw the cluster-fuck that is Windows Drivers and thought "That is what Linux has been missing this whole time!"
I understand the point of modules, but I've seen far too many of them that were created out of sheer idiocy or just a complete lack of forethought by the engineer. Usually the module could be replaced by a few lines added to the kernel and a pair of daemons (one to run the stuff that absolutely needs root, and a second to handle the bulk of the work, running under its own least-privilege user).
BSD commonly has fewer drivers than Linux, in particular for multimedia devices and the like; fewer people use it, and they care less about those sorts of things. NIC, HBA etc. drivers, no problem - USB webcam drivers or TV dongles, pretty much nothing.
Linux has all these things; we looked on enviously at things like MythTV. Eventually one guy came up with an idea: why don't we take all those Linux USB drivers and make a compat shim to use them on FreeBSD? The interesting part is how he decided to do it: he wrote a compat library that runs the Linux USB driver in userspace. The library co-ordinates with a single simple kernel module, cuse4bsd, which creates nodes under /dev and copies data to/from the userspace program.
This means the entire Linux driver runs only in userspace, whereas on Linux it all runs in kernel space. Any bug in the driver would cause an oops on Linux, whilst on BSD you can simply restart the userspace program containing the driver.
The only kernel code is simple, easier to test and debug, and the same for all consumers, so the surface of code within the kernel is tiny. Compare that with the Linux drivers, which are often written by box-shifting manufacturers simply by taking an existing driver and tweaking it.
Obviously it's not as efficient, since data has to be copied, but it's a lot safer and more resilient.
"Obviously it's not as efficient, since data has to be copied, but it's a lot safer and more resilient."
But isn't that price too high for something with a high potential for "context thrashing", like graphics and multimedia? That's one big reason high-performance drivers tend to stay in the kernel. Isn't that why BSD keeps the network drivers there for performance reasons (especially as network latency drops and speed rises)?
The funny part is that, if you have the right hardware interface (for example virtualization support), then for best performance you actually want drivers in userspace - in the same process where the userspace calls are being made. See, for example, OpenOnload.
Which probably explains why high-performance graphics remains the toughest thing to virtualize because it puts you in a dilemma between all the tight hardware demands (including latency) and the need to move things to userland for the purposes of virtualization. How do you maintain high performance in a situation like this without lots of context thrashing or the use of kernel shims on the host?
"How do you maintain high performance in a situation like this without lots of context thrashing or the use of kernel shims on the host?"
Why would you even want to think about virtualizing time-critical movement of high volumes of (graphical) data for a chunk of graphics hardware?
Conflicting requirements: good, fast, cheap. Which one (or two) can you live without? You can't have all three, sometimes you don't even get one.
The tricky part is how to virtualize, or to be more accurate, how to set up the communication channel between hardware and userspace (virtualization being one of the options). If done right, the result can beat the performance of a kernel driver hands down, thanks to the avoidance of context switching and memory copies. This however requires that the hardware itself implements a high enough level of abstraction to be usable from userspace, which is doable (although expensive) for an interface such as an IP network stack. Looking at where GPUs are going (but without much insight - not my area), I suspect the same is already happening in GPUs advertised for their VDI capabilities, such as (say) nVidia Grid, or perhaps Radeon Instinct - with the right hardware and firmware. I agree on one thing - this is not going to be cheap, for a long time to come.
"how to set up the communication channel between hardware and userspace (virtualization being one of the options). If done right, the result can beat the performance of a kernel driver hands down, thanks to the avoidance of context switching and memory copies."
Indeed. A lot of effort has gone into reducing the number of buffer copies in a network stack in recent decades, for example, and even before that, OSes like QNX did high-performance, high-volume inter-process communication (for certain specific requirements) by fiddling with memory-management pointers rather than by copying buffers, and by making intelligent use of scheduling states in conjunction with message passing:
http://www.qnx.com/developers/docs/660/topic/com.qnx.doc.neutrino.sys_arch/topic/intro_Message_passing_OS.html
But doing this stuff well requires intelligence and maybe experience, which seem to be qualities in short supply in the programming world in general, which often seems to overvalue novelty.
"Is it time to put to bed the open source myth that many eyes make all bugs shallow?" I'm not knocking open source, *which is great*, but the fervor which leads some acolytes to claim it is always superior in every way to software written for commercial sale seems misplaced.
The fact is, not all open source software has that many eyes on it, because nobody is paying for them.
The idea of many eyes making bugs shallow has always been wrong - you actually need expert eyes reviewing code: the eyes of many idiots are of almost no help at all.
The benefit of open source is that the source is available to view, so those who care can do code reviews. However, unless the software is free (libre), you might be able to identify a bug but have no licence to distribute a fix, leaving you dependent on the copyright owner to make changes and distribute an update.
The benefit of free (libre) software is that identified bugs can be fixed without depending on somebody else. This does not mean all such bugs will be fixed, but the potential is there.
With non-libre software, especially if it is not open source, you have no idea whether bugs exist, or, even if they are found, whether the copyright owner will distribute (or allow to be distributed) a repaired version. You simply have to trust the supplier of the software in question. This may or may not be a good idea.
>The fact is, not all open source software has that many eyes on it, because nobody is paying for them.
Well, there are always enthusiasts and paid devs looking, and that is the whole point: anybody can have a look - there will always be more eyes than on closed software.
The very big advantage of Open Source, though, is not so much all the eyes, it is the patches ... when you discover a problem, you fix it and send the patch (hacker-centered ecosystem), the god-likes (Linus, Greg etc) validate it and include it, if it ticks all the boxes. The good thing is, you can get a diff and patch your kernel easily, if you need to ... without waiting on the vendor (Suse, RedHat, Ubuntu)!
Well, except when you have Linux on closed hardware (NAS, router), then you are reliant on the vendor ...
"Well, except when you have Linux on closed hardware (NAS, router)"
This is why for years I have not bought any home router etc. without first checking that it's supported by OpenWRT. This policy really paid off last year when that major WPA vulnerability was discovered - all my routers, including five-year-old models, were updated pronto. I'm sure there are millions of vulnerable routers out there that are long since unsupported by their manufacturers, but still in use.
This is one reason why I compile a specific kernel for each machine I use: much less code, reducing the probability of bugs.
Distro kernels take the opposite approach, including code for everything they've ever heard of, in case some user needs it.
Maybe it would be better to have a choice of kernels in your distro, from "average PC" to "kitchen sink included".
Maybe it would be better to have a choice of kernels in your distro, from "average PC" to "kitchen sink included".
Actually, that's why we have different distros: from Red Hat and Ubuntu, which include everything, to Gentoo and Linux From Scratch, where you include only what you want and compile it for your system.
If you're security conscious, you cut your attack surface, and compiling your own kernel is base-level reduction. I've always done that, when it's available. I can't say, one way or the other, what's a better prescription for Linux in the hands of more mainstream users, as I don't play there. Given past experience, you will probably find everyone uses the kitchen-sink kernel as soon as a few articles or tweets mention something not working out of the box, though. People are risk averse for the wrong risks, almost invariably.
But aren't most Linux kernels these days modular in nature, meaning different parts only get loaded as and when needed? So to trigger a vulnerability in some part of the kernel, you need to actually use that part - which would in turn get compiled into custom kernels, too?
It's 2017. Joe Public mostly doesn't need an hdlc driver, do they? And by similar reasoning, how many people need to worry about this vulnerability? See e.g.
http://seclists.org/oss-sec/2017/q1/572
The code in question is an optional "line discipline" for the Linux tty driver. I did a bit of custom work based on the tty driver a few years ago, for a high-performance, non-interactive serial line card talking to some high-speed data acquisition devices. Swiss penknife doesn't even begin to describe the internals of the then tty driver.
Linux's tty driver stuff (including the overall architecture) might be a good piece of code to rethink (ideally, to massively simplify) one day, now that interactive users tend not to be using ADM3As and VT100s, never mind teletypes. Even InterwebOfTat things don't need all the layers that the generic tty driver offers.
Apologies if this has been fixed in the years since I looked.
Meanwhile, this doesn't sound like the same kind of issue as systems which are vulnerable to privilege escalation when non-priv users open "a specially crafted JPG file", surely?
>>Given the flaw's age, Linux enterprise servers and devices have been vulnerable for some time<<
Such servers are run by professionals. Updates and application installations are only done after much testing, etc. These are well-regulated environments; security is handled manually.
But consumer devices are run by people who like to download the latest app and don't control their machines - they need automatic security.
While Linux might be good for servers, Linux - and Android - are not so good for end users.
Not all Linux kernels are open source. Individuals and organizations can create & maintain custom Linux kernels based on the open source Linux kernel, and when an "old", unusual or neglected kernel exists, these private persons or organizations will often issue their own version of it.
There are upsides & downsides to these private kernels. They may sneak in code used for tracking, error-reporting or "stolen" from other copyrighted sources. But often they will miss the bug-fixes & optimizations of the original open source version of the kernel.
These original open source kernels frequently need better optimizations & bug-fixes of their own; each "stable" release of the kernel seems to get several bug-fix releases every week. Ubuntu-based operating systems can maintain this rate of "improvement" quickly & easily, via "http://kernel.ubuntu.com/~kernel-ppa/mainline/".
The hundreds of other Linux brand-names might need tedious compilation from source code - but every week?