Dormant Linux kernel vulnerability finally slayed

A recently resolved vulnerability in the Linux kernel that had the potential to allow an attacker to escalate privileges or cause a denial of service went undiscovered for seven years. Positive Technologies expert Alexander Popov found a race condition in the n_hdlc driver that leads to a double free of kernel memory. …

Silver badge

Slayed?

I'm not absolutely certain, but that sounds wrong. Shouldn't that be 'slain'? Slayed only sounds right in the context of 70s rockers.

18
0
Anonymous Coward

Re: Slayed?

Or maybe "fixed"?

14
0
Anonymous Coward

Re: Slayed?

In the context of the headline, you're right: 'slain' would be more correct grammatically.

"The knight slayed the dragon" vs "The dragon was slain by the knight".

"The developer slayed the bug" vs "The bug was slain by the developer".

Hmm... to be honest, I agree with @word_merchant -- "fixed" would be a better word choice all round, and doesn't change in spelling between the two contexts.

(going even further off topic: in old English, they did differentiate by putting an accent on the 'e' and pronouncing it differently, so by those rules it would be "the bug was fixéd by the developer". That whole craziness was dropped a long time ago though)

7
0

Re: Slayed?

Err...

"The knight slew the dragon", etc.

I sentence you to fifty hours' reading of the Bible.

Your remark about fixed is also wrong. The accent has only ever been used in poetry and hymns, to force the unnatural pronunciation when the metre requires it. Both the past tense and the past participle were often written "fixt" as well as "fixed" in the early days of the word's history (it's a relatively modern import into English, arriving sometime in the 15th century).

13
0

Re: Slayed?

'Secured' would be a better antonym...

0
1
Silver badge
Headmaster

Re: Slayed?

in old English, they did differentiate by putting an accent on the 'e' and pronouncing it differently, so by those rules it would be "the bug was fixéd by the developer"

Should be an e-grave. E-acute would sound like "Fix-ay-d", whereas the archaic form would be "Fixèd", pronounced Fix-edd.

7
0
Silver badge

Re: Slayed?

> "fixed" would be a better word choice all round

But, "Buffy the Vampire Fixer"? Eew. Who'd want to watch that?

16
0
Silver badge

Re: Slayed?

@frumious bandersnatch

"But, "Buffy the Vampire Fixer"? Eew. Who'd want to watch that?"

Given the alternative usage of 'fix' as in "Since my old tomcat has been fixed he's been a lot quieter" the idea of 'Buffy the Vampire Fixer' could get a lot of viewers!

10
0
Silver badge

Re: Slayed?

@julian bradfield

This is one of the things I really love about El Reg - in theory we're all a bunch of hairy smelly techie-geeks who only understand computers, but we can get long threads on the nuances of English irregular verbs and mediaeval pronunciation. Priceless!

22
0
Facepalm

Re: Slayed?

No, not slain. He meant "sleighed", you know, taken for a ride.

3
1
Silver badge

Re: Slayed?

English irregular verbs

IE - all of them.. :-)

1
0

Re: Slayed?

Don't you just love English irregular verbs, a shortcut to babbling insanity?

0
1
Coat

Should have used Windows...

7
11
Anonymous Coward

Should have used Windows...

...where kernel vulnerabilities to allow privilege escalation come as standard

21
3

Err AC...

From what I think you're saying and what you are trying to portray, it also seems like they are standard in Linux, as evidenced in this very article.

9
6

...where kernel vulnerabilities to allow privilege escalation come as standard

Quite. Hence the obvious sarcasm, which, if you missed it initially, should have been clearly sign-posted by the "I'll get my coat" icon.

You appear to have sadly missed both.

8
0
Anonymous Coward

Re: ...where kernel vulnerabilities to allow privilege escalation come as standard

Icons are not displayed in the mobile version.

5
0
Silver badge

Should have used Windows...

Where, because it is closed source, you don't have a clue what bugs have been fixed let alone how long they have been there.

8
1
Silver badge

@alain williams

"because it is closed source, you don't have a clue what bugs have been fixed"

That's some strange logic there. Closed source software never get release notes listing bugs? Nonsense!

"let alone how long they have been there."

Usually true. Now tell me what do you gain by learning that your closed or open source software had a bug for years? Do you rage at the closed source software for having yet another bug, or are you satisfied with the open source software that the "many eyes" theory worked perfectly once again?

4
5
Silver badge
Boffin

Re: @alain williams

"because it is closed source, you don't have a clue what bugs have been fixed"

That's some strange logic there. Closed source software never get release notes listing bugs? Nonsense!

Microsoft release notes ?

I chose some random kernel vuln: This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow information disclosure when the Windows kernel improperly handles objects in memory.

Compare that to what you get here:

full code

a diff

a workaround (blacklist module)

Now, a diff is a patch -> you apply the diff, recompile your kernel, and you are done on the very day it gets disclosed, if you are paranoid. You can also wait for Debian, Redhat or Suse, but you do not have to.

In Windows world, you wait for the patch, wait some more days because you are scared the patch might bork other things ... because Windows is such a big monolithic mess!
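For the curious, the "blacklist module" workaround mentioned above can be sketched as a modprobe config fragment. The filename is illustrative; the `install ... /bin/true` trick makes any load attempt run a no-op instead, which a bare `blacklist` line would not do for explicit requests:

```
# /etc/modprobe.d/disable-n_hdlc.conf  (illustrative filename)
# Any request to load the vulnerable module runs /bin/true instead:
install n_hdlc /bin/true
```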

12
2
Anonymous Coward

Re: Closed source software never get release notes ...

"Closed source software never get release notes listing bugs?"

Some closed source stuff may get release notes listing what's been fixed, but in recent decades the mainstream closed source stuff hasn't been bothering with such 'trivia' - Windows being the obvious example, but there are plenty of others. If it's open source, the fix is obvious, by definition.

What would Apple do?

3
0
Silver badge

@Hans 1

"Microsoft release notes ?"

I took offense at the OP's statement 'because it is closed source, you don't have a clue what bugs have been fixed'. I wasn't concentrating on MS, Linux or any particular product, just the RMS school of dissing anything non-GPL.

Obviously you don't get diffs with closed source, but in most cases there are release notes stating what is fixed, and thus "you do have a clue of what's fixed". Similarly, not all open source patches are well documented; many times it's just a blanket statement of "several small bug fixes".

2
0
Bronze badge
Coat

Are you suggesting that Linux is now insecure because it's been exploited, that the PDF printer in Ubuntu or the USB bus is now compromised to let Stuxnet variants hack your systems?

The spooks will do anything to spy on you, to keep you safe, else just think of the children....

0
0
Bronze badge
Trollface

But we all know don't we??

That Linux is TOTALLY impervious - TOTALLY impervious - right?

There is NEVER a virus for Linux - Right?

There has NEVER been a virus for Linux - Rig.. err.. you get my meaning!

0
3
Silver badge

Re: @alain williams

"You can also wait for Debian, Redhat or Suse, but you do not have to."

In the case of my systems, I waited for Debian until 0739 US Mountain time on 9 March 2017 when unattended upgrades installed the upgraded kernel. I suspect users of other major distributions received similarly timely updates. I waited a couple of days more to reboot, since the vulnerability was local and the affected kernel module was not, in fact, loaded.

0
0
Silver badge

Ah modules

Because someone saw the cluster-fuck that is Windows Drivers and thought "That is what Linux has been missing this whole time!"

I understand the point of modules, but I've seen far too many of them that were created out of sheer idiocy or a complete lack of forethought by the engineer. Usually the module could be replaced by a few lines added to the kernel and a pair of daemons (one to run the stuff that absolutely needs root, and a second to handle the bulk of the work, running under its own least-privilege user).

2
1
Silver badge

Keep reading I will get to a point eventually

BSD commonly has fewer drivers than Linux, in particular for multimedia devices and so on; fewer people use it, and they care less about those sorts of things. NIC, HBA etc. drivers, no problem - USB webcam drivers or TV dongles, pretty much nothing.

Linux has all these things; we looked on enviously at things like MythTV. Eventually one guy came up with an idea: why don't we take all those Linux USB drivers and make a compat shim to use them on FreeBSD? The interesting part is how he decided to do it; he wrote a compat library that runs the Linux USB driver in userspace. The library coordinates with a single simple kernel module, cuse4bsd, which creates nodes under /dev and copies data to/from the userspace program.

This means the entire Linux driver runs only in userspace, whereas on Linux it all runs in kernel space. Any bug in the driver would cause an oops on Linux, whilst on BSD you can simply restart the userspace program containing the driver.

The only kernel code is simple, easier to test and debug, and the same for all consumers, so the surface of code within the kernel is tiny. Compare that to the Linux drivers, which are often written by box-shifting manufacturers simply by taking an existing driver and tweaking it.

Obviously, it's not as efficient, data has to be copied. It's a lot safer and resilient.

11
0
Anonymous Coward

Re: Keep reading I will get to a point eventually

"Obviously, it's not as efficient, data has to be copied. It's a lot safer and resilient."

But can that price be too high for something with a high potential for "context thrashing" like graphics and multimedia? That's one big reason high-performance drivers tend to stick with the kernel. Isn't that why BSD keeps the network drivers there for performance reasons (especially as network latency drops and speed rises)?

5
0
Silver badge

Re: Keep reading I will get to a point eventually

The funny part is that, if you have the right hardware interface (for example virtualization support), then for best performance you actually want drivers in userspace - in the same process where the userspace calls are being made. For example, see OpenOnload and more.

3
0
Anonymous Coward

Re: Keep reading I will get to a point eventually

Which probably explains why high-performance graphics remains the toughest thing to virtualize because it puts you in a dilemma between all the tight hardware demands (including latency) and the need to move things to userland for the purposes of virtualization. How do you maintain high performance in a situation like this without lots of context thrashing or the use of kernel shims on the host?

2
0
Anonymous Coward

Re: high-performance graphics

"How do you maintain high performance in a situation like this without lots of context thrashing or the use of kernel shims on the host?"

Why would you even want to think about virtualizing time-critical movement of high volumes of (graphical) data for a chunk of graphics hardware?

Conflicting requirements: good, fast, cheap. Which one (or two) can you live without? You can't have all three, sometimes you don't even get one.

4
0
Silver badge

Re: high-performance graphics

The tricky part is how to virtualize, or to be more accurate - how to setup communication channel between hardware and userspace (virtualization being one of the options). If done right, the result can beat performance of kernel driver hands down, thanks to avoidance of context switching and memory copies. This however requires that the hardware itself implements a high enough level of abstraction to be usable from userspace, which is doable (although expensive) for an interface such as an IP network stack. Looking at where the GPUs are going (but without much insight - not my area), I suspect the same is already happening in GPUs advertised for their VDI capabilities, such as (say) nVidia Grid, or perhaps Radeon Instinct - with the right hardware and firmware. I agree on one thing - this is not going to be cheap for a long time to come.

2
0
Anonymous Coward

Re: high-performance graphics

"Why would you even want to think about virtualizing time-critical movement of high volumes of (graphical) data for a chunk of graphics hardware?"

Because no one will ever need more than 640k right?

2
0
Anonymous Coward

Re: high-performance graphics

"how to setup communication channel between hardware and userspace (virtualization being one of the options). If done right, the result can beat performance of kernel driver hands down, thanks to avoidance of context switching and memory copies. "

Indeed. There has been lots of effort in recent decades into reducing the number of buffer copies in a network stack, for example, and even before that, OSes like QNX did high performance high volume inter-process communication (for certain specific requirements) by fiddling with memory management pointers rather than by copying buffers, and by making intelligent use of scheduling states in conjunction with message passing:

http://www.qnx.com/developers/docs/660/topic/com.qnx.doc.neutrino.sys_arch/topic/intro_Message_passing_OS.html

But doing this stuff well requires intelligence and maybe experience, which seem to be qualities in short supply in the programming world in general, which often seems to overvalue novelty.
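The "fiddling with memory management pointers rather than copying buffers" idea can be sketched very loosely in portable C (this is not QNX's actual API, just an illustration of the principle): parent and child share one physical page via mmap(MAP_SHARED), so the message is never copied through a pipe or socket buffer.

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Two processes share one page of memory; the "message" travels by
 * virtue of the shared mapping, with zero buffer copies in between. */
const char *shared_message(void)
{
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return NULL;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: producer */
        strcpy(page, "hello from child");  /* write into the shared page */
        _exit(0);
    }
    waitpid(pid, NULL, 0);                 /* parent: consumer */
    return page;                           /* same page, no copies made */
}
```

A real message-passing kernel adds scheduling smarts on top (block the sender until the receiver is ready, and so on), but the zero-copy core is the same trick.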

2
0
Holmes

Is it time to put to bed the open source myth that many eyes make all bugs shallow? I'm not knocking open source, *which is great*, but the fervor which leads some acolytes to claim it is always superior in every way to software written for commercial sale seems misplaced.

The fact is, not all open source software has that many eyes on it, because nobody is paying for them.

1
0
Bronze badge

The 'many eyes' myth

The idea of many eyes making bugs shallow has always been wrong - you actually need expert eyes reviewing code: the eyes of many idiots are of almost no help at all.

However, the benefit of open source is that the source is available to view, so those who care can do code reviews. But unless the software is free (libre), you might be able to identify a bug yet have no licence to distribute a fix, being dependent on the copyright owner to make changes and distribute an update.

The benefit of free (libre) software is that identified bugs can be fixed without depending on somebody else. This does not mean all such bugs will be fixed, but the potential is there.

With non-libre software, especially if it is not open source, you have no idea if bugs exist, or, even if they are found, whether the copyright owner will distribute, or allow to be distributed, a repaired version. You simply have to trust the supplier of the software in question. This may or may not be a good idea.

5
1

@Paul 195

"The fact is, not all open source software has that many eyes on it, because nobody is paying for them."

I think you are missing the point that most kernel devs are paid these days. In this case though, there have been few eyes because almost no one uses the driver in question.

3
0
Silver badge
Boffin

>The fact is, not all open source software has that many eyes on it, because nobody is paying for them.

Well, there are always enthusiasts and paid devs looking, and that is the whole point: anybody can have a look - it will always be more eyes than closed software gets.

The very big advantage of Open Source, though, is not so much all the eyes, it is the patches ... when you discover a problem, you fix it and send the patch (hacker-centered ecosystem), the god-likes (Linus, Greg etc) validate it and include it, if it ticks all the boxes. The good thing is, you can get a diff and patch your kernel easily, if you need to ... without waiting on the vendor (Suse, RedHat, Ubuntu)!

Well, except when you have Linux on closed hardware (NAS, router), then you are reliant on the vendor ...

2
1
Silver badge
Happy

Re: The 'many eyes' myth

>You simply have to trust the supplier of the software in question. This may or may not be a good idea.

You simply have to trust the supplier of the software in question. This never is a good idea.

TFTFY

5
0

Who needs an HDLC serial driver?

This is one reason why I compile a specific kernel for each machine I use: much less code, reducing the probability of bugs.

Distro kernels take the opposite approach, including code for everything they've ever heard of, in case some user needs it.

Maybe it would be better to have a choice of kernels in your distro, from "average PC" to "kitchen sink included".

4
0
Silver badge

Re: Who needs an HDLC serial driver?

Maybe it would be better to have a choice of kernels in your distro, from "average PC" to "kitchen sink included".

Actually, that's why we have different distros, from Red Hat and Ubuntu (include everything) to Gentoo and Linux From Scratch (include only what you want and compile it for your system).

5
0
Anonymous Coward

Re: Who needs an HDLC serial driver?

It's 2017. Joe Public mostly doesn't need an HDLC driver, do they? And by similar reasoning, how many people need to worry about this vulnerability? See e.g.

http://seclists.org/oss-sec/2017/q1/572

The code in question is an optional "line discipline" for the Linux tty driver. I did a bit of custom work based on the tty driver a few years ago, for a high-performance non-interactive serial line card talking to some high-speed data acquisition devices. Swiss penknife doesn't even begin to describe the internals of the then tty driver.

Linux's tty driver stuff (including the overall architecture) might be a good piece of code to rethink (ideally, to massively simplify) one day, now that interactive users tend not to be using ADM3As and VT100s, never mind teletypes. Even InterwebOfTat things don't need all the layers that the generic tty driver offers.

Apologies if this has been fixed in the years since I looked.

Meanwhile, this doesn't sound like the same kind of issue as systems which are vulnerable to privilege escalation when non-priv users open "a specially crafted JPG file", surely?
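For context on why the tty layer's flexibility matters to this bug: the vulnerable code was reachable because any process that can open a tty may ask the kernel to attach a line discipline to it via the TIOCSETD ioctl; the published advisory describes opening /dev/ptmx and requesting N_HDLC. A minimal C sketch, attaching only the harmless default N_TTY discipline (0) rather than N_HDLC (13):

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Open a fresh pseudo-terminal and ask the kernel to attach line
 * discipline `disc` to it. Returns 0 on success, -1 on failure.
 * CVE-2017-2636 was triggered this way with disc = 13 (N_HDLC),
 * which auto-loaded the buggy module; here we only use 0 (N_TTY). */
int set_line_discipline(int disc)
{
    int fd = open("/dev/ptmx", O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    int rc = ioctl(fd, TIOCSETD, &disc);  /* switch the ldisc */
    close(fd);
    return rc;
}
```

No special privileges are needed, which is exactly why a local attacker could reach a driver that almost nobody legitimately uses.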

4
1
Silver badge

Re: Who needs an HDLC serial driver?

If you're security conscious, you cut your attack surface, and compiling your own kernel is base-level reduction. I've always done that, when it's available. I can't say, one way or the other, what's a better prescription for Linux in the hands of more mainstream users, as I don't play there. Given past experience, you will probably find everyone uses the kitchen-sink kernel as soon as a few articles or tweets mention something not working out of the box, though. People are risk averse for the wrong risks, fairly invariably.

3
1
Bronze badge

I would think that many admins have no idea what HDLC is/was, since it went out of fashion before they were born. This is a case where system security was saved by the obsolescence of hardware.

0
0
Bronze badge

Security problem is not servers - it's consumers

>>Given the flaw's age, Linux enterprise servers and devices have been vulnerable for some time<<

Such servers are run by professionals. Updates and application installation are only done after much testing, etc. These are well-regulated environments. Security is provided in a manual sense.

But consumer devices are run by people who like to download latest app, don't control their machine - they need automatic security.

While Linux might be good for servers, Linux - and Android - are not so good for end users.

0
1

Closed & Open Source kernels are different

Not all Linux kernels are open source. Individuals and organizations can create & maintain custom Linux kernels based upon the open source Linux kernel. When an "old", unusual or neglected Linux kernel exists, these private persons or organizations will often issue their own version of the Linux kernel.

There are upsides & downsides to the private kernels. They may sneak in code that is used for privacy tracing or error-tracking, or that is "stolen" from other copyrighted sources. But often they will miss the bug-fixes & optimizations of the original open-source version of the Linux kernel.

These original open-source kernels very frequently need better optimizations & bug-fixes. Each "stable" release of the kernel seems to get several bug-fix releases each week. The Ubuntu-based operating systems can rapidly & easily keep up with this rate of "improvement" via "http://kernel.ubuntu.com/~kernel-ppa/mainline/"

The hundreds of other Linux brand-names might need tedious compilation from source code - but every week?

0
0
