Linus Torvalds on security: 'Do no harm, don't break users'

Linus Torvalds has offered a lengthy explanation of his thoughts on security, a calmer and more detailed version of his expletive-laden remarks on the topic earlier this week. Torvalds was angry that developers wanted to kill dangerous processes in Linux, a measure that would have removed potential …


"You need to not piss off users, and you need to not piss off developers."

And you especially need to not piss me off, he added.

Anonymous Coward

fairly sensible explanation ...

Torvalds may not be the best PR in the world, but this explanation makes a lot of sense.

It's interesting to see he apparently felt he should explain the bloody obvious ...

Anonymous Coward

Re: fairly sensible explanation ...

It's nonsense. The argument is "that's broken and exploitable, so leave it like that until both things are fixed." From a security perspective that's nonsense: as soon as something is compromised and being exploited, you have to balance the desires of the users for things to work with the desires of the data owners to not have their data spunked all over the internet. Fuck developers and users, that's my debit card details!!

Anonymous Coward

Re: fairly sensible explanation ...

The kernel devs don't take very long to fix important bugs, especially security bugs. Upstream devs should report to the kernel devs as the kernel fix may fix the issue in its entirety or change what the upstream devs need to fix.

Next someone will point out long-standing bugs that haven't been fixed. Some of these are not at all easily exploitable, and others have been in the kernel for years but have only recently become an issue because of other changes to the kernel.

No kernel or OS is perfect, folks, but the Linux kernel gets important updates as fast as possible. Some others are more questionable.


Re: fairly sensible explanation ...

>It's nonsense. The argument is "that's broken and exploitable, so leave it like that until both things are fixed."

Ok, so hypothetically, the person who is upgrading discovers, upon upgrading, that their stuff no longer works in some way.

Do they:

a) Go "oh well, must have been insecure", and try to fix forward while everything is down; or

b) Roll back the upgrade immediately (and delay updates to production if they discovered this in test)?

99.999% of people will do b).

Thus not only will that one bug that was "fixed" still be present, so will loads of other bugs that the upgrade would have fixed. If you are very unlucky, the impact of the failed upgrade will include some kind of risk exception, so that the software is not updated again (at least until the failed upgrade is investigated, the root cause discovered and the upgrade retested/rescheduled).

Making security fixes not break everything is pretty important, because if they do, people will not install them in a timely fashion.


Re: fairly sensible explanation ...

> From a security perspective that's nonsense

... and that is why the whole article is an explanation of why the "security perspective" itself is nonsense.

Linus' original rant was about the software equivalent of bricking up the front door of every house in the street because there might possibly be a way of jemmying them.


Re: fairly sensible explanation ...

"It's interesting to see he apparently felt he should explain the bloody obvious"

He did so for the "special" developers that submitted the CRAP PATCH in the first place. I think he's using a different tactic this time, to help eliminate the 'threat' in the future. I don't think it will work but I'm slightly entertained by it all. Anyway. GO Linus! Keep up the good work, k-thx!

Of course "being nice" and explaining things doesn't change the fact that putting "whitelists" into the kernel and killing processes is a *BAD* *IDEA* to begin with, which SHOULD HAVE BEEN "the bloody obvious" ( but apparently wasn't ).

Icon, because, topic related.


Re: fairly sensible explanation ...

"Fuck developers and users, that's my debit card details!!"

Thanks, Micro-shaft and/or "gnome 3" and/or "systemd" developer. "Your kind" of attitude is already so prevalent in the software world these days, that the rest of us don't even bother to comment on how NAUSEATED we are any more... [except on occasions like THIS one]

(the backlash is long overdue)

icon, because facepalm.

post-edit: I somehow doubt your debit card details would be easily revealed by a particular kernel bug. that's just FUD.


Re: fairly sensible explanation ...

Please stop with the CAPS Bob.


Re: fairly sensible explanation ...

I see a lot of comments taking Linus's rant as being against fixing exploits. The thread he commented on is not about bug fixes, but about "hardening" the kernel. When a bug happens (an overflow bug, for example), for it to be exploitable, an attacker must find it and find a way to exploit it. The hardening effort is to make it more difficult to find the location in memory to do the exploit. This particular patch series is about limiting what memory in the kernel the "user copy" functions can write to, by adding "white lists". This prevents the sort of bug where the kernel lets the "user copy" functions point anywhere and change how the kernel is supposed to work. It's not about fixing a bug. It's about keeping bugs from becoming exploits. It's not all black and white.

The issue is that the original patch series would crash the kernel when a "user copy" function accessed something not in the white list. It was only at the end of the series that it was changed to have a "fall back" method that merely warns, because it wasn't until then that it was found that KVM would crash because of it. This mentality of the security folks, to crash the kernel first, is what triggered Linus's rant. He stated that it should have been a warning from the start. This is the disconnect Linus has with the security folks: for them the job is done when the exploit is closed, but for the kernel developers it only begins. The kernel developers now have to find all the places that need to be white-listed but currently are not. Without the fall-back feature, things that used to work (like virtual machines) now crash the kernel. That is not acceptable.

It's not that Linus hates security, far from it. He's trying to educate the security folks, to get them to see the bigger picture. There are folks in the security world who agree with Linus. Just read the response from Jason A. Donenfeld, and Linus's response to him (which this article is about).


Re: fairly sensible explanation ...

@AC "Fuck developers and users, that's my debit card details!!"

I totally get where you're coming from - BUT - if taking away functionality breaks the system, then the admins won't apply the update, and so the bug/exposure lives on (although it may be fixed in an update that no-one uses). Then systems get more and more out of date and difficult to patch, then they just get left, then there's a mess that no-one can sort out, so finally the outsourcers move in and continue to do nothing for four more years. Then finally everything is truly broken.


Re: fairly sensible explanation ...

"Making security fixes not break everything is pretty important, because if they do, people will not install them in a timely fashion"

I could not have said this better.

Already, breakages are common enough companions to updates that I get a feeling of dread every time any software (OS or not) wants to update -- which is why I no longer allow anything to update automatically. Instead, I let updates pile up until I know I have a free weekend to spend fixing everything after they happen.

Anonymous Coward

Re: Fuck developers and users, that's my debit card details!!

And the 50 downvotes over me trying to control my data are why MS still rule the corporate world. Until you nerds stop with this "information wants to be free" bollocks you'll never get the buy in you need from those who control the cash.

Good.


Re: Fuck developers and users, that's my debit card details!!

And the 50 downvotes over me trying to control my data are why MS still rule the corporate world. Until you nerds stop with this "information wants to be free" bollocks you'll never get the buy in you need from those who control the cash.

That data you're trying to control.. Would that be the data that MS slurps whether you want them to or not?

Have to give them this much, at least you can stop MS from getting at it simply by not installing their malware on your computer, unlike a few other actors who try to infest every web page. (OTOH, they only get what you give when you visit a page, unlike the stuff MS are after....)

At least with Linux my data is in my hands and shall remain so unless I either choose to give it away or do something silly like letting some miscreant access my machine. Hence why no Windows is allowed online (MS being one miscreant who'll not get access any more).


Re: Fuck developers and users, that's my debit card details!!

No, you got 50 downvotes because your statement "The argument is 'that's broken and exploitable, so leave it like that until both things are fixed'" is a figment of your imagination.

Anonymous Coward

It must be hard working on the kernel.

Anonymous Coward

But much easier now that we know it doesn't need to be secure, as long as it keeps working.


Point missed

No, it needs to be secure and keep working. If it's secure but doesn't work, it's at least as useless as if it's insecure and working.

Anonymous Coward

Did you read the article?

He's rightly saying that breaking something to harden something is not acceptable. I think if it's an exploitable bug then it falls into a different category.

Sometimes I wonder who the real masturbaters are.

Anonymous Coward

Did you read the article?

Yes, twice to be sure. He's saying that breaking users to fix a bug that hasn't been exploited yet simply annoys users unnecessarily. No-one would argue that the bug shouldn't be fixed correctly, but temporarily disabling something to ensure that it can't be exploited, while a full fix is being developed, is a perfectly acceptable security approach, and far, far better than leaving a potential kernel vulnerability unpatched. Users who don't care about security can always not upgrade.

Anonymous Coward

If you don't update then the bug exists.

If you update it gets disabled.

If you fix it properly, everyone is happy, and that is the option he's going for, rather than the band-aid approach which breaks the program, all in the name of hardening it.

If you're going to do something do it right rather than the Windows 10 mantra of using your users as guinea pigs.

I'm not sure what you take from the article but maybe you should read the previous article as well to get some understanding.


Or if your kernel kills off processes because it doesn't like the way they look at it funny, or whatever criteria it was, perhaps the security hardening could be turned into a DoS attack.


Especially when you have to consider stupid: particularly stupid users. What do you do when your biggest issue is PEBCAK? Some would say training, while others counter that you're doing it wrong.


This post has been deleted by its author


What if the bug is intrinsic to the interface, meaning the ONLY way to fix it is to break the user?

Anonymous Coward

If you don't update then the bug exists.

If you update it gets disabled.

It all depends, of course, on the bug severity. If we're talking of some piddling CVSS 1 bug, and you can get a proper fix out in a week or so, then of course you can probably live with it until you fix it properly.

On the other hand I've just found a CVSS 9+ bug in an infrequently-used program that lets an ordinary user become root. The fix will require redesign which will take some time to develop and test.

Am I going to disable that program until we have the fix, even though the bug isn't known yet and even if it's inconvenient? Too fscking right I am.


The problem, though, is how do you KNOW the bug isn't already known elsewhere?


"I'm not sure what you take from the article but maybe you should read the previous article as well to get some understanding."

Even better, go and read the actual post.


"What do you do when your biggest issue is PEBCAK?"

The kernel hardening approach would seem to be switching off the computer and removing the keyboard. And maybe the chair.


@ Symon

You mean you have to crack nuts to get at the kernel?


> perhaps the security hardening could be turned into an DoS attack

There's no 'perhaps' - or even 'turned into' - about it. Most "security" policies are firmly inside the dimwit domain of "just say no"-ism. In effect, much "security" already is a DoS attack.


"temporarily disabling something to ensure that it can't be exploited, while a full fix is being developed, is a perfectly acceptable security approach"

Disabling it instead of fixing it isn't. What was stopping the submitters from sending in a patch to fix the problem instead of hiding it?


"The problem, though, is how do you KNOW the bug isn't already known elsewhere?"

As you like posing hypotheticals here's one for you: There's a bug in the OS that runs your intensive care monitoring system which could lead to it being pwned. Shall we shut it down, just to be safe?


"What if " [snip paranoia bait]

There are a zillion "what if" possible questions out there. We can _easily_ "what if" ourselves into a completely unproductive state. But that wouldn't be practical, would it?

I doubt that a bug would be "intrinsic to the interface", and EVEN if it _IS_, you RE-DESIGN THE INTERFACE to fix it (not slap on a 'patch' with whitelists and process killing as a "fix").

I have to wonder how those white lists work, anyway. Could a trojan horse application simply write a process with "the right credentials" and _BYPASS_ that anyway?

see, ya gotta think like an evil hacker to see the potential workarounds in order to recognize that a horsecrap "solution" (like white lists and process killing) is simply PURE HORSECRAP??

FIX THE ORIGINAL BUG instead, I say. So did Linus, apparently.


"As you like posing hypotheticals here's one for you: There's a bug in the OS that runs your intensive care monitoring system which could lead to it being pwned. Shall we shut it down, just to be safe?"

If shutting it down isn't an option, then something else needs to be done, such as isolating it, taking it offline so no one can connect to it. It's still possible to do intensive care monitoring without networking (what happened BEFORE then?), but you're getting yourself into serious Trolley Problem territory if you leave them online, since someone could be getting ready to raise hell in your hospital right now (something one MUST assume in a high-security environment), and malpractice and negligent-death lawsuits can be crippling, especially if filed en masse. Look at all the mega-leaks that have happened already, with the fallout still being assessed. We're fortunate that so far there hasn't been a megahack directly responsible for lots of deaths, and I seriously doubt any hospital wants the ignominy of being the first.

The TLDR version: If your operations can't continue without being seriously vulnerable, what you REALLY need is a Plan B.


THIS. Just ask any A/V vendor who flags something like, say, winlogon.exe as a false positive. Or 'accidentally' kills lsass.exe on a Windows box. Or flags a GINA DLL as bad, when it's actually needed to log on to the workstation. Or any number of other stupid tricks that turn the workstation into an expensive and terrible substitute for a brick.

While I can certainly understand where the security folks are coming from, it gets all too easy to lose track of why the machines are there, and frankly, having someone rub your nose in it to remind you is sometimes the only way to break through that narrow focus.


" If your operations can't continue without being seriously vulnerable, what you REALLY need is a Plan B."

Which brings us back to Linus's post: his Plan B - or, really, just Plan - is to fix the bug.

JLV

>far better than leaving a potential kernel

Many bugs, especially in the Linux kernel, exist in a low-immediate-threat context, where you already need some kind of access to play with them, like local access.

Are you saying f..k the users and the devs, even in a context where a reasonably locked down system would be at little risk??? i.e. most assuredly not a Heartbleed equivalent.

Go fly a kite. Why not just flag it? If the users don't want it, Torvalds doesn't and the devs don't, who elected you the boss?

Plus, is this where most of the real threats lie? In the kernel? Or is it not rather sloppy apps, sloppy configs, sloppy human processes by some IT shops, rogue insiders? Look at where breaches are actually happening, not just at theoretical concerns.


I'm a user, but I also want a secure box. Any endpoint that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until it's the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so, any user who cares about the sanctity of their data probably agrees. This doesn't mean that the security patch is the final product; it means it's the uncomfortable workaround. However, where I agree with Linus is that the same mindset shouldn't be applied to local userspace programs. Sure, it's annoying if one local user can gain extra privileges and do damage, but that I can deal with.


So if they find a bug in Apache, and the workaround is to shut it off, over 75% of the internet would fall offline.

If the issue isn't reported, but the actual flaw is fixed then patched, there's no issue.

Which one would you rather have happen?


But what he's extolling is that the security problem should be rectified rather than just killing the system. "Remote attacker can kill the whole box" (we're talking kernel development here) is worse than "Remote attacker can kill a service", which is worse than "Remote attacker can fuck right off".

Linus is essentially saying "Never permit the first; permit the second in extreme circumstances; go with the third choice wherever possible"


I also want a secure box. Any endpoint that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until it's the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so, any user who cares about the sanctity of their data probably agrees.

The "off" switch defeats all remote hackers instantaneously, albeit at the cost of 100% utility degradation. What you are debating is how close you're prepared to get to that logical endpoint in the interests of greater peace of mind re. security.

I have some online servers where the "sacred" issue is uptime. Data passes through these systems in pseudo-real time so the primary thing I'm concerned about is compromising system stability. If hackers get in then, so long as they're only reading/copying data and can't escalate to damage system configuration/stability, I don't have an immediate problem because the data the system processes is public domain in the first place.

On the other hand, I have another server with highly sensitive data and that one is on site (the ones above are VPS) and powered by remotely switchable PDUs so it can be physically killed (in extremis) in the event of something suspicious being detected. The former is "fail safe" (meaning it will try to keep running in the face of a problem) and the latter is "fail secure" (meaning it won't). Both sorts of use case exist, but security researchers tend to have tunnel vision focussed only on the latter.



"But even so; any user who cares about the sanctity of their data probably agrees."

What if the crash-the-system approach leaves the user with corrupt data?

The effort should go into fixing the root problem.


So if they find a bug in Apache, and the workaround is to shut it off, over 75% of the internet would fall offline.

Not a problem when there is an option to allow a buggy program to continue to run, as was proposed. Better to let the user make an informed decision: allow a buggy program to run, or block it.

If the bug isn't detected and blocked, the user would never know it was buggy and would blindly continue to use it, believing it was bug-free.


But to fix the root problem you have to IDENTIFY it first. Meanwhile, you still have an exploit avenue potentially being exploited while you twiddle your thumbs.


But to fix the root problem you have to IDENTIFY it first. Meanwhile, you still have an exploit avenue potentially being exploited while you twiddle your thumbs.

In that case, the only prudent thing to do would be to turn your machine off and remain disconnected until you hear there's an OS +apps that is totally bug-free.

Care to demonstrate the effectiveness of this solution for us?

Anonymous Coward

"I'm a user, but I also want a secure box. Any endpoint that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until it's the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so, any user who cares about the sanctity of their data probably agrees. This doesn't mean that the security patch is the final product; it means it's the uncomfortable workaround. However, where I agree with Linus is that the same mindset shouldn't be applied to local userspace programs. Sure, it's annoying if one local user can gain extra privileges and do damage, but that I can deal with."

dd if=/dev/zero of=/dev/sda is your friend, then.

Your box, regardless of what OS it runs, is not secure. Many 0-days are inside. Deal with it.

Breaking user experience to avoid security flaws is retarded.


Sometimes you can't have it both ways...

Although I see his point, I also think that he may have set his expectations a little (too?) high. In an ideal world this may work, but sometimes it just doesn't work this way.

Sure, if there is an error then that needs to be fixed asap. No arguments there. But what about the time between when the bug gets discovered and the moment of implementing the actual fix? That's the moment when a system will be vulnerable, and security hardening might be capable of preventing further damage from taking place if an attack were to occur.

Of course this can break stuff. I think a good example would be the "kern.securelevel" setting on FreeBSD. This is a setting with a default value of -1, and administrators can only increase the value; doing so hardens the system some more.

For example, at value 1: you can no longer turn off immutable flags on files, /dev/mem and /dev/kmem may not be directly opened for writing (read: you can no longer load or unload kernel modules), and /dev/io is fully inaccessible.

Value 2: all of the above, and disks can no longer be opened directly for writing (mount is excluded). So it protects the filesystem(s).

These settings will plain out break X. But they also harden the system and can prevent plenty of possible nastiness from happening. So on a desktop this setting might not be very useful, but on a server it's all the more valuable (assuming it doesn't run X).

So I can't help wondering if this doesn't also apply here. In an ideal world you wouldn't need fail-safes, but the world simply isn't ideal.


Re: Sometimes you can't have it both ways...

"But what about the time in which the bug got discovered and the moment of implementing the actual fix? That's the moment when a system will be vulnerable, and a security hardening might be capable of preventing further damage from taking place if an attack were to occur."

I think it's more to the point that Linux shouldn't dictate the user's choices (not like Windows 10's forced updates). If you do want a security-hardened whatever, you sandbox it under your control to avoid damage. But another user who just wants program X to work should still be able to get program X to work, regardless of security.

Some users here are sysadmins, which explains the bias. It is common, thinking as a sysadmin, to want to secure your system as much as possible.

But in reality, you do a lot of work to walk the line between being secure but breaking something, and being less secure but still working. You test patches ahead of time to avoid breaking userspace. You do it because you want the control.

Now imagine you don't have control and Linux decides for you, your team and your company, breaking a lot of user programs. That probably isn't good for you, and Linus thinks so too.

edit: we're talking about the enhanced hardening from the security guys (explained in another article), which was described as working like a sandbox; the problem is that it would stop the processes with the vulnerability, which means it can break user software.

Anonymous Coward

Benjamin Franklin: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

*

I think Franklin would come down on the same side as Linus!!!

