Linus Torvalds on security: 'Do no harm, don't break users'

Linus Torvalds has offered a lengthy explanation of his thoughts on security, in which he explained a calmer and more detailed version of his expletive-laden thoughts on the topic earlier this week. Torvalds was angry that developers wanted to kill dangerous processes in Linux, a measure that would have removed potential …

  1. Pascal Monett Silver badge
    Trollface

    "You need to not piss off users, and you need to not piss of developers.”

    And you especially need to not piss me off, he added.

  2. Anonymous Coward
    Anonymous Coward

    fairly sensible explanation ...

    Torvalds may not be the best PR in the world, but this explanation makes a lot of sense.

    It's interesting to see he apparently felt he should explain the bloody obvious ...

    1. Anonymous Coward
      Anonymous Coward

      Re: fairly sensible explanation ...

      It's nonsense. The argument is "that's broken and exploitable, so leave it like that until both things are fixed." From a security perspective that's nonsense: as soon as something is compromised and being exploited, you have to balance the desire of users for things to work against the desire of data owners not to have their data spunked all over the internet. Fuck developers and users, that's my debit card details!!

      1. Anonymous Coward
        Anonymous Coward

        Re: fairly sensible explanation ...

        The kernel devs don't take very long to fix important bugs, especially security bugs. Upstream devs should report to the kernel devs as the kernel fix may fix the issue in its entirety or change what the upstream devs need to fix.

        Next someone will point out long standing bugs that haven't been fixed. Some of which are not, at all, easily exploitable and others that have been in the kernel for years, but have only recently become an issue because of other changes to the kernel.

        No kernel or OS is perfect, folks, but the Linux kernel gets important updates out as fast as possible. Some others are more questionable.

      2. Justicesays

        Re: fairly sensible explanation ...

        >It's nonsense. The argument is "that's broken and exploitable, so leave it like that until both things are fixed."

        Ok, so hypothetically, the person who is upgrading discovers, upon upgrading, that their stuff no longer works in some way.

        Do they:

        a) Go, oh well, must have been insecure, let's try to fix forward while everything is down.

        b) Roll back the upgrade immediately (and delay updates to production if they discovered this in test).

        99.999% of people will do b).

        Thus not only will that one bug that was "fixed" still be present, but so will loads of other bugs that the upgrade would have fixed. If you are very unlucky, the impact of the failed upgrade will include some kind of risk exception, so that the software is not updated again (at least until the failed upgrade is investigated, the root cause discovered and the upgrade retested/rescheduled).

        Making security fixes not break everything is pretty important, because if they do, people will not install them in a timely fashion.

        1. JohnFen Silver badge

          Re: fairly sensible explanation ...

          "Making security fixes not break everything is pretty important, because if they do, people will not install them in a timely fashion"

          I could not have said this better.

          Already, breakages are such common companions to updates that I get a feeling of dread every time any software (OS or not) wants to update -- which is why I no longer allow anything to update automatically. Instead, I let updates pile up until I know I have a free weekend to spend fixing everything after they happen.

      3. nijam

        Re: fairly sensible explanation ...

        > From a security perspective that's nonsense

        ... and that is why the whole article is an explanation of why the "security perspective" itself is nonsense.

        Linus' original rant was about the software equivalent of bricking up the front door of every house in the street because there might possibly be a way of jemmying them.

      4. bombastic bob Silver badge
        Facepalm

        Re: fairly sensible explanation ...

        "Fuck developers and users, that's my debit card details!!"

        Thanks, Micro-shaft and/or "gnome 3" and/or "systemd" developer. "Your kind" of attitude is already so prevalent in the software world these days, that the rest of us don't even bother to comment on how NAUSEATED we are any more... [except on occasions like THIS one]

        (the backlash is long overdue)

        icon, because facepalm.

        post-edit: I somehow doubt your debit card details would be easily revealed by a particular kernel bug. that's just FUD.

        1. matjaggard

          Re: fairly sensible explanation ...

          Please stop with the CAPS Bob.

      5. nevets23

        Re: fairly sensible explanation ...

        I see a lot of comments taking Linus's rant as being against fixing exploits. The thread he commented on is not about bug fixes, but about "hardening" the kernel. When a bug happens (an overflow bug, for example), for it to be exploitable, an attacker must find it, and find a way to exploit it. The hardening effort is to make it more difficult to find the location in memory to do the exploit. This particular patch series is about limiting what memory in the kernel the "user copy" functions can write to, by adding "whitelists". This prevents the sort of bug where the kernel lets the "user copy" functions point anywhere and change how the kernel is supposed to work. It's not about fixing a bug. It's about keeping bugs from becoming exploits. It's not all black and white.

        The issue is that the original patch series would crash the kernel when a "user copy" function accessed something not in the whitelist. It was only at the end of the series that it was changed to have a "fall back" method that merely warns, because it wasn't until then that it was found that KVM would crash because of it. This mentality of the security folks to crash the kernel first is what triggered Linus's rant. He stated that it should have been a warning from the start. This is the disconnect Linus has with the security folks: where they consider the job done when the exploit is closed, for the kernel developers it has only begun. The kernel developers now have to find all the places that need to be whitelisted but currently are not. Without the fall-back feature, things that used to work (like virtual machines) now crash the kernel. That is not acceptable.

        It's not that Linus hates security, far from it. He's trying to educate them to see the bigger picture. There are folks in the security world who agree with Linus. Just read the response from Jason A. Donenfeld, and Linus's response to him (which this article is about).
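The usercopy-whitelist mechanism nevets23 describes can be sketched in plain C. This is an illustrative model only, not the kernel's real API: `region`, `in_whitelist` and `check_copy` are invented names standing in for the hardened-usercopy checks, and the `warn_only` flag models the fall-back mode Linus insisted on.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative model of usercopy whitelisting (invented names, not the
 * kernel's actual API). Each region a "user copy" function may touch is
 * declared up front. */
struct region { size_t start; size_t len; };

static const struct region whitelist[] = {
    { 0x1000, 0x100 },   /* e.g. a structure exposed to user copies */
    { 0x2000, 0x080 },
};

static bool in_whitelist(size_t dst, size_t len)
{
    for (size_t i = 0; i < sizeof whitelist / sizeof whitelist[0]; i++) {
        if (dst >= whitelist[i].start &&
            dst + len <= whitelist[i].start + whitelist[i].len)
            return true;
    }
    return false;
}

/* The disputed policy: the original series took the hard-failure path on
 * a miss (crash the kernel); the fall-back mode merely warns and lets the
 * copy proceed while the whitelists are being filled in. */
static bool check_copy(size_t dst, size_t len, bool warn_only)
{
    if (in_whitelist(dst, len))
        return true;                /* copy target is whitelisted */
    if (warn_only) {
        fprintf(stderr, "usercopy: unexpected region, warning only\n");
        return true;                /* warn, but don't break users */
    }
    return false;                   /* hard failure: "crash-first" */
}
```

The point of the sketch: with `warn_only` set, a legitimate but not-yet-whitelisted copy (as KVM triggered) still works; without it, the same copy takes the hard-failure path.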

      6. enormous c word

        Re: fairly sensible explanation ...

        @AC "Fuck developers and users, that's my debit card details!!"

        I totally get where you're coming from - BUT - if taking away functionality breaks the system, then the admins won't apply the update and the bug/exposure lives on (although it may be fixed in an update that no-one uses). Then systems get more and more out of date and difficult to patch, then they just get left, then there's a mess that no-one can sort out, so finally the outsourcers move in and continue to do nothing for 4 more years. Then finally everything is truly broken.

      7. Anonymous Coward
        Anonymous Coward

        Re: Fuck developers and users, that's my debit card details!!

        And the 50 downvotes over me trying to control my data are why MS still rule the corporate world. Until you nerds stop with this "information wants to be free" bollocks you'll never get the buy in you need from those who control the cash.

        Good.

        1. Kiwi Silver badge
          Linux

          Re: Fuck developers and users, that's my debit card details!!

          And the 50 downvotes over me trying to control my data are why MS still rule the corporate world. Until you nerds stop with this "information wants to be free" bollocks you'll never get the buy in you need from those who control the cash.

          That data you're trying to control.. Would that be the data that MS slurps whether you want them to or not?

          Have to give them this much, at least you can stop MS from getting at it simply by not installing their malware on your computer, unlike a few other actors who try to infest every web page. (OTOH, they only get what you give when you visit a page, unlike the stuff MS are after....)

          At least with Linux my data is in my hands and shall remain so unless I either choose to give it away or do something silly like letting some miscreant access my machine. Hence why no Windows is allowed online (MS being one miscreant who'll not get access any more).

        2. jake Silver badge

          Re: Fuck developers and users, that's my debit card details!!

          No, you got 50 downvotes because your statement "The argument is "that's broken and exploitable, so leave it like that until both things are fixed."" is a figment of your imagination.

    2. bombastic bob Silver badge
      Holmes

      Re: fairly sensible explanation ...

      "It's interesting to see he apparently felt he should explain the bloody obvious"

      He did so for the "special" developers that submitted the CRAP PATCH in the first place. I think he's using a different tactic this time, to help eliminate the 'threat' in the future. I don't think it will work but I'm slightly entertained by it all. Anyway. GO Linus! Keep up the good work, k-thx!

      Of course "being nice" and explaining things doesn't change the fact that putting "whitelists" into the kernel and killing processes is a *BAD* *IDEA* to begin with, which SHOULD HAVE BEEN "the bloody obvious" ( but apparently wasn't ).

      Icon, because, topic related.

  3. Anonymous Coward
    Anonymous Coward

    It must be hard working on the kernel.

    1. Anonymous Coward
      Anonymous Coward

      But much easier now that we know it doesn't need to be secure, as long as it keeps working.

      1. JimC Silver badge

        Point missed

        No, it needs to be secure and keep working. If it's secure but doesn't work, it's at least as useless as if it's insecure and working.

      2. Anonymous Coward
        Anonymous Coward

        Did you read the article?

        He's rightly saying that breaking something to harden something is not acceptable. I think if it's an exploitable bug then it falls into a different category.

        Sometimes I wonder who the real masturbators are.

        1. Anonymous Coward
          Anonymous Coward

          Did you read the article?

          Yes, twice to be sure. He's saying that breaking users to fix a bug that hasn't been exploited yet simply annoys users unnecessarily. No-one would argue that the bug shouldn't be fixed correctly, but temporarily disabling something to ensure that it can't be exploited, while a full fix is being developed, is a perfectly acceptable security approach, and far, far better than leaving a potential kernel vulnerability unpatched. Users who don't care about security can always not upgrade.

          1. Anonymous Coward
            Anonymous Coward

            If you don't update then the bug exists.

            If you update it gets disabled.

            If you fix it properly, everyone is happy, and that is the option he's going for, rather than the band-aid approach which breaks the program all in the name of hardening it.

            If you're going to do something do it right rather than the Windows 10 mantra of using your users as guinea pigs.

            I'm not sure what you take from the article but maybe you should read the previous article as well to get some understanding.

            1. Charles 9 Silver badge

              What if the bug is intrinsic to the interface, meaning the ONLY way to fix it is to break the user?

              1. bombastic bob Silver badge
                Stop

                "What if " [snip paranoia bait]

                There are a zillion "what if" possible questions out there. We can _easily_ "what if" ourselves into a completely unproductive state. But that wouldn't be practical, would it?

                I doubt that a bug would be "intrinsic to the interface" and EVEN if it _IS_ you RE-DESIGN THE INTERFACE to fix it (not slap on a 'patch' with whitelists and process killing as a "fix").

                I have to wonder how those white lists work, anyway. Could a trojan horse application simply write a process with "the right credentials" and _BYPASS_ that anyway?

                see, ya gotta think like an evil hacker to see the potential workarounds in order to recognize that a horsecrap "solution" (like white lists and process killing) is simply PURE HORSECRAP??

                FIX THE ORIGINAL BUG instead, I say. So did Linus, apparently.

            2. Anonymous Coward
              Anonymous Coward

              If you don't update then the bug exists.

              If you update it gets disabled.

              It all depends, of course, on the bug severity. If we're talking of some piddling CVSS 1 bug, and you can get a proper fix out in a week or so, then of course you can probably live with it until you fix it properly.

              On the other hand I've just found a CVSS 9+ bug in an infrequently-used program that lets an ordinary user become root. The fix will require redesign which will take some time to develop and test.

              Am I going to disable that program until we have the fix, even though the bug isn't known yet and even if it's inconvenient? Too fscking right I am.

              1. Charles 9 Silver badge

                The problem, though, is how do you KNOW the bug isn't already known elsewhere?

                1. Doctor Syntax Silver badge

                  "The problem, though, is how do you KNOW the bug isn't already known elsewhere?"

                  As you like posing hypotheticals here's one for you: There's a bug in the OS that runs your intensive care monitoring system which could lead to it being pwned. Shall we shut it down, just to be safe?

                  1. Charles 9 Silver badge

                    "As you like posing hypotheticals here's one for you: There's a bug in the OS that runs your intensive care monitoring system which could lead to it being pwned. Shall we shut it down, just to be safe?"

                    If shutting it down isn't an option, then something else needs to be done, such as isolating it, taking it offline so no one can connect to it. It's still possible to do intensive care monitoring without networking (What happened BEFORE then?), but you're getting yourself into serious Trolley Problem territory if you leave them online, since someone could be getting ready to raise hell in your hospital right now (something one MUST assume in a high-security environment), and malpractice and negligent death lawsuits can be crippling, especially if brought en masse. Look at all the mega-leaks that have happened already, with the fallout still being assessed. We're fortunate so far there hasn't been a megahack directly responsible for lots of deaths, and I seriously doubt any hospital wants the ignominy of being the first.

                    The TLDR version: If your operations can't continue without being seriously vulnerable, what you REALLY need is a Plan B.

                    1. Doctor Syntax Silver badge

                      " If your operations can't continue without being seriously vulnerable, what you REALLY need is a Plan B."

                      Which brings us back to Linus's post: his Plan B - or, really, just Plan - is to fix the bug.

            3. Doctor Syntax Silver badge

              "I'm not sure what you take from the article but maybe you should read the previous article as well to get some understanding."

              Even better, go and read the actual post.

          2. Doctor Syntax Silver badge

            "temporarily disabling something to ensure that it can't be exploited, while a full fix is being developed, is a perfectly acceptable security approach"

            Disabling it instead of fixing it isn't. What was stopping the submitters from sending in a patch to fix the problem instead of hiding it?

          3. JLV Silver badge

            >far better than leaving a potential kernel

            Many bugs, esp in Linux kernel, exist in a low immediate threat context, where you already need some kinda access to play with them, like local access.

            Are you saying f..k the users and the devs, even in a context where a reasonably locked down system would be at little risk??? i.e. most assuredly not a Heartbleed equivalent.

            Go fly a kite. Why not just flag it? If the users don't want it, Torvalds doesn't and the devs don't, who elected you the boss?

            Plus, is this where most of the real threats lie? In the kernel? Or is it not rather sloppy apps, sloppy configs, sloppy human processes by some IT shops, rogue insiders? Look at where breaches are actually happening, not just at theoretical concerns.

      3. Dan 55 Silver badge

        Or if your kernel kills off processes because it doesn't like the way they look at it funny or whatever the criteria were, perhaps the security hardening could be turned into a DoS attack.

        1. nijam

          > perhaps the security hardening could be turned into a DoS attack

          There's no 'perhaps' - or even 'turned into' - about it. Most "security" policies are firmly inside the dimwit domain of "just say no"-ism. In effect, much "security" already is a DoS attack.

        2. J. Cook Silver badge

          THIS. Just ask any A/V vendor who flags something like, say, winlogon.exe as a false positive. Or 'accidentally' kills lsass.exe on a Windows box. Or flags a GINA DLL as bad when it's actually needed to log on to the workstation. Or any number of other stupid tricks that turn the workstation into an expensive and terrible substitute for a brick.

          While I can certainly understand where the security folks are coming from, it gets all too easy to lose track of why the machines are there, and frankly, having someone rub your nose in it to remind you is sometimes the only way to break through that narrow focus.

    2. Charles 9 Silver badge

      Especially when you have to consider stupid: particularly stupid users. What do you do when your biggest issue is PEBCAK? Some would say training, while others counter that you're doing it wrong.

      1. Doctor Syntax Silver badge

        "What do you do when you biggest issue is PEBCAK?"

        The kernel hardening approach would seem to be switching off the computer and removing the keyboard. And maybe the chair.

    3. This post has been deleted by its author

      1. Richard Jones 1
        Joke

        @ Symon

        You mean you have to crack nuts to get at the kernel?

  4. Anne-Lise Pasch

    I'm a user, but I also want a secure box. Any end point that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until it's the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so, any user who cares about the sanctity of their data probably agrees. This doesn't mean that the security patch is the final product; it means it's the uncomfortable workaround. However, where I agree with Linus is that the same mindset shouldn't be applied to local userspace programs. Sure it's annoying if one local user can gain extra privileges and do damage, but that I can deal with.

    1. wolfetone Silver badge

      So if they find a bug with Apache, and the work around is to shut it off, over 75% of the internet would fall offline.

      If the issue isn't reported, but the actual flaw is fixed then patched, there's no issue.

      Which one would you rather have happen?

      1. Jason Bloomberg Silver badge

        So if they find a bug with Apache, and the work around is to shut it off, over 75% of the internet would fall offline.

        Not a problem when there is an option to allow a buggy program to continue to run, as was proposed. Better to let the user make an informed decision: allow a buggy program to run, or block it.

        If the bug isn't detected and blocked, the user would never know it was buggy and would blindly continue to use it, believing it was bug-free.

    2. Anonymous Coward Silver badge
      Linux

      But what he's extolling is that the security problem should be rectified rather than just killing the system. "Remote attacker can kill the whole box" (we're talking kernel development here) is worse than "Remote attacker can kill a service", which is worse than "Remote attacker can fuck right off".

      Linus is essentially saying "Never permit the first; permit the second in extreme circumstances; go with the third choice wherever possible"

    3. Lysenko

      I also want a secure box. Any end point that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until it's the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so, any user who cares about the sanctity of their data probably agrees.

      The "off" switch defeats all remote hackers instantaneously, albeit at the cost of 100% utility degradation. What you are debating is how close you're prepared to get to that logical endpoint in the interests of greater peace of mind re. security.

      I have some online servers where the "sacred" issue is uptime. Data passes through these systems in pseudo-real time so the primary thing I'm concerned about is compromising system stability. If hackers get in then, so long as they're only reading/copying data and can't escalate to damage system configuration/stability, I don't have an immediate problem because the data the system processes is public domain in the first place.

      On the other hand, I have another server with highly sensitive data and that one is on site (the ones above are VPS) and powered by remotely switchable PDUs so it can be physically killed (in extremis) in the event of something suspicious being detected. The former is "fail safe" (meaning it will try to keep running in the face of a problem) and the latter is "fail secure" (meaning it won't). Both sorts of use case exist, but security researchers tend to have tunnel vision focussed only on the latter.
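
Lysenko's fail-safe vs fail-secure split can be modelled in a few lines of C. This is purely a sketch with invented names (`fail_policy`, `incident`, `keep_running`); no real API is implied.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the fail-safe vs fail-secure policy split (invented names).
 * A fail-safe system keeps running in the face of a problem because
 * uptime is the sacred property; a fail-secure system stops, because
 * losing control of the data would be worse than an outage. */
enum fail_policy { FAIL_SAFE, FAIL_SECURE };

enum incident { READ_ONLY_ACCESS, CONFIG_TAMPERING };

/* Decide whether the service keeps running after an incident. The
 * public-data VPS described above tolerates read-only snooping; even it
 * draws the line at tampering with system configuration/stability. */
static bool keep_running(enum fail_policy policy, enum incident what)
{
    if (policy == FAIL_SECURE)
        return false;                    /* kill the box, in extremis */
    return what == READ_ONLY_ACCESS;     /* fail-safe tolerates snooping */
}
```

The design point mirrors the post: the policy is a property of the system's use case, not a universal rule, which is why a single "always crash" hardening default fits one class of server and not the other.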

    4. Doctor Syntax Silver badge

      "But even so; any user who cares about the sanctity of their data probably agrees."

      What if the crash-the-system approach leaves the user with corrupt data?

      The effort should go into fixing the root problem.

      1. Charles 9 Silver badge

        But to fix the root problem you have to IDENTIFY it first. Meanwhile, you still have an exploit avenue potentially being exploited while you twiddle your thumbs.

        1. Kiwi Silver badge
          Trollface

          But to fix the root problem you have to IDENTIFY it first. Meanwhile, you still have an exploit avenue potentially being exploited while you twiddle your thumbs.

          In that case, the only prudent thing to do would be to turn your machine off and remain disconnected until you hear there's an OS +apps that is totally bug-free.

          Care to demonstrate the effectiveness of this solution for us?

    5. Anonymous Coward
      Anonymous Coward

      "I'm a user, but I also want a secure box. Any end point that allows a remote hacker into my box, I want a security workaround until a full fix is ready, and I accept that this may break utility until its the case. But this is because I'm a server admin first, which probably puts me in a subclass of user. But even so; any user who cares about the sanctity of their data probably agrees. This doesn't mean that the security patch is the final product, it means its the uncomfortable workaround. However, where I agree with Linus, is the same mindset shouldn't be applied to local userspace programs. Sure its annoying if one local user can gain extra priveleges and do damage, but that I can deal with."

      dd if=/dev/zero of=/dev/sda is your friend, then.

      Your box, regardless of what OS it runs, is not secure. Many 0 days are inside. Deal with it.

      Breaking user experience to avoid security flaws is retarded.

  5. Anonymous Coward
    Anonymous Coward

    Sometimes you can't have it both ways...

    Although I see his point, I also think that he may have set his expectations a little (too?) high. In an ideal world this may work, but sometimes it just doesn't work this way...

    Sure, if there is an error then that needs to be fixed asap. No arguments there. But what about the time between the moment the bug is discovered and the moment the actual fix is implemented? That's when a system will be vulnerable, and security hardening might be capable of preventing further damage from taking place if an attack were to occur.

    Of course this can break stuff. I think a good example would be the "kern.securelevel" setting on FreeBSD. This is a setting with a default value of -1; administrators can only increase the value, and by doing so harden the system some more.

    For example, value 1: you can no longer turn off immutable flags on files, /dev/mem and /dev/kmem may not be directly opened for writing (read: you can no longer load or unload kernel modules), and /dev/io is fully inaccessible.

    Value 2: all of the above, plus disks can no longer be opened directly for writing (mount is excluded). So it protects the filesystem(s).

    These settings will plain-out break X. But they also harden the system and can prevent plenty of possible nastiness from happening. So on a desktop this setting might not be very useful, but on a server it's all the more valuable (assuming it doesn't use X).

    So I can't help wondering if this doesn't also apply here. In an ideal world you wouldn't need failsafes, but the world simply isn't ideal.
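
The one-way behaviour of kern.securelevel described above can be modelled in a few lines of C. This is a toy model of the rule only; on a real FreeBSD system the value is read and raised through sysctl, not through an invented `set_securelevel` function.

```c
#include <assert.h>

/* Toy model of FreeBSD's kern.securelevel rule: the level defaults to
 * -1 and a running system may only raise it, never lower it (getting it
 * back down takes a reboot). */
static int securelevel = -1;

/* Attempt to change the level. Returns 0 on success, -1 if the request
 * would lower the current level. */
static int set_securelevel(int new_level)
{
    if (new_level < securelevel)
        return -1;               /* lowering is always refused */
    securelevel = new_level;
    return 0;
}
```

A later request to drop from, say, 2 back to 0 is refused, matching the "administrators can only increase the value" behaviour.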

    1. Tim Seventh

      Re: Sometimes you can't have it both ways...

      "But what about the time in which the bug got discovered and the moment of implementing the actual fix? That's the moment when a system will be vulnerable, and a security hardening might be capable of preventing further damage from taking place if an attack were to occur."

      I think it's more to the point that Linux shouldn't dictate the user's choices (not like Windows 10's forced updates). If you do want to security-harden whatever it is, you sandbox it under your control to avoid damage. But if another user just wants program X to work, he/she should still be able to get program X to work regardless of security.

      Some users here are sysadmins, which explains the bias. It is common, as a sysadmin, to want to secure your system as much as possible.

      But in reality, you do a lot of work to walk the line between being secure but breaking something and being less secure but still working. You test patches ahead of time to avoid breaking userspace. You do it because you want the control.

      Now imagine you don't have that control and Linux decides for you, your team and your company, breaking a lot of user programs. That probably isn't good for you, and Linus thinks so too.

      edit: we're talking about the enhanced hardening from the security guy (explained in another article), which he described as being like a sandbox; the problem is that it would stop the processes containing the vulnerability, which means it can break user software.

  6. Anonymous Coward
    Anonymous Coward

    Benjamin Franklin: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

    *

    I think Franklin would come down on the same side as Linus!!!

    1. felixk

      > I think Franklin would come down on the same side as Linus!!!

      What with Franklin being on the Secret Committee and the Committee of Secret Correspondence, he was tasked with breaking into British secured communications. Of course he would advocate for idiotic measures that perpetuate security holes -- they would make his work easier!

      Torvalds essentially says that security holes need fixing only after they are exploited; until then, no rush. That is what Microsoft was accused of for the longest time, and now we learn that it was all good, since we wouldn't want to inconvenience users (of software and of security holes). Did I wake up in Bizarro World or what?

      1. Charles 9 Silver badge

        The problem here is the delay between it being actively exploited and KNOWING it's being actively exploited: potentially long enough to exploit it into something that can persist even AFTER the original bug gets fixed. Thus the paranoia. Besides, if an exploit is used, what's to say the users and/or their interfaces can be considered trustworthy anymore?

        1. Doctor Syntax Silver badge

          "The problem here is the delay between it being actively exploited and KNOWING it's being actively exploited"

          No, the problem is submitters providing code to treat the possible symptoms rather than cure the disease - or, if they don't know how, telling someone who does.

      2. nijam

        > Torvalds essentially says that security holes need fixing only after they are exploited

        No, that's not what he says; it's quite difficult to see how you might have read that into it. He is arguing that security holes cannot be fixed until they're reported, and they're unlikely to be reported if no-one can run the affected code.

        1. Doctor Syntax Silver badge

          "they're unlikely to be reported if no-one can run the affected code."

          Except they could be reported by security researchers who think it better to cure the symptoms instead.

      3. CrispyD

        That's not what he said.

        Linus didn't say that security holes should only be fixed after exploitation. He said that the kernel shouldn't kill processes that behave unexpectedly. These are two entirely different things, and should not be confused. Just because an application does something odd (from the kernel's point of view) doesn't mean it's a security flaw being exploited. More importantly, killing the process does not fix the underlying problem - or make it any less exploitable. Worse still, this approach creates the impression of doing something without actually fixing anything - just because a badly written (but honest) application gets nailed doesn't mean that a well-written exploit would be.

    2. bombastic bob Silver badge
      Devil

      thanks AC - I was considering the Ben Franklin quote in a slightly different form. I'm glad you said it.

      And I prefer VIGILANCE to "security".

  7. Adam 52 Silver badge

    Supposing Microsoft said "there are a whole load of insecure Win32 vulnerabilities in Windows, but we're not fixing them because there are no exploits in the wild and we don't want to upset users".

    Or an IoT vendor said "yes, your car/router/aeroplane is vulnerable, but there are no known exploits and it'd be inconvenient to take it out of service to patch and upset customers".

    There'd be an outcry.

    So why is it right when Linus says effectively the same thing?

    1. james_smith

      He's not saying "we're not fixing them" - the exact opposite. He wants bugs reported and fixed as soon as possible. What he doesn't want is systems refusing to work after a new hardening feature lands in an update.

      1. Anonymous Coward
        Anonymous Coward

        There are lots of Microsoft Trolls out. They're either being purposefully obtuse or their obtuse :)

        1. wallaby

          "There are lots of Microsoft Trolls out. They're either being purposefully obtuse or their obtuse :)"

          Pot .....

          Kettle......

          Black.........

          1. Anonymous Coward
            Anonymous Coward

            haha, yes. I really do need to proofread. No down vote from me, by the way.

        2. Doctor Syntax Silver badge

          They're either being purposefully...

          "It is difficult to get a man to understand something, when his salary depends upon his not understanding it"

    2. Doctor Syntax Silver badge

      "So why is it right when Linus says effectively the same thing?"

      Linus isn't saying the same thing. What he's saying is fix the problem instead of hiding it.

      AFAICS what's happening is that the security researchers are sending in patches which will throw an error if a dubious bit of code is hit even if it wouldn't cause a problem in that instance. They're then expecting him to incorporate that code in the kernel tree for the next release.

      What he wants is that the code itself is fixed. That can then be backported into older kernel versions* (that, of course, could also be done with the just-kill-it fix). However, the effort that goes into the just-kill-it patch could either be put into a proper fix by the researcher or, if that's too difficult, into a proper bug report so it can be fixed. Either fix is likely to go into the same kernel release cycle anyway, and it's vastly preferable that it's a real fix. If he allowed just-kill-it fixes in, the real fixes would likely be delayed.

      * Linux distributions don't always run the same kernel version. These appeal to different types of user.

      Production systems tend to be very conservative, sticking with LTS kernel versions and only security fixes made available as kernel updates. Consistency of operation is highly valued.

      More adventurous distros exist for those who must have the latest, greatest, coolest toys. These value novelty over consistency and can expect breakages from one release to the next. A release will have the latest kernel available at the time of packaging.

      Users who want to test new stuff - the equivalent of the Windows Insider Fast Ring - can either go for a bleeding edge distro or install RC kernels in other distros.

  8. Nuno trancoso

    Linus might be a bit of an ass when it comes to talking to people, but he usually makes good points. If the proposed security fix for something that MIGHT affect some people EVENTUALLY is to BREAK everyone's box NOW, then it's not a fix, it's an even bigger problem.

    I do grep the "brick on sight" mentality, it forces fixes faster, but, from a user perspective, my secure but not working system is an even worse proposal than my not secure but for the time being working system.

    1. Charles 9 Silver badge

      But the issue is you can be pwned WITHOUT YOUR KNOWLEDGE, by which time it's too late to fix the bug. It will have been leveraged to create something more persistent. Which would you rather have: a system that doesn't work or can't be trusted?

      1. Doctor Syntax Silver badge

        "Which would you rather have: a system that doesn't work or can't be trusted?"

        It's a false dichotomy. The effort that goes into the break-it-now fix should go into the fix-it-properly fix. What I want, and what I expect Linus's approach to provide, is a system that works and can be trusted.

        1. Charles 9 Silver badge

          "It's a false dichotomy. The effort that goes into the break it now fix should go into the fix it properly fix. What I want, and which I expect Linus to provide, because of this approach, is a system that works and can be trusted."

          But then you hit Trolley Problem territory where you CAN'T have both, because of stuff like true zero-day vulnerabilities which are UNKNOWN bugs that are being ACTIVELY exploited. You can't fix a bug you don't know about yet, yet you can't just let it lie, either. It's like police being tipped off to suspicious activity but not investigating because it's considered too minor, and then BOOM! The Las Vegas shooting and so on. There are times when one MUST err on the side of caution. So having the ABILITY to perform some kind of hardening is necessary as a kind of vigilance. I think the chief complaint is that there needs to be more control over these features in case there's a situation where high availability happens to be more important.

          1. Doctor Syntax Silver badge

            "But then you hit Trolley Problem territory where you CAN'T have both, because of stuff like true zero-day vulnerabilities which are UNKNOWN bugs that are being ACTIVELY exploited."

            I'm not sure how you drag the Trolley Problem into it and am utterly uninterested to find out because what you're banging on about isn't the subject of Linus's post and no amount of Bob capitals will change that. It doesn't concern bugs which are unknown.

            It concerns bugs which are known because security researchers found them and submitted patches which attempt to treat the symptoms (by amputation, as it were) instead of either submitting a patch to fix the bug or raising a bug report for someone else to fix it. To the extent that it takes up time to fend them off these guys become part of the problem, not the solution.

            BTW, if a bug is completely unknown it can't be actively exploited because nobody knows it to exploit it.

            1. Charles 9 Silver badge

              "BTW, if a bug is completely unknown it can't be actively exploited because nobody knows it to exploit it."

              Unknown to YOU, but NOT to whoever found AND exploited the bug without your knowledge. THAT'S the kind of threat you face with zero-day (remember, zero-day means it's being used in the wild BEFORE you know about it) vulnerabilities, and since you don't know about the bug that's creating the exploit, the ONLY way you can safeguard yourself is to watch for unusual behavior and shut down these potential avenues that may well be exploits you don't know about yet. IOW, some of the hardening is meant to safeguard against UNKNOWN (to you, not to the enemy) threats. How else can you safeguard against unknown threats when waiting to identify and fix the bug is a lot like shutting the door long after the horse has bolted?

              1. Kiwi Silver badge
                Facepalm

                You can't fix a bug you don't know about yet, yet you can't just let it lie,

                So you crash the kernel. How then do you detect what was going on to trigger the crash? How can you fix the bug if you cannot trace what occurred? What if the bug is something appropriate in your init system that Cook & co failed to whitelist?

                I wrote about something a few days ago, and comment further on it here - I could've taken the server down and cost the company half a day or more of productivity, maybe more than a day if we'd had to rebuild (if it was a hack you cannot trust backups - did the initial compromise occur today, or was it six months ago and left dormant?).

                We could've gone with "kill" and maybe killed the company as a result, or we could've gone with "monitor" and found out it was nothing after all. So we went with the sensible option, watch for anything of real concern.

                Your approach would kill the server the instant a Chinese or Russian IP took a look at the system "JUST in CASE there's UNKNOWN bugs". The rest of us say "Lets wait till we know for sure, unexpected behaviour may only be a minor bug and it may even be perfectly acceptable behaviour that hasn't yet been whitelisted".

                That's why Cook got blasted, and that's why Cook agreed it was a bad idea.

      2. JohnFen Silver badge

        "Which would you rather have: a system that doesn't work or can't be trusted?"

        Easy -- I'd prefer the "can't be trusted" option. A system that doesn't work isn't better than no system at all, and in this day and age, there is no system that I can think of that actually can be trusted. But ignoring that, it's often possible to use a compromised system in a way that's relatively safe, but you can't use the broken system at all.

        1. Charles 9 Silver badge

          I think the opposite. It's better by far to be told you can't do anything than to use something that can't be trusted and can potentially give you false results. Sometimes, a false result is worse than no result. There ARE things worse than death.

    2. Peter2 Silver badge

      Yeah, I have to agree with you and Linus on that.

      My servers are running mission critical software for our company. The server failing for a day costs multiples of my yearly salary in lost productivity. It *cannot* go down because somebody has decided to do something like this.

      If there is a security flaw on the box, as long as it's not being exploited then that's fine. This is one layer of about a dozen layers of security, and a single layer having some kind of flaw (but no breach) is not a critical issue for me. If you want to pop a huge box on the screen stating that application X is potentially insecure and the developers need a kick, that's fine. Shutting the company down until the developers produce a patch is not fine.

    3. Adam 52 Silver badge

      "my secure but not working system is an even worse proposal than my not secure but for the time being working system"

      But you're happy that Linux seg faults on illegal memory access? Following the same argument we should remove seg faults too.

      And you're happy that the filesystem errors on attempts to write to missing locations? Should it not just carry on, in the interest of reliability?

      Or terminates a process on an out-of-memory exception? Or buffer overrun?

      This feels to me a lot like the mentality that wants a language to automatically cast pointers to integers and back.

      Remember when we all converted to 32 bit processors and badly written code started to corrupt the stack because variables weren't aligned on the same word boundaries as before? Some really lazy people built exception handlers to ignore the interrupts, sensible people allowed the exception to kill the process and fixed their bugs.

  9. Wilhelm Lindt
    Pirate

    Break stuff

    Linus changed the value of HZ and broke all kinds of stuff in the process, to show developers that their assumptions were bad. Why not now?

  10. Lee D Silver badge

    I have to say... as a user, he's right.

    Though I have no fear of the command line - compiling my own software, even manually converting patches between kernel versions, etc. - when the system doesn't work because of a security measure, it puts me off using that kind of security measure.

    I remember loading up a distro and, literally within minutes of configuring a program (I think it was Apache), hitting an SELinux problem. Apache was not doing what it was supposed to, and I wasn't asking much of it; it took me a while to find out that SELinux was blocking something it didn't like (I was churning through Apache logs first, given that it had worked fine when I first started playing about with the system).

    In the end, I got a huge wall-of-text error from SELinux that, for the life of me, I didn't understand. I got the gist... there was something Apache was doing that SELinux was pre-configured by the distro to deny. A while later, I was still none the wiser as to whether that was a real problem or an over-zealous setting, no wiser as to how to resolve it, and all that was happening was that Apache was sitting there doing nothing all the time I was working on it. Did I need to exclude a path? Apparently not. Did I need to grant Apache a capability? I don't know. How would I do so? Not a clue.

    Hours later, I just used another distro without SELinux. The system ran, exposed, on the Internet for years. What I was asking wasn't a "security risk", as such. There was no cgi-magic or anything too out of the ordinary, but the default SELinux just got in the way and wouldn't get out of it, and wouldn't tell me what I was doing wrong or how to fix it.

    In that kind of circumstance, I was more than happy to tune SELinux down to "look, warn me, but just carry on and I'll have a look at the manuals later" but - I can't remember why - it wasn't quite that simple.

    The user has to take priority, that's why the computer exists. They should be able to do so safely. But security rules that interfere with processes (why can't it just "deny" such an action and log it, rather than kill the process outright? Why can't it log it with a simple cause? Why can't those simple causes be linked to simple lists of config items that cause them, and the consequences of turning them off?) are just a way to make people turn off the security entirely, which is much more risky. It's UAC all over again.

    Where critical bugs exist, they need patching. But patching to a version that breaks users means nobody will deploy the patch anyway. We're not talking SACRIFICING security for convenience. We're talking a trade-off. I can make my car theft-proof by pouring tons of concrete through the window and letting it set. Nobody will nick it. But it's then bog-useless as a car.

    The balance should be "don't break stuff, except where the risk of OTHERS breaking our stuff for us is greater". Don't secure the door to the point that even the homeowner can't get in, but try to make sure the burglars can't get in easily either. In case you don't know, the tradeoff chosen by British Standard locks, and all kinds of devices, in even the roughest areas of the country is: Let the door still open for genuine users, even if that's slightly less secure than just building a steel wall.
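    The "deny such an action and log it, rather than kill the process outright" idea raised above can be sketched in a few lines. This is purely illustrative Python - not a real kernel API, and all the names (`kill_on_violation`, `deny_and_log`, the process dict) are hypothetical - but it shows the difference in outcome between the two responses:

```python
# Hypothetical sketch of the two responses to an unexpected operation
# discussed in this thread: kill the whole process, or deny just that
# one operation and log it for later triage. Not a real kernel API;
# all names are illustrative.

def kill_on_violation(process, operation, audit_log):
    """Hard response: the whole process dies on the first violation."""
    audit_log.append(f"killed {process['name']}: {operation}")
    process["alive"] = False
    return None  # nothing to return; there is no caller left to serve

def deny_and_log(process, operation, audit_log):
    """Soft response: refuse the one operation, keep the process alive."""
    audit_log.append(f"denied {process['name']}: {operation}")
    return "EPERM"  # caller sees an ordinary error and can carry on

# A badly behaved (but honest) web server trips the check once:
log = []
httpd = {"name": "httpd", "alive": True}
deny_and_log(httpd, "open /etc/shadow", log)
```

    With the soft response the admin still gets an audit trail to investigate later, but the service keeps serving; the hard response trades that availability for containment.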

  11. Anonymous Coward
    Anonymous Coward

    I remember when Samba broke after some obscure security fix that would never have affected me unless someone was standing next to my box. Utterly pointless and annoying. In fact, between the obscurities of Samba and the utter fail of Windows 10 file sharing, it's just easier to send the file halfway round the world to Dropbox than to network it on the local network......

  12. jake Silver badge

    An orthogonal viewpoint.

    If you feel your code needs to cause a kernel panic "for security reasons", your code is too broken to be rolled into the mainstream kernel. Fix it before submitting it.

  13. handleoclast Silver badge

    As I understand it

    This wasn't about patching an existing bug, this was about putting mechanisms in place that would close off potential lines of attack by using white lists. As such it was a feature, not a bug-fix.

    Linus was advocating, in such pre-emptive cases, that the correct approach is something like:

    1) Include in kernel, but default to off. Brave users can enable warning mode (which might accidentally break things). See how it goes.

    2) After a testing period in the wild, default to warning mode. Brave users can enable secure mode (which might accidentally break things). See how it goes.

    3) After a testing period in the wild, default to secure mode. Anybody who has problems can set it to warning mode at their own risk.

    4) Maybe, a long way down the line, if nobody reports problems for a very long while, remove the possibility of disabling it. How long has SELinux been around? It's still possible to disable it.
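    The staged rollout described above amounts to a single mode knob whose default tightens over successive releases, while the user can always back off one level. A minimal sketch, in Python purely for illustration (the `Mode` enum and `on_violation` are hypothetical names, not a real kernel interface):

```python
# Illustrative model of the rollout stages above: the feature ships with
# three modes, and only the *default* changes between stages. All names
# here are hypothetical, not actual kernel code.

from enum import Enum

class Mode(Enum):
    OFF = "off"          # stage 1 default: feature compiled in but inert
    WARN = "warn"        # stage 2 default: log the violation, continue
    ENFORCE = "enforce"  # stage 3 default: refuse/kill on violation

def on_violation(mode, audit_log):
    """Return True if the offending process is allowed to continue."""
    if mode is Mode.OFF:
        return True                  # nothing logged, nothing blocked
    audit_log.append("violation detected")
    return mode is Mode.WARN         # WARN continues; ENFORCE does not
```

    On this reading, the complaint is that the feature arrived behaving like stage 3 with no knob at all.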

    What Linus was complaining about was the inclusion of this whitelisting stuff in the kernel with no way of disabling it and before there had been any testing in the wild. It wasn't dealing with an existing exploit. It wasn't dealing with an imminent exploit predicated on some bug that had just been found. It was a security WIBNI and it should not have been shipped in secure mode with no way of disabling it before any testing in the wild.

    Reinforcing Linus' point was that there was a last-minute change when the developer found there were some important, fairly common things he'd forgotten to whitelist.

    Some commentards are responding as though this were a fix for a bug that was already being exploited in the wild. It wasn't.

    Some commentards are responding as though this were a fix for a bug that has no known exploits yet, but exploits are expected soon. It wasn't.

    This is a security WIBNI, and in that context Linus was right. In my opinion he didn't swear enough, because that was a fucktarded thing for the security guy to do.

    1. Anonymous Coward
      Anonymous Coward

      Re: As I understand it

      And not only that, these cry-babies (or MS trolls) who have "mission critical" services will be using an older kernel that doesn't have these "potential bugs".

      If he hadn't used a naughty word then this whole thing wouldn't be an issue.

      1. bombastic bob Silver badge
        Devil

        Re: As I understand it

        " these cry-babies (or MS trolls) who have "mission critical" services"

        I would laugh my backside off if it turns out that these people's "mission critical" services get NUKED by not being in the 'white list' (and it's not editable).

  14. John Savard Silver badge
    Facepalm

    What?

    If a security hole can be fixed without removing features from the operating system, that would be nice.

    And if the security hole consists of a vulnerability in a feature, which can be removed while retaining the feature, then it should be done that way.

    But sometimes it is that the feature was ill-conceived, and the vulnerability is inherent to the feature. In that case, the answer is to eliminate the feature and do without it. That Linus's comments indicate he does not seem to realize this case can and does exist is worrisome.

    1. bombastic bob Silver badge
      Linux

      Re: What?

      "sometime it is that the feature was ill-conceived, and the vulnerability is inherent to the feature."

      give me ONE example of that, please. Otherwise, it's "paranoia bait" 'what-if'ing

      Keep in mind - Linus personally reviews things. I doubt that an ill-conceived feature exists in the kernel. If it DOES then Linus would swear at HIMSELF.

      1. Doctor Syntax Silver badge

        Re: What?

        "I doubt that an ill-conceived feature exists in the kernel."

        "If it DOES then Linus would swear at HIMSELF."

        I wouldn't go quite as far as that.

        Some Via processors allow processor throttling and some don't. Googling a log message from one that didn't, all I found was a comment from Linus saying that the worst that could happen was that it would just write a message to the log. It did. About once a second.

        "If it DOES then Linus would swear at HIMSELF."

        I take it his comment "What moron let that in?" - made when it was discovered that an early file system was letting stuff sit in buffers far too long without being flushed to disk as memory capacities increased - was just that.

  15. Anonymous Coward
    Anonymous Coward

    Security w/o context is kinda pointless

    So... where I work had an outbreak of the viruses. The TL;DR fallout is that *everyone who had admin rights on their local machine* had those rights taken away without notification or warning. Some of us managed to get a "different form of admin access" back (I don't choose to understand what they did).

    So, you're a vagrant user, using Hyper-V to spin up Linux instances for test purposes. You happen to be on-site working on some proof-of-concept work that involves lots of Linux machine instances. Today, you actually can't do any work, because, well, you don't have admin rights to start Hyper-V. Explain that to the customer: "I can't do any work for you today, because my company has fucked up user rights, but you're still going to have to pay me." It's not as though the security wonks are the ones that get shouted at by the customer...

    As it so happens, my division builds software, which can be installed as a service. Can't install as a service anymore on the corporate machines running Windows 7; so now we can't run all the regression tests that we need to until we commission some machines that do have full admin rights; this is already turning out to be a saga.

    This then is what security wonks sometimes do. They don't think it through, or even go to the effort of understanding the ramifications of what they do.

  16. Diginerd

    RFC1925 Truth #1 Applies here

    1) It must work.

    ...

    Everything else is secondary.

    There are no new Security Considerations created by RFC1925*.

    However, it does make the point that the Fundamental Truths also apply to Security...

    - A system evolves to become what its users deserve, and that's seldom the one they want.

    *Recommend you go dig it up if you're not familiar with it. Clearly many folks aren't.

    1. Kiwi Silver badge
      Thumb Up

      Re: RFC1925 Truth #1 Applies here

      *Recommend you go dig it up if you're not familiar with it. Clearly many folks aren't.

      Looks like several parts of it are appearing in this thread - NOT in a good way (especially 5, 6+a, 8, and 10 :) )

      Thanks for the prompt. I think I last saw that one around the time of release :)

    2. Charles 9 Silver badge

      Re: RFC1925 Truth #1 Applies here

      "1) It must work."

      I challenge this with a simple question: How do you KNOW it's working? What if it's pwned and giving you false information? There's working...and there's working RIGHT. And working RIGHT is easily more important than just working. Because the last thing you want is to report something that turns out to be WRONG...because you were misled.

  17. a_yank_lurker Silver badge

    “Because without users, your program is pointless..."

    “Because without users, your program is pointless, and all the development work you've done over decades is pointless.” - Linus nailed it. Programs and OSes are tools for users. Some forget the user is the final arbiter of success and worthwhileness: not the developer, not the tester, not the project manager, not the security guru.... He said you need to fix bugs with an eye on the user and on keeping the program working for them as they are used to. His complaint is that some have lost sight of who the ultimate judge is - and it is not some musty academic journal or trade rag the public never reads - it is the final user.

    1. Nuno trancoso

      Re: “Because without users, your program is pointless..."

      Sadly, that is one truth most *nix people still don't grep to this day. No matter how superior your OS/App/whatever is, on whatever criteria, if it fails the "users like it and want to use it" criterion, you're gonna flop, crash and burn.

      1. Doctor Syntax Silver badge

        Re: “Because without users, your program is pointless..."

        "Sadly, that is one truth most *nix people still don't grep to this day."

        I'd like to disagree with you on the basis that most Unix folk do grok it because historically Unix was developed by people who wanted to use it. But while the "still ...to this day" is the wrong way to look at it there are some damn peculiar un-Unixy people coming into Linux these days. Whilst I disagree with your historical perspective you have a point about the situation as it is now.

      2. a_yank_lurker Silver badge

        Re: “Because without users, your program is pointless..."

        Linux's problem is not that it is too geeky but that it does not have a true major corporate champion pushing it as an alternative to Bloat and Fruit. Many Linux users understand that whatever users use, they have to be comfortable with it and with the applications they run. Hence Ubuntu, Linux Mint, and many others. A corporate champion would be spending money on advertising and encouraging developers to produce Linux versions of their products. Linux's visibility frankly sucks, and most who use a 'nix are completely unaware of it (MacOS, iOS, Android, ChromeOS).

        1. Doctor Syntax Silver badge

          Re: “Because without users, your program is pointless..."

          "Linux's visibility frankly sucks and most who a 'nix are completely unaware of it (MacOS, iOS, Android, ChromeOS)."

          Let's bear in mind that two out of that list are actually Linux. Maybe the important factor is that it's widely used even if the Linux or Unix label is hidden on the inside.

          1. Doctor Syntax Silver badge

            Re: “Because without users, your program is pointless..."

            "Maybe the important factor is that it's widely used even if the Linux or Unix label is hidden on the inside."

            A few disconnected thoughts along this line.

            Firstly, mobiles, Macs and Chromebooks have got users out of the notion that a computer has to be Windows. That's moved on from the time that the original Linux netbooks were maybe a step too far for the average user. So is there a way of introducing something that isn't Windows and at the same time isn't tied to a service that's trying to monetise the user?

            ARM kit is now capable of running a full-featured desktop Linux for a low cost. One problem the original netbook vendors ran into was Microsoft muscling in with a sawn off version of Windows but only on a cut-down H/W spec which prevented Linux showing what it could do. Given that the vendors were generally PC makers as well MS had some arm-twisting ability. A mobile maker with no place in the Windows market wouldn't be in that position.

            OTOH the big vendors have conditioned users into expecting some sort of cloud service. A service that doesn't pay for itself by treating the user as the product is one that the user would need to pay a sub for. Could a service (?NextCloud) be sold to users on this basis? Maybe there'd be resistance to buying a piece of kit and then adding on a monthly charge for some additional service.

            Mobiles have got users used to the idea of renting a device as part of a service for which the charge itself is accepted.

            One charge which users do accept is their ISP. Is there scope for a device which is provided as part of an enhanced ISP package which also includes storage?

            Users also expect a 3rd party app store. Most distros have a repository facility which could easily support that.

  18. cantankerous swineherd Silver badge

    the security nonsense has rendered my mobile pretty much useless. 2 "updates" resulted in much reduced utility.

    LT right to start swearing about this sort of thing.

  19. Anonymous Coward
    Anonymous Coward

    So if my work's main web app, holding lots of personal information, hasn't been hacked yet, but I discover a TOTAL lack of authorisation and type-checking on the backend API methods?

    1. Doctor Syntax Silver badge

      "So... I discover a TOTAL lack of authorisation and type-checking on the backend API methods?"

      You raise a bug report and get it fixed instead of trying to hide the problems.

      1. Anonymous Coward
        Anonymous Coward

        And just sit there twiddling my thumbs while I can still get pwned in the meantime? That's assuming the hackers don't already know it and have been pwning me with zero-days.

  20. R3sistance

    This still doesn't address the fact that the submission in question had a mode that disabled the crashing of the kernel... the fallback mode - something Torvalds criticized it for having, by the way. Torvalds made a hasty generalization and got it wrong; now this is him just trying to justify his unacceptable behavior.

    1. nevets23

      No, he did not criticize it for having a fallback mode. He criticized it for not starting with the fallback mode, which was only added after kvm broke. Linus was pointing out that that mindset - adding the fallback only after you notice something breaking - is wrong. It should have been there from the start.

  21. Hstubbe

    And still..

    ..my wifi doesn't work on linux. So maybe it's time for less masturbating in public and more fixing things in your toy os?

  22. Grease Monkey

    Sounds a bit too Microsoft to me

    It's always been the Microsoft way to dawdle over fixing security holes that they *think* aren't known about in the wild.

    Remember a few years back when Google went public on a flaw because Microsoft were doing nothing to fix it weeks after Google had informed MS of the existence of the flaw?

    Microsoft have actually admitted in the past to knowing about security flaws but not fixing them, as there were no known exploits in the wild.

    It sounds to me that for all his rationalising Torvalds is heading down the same dangerous route.

    1. nevets23

      Re: Sounds a bit too Microsoft to me

      It has nothing to do with fixing security flaws. It's about hardening the kernel. The disconnect is about how to do the hardening. If your front door can be picked, that's the bug and everyone agrees that it should be fixed. The debate is how to protect yourself if it happens to be picked. Does an alarm go off (Linus wanting just a warning), or do you have it booby trapped to launch an arrow into whoever enters the door (security folks crashing the kernel)? Linus is worried about the innocent person who comes through that door and takes an arrow in the chest.

      1. Charles 9 Silver badge

        Re: Sounds a bit too Microsoft to me

        That depends on where the door leads. There needs to be some configurability to it. For someone's house, a loud klaxon will probably do. Fort Knox is likely a different story.

  23. anthonyhegedus Silver badge

    Who is the Linus bloke anyway? Why do people look up to him as some sort of 'God of Linux'? Does he run Linux Ltd? No. He's just a dev and shouldn't be treated as some sort of arbiter here. Glad to hear there are other devs working on the kernel and not just him anyway.

    1. Camilla Smythe Silver badge

      Re: Who is the Linus bloke anyway?

      I'm with you. We voted for Brexit so why doesn't the bastard just get behind it. Where's my Unicorn Powered Flying Car? Fuckin experts.

    2. nevets23

    No, but he happens to own the Linux trademark, and anything that wants to be called Linux needs his approval. But that's not why we consider him the "Linux God". I'm just not going to waste my time explaining that to you.

  24. Anonymous Coward
    Anonymous Coward

    Mixed Feelings

    So sad to see more retardtastic PR FUD from Microsoft's shill army shamelessly whoring themselves out.

    On the other hand, it is funny that anyone at Microsoft thought that these bullshit tactics would work on technically literate users on a site like El Reg.

  25. EssentialTremor

    The biggest outstanding issue in linux is the critical role of Mr. Torvalds in this process. LT is a genius to whom the entire world owes a great debt. His steadfastness and clarity have been essential factors in the rise of linux and his direct, unabashed approach to problems has allowed the OS to make progress where others have failed.

    Eventually, Mr. Torvalds will stop doing this crucial work. Preparing for that day is among the most important security issues facing the community.

  26. ZanzibarRastapopulous

    “the big win is when the access is _stopped_.”

    Yeah, our admin guys are like that.

  27. Anonymous Coward
    Anonymous Coward

    Security

    I'm at the site to post about other things, but security is so frustrating. I'm not a security expert, and there's a lot I don't know, but what I do know from when I was designing my own OS decades ago is: do it right the first time and NOTHING leaks. Developers are their own enemy, and should be sued under class action where negligently written code is proven. After thousands, or even millions, of class actions, we should have better developers. This also includes any leak, spyware or malware, which should attract compulsory gaol terms for intended acts (the compulsory gaol terms passing down the chain to the colluding perpetrators, so that innocent developers don't get nicked for the actions of others on their code).

    What is needed is complete automatic privacy for every user, without harassment (which should be another compulsory-gaol-term crime). Harassment, obviously to any reasonable person, is asking for permissions more than once: at install and at use. A permission-list function should be maintained where the user can go and look and toggle "temporary" permissions on and off as they wish, with a resolve-issue button that takes them there when a requested function does not work because of a permission genuinely needed to perform that function. Permissions should be limited in breadth to those actually needed to perform the express function expressly intended (no snooping beyond what was intended, like in storage). It should be an offence to ask for permissions unrelated to the express functionality of the program as overtly expressed to the user (think about that one). There should be no stalking (tracking) physically, between sites, between organisations on sites, or inside pages on a site, or potentially even non-aggregate tracking between pages (with an exception for browser local history and the forwards and backwards functions). All handoffs between sites should be push-orientated from the user's direction, possibly involving the user's central repositories (like the password manager at Google) but in a non-tracking way. Maybe then app stores can make money by charging, and make software worth something again. All such crimes should be counted under treason law, because they cover spying on government organisations, their contractors and the individuals involved, and under business espionage, as they cover business organisations, their contractors and the individuals working for them. Thanks for existing laws.

    Let Linus and the jurisdictions spruik that.

    What hubris, that these people think they should do more than present non-tracking ads based purely on the content of the page visited or app used, the country, state or region somebody is anonymously being served to, or a general ad. No tracking at all is needed. The user should be able to nominate a region or block region information. Ads are actually much more interesting and enjoyable without tracking, I find. Advertisers should only see that they have to deliver something to such-and-such a region, not who or where; the browser/system should protect the anonymity by regulating the access according to the user's sole wishes. This could even use a Tor-like infrastructure service with a virtual session ID lasting the life of the online session, with gaps of X seconds between sessions and sites to stop people gaming the system with repeated virtual visits, and with reports (plus IP and time information) sent to the authorities when somebody is trying to defraud an advertiser. Sites will have to get back to actually selling advertising space like all other media. None of this illegal spying to make a dwindling buck, from which the people doing it make a hefty income rather than the average sites themselves, who are actually doing most of the work. The day of the painful-for-users free ride must end. I don't care if I have to pay a cent per hundred pages visited; it is out of control, and frankly, in my opinion, illegal stuff is going on now that should be stopped.

    1. Charles 9 Silver badge

      Re: Security

      Two problems.

      One, the government itself wants the info, and fat chance of getting them to limit their own abilities. Far more likely they'll enact enabling legislation, and since ALL sides want it, good luck getting in a government that will do anything to stop it.

      Two, foreign sovereignty can get in the way. How do you enforce such a law when the servers are in jurisdictions where the practice is allowed, if not mandated, on penalty of prison?

  28. MK_E
    Trollface

    Everyone knows your car can't be stolen if you slash the tyres.


Biting the hand that feeds IT © 1998–2019