Untamed pledge() aims to improve OpenBSD security

Linus Torvalds may have used the Washington Post to drop a bucket on the “masturbating monkeys” of OpenBSD, but they seem insular enough not to care overmuch. In a set of slides posted at openbsd.org, one of the project's founders, Theo de Raadt, has set down the principles behind one of the projects that Torvalds dislikes – …

  1. Anonymous Coward
    Black Helicopters

    Only goes down, not up ;)

    Interesting development, leave it up to OpenBSD to come up with specific ideas to further secure the whole lot. Oh, and in case you're thinking what I was thinking when I first read this: pledge() can be called multiple times but only to reduce further abilities, not to regain them. Just found out about this myself in the pledge() manual page.

    Now.. I'm an administrator, not so much a programmer, but I can't help drawing some parallels with SELinux, even if that comparison doesn't really cover it. But... there is a routine in the kernel to secure things, and the userland programs will need to support those routines in order to work with it. With that in mind, and knowing that I might be making an unfair comparison, I have to admit that this system appeals a lot more to me and I think it also has much more potential.

    Here's the thing: security isn't only about covering all your bases. It's also about accessibility: not making it too hard for people to use. Because if you make things too complicated and too hard, then chances are very high that several people will bypass or ignore it. And so here we are: one simple function call is enough to start using this routine, which I think makes it very accessible. And, with that, quite secure.

    I really hope that we'll eventually see this popping up in FreeBSD as well.

    1. Paul Crawford Silver badge

      Re: Only goes down, not up ;)

      That was my thought: like SELinux or AppArmor, but internal to the program.

      I can see how this helps mitigate bugs inside the software and hence possible future exploits, but I can't help thinking that having an external rule set (like SELinux, etc) is a good idea in case someone tries to replace/modify-in-place a program/daemon with a Trojan version. The external rules also help you know what a program is allowed to do without delving inside it.

      1. Charlie Clark Silver badge

        Re: Only goes down, not up ;)

        external vs. internal

        The two can be seen as complementary. The nice thing about this approach is that it shouldn't be possible to use permission escalation to work around it, which seems to be the sledgehammer approach for SELinux or the new stuff in Mac OS. Here, trying to get an app to do something which isn't on the manifest should just fail (maybe unpleasantly and with undesirable consequences, but with no way out of the sandbox).

        But any oversight system always runs the risk of quis custodiet ipsos custodes?, which then becomes the next target for attack.

      2. Hardrada

        Re: Only goes down, not up ;)

        I can't help thinking that having an external rule set (like SELinux, etc) is a good idea in case someone tries to replace/modify-in-place a program/daemon with a Trojan version.

        OpenBSD has systrace.

        Pledge() is a good example of why I like OpenBSD. They ship with basic tools that work. If you need something fancier, you know where to find it.

  2. Anonymous Coward
    Anonymous Coward

    This doesn't sound like a terrible idea, but what about something like a programming/scripting language where you don't know for sure what calls you're going to need in advance? For this type of thing you're probably going to need an 'all' option, and if you have an 'all' option then I think it's likely that many programmers will simply use 'all' rather than going to the trouble of figuring out what calls are needed, effectively bypassing the mechanism. Maybe a tool to evaluate what calls your program makes and provide you with the appropriate pledge() call would help to avoid that.

    1. Infernoz Bronze badge
      Meh

      The whole point seems to be for an application to declare restrictions on its expected behaviour, so an 'all' option is nonsense because it implies no restriction. Yes, an access monitoring tool would be a good idea for fine-tuning security, possibly via configuration-driven pledges; however, the developer should still have some idea of the behaviours an application should never exhibit, e.g. trying to modify or delete some application resources, configuration or data, or trying to do other things it shouldn't, which may not even be blockable by user account restrictions.

      It would be nice if Windows had security like this too and not just bolted-on security systems which can miss or break stuff, like anti-virus etc.

      1. Kristian Walsh Silver badge

        If your scripting language is so open-ended that you don't know what system calls it could possibly make in response to inputs, then you have chosen the wrong tool for secure application development.

        Even the most dynamic of languages can be made to work with this facility. If it can't, then there's a problem with the application design (e.g., creating an unholy REST engine that blindly allows execution of a command inside a local directory with the same name as the requested URL part...), not the language.

    2. JLV

      Exactly how does the scripting language angle figure into this? Assuming it's a config/system call, not somehow (???) baked into the compile/link stages, then Python/Ruby coders know just as well as C/Java coders what we are going to need access to.

      Whether or not coders, script or compiled language, are going to ask for the kitchen sink is another story. Hopefully this will be an option and requested rights will be visible to the user for perusal.

    3. Vic

      if you have an 'all' option then I think it's likely that many programmers will simply use 'all' rather than going to the trouble of figuring out what calls are needed, effectively bypassing the mechanism

      What, like users demanding CAP_SYS_ADMIN on command interpreters? Nah, that would never happen...

      Vic.

      [ Tired of refusing that particular request... ]

  3. Anonymous Coward
    Anonymous Coward

    What about program compatibility? It sounds like there will be an awful lot of programs that need to be modified. On the other hand, in a highly locked-down server environment it might be doable.

    1. Dr. Mouse

      What about program compatibility?

      This is for the programme to use to help secure itself.

      “Most programs can use pledge() with 3-10 lines of code,” he claims in the presentation.

      So, it's simple to add. It's actually similar to the permissions model in Android, as an example. The application declares what it will need access to, and the OS enforces that.

      I would expect that most applications with FreeBSD compatibility will start to use it fairly quickly, assuming it is as easy as claimed. The administrator will not really need to worry about it. It is just an additional failsafe to stop certain exploits from becoming usable, as well as something to warn about exploits more quickly: Admins will see processes dying due to broken pledges and will get on to the developers sharpish, rather than it being a while before they notice something was amiss.

      In the end, it's just another tool for devs to use to help secure their programmes from attack.

      1. Paul Crawford Silver badge

        Also one hopes that developers will start to check carefully what they are doing and why, rather than just asking for the Moon on a stick as Android devs seem to do.

      2. Doctor Syntax Silver badge

        "most applications with FreeBSD compatibility"

        This is OpenBSD. Either FreeBSD would have to (a) add it to their kernel (preferred option), (b) add a dummy call or (c) expect their users to #ifdef round it.

        1. Dr. Mouse

          This is OpenBSD

          Ooops, sorry, my mistake.

          one hopes that developers will start to check carefully what they are doing and why, rather than just asking for the Moon on a stick as Android devs seem to do.

          &

          The application declares what that it will need access to absolutely everything just in case, and the OS enforces that. rolls over

          Server software developers tend to be more... thorough than app developers (in general). And sysadmins tend to be more protective of their systems than users are of phones. It's like the Android permissions system would be if both devs and users were conscientious and knowledgeable.

      3. phil dude
        Joke

        fixed that for you...

        @Dr Mouse. "So, it's simple to add. It's actually similar to the permissions model in Android, as an example. The application declares what that it will need access to absolutely everything just in case, and the OS enforces that. rolls over"

        :-)

        P.

  4. Charlie Clark Silver badge
    Thumb Up

    Formalising intent

    If nothing else this might help developers think twice before asking for everything. They might be pleasantly surprised by how little they need and enjoy the resulting sense of security that their code is less likely to provide an attack vector.

    I wonder if (static) code analysis can be applied to come up with permissions likely to be required by the main loop?

    I can see some potential edge cases that might annoying or inconvenient but nothing that can't be solved by making the list of required permissions a bit longer. Making that explicit is a win in itself.

  5. Alan J. Wylie

    seccomp

    https://en.wikipedia.org/wiki/Seccomp

    http://man7.org/linux/man-pages/man2/seccomp.2.html

    Linux has seccomp. There are two calling methods. The simple one is that, rather than specifying a list of system calls, seccomp's list is hard-coded: exit(), sigreturn(), read() and write(). It's up to the program to do all of the opening of files, connecting of networking, etc. before calling seccomp.

    There is also a more complex interface, in which a list of system calls is specified using Berkeley Packet Filter (BPF). This seems to be pretty much equivalent to OpenBSD's pledge().

    1. Alan J. Wylie

      Re: seccomp

      Kees Cook: "evolution of seccomp"

      https://outflux.net/blog/archives/2015/11/11/evolution-of-seccomp/

    2. Michael Wojcik Silver badge

      Re: seccomp

      Yes, it's similar to seccomp, and in intent to the Linux capability system, and to setpriv. SCO added setpriv to its UNIX in, what, 1993? And Linux picked it up circa '96, I believe.

      pledge() offers finer-grained control than setpriv(), but it's hard to see at first glance what it does that's significantly different from seccomp-bpf. That doesn't mean it's not a good idea, but it's hardly as novel as the article makes it sound.

      And as for uptake - we've had the basic notion available in UNIX implementations for over two decades. Some packages use one or more of these mechanisms. Most don't. I'm not holding my breath.

  6. DrXym

    Pretty bad idea

    It would be better to ship the software with a policy (signed of course) which says what the software wants to do. Then the administrator can choose to enforce the policy, write their own or throw it away entirely. That's how SELinux works for example.

    Baking the "pledge" into the executable just seems a bit nuts because there is no way to modify it after the fact. An example of this happened to me recently when I tried to run a VPN with a cert which was in the "wrong" directory. Fedora's SELinux policy complained about what I was doing and prevented me from doing it. But it told me what instruction I could use to add an exception to this policy and that's what I did. If it were baked into the executable it wouldn't have been any good to me.

    1. xslogic

      Re: Pretty bad idea

      If you need to make it do something different, you're probably modifying source anyway. Anything you want the program to be able to do shouldn't be dropped.

      You're talking about using signed policies, which are less flexible (both in terms of having to provide two separate files, and in not being able to drop privileges when you no longer need them).

      There are privileges that you may not want the binary to have at all, even briefly - but you'd probably still want to bake them into the binary directly in some way that they get activated before the binary is run.

      It's an extension of what most (ftp, http) servers do: typically servers used to start as UID 0 (normally "root") and drop down to another user, who can do less damage, once they've opened the relevant port. (Port numbers less than 1024 normally require UID 0 access - dunno if any of the security frameworks affect that...)

      1. DrXym

        Re: Pretty bad idea

        "If you need to make it do something different, you're probably modifying source anyway. Anything you want the program to be able to do shouldn't be dropped."

        I didn't have to modify openvpn to make it work with the dir I wanted to store my certs in. I just ran a command to add an exception.

        I don't see how policies are less flexible. As I said you can make them any strength you like. You could even "train" a policy by running the software to see what it does and then lock the policy down to that set of permissions.

        Also, the upfront model of permissions has utterly failed on Android and I don't see the concept of a pledge being hugely different.

        1. xslogic

          Re: Pretty bad idea

          Well, for a start - you appear (unless I'm missing something) to be talking about changing directories, and pledge() is about disallowing parts of the API. If openvpn is designed to read files from a directory, changing the directory that it's reading from will not be affected by pledge(). If you then go to read it in from some exotic device that requires extra API calls, then that is going to require a rewrite of your code to make those extra API calls. (Which does mean that the person writing the code is going to have to know which parts of third-party libraries may or may not be called. This may be interesting if, for example, you upgrade a dynamic library for reading device information and it suddenly sprouts support for talking to some exotic device that requires extra API calls...)

          pledge(), for example, would allow you to start a webserver up, bind to a socket and then get rid of socket(), setsockopt(), bind() and listen(), never to be used again. If it's a daemon and you want to fork() once at initialisation and then never more - fine, cut it off. If you decide you *do* want to use it later on, it's likely going to require recoding the application.

          Using both may be useful, though.

          The training is a tricky one. It'd be possible - but you would have to make sure you fully exercised the code.

          Finally, I'd guess that the upfront model failed on Android because a) a (very) large number of people have an Android device, b) applications are relatively easy to develop and deploy, and c) the original vendor of the code is interested in selling convenience rather than security (to a point). On the other hand, at least the last point is false for OpenBSD. (Yes, you have to weigh your options and decide if it's a wise thing to do - but then you'd have to anyway.)

  7. druck Silver badge

    Narrowing

    I assume pledge only allows narrowing of permissions? Or there would be nothing to stop a compromised program from issuing pledge again to widen them.

    1. Notas Badoff

      Re: Narrowing

      You mean like the first comment already explained? See "Only goes down, not up ;)"

  8. Anonymous Coward
    Anonymous Coward

    Stupid idea

    Having a process limit its ability to do only those things it is supposed to do is a good idea, of course. Killing it if it tries to do something outside that set is a terrible idea. Denial of Service, anyone?

    A more reasonable solution would trap/log an attempt to do something it shouldn't do, but allow it to continue to go about its business. A resilient program should take attempts to subvert it in stride and continue to perform its intended function. Causing a program to kill itself via pledge could open the door to security holes (for example, if you could get an AV daemon to terminate, this would allow known malware to get a foothold).

    This pledge() thing sounds like a combination of MAC and assert() as far as I can tell. It is hardly anything groundbreaking or novel. SELinux is a MAC which allows you to enable only specific privileges for a process rather than giving it blanket setuid root. It sounds like pledge() goes further, so you could for instance give a hello world program the ability to output to the TTY only, but not do stuff like read from the TTY or read/write from the filesystem or network. The self-termination aspect in the example shown in the article looks exactly like assert(), except that assert is more syntactically concise since it encapsulates everything you need in a single line. Not sure why pledge() is more verbose here; this just adds to the possibility of programming errors.

    1. Rich 2 Silver badge

      Re: Stupid idea

      If you get the pledge right then the only way the program can break its own rules is if it's corrupted in some way. Why would you want such a program to carry on running?

      If you get the pledge wrong then you don't understand what your program is doing. In which case, you definitely don't want it to run!

    2. Frumious Bandersnatch
      Terminator

      "I promise not to kill anyone" (was Re: Stupid idea)

      Killing it if it tries to do one is a terrible idea

      Not necessarily. Look at Erlang. There, a major plank in the philosophy for handling errors is to just let the process die, along with any linked processes. Mind you, it also has the concept of supervisors that can restart processes after a shutdown. You have to actively work with this paradigm and code your application to take advantage of this, though. You can't just bolt it onto existing programs.

      As for assert(), it does you no good if the code has a buffer overflow or something that wasn't explicitly trapped and the bad guys get to run arbitrary code. The pledge() idea gets around this nicely, as injected code is sandboxed and probably won't be able to do much damage.

      As for something managing to deliberately kill your anti-virus, the AV vendor can just use the idea from Erlang: let the affected daemon/process die, have a supervisor watching for any termination, log it and restart the process. If it consistently crashes on a given input file, mark that file as possibly containing malware and skip it on subsequent runs. Though really, I don't expect AV software to be crashing on arbitrary input files...

      The main downside to this, as I see it, is that it does no good against viruses that are aware of how it works. Simply trace the call to pledge(), replace it with a jump to the viral code and there's no barrier to doing whatever "unsafe" operation it wants. Ditto running any untrusted program, which definitely won't call pledge().

      I can see the reason for wanting this to be a system call rather than an external "rights" database. It's a lot simpler, easier to support and very lightweight. It isn't a complete solution, but for certain things (code injection) it's rather a neat defence.

    3. Charlie Clark Silver badge

      Re: Stupid idea

        Causing a program to kill itself via pledge could open the door to security holes (for example, if you could get an AV daemon to terminate, this would allow known malware to get a foothold)

        That really is a stupid example. As someone else has already noted, that sort of thing can be configured to restart automatically, as I'm sure many of us have witnessed only too often.

      The problem with simply killing any process (because it tries to overstep its pre-defined limits or for any other reason) is that it might fail in an undefined way which might corrupt or expose state in some way.

      1. Androgynous Cupboard Silver badge

        Re: Stupid idea

        Any process that oversteps bounds that have been specified by the original developer has by definition already failed in an undefined way. Kill it, kill it with -9.

        As a developer I would look very favourably on this, as I'd rather not have a daemon I wrote be responsible for a pwning (although naturally this will be a second line of defence behind my already exemplary coding standards).

        I've never really got on with SELinux. I can understand the appeal to admins of locking down a process their way, but sometimes the developer knows best. If I know my code will never fork() or exec(), I am more than happy to specify that in the source: that alone would be a massive win for system security.

    4. thames

      Re: Stupid idea

      @DougS - "This pledge() thing sounds like combination of MAC and assert() as far I can tell"

      How do you use MAC and assert() to let a program drop privileges dynamically as soon as it no longer needs them?

      1. Anonymous Coward
        Anonymous Coward

        Re: Stupid idea

        Capabilities, which is a basic part of MAC. The thing missing with the Linux implementation of MAC is that you can only drop root capabilities (to make the process a less than super user) but not normal user capabilities like creating a file or opening a network socket.

  9. thames

    A Simple Idea

    I haven't followed the latest developments in this, but in the earlier discussion this wasn't intended as a universal cure-all. Rather, it was a simple solution for programs that had simple use cases. If it can't solve every program's situation, that's not a strike against it, because that wasn't in the plan.

    OpenBSD itself ships a lot of software as part of the standard distribution, including its own versions of the standard unix utilities. Being able to harden potentially hundreds of commonly used packages with just a few lines of patch to each one is an extremely attractive goal. Even if third-party developers don't take it up, it is felt that it will give OpenBSD itself a nice boost in security with much less work than implementing something like SELinux.

    The basic idea behind it is that many programs need certain privileges when they start up that they don't need when running. If they can drop those early on, then they can greatly reduce the ability to use a bug to create a useful exploit, such as a privilege escalation. If for example netcat doesn't need to write to files (I don't know if it does or not, but let's use that as an example), then by dropping that privilege you make it impossible to exploit say a buffer overflow in netcat to overwrite a file which would let an attacker escalate a multi-step attack to the next stage.

    By making this part of the program itself rather than an externally imposed "policy", it is possible to drop more privileges than would otherwise be the case. This is because the program can often arrange to do certain things during initialisation, such as reading configuration files or opening ports, that it doesn't need to do once it is running. I don't think this sort of dynamic dropping of privilege can be emulated with a static policy. And of course since this is baked in, it can't be turned off accidentally via misconfiguration. It also doesn't stop you from adding external static policy based security on top of it as well if you want to (and if it is available). This sort of "do something privileged to start up and then drop to a lower privilege state" has been pretty standard unix philosophy for a long time. This just makes that dance more widely applicable.

    Some people don't like it because it isn't the answer to all problems. However, the OpenBSD developers feel that it is a nice and simple answer to a lot of problems that can be implemented without a huge effort.

    I'd like to see how it works out for them, and if successful, possibly adopted elsewhere, such as in Linux.

    1. Michael Wojcik Silver badge

      Re: A Simple Idea

      I'd like to see how it works out for them, and if successful, possibly adopted elsewhere, such as in Linux.

      So it can be added to all the other security mechanisms that most Linux developers ignore, eh?

      At least it won't be much of an effort, since it can be implemented as a facade on seccomp-bpf.

      Mind you, I think privilege reduction is a fine thing. It just runs into the same problem as all of these mechanisms: it's hard to get developers to use it. Just as it's hard to get developers to stop writing code that's full of buffer overruns, SQL injection points, lousy authentication mechanisms, etc.

    2. Loud Speaker

      Re: A Simple Idea

      If for example netcat doesn't need to write to files (I don't know if it does or not, but let's use that as an example), then by dropping that privilege you make it impossible to exploit say a buffer overflow in netcat to overwrite a file which would let an attacker escalate a multi-step attack to the next stage.

      Whether it needs to write or not probably depends on run time options - pledges can be used selectively based on the command line options, so if it should not write to files, then it can't. This handles the case where the code "accidentally" invokes a path that it was not meant to.

      - Obviously this would be a bug, but somehow I expect code to go on having bugs in it for quite a while.

      It might also catch cases where files are opened in the wrong mode - either by coding the wrong mode or, more likely in my experience, by modifying code during a cut-and-paste and using the wrong file handle. This kind of thing can easily occur when maintaining old and large programs which invoke several libraries maintained by different teams. Yes, you, author of the main code, can prevent updates to libraries you call from doing something naughty you do not expect. Sure, you could read the entire source code. Or you could have a holiday this year.
