Oh SNAP! Old-school '80s Unix hack to smack OSX, iOS, Red Hat?

Unix-based systems, as used worldwide by sysadmins and cloud providers alike, could be hijacked by hackers abusing a hard-coded vuln that allows them to inject arbitrary commands into shell scripts executed by high-privilege users. A class of vulnerabilities involving so-called wildcards allows a user to affect shell commands …

  1. Pete 2 Silver badge

    dot and slash

    [ disclaimer: I'm not gonna try this on the production system in front of me ]

    ISTM this is merely sloppy use of wildcard expansion. I have always assumed that these are easily prevented by changing the "*" argument (and root users who put that in scripts should be shot - slowly) to "./*" or $EXPLICIT_PATH/ ... which changes what is expanded by the shell from being arguments starting with a dash, into pathnames, all starting with ./<something>

    P.S. If you really *were* trying to write a trapdoor into a system, surely you'd use "invisible" files with names containing backspaces or octal \000 characters?
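    A minimal sketch of the difference (run in a throwaway directory; GNU rm assumed): a bare * hands rm the string "-rf" as options, while ./* turns every match into an unambiguous pathname.

    ```shell
    cd "$(mktemp -d)"       # scratch directory, so nothing real is at risk
    touch -- -rf victim     # "--" lets touch create the dashed name
    rm *                    # expands to: rm -rf victim
                            # -> victim is deleted recursively, and the
                            #    booby-trap file "-rf" itself survives
    rm ./*                  # expands to: rm ./-rf -> plain pathname, deleted
    ```
    
    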

    1. Preston Munchensonton
      Alert

      Re: dot and slash

      Assumption is the mother of all fuck ups...case in point.

    2. alain williams Silver badge

      Re: dot and slash

      P.S. If you really *were* trying to write a trapdoor into a system, surely you'd use "invisible" files with names containing backspaces or octal \000 characters?

      You cannot create a file name with a NUL character in it - that would be the end of string to the system call.

    3. Wzrd1

      Re: dot and slash

      "...could be hijacked by hackers abusing a hard-coded vuln that allows them to inject arbitrary commands into shell scripts executed by high-privilege users."

      Erm, compromise the high-privilege user, own the system anyway. Be it a user with ill will or a user managing to have a malware product installed.

      The simple truth is, anyone with high-privilege access essentially owns the system at worst, and potentially the entire network. Hence, the story is nonsense fluff that warns about excessive privilege granting.

      In short, limiting privilege is something that *should* be, and largely is, industry standard.

      1. Vic

        Re: dot and slash

        The simple truth is, anyone with high-privilege access essentially owns the system at worst, and potentially the entire network. Hence, the story is nonsense fluff that warns about excessive privilege granting.

        Exactly what I was thinking. Requiring root privilege to create a root escalation is a null problem - if you've already got the power, you don't need to nick it.

        If you did want to exploit a temporary grant of root privilege, it would be a lot easier to copy /bin/bash to your home directory & then setuid it...

        Vic.

        1. Stephen Samuel
          WTF?

          Re: dot and slash

          No. You don't need to be root to create any of these traps.

          The problem is that you need to trick a root user (or otherwise privileged command) into executing a wild-card command with the booby-trapped directory as the current directory ... and using a relative wild-card with no path elements.

          Among other things, this attack requires that you are operating in a directory that is writable by others (or, at least, writable by a hostile group), that the target user use that other-writable directory as the current directory when starting up the wild-card-affected command ... AND that the target user use a completely relative wildcard.

          The simple way to avoid getting stuck with this problem is, for any command operating with '*'-prefixed wild-cards, to use ./*blah instead of *blah.

          The other approach (one that doesn't require re-engineering the system) would be to change the affected commands so that, once you start interpreting arguments as filenames, you don't go back to interpreting them as options.

          I was, in fact, under the impression that no backtracking to argument mode was how most commands interpreted their arguments. Even so, I still tend to use defensive shell scripting, and use ./*... in instances where I may be using wildcards in a hostile environment.

          1. Jamie Jones Silver badge
            Thumb Up

            Re: dot and slash

            " I was, in fact, under the impression that no backtracking to argument mode was how most commands interpreted their arguments."

            Indeed. That is quite nasty.

            Remember you can use '--' to end option processing with most commands these days, but I still agree with you there!
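            For what it's worth, a quick scratch-directory sketch of '--' ending option parsing (supported by getopt-based tools):

            ```shell
            cd "$(mktemp -d)"
            touch -- -rf keepme    # create a dashed filename safely
            rm -- *                # expands to: rm -- -rf keepme
                                   # "-rf" is now an operand, not options,
                                   # so both files are removed
            ```
            
            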

    4. Stephen Samuel

      Re: dot and slash

      The problem with that would be crafting an option that is valid with those characters in the string. (And no, \000 is not an option: it is illegal (and mostly impossible) in a filename, since it terminates strings in Unix calls.)

  2. John Sanders
    Paris Hilton

    Confused...

    So you need access to the box, and need sufficiently high privileges to be able to alter other people's processes.

    So... the root user is a hardcoded vulnerability then?

    Sorry, I'm confused... Am I the only one who's wondering why there is such a plethora of articles lately highlighting "problems/defects" in non-microsoft products?

    1. Mark Simon

      Re: Confused...

      Actually, it’s easier than that.

      If, for example, you have a web site that allows uploading files and don't filter the file names, then you can create these problem file names without trying.

    2. heyrick Silver badge

      Re: Confused...

      " Am I the only one who's wondering why there is such a plethora of articles lately highlighting "problems/defects" in non-microsoft products? "

      I said many years ago, in response to "Linux is completely secure!" type statements, that Linux was secure THEN because it was not important enough to be an attack vector. With Windows 8 not flying out the door and many, many mobile phones running a variant, it has now matured into something worth attacking, as has iOS...

      The non Microsoft products have come of age.

      1. LaeMing Silver badge

        Re: Confused heyrick... Ho humm this old saw again.

        "I said many years ago, in response to "Linux is completely secure!" type statements, that Linux was secure THEN because it was not important enough to be an attack vector. With Windows 8 not flying out the door and many, many mobile phones running a variant, it has now matured into something worth attacking"

        Of course back many years ago, Windows was only a majority player on the Desktop, while the vast bulk of externally-connected servers (the ones serious hackers want to attack) were *NIX. These days, I believe, the spread between *NIX and Windows on the server is closer to 50:50, so by your argument attacks on *NIX systems should be going DOWN!

    3. Frederic Bloggs

      Re: Confused...

      It doesn't have to be a root user especially, just a directory in which one has sufficient rights to create a file AND (rather more importantly) some dumb person (with sufficient rights) who is likely not to notice a file called '-rf *' or whatever before doing some wildcard rm anyway.

      The crucial thing is, of course, that the perp will have logged in with a username and left his (bound to be a bloke) fingerprints all over it, username, creation time etc etc. Any sysadmin worthy of the name is likely to notice these peculiarly named files and is going to investigate.

      Surely?

      1. Anonymous Coward
        Anonymous Coward

        Re: Confused...

        We all make mistakes... especially when we're in a hurry.

      2. sisk Silver badge

        Re: Confused...

        Any sysadmin worthy of the name is likely to notice these peculiarly named files and is going to investigate

        Speaking as a part time sysadmin for a bank of Linux servers I have to say I probably wouldn't notice the odd file names. The reason is simple: I rarely see lists of files. The only time I ever type ls into my terminal (these servers are gui-less, as *nix servers should be in my humble opinion) is when I can't remember some obscure command. I do see file lists on FTP, but I have it set up so that no one, not even me, can access anything except their own home via FTP.

        Then again you'd never catch me running something like 'rm -rf *' either.

        As I said, I'm only a part time sysadmin (I'm a web developer by day), so take that as you will.

      3. Vic

        Re: Confused...

        It doesn't have to be a root user especially, just a directory in which one has sufficient rights to create a file AND (rather more importantly) some dumb person (with sufficient rights) who is likely not to notice a a file called '-rf *' or whatever before doing some wildcard rm anyway.

        But that's not an exploit.

        Even if the "-rf *" is interpreted as a file before wildcard expansion (as it is on my shell[1]), all it does is prevent the command from working properly; it doesn't give the file's creator any additional privilege unless it is being executed by some sort of command processor - i.e. the root user needs to type "python *" or something equally idiotic.

        In short, this can only catch out users with elevated privilege and not the slightest clue what they are doing. And there are easier ways to pwn them than this...

        Vic.

        [1] As follows :-

        [vic@perridge wc_test]$ ls -l
        total 4
        -rw-rw-r--. 1 vic vic 0 Jul 4 11:05 foo
        -rw-rw-r--. 1 vic vic 4 Jul 4 11:05 -rf *
        [vic@perridge wc_test]$ rm -rf *
        rm: invalid option -- ' '
        Try `rm ./'-rf *'' to remove the file `-rf *'.
        Try `rm --help' for more information.

  3. Martijn Otto

    Very unclear

    Am I correct in that the problem is multiple wildcard expansion?

    If I make a file called '*' (without the quotes) and it gets sent as an argument to a shell script, when this script calls something else the * gets expanded to mean every file in the directory.

    That is actually really easy to prevent if you write your shell scripts properly. Enclosing a filename in quotes is usually enough. Most languages also have specific functions to escape shell arguments.

    Basically, this is the same thing as SQL-injection.

    1. Bronek Kozicki Silver badge
      Boffin

      Re: Very unclear

      The problem is parsing of filenames by traditional unix utilities, since "everybody" knows that if a filename starts with dash (i.e. - ) then programs will parse it as if it was an option. That's why some programs support -- after which everything will be interpreted as filename, even if it "looks" like an option.

      As for the actual vulnerability ... well, if you are running shell scripts as root and these use globbing, and it never occurred to you that users might have files starting with a dash ... now it's time to start checking these scripts.

      1. Steve Knox

        Re: Very unclear

        The problem is parsing of filenames by traditional unix utilities, since "everybody" knows that if a filename starts with dash (i.e. - ) then programs will parse it as if it was an option.

        No, the problem is the method of expansion of wildcards by the shell, combined with the Unix tradition of allowing essentially any printable character in a filename.

        The shell expands the wildcards by interpreting them and placing the resulting filenames UNQUOTED into the argument string of the utility.

        The fix is to rewrite the shell wildcard expansion routine to quote every filename. For example, in a directory with the following files:

        Important File.ows

        shellscript.sh

        My "Expenses".ods

        -rf

        the shell command rm * is being passed to the utility as

        rm Important\ File.ows shellscript.sh My\ \"Expenses\".ods -rf

        (Note that the shell is smart enough to escape characters which are used for argument separation or quotation, but it doesn't escape the parameter character "-". This actually makes sense, because whether an argument is a parameter or not should be up to the application.)

        when it should be

        rm "Important File.ows" "shellscript.sh" "My \"Expenses\".ods" "-rf"

        1. This post has been deleted by its author

        2. Bronek Kozicki Silver badge

          Re: Very unclear

          rm "Important File.ows" "shellscript.sh" "My \"Expenses\".ods" "-rf"

          When writing C (or C++) program parsing parameters like the above, you will find that the last parameter "-rf" was passed by shell to your program without surrounding quotes. Thus this gained nothing :(

          Of course you might be advocating that quotes surrounding parameters should be passed to program (also when put explicitly by the user) but I'm not certain that this is good idea. For one, how do you pass a filename starting with quotes to your program and make it understand that these quotes are part of the filename, not a decoration?

          It is up to program to decide what is filename and what is option.
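          A quick way to see Bronek's point: quotes are shell syntax, stripped before the program ever sees argv, so quoting cannot mark a string as "not an option". A sketch (positional parameters stand in for a program's argv):

          ```shell
          # Set argv-like positional parameters, one quoted, one dashed.
          set -- "Important File.ows" "-rf"
          for arg in "$@"; do
              printf '[%s]\n' "$arg"   # prints [Important File.ows] then [-rf]
          done                          # no quotes survive; the program cannot
                                        # tell a "quoted" -rf from an option
          ```
          
          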

        3. PJI

          "The" shell?

          Although those brought up only on Linux seem very unaware of it, there are actually dozens of different shells, perhaps scores. Bash is, to my mind, one of the more broken ones, but widespread because of Linux (viz. ghastly array syntax and downright odd handling of variables in while loops, as well as no supplied alias for repeating commands quickly ...). Most common for system and other formal scripts is Bourne shell or Posix shell, with Korn shell not far behind (yes, I like ksh). csh and tcsh are still much loved, and zsh and others are far from dead.

          So, which shells quote some characters automatically (and sometimes annoyingly and wrongly)?

          As for the -string names, surely this is the old trick to test friends and colleagues - just create a file called, e.g. "-" and tell someone to try to copy or delete it. Such a simple thing catches most people. But is this not the difference between an experienced UNIX user and the self-taught Linux or other home enthusiast?

          I fear that no system is utterly bullet proof and if this were such an awful bug, one imagines that it would have caused problems rather before now in the forty odd years that UNIX has been around in some form or other.

    2. Gordon 11

      Re: Very unclear

      That is actually really easy to prevent if you write your shell scripts properly.

      Precisely. It's what "set -f" is for.
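      A sketch of the effect: with "set -f" (a.k.a. "set -o noglob") the shell stops expanding patterns entirely, so a script can handle the raw string itself.

      ```shell
      cd "$(mktemp -d)"
      touch -- -rf foo
      set -f            # disable pathname expansion
      echo *            # prints the literal: *
      set +f            # re-enable it
      echo ./*          # globbing works again, every word prefixed with ./
      ```
      
      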

  4. John Sanders
    Paris Hilton

    Less confused...

    I read it again, so it goes like this:

    Script uses shell expansion to execute/source an indeterminate number of scripts in a directory.

    If the script is owned by root, a rogue script could be run/sourced by the "evil" expansion and that will mean that the rogue script could be used to escalate privileges or do bad things.

    As I see it, this is not a vulnerability per se, but rather someone taking advantage of a retarded use of shell expansion in a script, I guess.

    1. Michael Wojcik Silver badge

      Re: Less confused...

      I read it again

      I'm glad to see someone read the actual whitepaper (assuming that's what you meant). Clearly many of the comments in this forum are from people who couldn't spare a couple of minutes to do so, and instead thought they'd go ahead and post idiotic and irrelevant crap.

      If the script is owned by root

      No. Ownership of the script doesn't matter [1]. The effective UID under which it's running is the key.

      Note, too, that it's not simply an exposure for the superuser. There are various possible exploits against other non-privileged users which may be of interest to attackers who simply want access to data they shouldn't have.

      If the script is owned by root, a rogue script could be run/sourced by the "evil" expansion and that will mean that the rogue script could be used to escalate privileges or do bad things.

      That's one possibility, though it's not the main one the whitepaper discusses. Its authors are more fond of the filename-as-option vector.

      [1] Prolepsis: The exec(2) implementations in modern UNIXes generally ignore the setuid bit on interpreter ("script") files.

  5. filik

    which is why...

    unix programmers and administrators have used ./* since 1988 to avoid this

    1. lurker

      Re: which is why...

      Indeed, this was well known about when I was first exposed to unix in the early 90s.

      So basically some 2014 script kiddies just learned about an '80s-era issue which has been worked around since forever, and now it's news?

    2. the spectacularly refined chap

      Re: which is why...

      Indeed. I've gone right through this "paper" and there is nothing new. It's enough to make you smile in places.

      1) It isn't that "even many security-related people" are not well aware of these kinds of issues and how to guard against them. The problem is noobs presenting themselves as self-styled security gurus. I've been using Unix systems as my primary OS since the early 90s and this was well documented then. It was well known enough that some even advocated using it to your advantage - placing a file "-i" in key directories such as root's as a protection against fat-finger syndrome. In this case this lack of real experience and expertise on the part of the author is further evidenced by the next point.

      2) A lot of these examples are in reality duff. At several points in the paper assertions are made along the lines of "command accepts a particular --long-option" without any further clarification, to which my immediate response was "No it doesn't". The author confuses GNU extensions with POSIX options or other options widely supported outside a GNU userland. The POSIX standard committee do scrutinise the semantics of tools with a view to vulnerabilities such as these.

      If you use a system that extends those tools in a way that could potentially be "exploited" then that is a flaw in the particular revised version. It doesn't affect other implementations and so can't be extended to all variants. I'm not going to get involved in a debate as to whether those extensions are useful or desirable, but the fact that the author is unable to distinguish between the two itself speaks volumes.

      What's the follow up? Let me guess: Brand new discovery! Re-setting $IFS can expose vulnerabilities in poorly written scripts! No-one has ever noticed this before!

      1. Destroy All Monsters Silver badge

        Re: which is why...

        This.

        Unfortunately it's down to the fact that the stuff seen by programs has no type: it's an array of typeless strings, of which some may be options, some may be concatenated options, some may be stuff expanded by the shell (and this in a frankly inconsistent manner, revealed whenever hacking bash), some may be badly quoted strings with spaces ... this is REALLY HARD TO HANDLE CORRECTLY.

        Well, when in doubt, print "usage:"

      2. Daggerchild Silver badge
        Devil

        Re: which is why...

        Ditto on the old guard responses. This is Unix 101 stuff.

        I remember Debian had fun with the /tmp cleaner script for this reason a decade or so back, although they hated symlink race games more :)

        I'm surprised they haven't mentioned putting xterm escape sequences into filenames - that was my personal favourite. Especially the 'print-screen' sequence!

        And who can forget that good ol smiley! :(){:|:};:

        Good times...

      3. Michael Wojcik Silver badge

        Re: which is why...

        It's enough to make you smile in places.

        Yes, the authors really don't know much about their subject. Sample quote: "Unix shell will interpret files beginning with hyphen (-) character as command line arguments to executed command/program". That's fundamentally and entirely incorrect - the shell does not "interpret file[name]s beginning with hyphen[s]" at all. It just passes those strings to the exec system call, which will in turn make them available to the invoked program as part of argv.

    3. PJI

      Re: which is why...

      Script owned by root? Who cares who the owner is. Most utilities are owned by root. What matters is who is running them, unless they are setuid. I think that has not been possible for shell scripts for a very long time, and even Perl has some defences against this.

      Of course if the user, working as root manages to do something silly, well, he or she can do that anyway without this possible problem.

    4. Michael Wojcik Silver badge

      Re: which is why...

      unix programmers and administrators have used ./* since 1988 to avoid this

      Yes, and that's why modern versions of find(1) have the "-print0" option and xargs(1) the corresponding "-0" option, and so on. It's a widely-recognized issue.

      There's also the related trick of embedding ANSI or other terminal control code sequences in filenames, for entertainment when someone lists them using a suitable terminal (emulator).
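      A sketch of that NUL-delimited pipeline (GNU find/xargs assumed): since NUL can't appear in a filename, it's the only fully safe separator - even spaces and newlines arrive intact - and the ./ prefix from "find ." neutralises leading dashes as a bonus.

      ```shell
      cd "$(mktemp -d)"
      touch -- -rf 'name with spaces'
      # Each name is passed whole, NUL-terminated, and prefixed with ./
      find . -maxdepth 1 -type f -print0 | xargs -0 rm --
      ls -A                    # nothing left
      ```
      
      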

  6. Anonymous Coward
    Anonymous Coward

    Security scan?

    Some distros used to scan for world-writeable files/directories, etc, to warn the admin staff of poor security. Maybe it is time to scan for files starting with '-' as a check for such silly buggers?

    If coding for a mass delete of files in bash I tend to do something like this:

    for f in *
    do
        if [ -e "$f" ] ; then
            rm -f "$f"
        fi
    done

    Verbose for sure, but no strange errors if no files match and slightly less opportunity for silly buggers. OK, the tar example is harder to catch, but you can list files into a temporary file, then tar using that file as a source list.
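    One caveat: the loop above still hands a bare "-rf" to rm if such a file exists. Prefixing ./ in the glob (or adding --) closes that hole; a sketch:

    ```shell
    cd "$(mktemp -d)"
    touch -- -rf normal
    for f in ./*; do           # every match now starts with ./
        if [ -e "$f" ]; then
            rm -f -- "$f"      # belt and braces: ./ prefix plus --
        fi
    done
    ls -A                      # nothing left, including the "-rf" file
    ```
    
    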

    Edited to add: Just read filik's suggestion of using the likes of:

    rm ./*

    Must be more paranoid in future!

    1. Pete 2 Silver badge

      Re: Security scan?

      > Edited to add: Just read filik's suggestion of using the likes of:

      > rm ./*

      All fine and dandy, until one day when you get a sticky "dot" key and accidentally transform it into

      rm /*

      - even worse with the -rf option -

      (don't try this at home, kids)

      1. Anonymous Coward
        Anonymous Coward

        Re: Security scan?

        On the same theme, I was once almost bitten by trying to run chmod on my hidden files with:

        chmod -R [something] .*

        Of course .. is also a match for .* so it then recursed up to /home and then down into all the other users' directories!

        Thankfully I was sane enough to do this as myself, not root, so no damage was done. But it taught me a valuable lesson!
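        For reference, the usual workaround: .* matches .. on many shells, but a pattern requiring a non-dot second character does not. A sketch using standard glob syntax:

        ```shell
        cd "$(mktemp -d)"
        touch .hidden regular
        echo .*                # on many shells this includes . and ..
        echo .[!.]*            # matches .hidden only: second char must not be a dot
        ```

        (Names like "..foo" need the extra pattern ..?* as well.)
        
        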

        1. Destroy All Monsters Silver badge

          Re: Security scan?

          Of course .. is also a match for .* so it then recursed up to /home and then down into all the other users' directories!

          What system does that??

          Thankfully I was sane enough to do this as myself, not root, so no damage was done. But it taught me a valuable lesson!

          Yep, generally used filesystems in 2014 are STILL just databases with no transaction support. This is bad.

      2. Martijn Otto

        Re: Security scan?

        And that's what the safe-rm package is for. Saved my hard disc once.

      3. Anonymous Coward
        Anonymous Coward

        Re: Security scan?

        even worse with the -rf option

        I once got some script kiddie who thought he was a "1337 h4x0r" who was going to "delete my stupid C drive" from a chat room because he was "better than me" because he had Linux to run that. I didn't bother telling him that I'd been running Linux for years. Nor did I bother telling him that he'd gotten the wrong idea when he heard that Linux was a hacker OS. I simply demonstrated by telling him rm -rf /* would show him how to use all the "1337 tools" in Linux to cause problems for people. I just neglected to mention that "people" meant him.

        That was the same week I got someone to send their virus to 127.47.82.174 (or something like that...only the 127 is important).

    2. phil dude
      Linux

      Re: Security scan?

      I use find ... -exec rm {} \;

      You can preview the list, and as someone pointed out, sometimes "windows/mac" files that have been copied can have strange names. Using "find", it goes through the directory listing file by file, then applies the "-name" test.

      I would prefer a copy on write fs though....

      P.

  7. Matt Bryant Silver badge
    WTF?

    Just use a restrictive shell?

    Stops users writing to or changing files in directories other than the ones they are permitted to use. Locking out the permissions on the boot and shutdown directories has been a standard advisory for as long as I can remember, so even if you hadn't restricted the users shell they wouldn't be able to affect critical processes. On systems we feel really jumpy about we restrict lusers to a chroot environment expressly so they cannot mess with the rest of the filesystem, especially stopping them from dropping nasties in /tmp. TBH, this seems a lot of noise about something that may catch the hobbyist out but shouldn't bother proper sysadmins.

  8. seven of five

    directory write access?

    Wouldn't you need write access to the directory in order to use this exploit?

    1. lurker

      Re: directory write access?

      Yes. Basically what they are suggesting is that

      - if you have the ability to create files (e.g. some webserver that allows you to upload files and stupidly doesn't rename them using an internal, safe, naming scheme)

      - and if the operating system happens to also have some scripted shell glue which runs wildcard commands on the contents of that target upload directory

      - and if that shell glue didn't use the ./* convention named above which has been widely used since the 80s to avoid this kind of cockup

      Then there might be a problem. But this is in no way a 'newly discovered exploit'.

  9. naive

    Need root to get root

    These are all well documented shell features, nothing new here.

    First the hacker needs the ability to place these files in the directories where root is supposed to use these wildcards.

    Most Unix security handbooks warn about this type of exploit, like having "." in the search PATH for commands.

    A more useful introduction to this type of documented feature - and at least funny to read - can be found here: http://en.wikipedia.org/wiki/The_Unix-Haters_Handbook

    (The PDF can be downloaded for free.)

    M$ did not pay these security experts to retrieve this information from the manual pages?

  10. Mark Simon

    Old News, but still a worry …

    The referenced article in turn referred to this one:

    http://www.defensecode.com/public/DefenseCode_Unix_WildCards_Gone_Wild.txt

    which was very helpful. The following article rubs it in:

    http://www.dwheeler.com/essays/fixing-unix-linux-filenames.html

    I tried it myself with the following:

    mkdir one; touch one/stuff; mkdir two; touch two/stuff; touch ./-rf

    rm *

    The trick, of course, is in the ./ prefix, which allows you to get away with murder.

    A solution:

    yum install detox

    detox -rv ./*

    The other part of the solution is to filter all incoming file names.

  11. Arkasha

    -- anyone?

    rm -rf -- *

    stops expansion of filenames being treated as further arguments. Example:

    arkasha@freeman ~/xxx $ touch a b c d
    arkasha@freeman ~/xxx $ touch -- -z
    arkasha@freeman ~/xxx $ ls
    a b c d -z
    arkasha@freeman ~/xxx $ tar cf xxx.tar *
    arkasha@freeman ~/xxx $ tar cf yyy.tar -- *
    arkasha@freeman ~/xxx $ file *.tar
    xxx.tar: gzip compressed data, from Unix, last modified: Thu Jul 3 13:37:37 2014
    yyy.tar: POSIX tar archive (GNU)

    Works on most commands. Always put it before any filename expansion.

    1. seven of five

      Re: -- anyone?

      most UNIX systems do not support --, but on linux (and most probably, BSD) this should do the trick.

      1. the spectacularly refined chap

        Re: -- anyone?

        most UNIX systems do not support --, but on linux (and most probably, BSD) this should do the trick.

        That goes back a long way - it probably predates Linux. It's guideline 10 of the utility syntax guidelines (POSIX.1 section 12.2, at least in the 2008 revision which is what immediately comes to hand here). Can't say definitively whether that term was included but I recognise the precise wording of many of those terms as far back as the SCO OpenServer docs, circa 1994 or so.

        1. Anonymous Coward
          Anonymous Coward

          Re: -- anyone?

          Don't we just love Linux experts? "--" to mark the end of option processing has been part of any programme or script using one of the various forms of getopt(1) in shell and C libraries since before Linus could reach a keyboard.

  12. Suricou Raven

    True bug location.

    This isn't a bug in unix. It's a bug in idiot programmers who don't know how to sanitise their inputs. If they were writing for Windows they would be just as dangerous.

    1. lurker

      Re: True bug location.

      Quite.

      Apparently though, if you take a unix trick which has been known about, documented and worked around for thirty years and throw it into a document with the words 'exploit', 'vulnerability' and the names of popular current unix-derived operating systems, you have 'news'.

    2. Anonymous Coward
      Anonymous Coward

      Re: True bug location.

      "If they were writing for Windows they would be just as dangerous."

      Windows at least sanitises wildcards so that they do not enable unintended recursion up the directory tree.

      1. Destroy All Monsters Silver badge

        Re: True bug location.

        AFAIK, wildcard interpretation in "CMD.EXE" (BARF!) is left to the called program.

  13. alain williams Silver badge

    This is easy to prevent

    The problem is that file names starting with a '-' will be pattern-matched; words on the command line that start with '-' can be interpreted as options. This is not usually a problem as most people do not create files with names starting '-', but a cracker might.

    So get the shell to not expand (do wild card matching) on file names that start '-'. Put the following into your environment (eg via /etc/profile):

    FIGNORE='[-.]*'

    GLOBIGNORE='-*'

    The first is for ksh, the second for bash. QED.
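    A sketch of the bash variant in action (run in a scratch directory; bash is invoked explicitly since GLOBIGNORE is bash-specific):

    ```shell
    cd "$(mktemp -d)"
    touch -- -rf normal
    # With GLOBIGNORE set, globs never expand to names starting with -
    bash -c 'GLOBIGNORE="-*"; echo *'    # prints: normal
    ```
    
    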

  14. Yet Another Random Guy

    Is this really something new?

    The man-page for GNU rm states the following:

    "

    To remove a file whose name starts with a '-', for example '-foo', use one of these commands:

    rm -- -foo

    rm ./-foo

    "

    As far as I know all GNU utilities support the "--" construction. So I don't think that this is something new, nor should it have any impact. Anyone in their right mind would be using "--" in scripts... hopefully

  15. Anonymous Coward
    Anonymous Coward

    Why is this news? Unix sysadmins have known about this since the epoch.

    1. Mr Flibble
      Facepalm

      rm -- *

      It's click-bait, of course.

  16. rizb

    Dear glod...

    "In the context of programming a wildcard is a character, or set of characters, that can be used as a replacement for some other range or class of characters. Wildcards are interpreted by a shell script before any other action is taken"

    This is THE REGISTER. Readers should not need to be told this. What kind of first-line support cretin are you writing for these days?

    1. This post has been deleted by its author

    2. Destroy All Monsters Silver badge

      Re: Dear glod...

      HUSH!

    3. Anonymous Coward
      Anonymous Coward

      Re: Dear glod...

      ITYM "Dear glob", no?

      1. Anonymous Coward
        Anonymous Coward

        Re: Dear glod...

        These days, "Dear blog"

  17. ElReg!comments!Pierre Silver badge

    Yep, especially as the sentence "Wildcards are interpreted by a shell script before any other action is taken" is dubious at best...

  18. Jamie Jones Silver badge
    Flame

    You've made be rant now..

    Firstly, I'm not one of those who will blindly defend UNIX and downvote anyone who dares criticise it (even if they have a valid point), but I'm sorry, this is absolute bollocks.

    "Since this bug originates from a design problem it will be very interesting on how operating system vendors address this problem. It is something you cannot fix with a simple patch. The way on how the system interacts with files has to be completely redesigned," SEC Consult writes.

    Seriously, what is their agenda? As others have pointed out, this has been known by any half-competent UNIX user for ages. There is no OS-level bug to fix.

    No UNIX system needs to be completely redesigned (and if it were a real problem, it would only be the SHELL and its globbing that would need to be 'fixed' - this has bugger all to do with the way the 'system' (kernel, compiled executables etc.) works)

    As has already been mentioned here, any fault solely lies within buggy crappy programs ("buggy crappy programs holding hands" *cough* /coat) and they can be fixed without needing to make any changes to the UNIX kernel, userland, or even the bloody shell.

    TO BE FAIR.....

    It can be argued that the way globbing works, making it easy for incompetent shell programs to screw up, is at best unfortunate.

    Indeed, there are many who argue that kernels should not allow files to exist which start with a '-', or contain spaces, newlines, tabs, various binary characters etc...

    But all competent UNIX programmers know that filenames can contain *ANY* of the 256 possible byte values apart from ASCII '/' and NUL, and therefore code appropriately.

    This flexibility may be a curse to some, but it can be useful to proper programmers (after all, why should a program written in C be restricted from storing files with 'special' characters just because some badly written shell scripts can't cope? -- especially as some of these systems will be storing files that NEVER need to be accessed from the shell)

    Yes, this has been known for years. Just like SQL injection and other problems, you simply need programmers who know what they are doing, without forcing syntax restrictions on them to appease the stupid.

    There is a very well written website that describes these issues (and it itself has been around for years):

    http://www.dwheeler.com/essays/fixing-unix-linux-filenames.html

    well worth a read, but to be blunt, anyone who is surprised at what it says shouldn't be bloody programming shell scripts to be consumed anywhere other than their home computer in the first place.

    /rant
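    For what it's worth, the robust idiom the rant alludes to - treating filenames as opaque, NUL-delimited byte strings - looks something like this (the sample names are illustrative):

    ```shell
    #!/usr/bin/env bash
    # Filenames may contain spaces, newlines and leading dashes; a
    # NUL-delimited stream from find, read back with read -d '', copes with
    # all of them where a naive 'for f in $(ls)' would not.
    set -eu
    dir=$(mktemp -d)
    cd "$dir"
    touch -- 'plain' '-dashed' 'with space' $'new\nline'

    count=0
    while IFS= read -r -d '' f; do
        count=$((count + 1))        # "$f" is safe to pass on, quoted, as-is
    done < <(find . -mindepth 1 -print0)

    cd / && rm -rf "$dir"
    ```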

    1. Jason Ozolins
      Devil

      Re: You've made be rant now..

      I think your early points are great, but you lost me starting at...

      "Indeed, there are many who argue that kernels should not allow files to exist which start with a '-', or contain spaces, newlines, tabs, various binary characters etc..."

      My view is that if I'm the sysadmin for a multiuser system, it's *my* prerogative to prevent silly filename creation by the users. It should *not* be a kernel default; but a filesystem mount option to reject open/creat/mknod/link/symlink/rename operations where the target filename contains characters from \001 to \037 would be entirely appropriate, and would save lots of user confusion when they create such problem files by accident. This is fine for UTF-8 and EUC encodings.

      ...And if my users want to store data against arbitrary binary keys using 'special' C programs to make 'special' filenames, I'll tell them: Don't use a filename as the key, because it's a half-arsed hack. Instead, here you go, sqlite3 or gdbm or bdb, take your pick, they *do this stuff for you*. Oh, by the way, you can *even* use data containing '/' and ASCII NUL as a key. Whoa!!!!

      The traditional "woo, anything goes except '/' and \0!" boast is making a virtue out of what likely started as laziness on the part of the kernel programmers. Laziness which probably made perfect sense for the times and the Bell CSRG's use cases. These days, adding an extra "check character code is greater than 32" to the kernel path parsing is not such a burden. It will branch predict correctly almost all the time.

      UNIX got some things really right, but some of what the early designers chose not to care about has turned out later to cause problems for scaling and security. What made sense for the use cases and developer resources of a CS research lab in the early '70s is not necessarily appropriate now. Robust filesystems with synchronousness guarantees, race-free file syscalls and other niceties all came about because people recognised the need to take UNIX beyond what Ken and Dennis first envisaged. No slight to the inventors, just progress.

      1. Jamie Jones Silver badge
        Thumb Up

        Re: You've made be rant now..

        "I think your early points are great, but you lost me starting at...

        "Indeed, there are many who argue that kernels should not allow files to exist which start with a '-', or contain spaces, newlines, tabs, various binary characters etc..."

        My view is that if I'm the sysadmin for a multiuser system, it's *my* prerogative to prevent silly filename creation by the users. It should *not* be a kernel default; but a filesystem mount option to reject open/creat/mknod/link/symlink/rename operations where the target filename contains characters from \001 to \037 would be entirely appropriate, and would save lots of user confusion when they create such problem files by accident. This is fine for UTF-8 and EUC encodings."

        Hiya. Sorry for the delay in replying.

        I told you I was on a rant, so I'll probably backtrack a bit :-)

        I agree with you (I think!)

        Some argue it should be a kernel default (DWheeler in the article I linked to, for example) - but I don't. Besides, that horse has bolted already, and any new restriction would undoubtedly cause problems.

        But I probably didn't make clear that I also agree such restrictions should be possible, and easily configurable by the sysadmin if he/she thinks it appropriate - just as you describe above.

        "...And if my users want to store data against arbitrary binary keys using 'special' C programs to make 'special' filenames, I'll tell them: Don't use a filename as the key, because it's a half-arsed hack. Instead, here you go, sqlite3 or gdbm or bdb, take your pick, they *do this stuff for you*. Oh, by the way, you can *even* use data containing '/' and ASCII NUL as a key. Whoa!!!!"

        Backtrack time..... Yes, I agree and like to think I'd behave the same way!

        The point I was trying to make was that it doesn't need to be a kernel based restriction - not that such a restriction shouldn't be possible.

        But then I ranted off in some utopian way about the freedom of the programmer to be able to do what he/she wants without OS restriction that isn't necessary for the OS to work - but I didn't provide any practical real-world example.

        I've never used such weird characters, and can't see any situation where I would recommend it - I was just trying to say that an arbitrary restriction shouldn't be in place just to protect some programmers from writing programs with parsing bugs, or indeed programmers silly enough to use stupid characters in the first place.

        "The traditional "woo, anything goes except '/' and \0!" boast is making a virtue out of what likely started as laziness on the part of the kernel programmers. Laziness which probably made perfect sense for the times and the Bell CSRG's use cases. These days, adding an extra "check character code is greater than 32" to the kernel path parsing is not such a burden. It will branch predict correctly almost all the time."

        So now it should be in the kernel? :-)

        More backtracking from me... Fair enough, and you are right.. If such sane restrictions were in place from the beginning, I'd be cool with that.

        TL;DR - I guess what I'm getting at is that this is how it is. It works. It can cause problems, but programmers should know this and act accordingly. It's not something that needs to be 'fixed' at an OS level to stop the sky falling in. And ultimately a blanket restriction would just be an added restriction that isn't actually necessary.

        A lot of the power (and problems) in UNIX comes from its rawness, and whilst any effort to make it easier and less exposed should be applauded, while in rant mode I was concerned with enforced 'dumbing down'. As car analogies are usually used at this point: you wouldn't force an experienced driver to drive an automatic car just because some people can't drive manual (stick-shift) - even though in some situations said driver may even decide an automatic is his most suitable choice.

        "UNIX got some things really right, but some of what the early designers chose not to care about has turned out later to cause problems for scaling and security. What made sense for the use cases and developer resources of a CS research lab in the early '70s is not necessarily appropriate now. Robust filesystems with synchronousness guarantees, race-free file syscalls and other niceties all came about because people recognised the need to take UNIX beyond what Ken and Dennis first envisaged. No slight to the inventors, just progress."

        Yes. Situations have changed, and the other stuff you mention above I agree with, but whilst tightened restrictions on filenames would probably make some programs more robust, without these restrictions the filesystem itself is no less robust if the programmer knows what he/she is doing.

        I think I more or less agree with you; I just didn't explain well why I thought 'unnecessary' restrictions shouldn't be enforced in the kernel, but should be, as you say, under the control of the sysadmin.

        I hope I've explained myself more clearly, and didn't backtrack too much, but thank you for reining me in!

        Cheers, Jamie

        P.S. I've just written this using the 'w3m' console browser under an xterm session, because VI (or any other text editor) is far better for writing long replies than some slow click-and-type 'notepad-style' gui.... How I wish my current GUI browser setup allowed me to use an external editor like with the Firefox 'ItsAllText!' extension...

        El Reg is one of the few sites you can actually use a non-GUI browser on these days... The last of a dying breed...

  19. Herby Silver badge

    The paper is interesting, but wrong!

    It has several examples of wildcard expansion, but fails to note that the expansion is done in alphabetical order (where '-' collates first). It then goes on to show several commands where the file name expansion takes place after ALL the options are processed (like the tar command) and the options are ignored. Oh, and 'ls' collates differently as well.

    This falls into the trick of putting a file name of '*' in someone's directory and then asking them to delete it. Not for the faint of heart.
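    Defusing that particular trap is a matter of quoting - a sketch:

    ```shell
    #!/usr/bin/env bash
    # A quoted '*' is never expanded, so ./'*' names only the literal file,
    # while an unquoted 'rm *' would take the whole directory with it.
    set -eu
    dir=$(mktemp -d)
    cd "$dir"
    touch -- '*' precious.txt

    rm ./'*'                    # removes only the file literally named '*'

    survivors=$(ls -A)          # precious.txt should remain
    cd / && rm -rf "$dir"
    ```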

    1. Jason Ozolins

      Re: The paper is interesting, but wrong!

      Does '-' collate before alphabetical chars in *all* Euro locales?

      But yeah, if the '--rf' *did* sort to the end of the wildcard expansion, and POSIXLY_CORRECT was set, then getopt(3) would stop scanning for options once it hit the first non-option argument. So in this case the baddies are relying on GNU's super-helpful default getopt(3) behaviour to work their evil.
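      The permutation behaviour is easy to see with any GNU tool (this sketch assumes GNU coreutils and glibc; BSD userlands do not permute):

      ```shell
      #!/usr/bin/env bash
      # GNU getopt(3) permutes arguments, so '-l' is honoured even after an
      # operand; POSIXLY_CORRECT restores stop-at-first-operand scanning.
      set -eu
      dir=$(mktemp -d)
      cd "$dir"
      touch file

      lines=$(ls file -l | wc -l)     # permuted: behaves like 'ls -l file'
      strict=$(POSIXLY_CORRECT=1 ls file -l 2>/dev/null || true)
      # in POSIX mode '-l' is treated as a (nonexistent) operand, not an option

      cd / && rm -rf "$dir"
      ```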

  20. Gordon 11

    Typos can also cause problems.

    I once intended to type "rm *.o".

    But obviously left the shift key on after the "*".

    Took me a while to figure out why I now had no files in the directory except one empty one called "o". Fortunately I kept backups, even then.

    And since then I've had an alias for "rm" that runs "rm -i".

    1. John H Woods Silver badge

      Re: Typos can also cause problems.

      Yep, I had the same, rm *>o. Your file wasn't empty, it contained a newline!
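      The mechanics, for anyone puzzled: bash expands the glob before performing the redirection, so rm never even sees the 'o'. A sketch, in a throwaway directory:

      ```shell
      #!/usr/bin/env bash
      # 'rm *>o' is parsed as: expand '*', then redirect stdout to 'o'
      # (creating it), then run rm on the expansion -- i.e. 'rm a.o b.o > o'.
      set -eu
      dir=$(mktemp -d)
      cd "$dir"
      touch a.o b.o

      rm *>o                  # the fat-fingered command from the anecdote

      remaining=$(ls -A)      # everything gone except the freshly created 'o'
      cd / && rm -rf "$dir"
      ```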

  21. CAPS LOCK Silver badge

    I'm saying Microsoft FUD....

    ... and ironically they were the inventors of 'Not a bug, a feature.'

  22. Random Q Hacker

    oldies but goodies

    Sorry but this was sysadmin 101 back in my day. You always unset/override the path, the list of command separators, dynamic libraries, checked args, etc. Especially when writing something that will be SUID or called by external users.

    I worked at a fortune 5 where sloppy admins replicated a corrupt password file across their environment; used a similar exploit from a high uid working account to recover root access on hundreds of boxes. Woulda been a long weekend otherwise...
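    The sort of defensive preamble being described might look like this in a bourne-ish script (the exact variable list is a judgment call, not a complete recipe):

    ```shell
    #!/usr/bin/env bash
    # Pin down the environment before doing anything else, so a hostile
    # caller cannot steer which binaries run or how words are split.
    set -eu
    PATH='/usr/bin:/bin'; export PATH       # override any inherited PATH
    IFS=$' \t\n'                            # restore the default separators
    unset CDPATH ENV BASH_ENV               # neutralise other shell hooks
    umask 077                               # keep created files private

    resolved=$(command -v ls)               # now resolved from the pinned PATH
    ```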

  23. Ommerson

    Furthermore, how was the author running shell scripts on iOS? It might have BSD Unix heritage, but it sure doesn't ship with shell tools, nor allow their use (without jailbreaking, that is).

  24. David Roberts Silver badge
    Pint

    Show me the money

    Just a bunch of chancers putting some buzz words together in the hope of getting some funding.

    If they are lucky then they will be presenting to finance and marketing who aren't taking detailed technical advice.

    Once they have the funding, just crib the answers from this thread and beers all round.

  25. computers suck

    LOL

    Hey windows-is-one-big-security-hole types, tell me again how your droid/iPhone/Mac/Linux box is an infallible and virus-free computing swiss army knife while my windows laptop is probably already infected. Oh, that's right, you're probably already infected too. While you're telling me how wrong I am, some cyber criminal in ch-ussia-baltic-indi-stan-erica is enjoying your rant no less than they are mine. And yes, I'm typing this on my droid tablet while playing a video on my windows laptop, with an iPhone in my pocket and a Ubuntu box in the bedroom.

    Those of us who don't share your religious devotion to a platform know that relying on an OS for security is idiotic, that anything connected to a network can be hacked, and that platform agnosticism is the only way to go. You spread your risk as wide as possible and hope the device you're currently on is statistically likely to be safe.

    So, you have an encrypted, antivirus-protected NAS box as your network hub? Well then, what OS does it run? You're either naïve or delusional if you believe that the company builds its own when it doesn't have to, your box is unix-like. With that out of the way, what of (apple/microsoft) forcing UIs you don't like down your throat? Well, here's a list for you: windows 8, iOS 7, Android 4.0, Unity, Gnome 3, shall I go on?

    The point is, when your OS is your religion, your security suffers. This article is no better than its comments.
