Scary code of the week: Valve Steam CLEANS Linux PCs (if you're not careful)

Linux desktop gamers should know of a bug in Valve's Steam client that will, if you're not careful, delete all files on your PC belonging to your regular user account. According to a bug report filed on GitHub, moving Steam's per-user folder to another location in the filesystem, and then attempting to launch the client may …

  1. Aoyagi Aichou
    Gimp

    Halftruth?

    Apparently that was already fixed.

    https://github.com/SteamDatabase/SteamTracking/commit/155b1f799bc68f081cd6c70b9af47266d89b099d#diff-1

    1. Steve Crook

      Re: Halftruth?

      If you've not got the latest version it's still a whole truth.

      When I read the code I was wondering if I'd fallen into a wormhole and it was 1980. Again. Anyone seen a groundhog recently?

      1. Aoyagi Aichou

        Re: Halftruth?

        See, there's an "if" in that full truth. And I thought that in the IT world, it's assumed everyone has software up to date, especially when it comes to an online service's client.

    2. Teiwaz

      Truth is in the eye of the beholder

      Just checked my Steam version (Ubuntu 15.04, updated this fine morning): the code is still there in ~/.steam/steam.sh, line 468 of 756.

      A quick look at the fix shows the addition of an if $STEAMROOT != "" encapsulating the rm -rf command.
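
      In other words, roughly something like this (a paraphrase of the described fix, not the verbatim commit):

      if [ "$STEAMROOT" != "" ]; then
          rm -rf "$STEAMROOT/"*
      fi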

      1. e^iπ+1=0

        Crap fix

        addition of an if $STEAMROOT != "" encapsulating the rm -rf command.

        Hmm, I wonder how well this would cope with something like

        STEAMROOT=/

        or

        STEAMROOT=/.///./

        or such like.

        1. Blitterbug
          Facepalm

          Re: Hmm, I wonder how well this would cope with...

          Exactly. It's still utter rubbish until a comparison is made against permitted path strings. Some detailed logic is needed, not some numbnuts single-line comparison operation.
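
          For illustration, a sketch of that sort of detailed logic - canonicalise the path, then compare it against a whitelist (the variable name is from the bug report, but the permitted locations here are assumptions, not Valve's actual code):

          STEAMROOT="$(readlink -f -- "$STEAMROOT")" || exit 1   # collapses /, /.///./ and friends
          case "$STEAMROOT" in
              "$HOME"/.steam|"$HOME"/.local/share/Steam) ;;      # example whitelist of permitted paths
              *) echo "Refusing to delete '$STEAMROOT'" >&2; exit 1 ;;
          esac
          rm -rf -- "$STEAMROOT"/*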

  2. John Sanders
    Holmes

    If my memory serves me right...

    SteamOS is still in beta, isn't it?

    So, unfortunate as this is (someone got his drives completely wiped of all his user-accessible data), and while the person who wrote the script should know a bit better, in the end you're not supposed to run beta stuff on your production data.

    1. streaky

      Re: If my memory serves me right...

      This isn't SteamOS, it's the Linux Steam client; they aren't the same thing.

      1. John Sanders

        Re: If my memory serves me right...

        DOH! Clearly I wasn't paying attention to the article.

        What always gets my attention about the Steam client on Linux is not so much that it has lots and lots of scripts (which obviously leads to bugs like this one), it's the fact that it does not use the package system for updates, only for bootstrapping the install.

        1. Charles 9

          Re: If my memory serves me right...

          Two reasons. First, Steam has its own content distribution system, separate from any Linux package manager (and it well predates Steam on Linux, for that matter). Second, game updates can be very piecemeal, particularly when the update concerns game content rather than program code, so Valve recently updated its content system to reflect this. It reduces update package sizes most of the time: a kind consideration for people with limited bandwidth allotments.

  3. Anonymous Coward
    Anonymous Coward

    I feel the need...

    ...of a Torvalds hairdryer to that coder! Sheesh!

    1. Khaptain Silver badge

      Re: I feel the need...

      Forget the hairdryer and borrow his hammer..

      1. Anonymous Coward
        Anonymous Coward

        Re: I feel the need...

        Instructions for use:

        1. Take hammer

        2. Take a clue

        3. Place clue on head

        4. Repeatedly hit clue with hammer until the knowledge enters the skull

        Note: in the event of the skull already being full, some existing knowledge may be lost as the clue is inserted.

  4. Anonymous Coward
    Linux

    It's shit programming

    As per subject.

    Running a command like that without testing to see what $STEAMROOT actually points at beforehand is pretty irresponsible.

    I've never heard of this sort of nonsense before and certainly never done anything like it myself 8)

    As for running beta on a "production" machine: I doubt many people can afford a separate test gaming rig. The bloke even had a backup and this wiped that out too. Shame his backups weren't owned by another user - say "backup" - but even so, it's not really his fault.

    Cheers

    Jon

    1. BongoJoe

      Re: It's shit programming

      I think it's going to take some time before someone finds a better howler than that one...

      ...unless, of course, they find my source code.

    2. Bloakey1

      Re: It's shit programming

      Agreed, and what is worse in my view is that the programmer knew it.

      1. TimeMaster T

        Re: It's shit programming

        ".. the programmer knew it."

        Just because someone can create a script it does not mean they really know what they are doing.

        Last place I contracted for, the "trained and qualified programmers" were doing some really stupid $%#& in the scripts they wrote. Like deleting a directory that was being mounted by /etc/fstab onto another part of the filesystem at startup. When the system booted it would hang due to the missing mount point. Not a big deal, unless it is a headless system in an embedded environment. So about twice a week I would have to spend 40 minutes pulling the device apart to get at the CPU's VGA connector so I could fix it.

        I pointed out to the author of the script, described as a "skilled programmer", exactly what the issue was and how to fix it (I have 15 years' experience with *NIX OSs, and he knew it). He told me he had effectively zero experience with Linux and had just cut and pasted stuff from the Internet to make the script.

        He didn't change the script; his excuse was that, since they were setting up a new update manager, the directory at issue wouldn't get deleted any more, because it was being deleted by a different script during updates.

        Due to that and many other similar issues I didn't renew my contract and moved on.

      2. regadpellagru

        Re: It's shit programming

        Yep, and the timing is even worse:

        http://www.gamespot.com/articles/valve-steam-machines-will-be-front-and-center-at-g/1100-6424591/

        So basically, they're gonna unlock the Steam Machines from their stasis, running on SteamOS, which is a modified Debian.

        And they stupidly screw up in the Steam Linux client!

        I'm sure the dev's butt has already been nicely kicked.

    3. Turtle

      Re: "I've never heard of this sort of nonsense before"

      "I've never heard of this sort of nonsense before"

      I have. If memory serves, an early version of 12 Tone Systems' Cakewalk music/audio production software would, after finishing an installation, delete C:\Temp and all files in it - which was not a problem except if there was no C:\Temp folder, in which case it would delete C:\. And that *was* a problem...

      1. J.G.Harston Silver badge

        Re: "I've never heard of this sort of nonsense before"

        Back in Win3 days... I installed something on a friend's DOS/Win system. It ended by wiping %TEMP%. But... his system had %TEMP%=C:\DOS

  5. Anonymous Coward
    Anonymous Coward

    I'll bet that

    Steam was coming out of his ears.

    There was no way of him blowing off steam

    because he couldn't get up a head of steam.

    etc etc

  6. Destroy All Monsters Silver badge
    Thumb Up

    Scumbag Steve Meme goes here

    1) Writing "# Scary!" in the code

    2) Not performing any checks anyway

    ...probably because "no risk, no fun!"

    1. Steven Raith

      Re: Scumbag Steve Meme goes here

      To be fair, I'm a bit inexperienced at *nix admin - if I wrote something and thought it was 'scary', I'd probably get someone else to have a look at the code and see if it can be done in a less scary way.

      The most potentially dangerous, buggy code out there is written by people who don't think anyone knows better than them, and don't bother checking.

      Steven R

      1. Dan 55 Silver badge
        Windows

        Re: Scumbag Steve Meme goes here

        I thought one of the first things everyone learnt with the shell was that if it's an environment variable, it's just as unreliable as user input, but it looks like they skip that lesson these days.

        1. MJB7

          Re: Environment variable

          It isn't an environment variable. STEAMROOT is a shell variable (which isn't the same thing at all, although they are accessed with the same syntax). It's calculated from $0 which is another shell variable.
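
          For context, the pattern at issue looks roughly like this (reconstructed from the bug report, not quoted verbatim from steam.sh):

          STEAMROOT="$(cd "${0%/*}" && echo $PWD)"   # derived from $0; ends up empty if the cd fails
          rm -rf "$STEAMROOT/"*                      # with STEAMROOT empty, this expands to rm -rf "/"*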

    2. OliverJ
      FAIL

      Re: Scumbag Steve Meme goes here

      IANAL, but doesn't the comment provide enough legal ground for a class action lawsuit against Steam on the basis of gross negligence?

      1. Anonymous Coward
        Anonymous Coward

        Re: Scumbag Steve Meme goes here

        Let me guess... American?

        Not everything requires a lawsuit.

      2. Steven Raith

        Re: Scumbag Steve Meme goes here

        OliverJ - it's your responsibility, as the computer owner/user, to do backups and verify their integrity and recoverability.

        This goes double if it's a machine you make earnings on.

        Otherwise, every tech support shop out there would be sued out of existence for that *one* time they acci-nuke an HDD.

        Steven R

        1. MrDamage Silver badge

          Re: Scumbag Steve Meme goes here

          Accidentally nuking one drive by a repair shop is one thing, having all drives nuked by poorly written code is quite another.

          The term "fit for purpose" springs to mind, which this software clearly isn't.

          1. Steven Raith

            Re: Scumbag Steve Meme goes here

            Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.

            All code is, in some respect, shit. Backups aren't hard to do, but everyone learns that only just *after* they needed to know it...

            HTH.

            Steven R

            1. Anonymous Coward
              Anonymous Coward

              Re: Scumbag Steve Meme goes here

              Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.

              Did this user not mention that his files were deleted from a backup drive mounted elsewhere on the system?

              Ergo: he was taking backups. Then Steam's client decided to delete those too.

              I'd agree with others: gross negligence. It's not like the user was keeping his files in /tmp and there was a copy stored on an external drive. (Like some keep theirs in the "Recycle Bin" / "Trash")

        2. OliverJ

          Re: Scumbag Steve Meme goes here

          AC, Steven R - I respectfully disagree. The programmer knew he was doing it wrong ("scary"), but obviously didn't act on it. More importantly, this issue raises the question of how this code got through quality assurance in the first place.

          This takes the case from the "accidental" into the "gross negligence" domain. IT firms need to learn that they have to take responsibility for the code they dump on their customers.

          And regarding your argument about making backups - that's quite true. It's good practice to make backups, as it is good practice to wear a seat belt in your car. I do both all the time. But this does not mean that the manufacturer of my car is allowed to do sloppy quality assurance on the grounds that I am required to wear a seat belt to minimize the consequences of an accident - as GM is now learning...

    3. Vic

      Re: Scumbag Steve Meme goes here

      1) Writing "# Scary!" in the code

      You missed :-

      0) Not using the right command in the first place (e.g. readlink)

      Vic.
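
      A sketch of what that might look like, assuming a GNU userland (names illustrative):

      SCRIPT="$(readlink -f -- "$0")" || exit 1   # canonical, absolute path of the running script
      STEAMROOT="${SCRIPT%/*}"                    # its directory
      [ -d "$STEAMROOT" ] || exit 1               # bail out rather than guess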

  7. Anonymous Coward
    Anonymous Coward

    i think the moral of the story is...

    ...no kind of structural security can protect you from a program you trust going bad.

    And don't keep your backup volumes attached for longer than is necessary to run a backup.

    1. John H Woods Silver badge

      What is the best practice here?

      Apart from expressing the fact that the script writer was a total jerk - I could forgive it if it weren't so clear they realised it was dangerous and couldn't be arsed to do a 10-second google to see how to phrase it - I'd like to know what people recommend here. Removal of backup devices or media is obviously good, but what are the additional strategies to defend against executables you want to trust, but not completely?

      Back up to tar files (preserves permissions and owners), which themselves are owned by 'backup' and/or not writeable? Run such executables as a different user? Chroot them?

      1. Boothy

        Re: What is the best practice here?

        Mount backup drive

        Backup to a different username i.e. 'backup'

        Unmount backup drive

        Don't mount the drive again till you need it.

        Preferably, physically remove the drive until needed, or use a separate NAS.
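
        A minimal sketch of that flow (device, mount point and user names are illustrative):

        sudo mount /dev/sdb1 /mnt/backup
        sudo rsync -a --delete "$HOME"/ "/mnt/backup/$USER/"
        sudo chown -R backup:backup "/mnt/backup/$USER"   # files end up owned by 'backup', not your login
        sudo umount /mnt/backup                           # drive is only attached while the backup runs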

        1. This post has been deleted by its author

          1. Peter Gathercole Silver badge
            Linux

            Re: What is the best practice here?

            Traditionally, in the UNIX world where you normally have more than one user on the system, you backup the system as root. Tools like tar, cpio and pax then record the ownership and permissions as they create the backup, and put them back when restoring files as root. This also allowed filesystems to be mounted and unmounted in the days before mechanisms to allow user-mounts were created.

            The problem is that too many people do not understand the inherent multi-user nature of UNIX-like operating systems, and use them like PCs (as in single-user personal computers). To my horror, this includes many of the people developing applications and even distro maintainers!

            There is nothing in UNIX or Linux that will prevent a process from damaging files owned by the user executing the process. But that is not too different from any common OS unless you take extraordinary measures (like carefully crafted ACLs). But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
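
            As a concrete example of the traditional approach (run as root; device and paths are illustrative):

            mount /dev/sdb1 /mnt/backup
            tar -cpf /mnt/backup/home-$(date +%F).tar --numeric-owner -C / home   # ownership and modes recorded
            umount /mnt/backup
            # restore later, also as root:  tar -xpf /mnt/backup/home-DATE.tar --numeric-owner -C /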

            1. DropBear

              Re: What is the best practice here?

              But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.

              Not much of a consolation these days when a desktop Linux is likely used as a single-user machine where all the valued bits likely belong to said user, while the system itself could probably be reinstalled fairly easily...

              Anyway, I know one guy who'll be doing all his Steam gaming on Linux with a separate user that isn't even allowed to flush the toilet on the system...

            2. John Doe 6

              Re: What is the best practice here?

              Best practice is to check what $STEAMROOT is and whether it is sane:

              change to $STEAMROOT

              remove files from $STEAMROOT

              If $STEAMROOT is not sane (/ or ~), throw an error telling the user that $STEAMROOT can't be located.

              This IS NOT rocket science; almost everybody has been doing this for 30 years on UNIX.
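
              A sketch of that sequence in shell (the sanity list here is only an example):

              case "$STEAMROOT" in
                  ""|"/"|"$HOME") echo "$0: cannot locate a sane STEAMROOT, aborting" >&2; exit 1 ;;
              esac
              cd "$STEAMROOT" || exit 1
              rm -rf -- ./*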

            3. Anonymous Coward
              Anonymous Coward

              Re: What is the best practice here?

              But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.

              You think so?

              A year or so ago our sysadmins started getting calls from users (this in an office with 100+ Unix developers) about missing files. The calls quickly turned into a queue outside the admin's offices. Running a "df" on the main home directory server showed the free space rapidly climbing...

              Some hasty network analysis eventually led to a test system running a QA test script with pretty much the bug described here. It was running as a test user, so could only delete the files that had suitable "other" permissions, but it was starting at a root that encompassed the whole NFS-automounted user home directory tree. The script was effectively working its way through the users in alphabetical order, deleting every world-accessible file in each user's home directory tree.

              Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done. Fortunately our admins are competent at keeping offline backups.

              1. Peter Gathercole Silver badge
                FAIL

                Re: What is the best practice here? @AC

                And the problem here is typified by your statement 'could only delete the files that had suitable "other" permissions'.

                Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".

                With regard to running the script as root. You're not that familiar with NFS are you?

                If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root. Anybody who sets up a test server to have root permissions over any mounted production filesystem deserves every problem that they get!

                There are people who have been using NFS in enterprise environments for in excess of a quarter of a century. Do you not think that these problems have been addressed before now?
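
                For reference, a sketch of the relevant export line (network and path are illustrative; root_squash is the usual default):

                # /etc/exports - remote root is mapped to nobody; never use no_root_squash on production data
                /export/home  192.0.2.0/24(rw,sync,root_squash)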

                1. Anonymous Coward
                  Anonymous Coward

                  Re: What is the best practice here? @AC

                  Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".

                  They're not my users, they (and I) are, for the most part, senior developers who are well aware of how umask works, and may (or may not) choose to share files.

                  With regard to running the script as root. You're not that familiar with NFS are you?

                  I am, as it happens. At the kernel level.

                  If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root.

                  And that is, of course, exactly how our systems are configured.

                  It is also why I said that running the script as root would have been less serious, since not only would it have been potentially less serious for the NFS-mounted files, it would have permitted the test server to destroy itself fairly quickly as it wiped out /usr and /etc. Instead the faulty script (running as a QA user) didn't destroy the critical system files, it only destroyed those files that people had left accessible. The server remained up.

                  There are people who have been using NFS in enterprise environments for in excess of a quarter of a century

                  True. I'm one of them.

                  Do you not think that these problems have been addressed before now?

                  Indeed they have, and fortunately by people who read and understood the problem before making comments.

                  1. Peter Gathercole Silver badge

                    Re: What is the best practice here? @AC

                    I stand by every word I said. I do not think that your post is as clear as you think it is.

                    You cannot protect against stupidity, and setting world write on both the files and the directories (necessary to delete a file) is something you only do if you can accept the scenario you outlined. Just because you have "experienced" developers does not mean that they don't follow bad practice ("developers" often play fast and loose with both good practice and security, claiming that both "get in the way" of being productive). And giving world write permissions to files and directories is in almost all cases overkill. Restrict access by group if you want to share files, and give all the users appropriate group membership. It's been good practice for decades.

                    You did say "Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done", but this is probably not true. You did not actually point out that root would not traverse the mount point of the NFS mounted files, but you did say "starting at a root that encompassed the whole NFS-automounted user home directory", implying that it was not the root directory of the system that was being deleted, but just the NFS mounted filesystems.

                    From personal experience, I have actually seen UNIX systems continue to run damaging processes even after significant parts of their filesystems have been deleted. This is especially true if the command that is doing the damage is running as a monolithic process (like being written in a compiled language or an inclusive interpreted one like Perl, Python or many others) and using direct calls to the OS rather than calling the external utilities with "system".

                    Many sites have home directories mounted somewhere under /home, so if it were doing an ftw (file tree walk) in collating-sequence order from the system root, it would come across and traverse /home before it reached /usr (the most likely place for missing files to affect a system), so even if it did run from the system root, enough of the system would continue to run whilst /home was traversed. Not so safe.

      2. JulieM Silver badge

        Re: What is the best practice here?

        Run all executables which are supplied without Source Code in a chroot environment which is on its own disk partition. Such a location is secure against a program misbehaving, because no file anywhere else on the filesystem can be linked into or out of it. Hard links cannot transcend disk partitions, and symbolic links cannot transcend chroot.
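
        A rough sketch of setting such a jail up (run as root; /srv/jail and the binary name are assumptions, with /srv/jail taken to sit on its own partition):

        mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64
        cp /opt/blob/program /srv/jail/bin/
        # copy the shared libraries the binary needs, preserving their paths
        ldd /opt/blob/program | awk '$(NF-1) ~ /^\// {print $(NF-1)}' | xargs -I{} cp --parents {} /srv/jail
        chroot /srv/jail /bin/program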

        1. Destroy All Monsters Silver badge
          Holmes

          Re: What is the best practice here?

          Run all executables which are supplied without Source Code in a chroot environment

          Whether you have the source code is only relevant if your theorem prover can prove that shit won't hit the fan (however defined) when the program corresponding to said source is run. This is unlikely to be in the realm of attainable feats even in best conditions and even if said open sourced program has been written in Mercury.

    2. waldo kitty
      Megaphone

      Re: i think the moral of the story is...

      And don't keep your backup volumes attached for longer than is necessary to run a backup.

      exactly! plug the media in or otherwise make the connection to it available first, then do the backup, and finally disconnect from that media... and no, rsync won't save you either - it can kill ya when it sees that the files it should be keeping updated are gone, and removes them from the remote...
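
      On the rsync point: --delete mirrors deletions too, so a dry run first is cheap insurance (paths illustrative):

      rsync -a --delete --dry-run -v "$HOME"/ /mnt/backup/home/   # shows what WOULD be deleted, changes nothing
      rsync -a --delete "$HOME"/ /mnt/backup/home/                # run this only once the preview looks sane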

  8. Anonymous Coward
    Thumb Down

    Sheesh

    If you're running Steam on Linux, it's probably best to make sure you have your files backed up and avoid moving your Steam directory, even if you symlink to the new location, for the time being.

    Better advice might be to hold off on using Steam until the programmer responsible has been hunted down and re-educated.

    1. fearnothing

      Re: Sheesh

      In that context, 're-educated' has a deliciously violent undertone to it. Room 101 anyone?

  9. jrd

    I once wrote a shellscript which, during testing, removed *itself* due to a malformed 'rm' command. I had to rewrite the script from scratch but at least I learned a valuable lesson!

    1. Number6

      At least you were testing it.

    2. mhoulden

      The Adobe Acrobat Reader installer does that as well. Incredibly annoying if there's a chance you might need to reinstall it or run it on another machine later.

      1. lurker

        In all honesty, I consider anything which deletes Adobe Reader from my PC to be a good thing, it's just a shame it stops at only removing the installer.

  10. Anonymous Coward
    Anonymous Coward

    Classic

    Well, what can one say.

    I ALWAYS sit on my hands for a few minutes before I press <enter> when using rm -rf (especially when root), but years ago I found that rm -rfi is the GOOD way to do it... you get prompted on each deletion so you can do a sort of test first.

    rm -rfi

    1. John Brown (no body) Silver badge

      Re: Classic

      rm -rfi

      It's fairly common practice to alias that in your local shell startup config, e.g. alias rm='rm -i', so you get -i on all invocations of rm.

      1. Anonymous Coward
        Anonymous Coward

        Re: Classic

        "It's fairly common practice to alias that in your local shell startup config eg alias rm='rm -i' so you get -i on all invocations of rm."

        Yep, agreed, and I have that in all my .bashrc files, but 'rm -rf' ignores it. I guess the '-f' takes effect after the alias's '-i' (we end up with 'rm -i -rf'), so the prompt is suppressed... I guess.

        Ideas?
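
        For what it's worth, you can watch the expansion happen; with GNU rm, when -i and -f are both given, the last one wins:

        $ type rm
        rm is aliased to `rm -i'
        $ rm -rf scratch/    # expands to: rm -i -rf scratch/  -> the later -f overrides the earlier -i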

        1. Ben Tasker
          Joke

          Re: Classic

          Do a 'safe' test run first

          ssh root@someonelesesbox "rm -rf $DIRECTORY"

          The best advice, really, is to always carefully think about what you're actually running (not what you think you're running), though mistakes can happen.

          1. Flocke Kroes Silver badge

            @someonelse

            someonelesesbox has a misconfigured ssh server. Take a look in /etc/ssh/sshd_config for:

            #PermitRootLogin yes

            and change it to:

            PermitRootLogin no

            If you need root access on a remote machine, log in as an ordinary user, then use su. Reading the whole of man sshd_config would be a good idea too.

            Noddy's guide to shell scripts:

            People who do not know any better start bash scripts with:

            #! /bin/bash

            A better choice is:

            #! /bin/bash -e

            The -e means exit if any command exits with non-zero status. Now when something goes wrong, the shell script will not plough on regardless, doing stupid things because of the earlier failure. There is also some chance that the last text sent to stderr will contain something useful for diagnosing the fault. With the -e switch, today's disaster can be caught with:

            [ -n "$VARIABLE" ]

            A better choice would be:

            if [ -z "$VARIABLE" ]; then
                echo "$0: Environment variable VARIABLE is empty" >&2
                exit 1
            fi

            Finally, if you do not know how bash will expand something, ask it with echo:

            echo $(type -p rm) -rf ~

            1. PJI

              Re: @someonelse

              And never use bash or csh for scripts. Use Bourne or POSIX sh or ksh, very carefully.

              Never ever rely on the PATH variable. Put explicit command paths in variables and use them.

              Use the ...&&...||... syntax frequently to check for existence and return status.

              Check you understand scope (in which bash is fundamentally broken) and conform to POSIX for portability.

              Make sure you have not imported aliases, e.g. by sourcing your .profile. Do not source .profile or similar.

              Test all code and then get someone else to review and test.

              Do not cut and paste from the internet. Learn and understand how, or leave alone.

              It's never "just a script". A script is a program and must be designed, implemented and documented as such, with review and testing, if for anyone other than some private task.

              If the would-be author can not write correctly in his native language, why should he be any better in an artificial one?
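
              A small sketch in that spirit - explicit command paths plus &&/|| status checks (the paths and target directory are illustrative):

              #!/bin/sh
              RM=/bin/rm                     # do not trust PATH for destructive commands
              TARGET=/var/tmp/myapp-cache    # hypothetical directory to clean
              [ -d "$TARGET" ] || { echo "$0: $TARGET missing" >&2; exit 1; }
              "$RM" -rf -- "$TARGET"/* && echo "cleaned $TARGET" || echo "$0: cleanup failed" >&2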

              1. Havin_it
                IT Angle

                Re: @someonelse

                Just wondering on your first point: is there any way to make the shebang discriminate in favour of "true" /bin/sh, as opposed to (as is usually the case these days) a symlink to /bin/bash? Does bash have an "act like plain sh" switch, for example? Or must a portable script simply include this test explicitly?

                Further, how to ensure no aliases are in effect?

                Lastly, is scope actually OK in sh? Would be interested to hear how shells differ here.

              2. Solmyr ibn Wali Barad

                @PJI

                "And never use bash or csh for scripts. Use Bourne or posix sh or ksh, very careful"

                Footgun alert - using b/k/sh is not a universal panacea. Can't find an online source at the moment, but I distinctly remember a major comment fail like this:

                /* rm -rf * */

                which was interpreted by ksh as "SELECT /*" followed by "rm -rf". Ouch.

        2. Rusty 1
          Stop

          Re: Classic

          Ideas? Yes. Just don't depend on the alias rm='rm -i'. If you can't use rm safely, you should probably step away from the keyboard altogether.

          What happens when you run rm as a different user or on a different system, that doesn't have the alias crutch?

          I'll bet you even do "yes | rm *" to remove lots of files without having to ack each one, don't you?

  11. arctic_haze
    Facepalm

    Two lessons from this

    One is something I knew for decades: Never run anything as the root user unless you absolutely need to.

    The second one is new to me. Use a different account for backups and do not give your primary account write permission on the backup volumes.

    1. Anonymous Coward
      Anonymous Coward

      Re: Two lessons from this

      "The second one is new to me. Use a different account for backups and do not give your primary account writing permission on the backup volumes."

      Yes, but for the life of me, why on earth are the back-ups on the same system (and even if they're not, why even mounted)?

      Mount back-up drive - do back-ups - dismount back-up drive

      Easy.

      Why leave the thing there all the time?

      1. Anonymous Coward
        Anonymous Coward

        Re: Two lessons from this

        What if it happens during the backup? At that point the backup device must be mounted, and the backup process would probably be running with the user's access - read access to files and traverse access to directories - before the point where the backup's ownership is changed.

        1. Anonymous Coward
          Coat

          Re: Two lessons from this

          Then have a back-up of your back-up

          Whoa, sounds a bit like Michael Barrymore...

          1. 404

            Re: Two lessons from this

            +1

            As a fan of Robert Heinlein/Isaac Asimov, I use 'Tell Me Three Times'... 3 backups, different locations, two with air gaps that require humans to plug/unplug at least once a week minimum.

            Does that make me anal, or just burned too many times by bad single backups? (Remember tape in the '90s? Argh.) I also sandbox the wife's FB laptop, lol...

            #paranoid

            1. Jay 2

              Re: Two lessons from this

              Data doesn't exist unless it's in at least two places! Some of mine is in four different places, one of which is in a different building. Given my day job as a (paranoid) sysadmin, it would look a bit silly if I were to lose any data.

              Backups, backups, backups etc.

        2. petur

          Re: Two lessons from this

          Running backups while you're busy installing stuff is pretty pointless... My backups always run with apps closed and backup destination only present during the backup.

      2. arctic_haze

        Re: Two lessons from this

        So you back up only one computer? I have a Linux box which is always on, which I synchronize all my devices with. Yes, I could remotely unmount the volumes between backups, but I feel this is too much bother. Anyway, if the account doing the backups is allowed to mount them, where's the additional security?

        The problem with my setup is that I sometimes use the box to, for example, do a quick Internet search. Those are the moments when I am vulnerable. That's why I'll probably change the account owning the backup volumes and give my primary account only read access (I sometimes need the backed-up files, after all). What's so wrong with that to vote it down?

  12. pop_corn

    Hoho, just last year I did the exact same thing on a windows pc in powershell. Fortunately during my own testing I managed to delete a couple of GBs of my own files before realising what was happening and killing it in the face.

    Backups saved the day, and I pretty quickly wrapped it in the same kind of IF they've used.

    1. Anonymous Coward
      Headmaster

      "windows pc in powershell. Fortunately during my own testing I managed to delete a couple of GBs of my own files"

      At least you only lost two MS Office one page documents and not the whole system.

  13. Infernoz Bronze badge
    Facepalm

    This is why Linux Containers, FreeBSD Jails, and Solaris Zones exist!

    Container technology should not just be for servers, but for most 'user' apps too, so that they can never do frack ups like this, even if they try!

    Anything risky should be sand-boxed; it is not OK to install/run all applications in user accounts, let alone root.

    I run my server software in isolated environments precisely because of this huge potential data loss risk.
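
    On Linux, one lightweight way to get part of the way there is to give the risky app its own private home directory, e.g. with firejail (a sketch; assumes firejail is installed and the directory exists):

    firejail --private="$HOME/steam-jail" steam   # the app sees steam-jail as home; your real files stay out of reach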

  14. Rufus McDufus

    I've not fallen for the old unset/empty-variable problem with rm, but back in the nineties, on an early Solaris or possibly SunOS 4.x box, I wanted to remove some hidden directories so executed something like 'rm -rf \.*'. Whether due to a bug or my misunderstanding, it also started removing everything in '..' and continued going upwards to the root directory and traversing into other subdirectories. I wasn't popular. And it was my job to put it right - which I did!

    1. Anonymous Coward
      Anonymous Coward

      Funnily enough, last year on my new slackware install, I added a few lines in /etc/rc.d/rc.0 to clear out /tmp on shutdown.

      echo "Clearing out /tmp/*."

      if [ -d /tmp ]; then

      rm -rf /tmp/*

      fi

      That worked fine until about three weeks ago, when I did an ls -lsa in /tmp and found loads of [dot]files that didn't get removed.

      So I changed the rc.0 script to this:

      echo "Clearing out /tmp/*."

      if [ -d /tmp ]; then

      rm -rf /tmp/.*

      fi

      ARGH - luckily when shutting down I saw a 'can't delete parent directory' message (saved, phew).

      Research revealed I needed:

      echo "Clearing out /tmp/*."

      if [ -d /tmp ]; then

      rm -rf /tmp/.??*

      fi
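
      One caveat with .??* - it misses two-character names such as '.x'. A commonly used belt-and-braces pattern set is:

      rm -rf /tmp/* /tmp/.[!.]* /tmp/..?*   # everything except . and .. themselves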

      1. Ken Hagan Gold badge

        "Research revealed I needed: [...] rm -rf /tmp/.??*"

        Thanks. I'll bear it in mind.

        However, is there a sane use-case for the rm command accepting ".."? (For that matter, accepting any path that is either the current working directory or one of its parents would seem to me to be overwhelmingly likely to be a pilot error rather than a really clever piece of scripting.)

        1. Anonymous Coward
          Anonymous Coward

          Well, unix is unix. rm is just a hammer, and whatever you hit with it gets hit. It does what you tell it to do. Don't use a hammer if you need a screwdriver (nicknamed 'the London hammer').

          Also remember that a lot of shell commands are actual characters - '[' (test) and '.' (source) being two - quite apart from the directory entries '.' and '..'

          Easy to mess up.

          1. Hans 1
            Boffin

            >Well, unix is unix. rm is just a hammer, and whatever you hit with it gets hit.

            Watch out you must, young padawan, for confused Linux with UNIX you have.

            Try "rm -rf /" on Solaris, or look it up.

          2. Adze

            No it's not... Birmingham has a screwdriver though, several in fact, which largely explains why Land Rover shut lines aren't pebble tight, let alone water tight.

        2. Kanhef

          There might be a use for upward directory traversal for specifying files to delete (e.g., "rm ../*.c"), though it would probably be safer to navigate upward first, then call rm. The -r option definitely shouldn't attempt to process . or ..; on the Mac/BSD implementation, it throws an error to even try "rm -r ..", but it will accept "rm -r ../*".

  15. Stevie

    Bah!

    [Insert generic Linux expert "This is the user's fault" snark].

  16. MatsSvensson

    Tsk...

    Our bathtub-shelf-toasters work just fine and are perfectly safe, as long as the user just RTFM.

    And should a rare accident occur, it's just normal natural selection doing its thing.

    /Your friendly Linux developer

  17. raving angry loony

    released code?

    There's no fucking excuse for that kind of code. If you're going to do rm -rf, then you'd bloody well better check the arguments. Yes, I've made that kind of mistake. When I was a newbie and not allowed anywhere near production code.

  18. Anonymous Coward
    Anonymous Coward

    Do you trust your software?

    First off, I'm surprised that even made it into the wild, but it raises the question: do you trust your software?

  19. Law

    script writer should have checked.... but

    Where the hell are the code reviews at Steam?! Should have been caught by at least one review, especially with the comment.

    1. Vic

      Re: script writer should have checked.... but

      Should have been caught by at least one review, especially with the comment.

      I was working somewhere last year where a near-identical piece of code was submitted for review.

      Of course, every reviewer screamed at it. But the coder complained to his management that he was getting a hard time, and management then *insisted* that the code be used. So it went live.

      That particular code didn't go to external customers, but it will cause the system to melt down eventually...

      Vic.

  20. Werner McGoole

    Handy hint no. 37

    If you're going to do:

    rm -rf $somewhere/*

    then using "set -u" beforehand might save you a lot of bother in the event that $somewhere didn't get defined or you mis-typed it.

    1. PJI

      Re: Handy hint no. 37

      I forgot that one. set -u can be very useful, especially when you forget to define $LS and the wildcard evaluates to rm or some such. :)

  21. Anonymous Coward
    Anonymous Coward

    Recommended reading

    www.thedailywtf.com for horror stories in coding.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Recommended reading

      Yes, something like that for the Reg could be fun. There's not much point doing a straight-up carbon copy of thedailywtf but I'd like to do something similar. Answers on a postcard, etc.

      C.

  22. Anonymous Coward
    Anonymous Coward

    Achievement unlocked: The Scientist!

    Experiment with things you don't fully understand; then watch the devastation unfold!

    ("#scary!" shows that even though they were aware of the risks, they couldn't be arsed to deal with it. Whoever introduced that bit of code should be fired for their attitude and incompetence.)

    1. Ken Hagan Gold badge

      Re: Achievement unlocked: The Scientist!

      I doubt that an employment tribunal would reckon you had reached the required standard of proof there. "#scary!" is a comment and therefore non-executable. It proves nothing except that the author has a different sense of humour from you.

      Legend has it there was once a comment in the UNIX kernel that said "You are not expected to understand this.". See http://cm.bell-labs.com/who/dmr/odd.html for an explanation by one of the authors. Would you sack him?

      1. Vic

        Re: Achievement unlocked: The Scientist!

        Legend has it there was once a comment in the UNIX kernel that said "You are not expected to understand this."

        I once left the following comment in code I wrote for a customer :-

        # This next bit is evil. Look away now.

        Vic.

        [ It was evil - it was a nasty perl bless to sort out an error in CGI.pm, which doesn't return filehandles from multi-part uploads in the way the docs say it should. ]

  23. Anonymous Coward
    Anonymous Coward

    If Valve is like every other large company out there...

    Then there will be tiers and tiers of management, all 'adding value', a bloated and useless HR department that somehow just keeps on hiring regardless of the bottom line, an infinitely large Corporate Communications team issuing endless trite and patronising motivational messages, and about 3 developers actually doing the work on the product. Things like this are bound to happen. The miracle is that they don't happen more often.

    1. Anonymous Coward
      Anonymous Coward

      Re: If Valve is like every other large company out there...

      Amusingly, this is the absolute opposite of Valve's stated management structure- they claim that there isn't one. Nearly everyone writes software and they have some very bright people too. Google their "staff handbook" and it explains this in some detail.

      It's also worth reading between the lines of Rich Geldeich's blog (a recently departed employee) as that hints at what replaces that lack of management structure in practice: "pretty much everyone is constantly trying to climb the stack rank ladders to get a good bonus, and everyone is trying to protect their perceived turf. Some particularly nasty devs will do everything they can to lead you down blind alleys, or just give you bad information or bogus feedback, to prevent you from doing something that could make you look good (or make something they claimed previously be perceived by the group as wrong or boneheaded)".

      AC because when you work in an industry with a monopoly...

  24. Curtis

    linux n00b

    well, as a total n00b who's just now trying to understand Kubuntu and Kali (thanks, Windows 8, for the motivation!), I can take some powerful lessons from this.

    1) don't play with rm -rf (also learned from the BOFH)

    2) don't map my backups if I'm not actively backing up

    3) don't trust Steam.

    1. Hans 1
      Linux

      Re: linux n00b

      I agree with 1, somewhat with 2 (I use rsync, no mounts), but 3? Shit happens, bugs happen... wait for Steam to mature a bit before you trust it, I would say.

      In Linux/FOSS you have bleeding edge and stable; bleeding edge has more goodies, yet shit can happen that YOU are expected to fix. Stable is, as its name implies, pretty stable. Choose bleeding edge, learn a whack in no time and have fun.

      It is like skiing: take the red or black slope on your first day and you learn how to ski in no time... follow everybody else down the baby slopes and you'll never learn.

      ;-)

  25. jammydodger

    Thankfully it's Linux

    Not sure if it's been mentioned further up the tree, but "rm -rf /*" will do nothing if you're logged in as a user other than root; if you're not - and you are logging in as root to play games - it serves you right.

    Mind you, it's still shabby programming.

  26. Anonymous Coward
    Anonymous Coward

    Safe-rm

    This made me sudo apt-get install safe-rm :)

  27. naive

    Don't use rm directly

    In case of mass removal of files in scripts, it is safer to make a habit of using find:

    find ${start} -type f -user ${user} -name <reg-exp> -print | xargs rm -f

    Not 100% watertight, but at least one has to think about what to fill in for find to limit the scope of the removal, which is a bit safer than just assuming a variable, or the current working directory, has a certain value. Adding the -user parameter may limit the damage in case of accidents if use of root privileges can be avoided, and so does -name.

    1. Vic

      Re: Don't use rm directly

      find ${start} -type f -user ${user} -name <reg-exp> -print | xargs rm -f

      Don't do that!

      You will get unexpected results if you have filenames with spaces in them - potentially leading to a name clash that will wipe out the wrong directory.

      If you must use such things, use the -print0 flag, and supply the -0 arg to xargs.

      Vic.
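
      i.e. something along these lines (GNU find/xargs; the pattern variable is illustrative):

      find "${start}" -type f -user "${user}" -name "${pattern}" -print0 | xargs -0 rm -f --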

  28. Rick Giles
    Pirate

    Yes, but...

    Is there a corresponding problem for Windows? If not, can we make one? :D

    Oh, wait... there is one. It's called Windows...

  29. batfastad

    Bumblebee

    Reminds me of this epic thread on a commit... https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commit/a047be85247755cdbe0acce6f1dafc8beb84f2ac
