Half-truth?
Apparently that was already fixed.
https://github.com/SteamDatabase/SteamTracking/commit/155b1f799bc68f081cd6c70b9af47266d89b099d#diff-1
Linux desktop gamers should know of a bug in Valve's Steam client that will, if you're not careful, delete all files on your PC belonging to your regular user account. According to a bug report filed on GitHub, moving Steam's per-user folder to another location in the filesystem, and then attempting to launch the client may …
SteamOS was still in beta, wasn't it?
So, unfortunate as this is (someone got his drives completely wiped of all his user-accessible data), and while the person who wrote the script should have known better, in the end you're not supposed to run beta stuff on your production data.
DOH!, Clearly I wasn't paying any attention to the article.
What always gets my attention about the Steam client on Linux is not so much that it has lots and lots of scripts (which obviously leads to bugs like this one), but the fact that it does not use the package system for updates; it only uses it for bootstrapping the install.
Two reasons. First, Steam has its own content distribution system, separate and apart from any Linux package manager (and it well predates Steam on Linux, for that matter). Second, game updates can be very piecemeal, particularly when the update concerns game content rather than program code, so Valve recently updated its content system to reflect this. It reduces update package sizes most of the time: a kind consideration for people with limited bandwidth allotments.
As per subject.
Running a command like that without testing to see what $STEAMCLIENT actually points at beforehand is pretty irresponsible.
I've never heard of this sort of nonsense before and certainly never done anything like it myself 8)
As for running beta on a "production" machine: I doubt many people can afford a testing gaming rig. The bloke even had a backup and this wiped that out. Shame his backups weren't owned by another user - say "backup" but even so it's not his fault really.
Cheers
Jon
".. the programmer knew it."
Just because someone can create a script it does not mean they really know what they are doing.
Last place I contracted for, the "trained and qualified programmers" were doing some really stupid $%#& in the scripts they wrote. Like deleting a directory that was being mounted by /etc/fstab onto another part of the filesystem at startup. When the system booted it would hang due to a missing mount point. Not a big deal, unless it is a headless system in an embedded environment. So about twice a week I would have to spend 40 minutes pulling the device apart to get at the CPU's VGA connector so I could fix it.
I pointed out to the author of the script, described as a "Skilled programmer", exactly what the issue was and how to fix it (I have 15 years working with *NIX OSs and he knew it). He told me he had effectively zero experience with Linux and had just cut and pasted stuff from the Internet to make the script.
He didn't change the script, his excuse was that since they were setting up some new update manager the directory at issue wouldn't get deleted anymore, since it was being deleted by a different script during updates.
Due to that and many other similar issues I didn't renew my contract and moved on.
Yep, and it is even worse timing:
http://www.gamespot.com/articles/valve-steam-machines-will-be-front-and-center-at-g/1100-6424591/
So basically, they're gonna unlock the steam machines from their stasis, running on SteamOS, which is a modified debian.
And they stupidly screw up in the Steam Linux client!
I'm sure the dev's butt has already been nicely kicked.
"I've never heard of this sort of nonsense before"
I have. If memory serves, an early version of 12 Tone Systems' Cakewalk music/audio production software would, after finishing an installation, delete C:\Temp and all files in it - which was not a problem except if there was no C:\Temp folder, in which case it would delete C:\. And that *was* a problem...
To be fair, I'm a bit inexperienced at *nix admin - if I wrote something and thought it was 'scary', I'd probably get someone else to have a look at the code and see if it can be done in a less scary way.
The most potentially dangerous, buggy code out there is written by people who don't think anyone knows better than them, and don't bother checking.
Steven R
OliverJ - it's your responsibility, as the computer owner/user to do backups and verify their veracity and recoverability.
This goes double if it's a machine you make earnings on.
Otherwise, every tech support shop out there would be sued out of existence for that *one* time they acci-nuke a HDD.
Steven R
Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.
All code is, in some respect, shit. Backups aren't hard to do, but everyone learns that only just *after* they needed to know it...
HTH.
Steven R
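For reference, the "move a folder and symlink it back" operation being discussed can be sketched like this (the function name and paths are mine, for illustration, not the reporter's actual commands):

```shell
# Sketch of moving a directory and leaving a symlink behind so that
# programs using the old path still find it. Illustrative only.
move_and_link() {
    src="$1"; dest="$2"
    # move the directory to its new home, then symlink the old path to it
    mv "$src" "$dest" && ln -s "$dest" "$src"
}
```

Something like `move_and_link ~/.local/share/Steam /mnt/bigdisk/Steam` is the kind of thing the bug reporter did; perfectly ordinary, which is why the client's reaction was so unforgivable.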
Which is why you take a backup before doing anything precarious, like moving a folder and symlinking it back, not knowing if it'll mount or be seen correctly.
Did this user not mention that his files were deleted from a backup drive mounted elsewhere on the system?
Ergo: he was taking backups. Then Steam's client decided to delete those too.
I'd agree with others: gross negligence. It's not like the user was keeping his files in /tmp and there was a copy stored on an external drive. (Like some keep theirs in the "Recycle Bin" / "Trash")
AC, Steven R - I respectfully disagree. The programmer knew he was doing it wrong ("scary"), but obviously didn't act on it. More importantly, this issue raises the question how this code got through quality assurance in the first place.
This takes the case from the "accidental" into the "gross negligence" domain. IT firms need to learn that they have to take responsibility for the code they dump on their customers.
And regarding your argument of making backups - that's quite true. It's good practice to make backups, as it is good practice to wear seat-belts in your car. I do both all the time. But this does not mean that the manufacturer of my car is allowed to do sloppy quality assurance on the ground of my requirement to wear seat belts to minimize consequences of an accident - as GM is now learning...
Apart from noting that the script writer was a total jerk - I could forgive it if it weren't so clear they realised it was dangerous and couldn't be arsed to do a 10-second google to see how to phrase it - I'd like to know what people recommend here. Removing backup devices or media is obviously good, but what are the additional strategies here to defend against executables you want to trust, but not completely?
Back up to tar files (preserves permissions and owners), which themselves are owned by 'backup' and/or not writeable? Run such executables as a different user? Chroot them?
Traditionally, in the UNIX world where you normally have more than one user on the system, you backup the system as root. Tools like tar, cpio and pax then record the ownership and permissions as they create the backup, and put them back when restoring files as root. This also allowed filesystems to be mounted and unmounted in the days before mechanisms to allow user-mounts were created.
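A minimal sketch of that traditional approach (the function names and paths are mine, for illustration): tar's -p flag records and restores permissions, and when extraction is done as root, the recorded owners come back too.

```shell
# Sketch of a traditional root-run tar backup; names/paths illustrative.
backup_tree() {
    src="$1"; archive="$2"
    # -C: store paths relative to the parent directory, so the archive
    # holds "name/..." rather than absolute paths
    tar -cpzf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
}

restore_tree() {
    archive="$1"; dest="$2"
    # -p: restore recorded permissions (and, as root, recorded owners)
    tar -xpzf "$archive" -C "$dest"
}
```

Run as root, e.g. `backup_tree /home /backup/home.tar.gz` from root's crontab, and everyone's ownership survives the round trip.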
The problem is that too many people do not understand the inherent multi-user nature of UNIX-like operating systems, and use them like PCs (as in single-user personal computers). To my horror, this includes many of the people developing applications, and even distro maintainers!
There is nothing in UNIX or Linux that will prevent a process from damaging files owned by the user executing the process. But that is not too different from any common OS unless you take extraordinary measures (like carefully crafted ACLs). But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
Not much of a consolation these days when a desktop Linux is likely used as a single-user machine where all the valued bits likely belong to said user, while the system itself could probably be reinstalled fairly easily...
Anyway, I know one guy who'll be doing all his Steam gaming on Linux with a separate user that isn't even allowed to flush the toilet on the system...
Best practice is to check what $STEAMROOT is and whether it is sane:
if $STEAMROOT is sane, change to $STEAMROOT and remove files from $STEAMROOT;
if $STEAMROOT is not sane (empty, / or ~), throw an error telling the user that $STEAMROOT can't be located.
This IS NOT rocket science; almost everybody has been doing this for 30 years on UNIX.
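A hypothetical sketch of those steps in shell - this is NOT Valve's actual code; the function name, checks and error wording are mine:

```shell
# Hypothetical sanity check before a destructive rm; not Valve's code.
steam_clean() {
    STEAMROOT="$1"

    # refuse an empty, root, or home-directory value outright
    case "$STEAMROOT" in
        ""|"/"|"$HOME")
            echo "error: STEAMROOT can't be located" >&2
            return 1
            ;;
    esac

    # refuse anything that isn't an existing directory
    if [ ! -d "$STEAMROOT" ]; then
        echo "error: STEAMROOT '$STEAMROOT' is not a directory" >&2
        return 1
    fi

    # only now is it reasonable to remove files under it
    rm -rf "$STEAMROOT/"*
}
```

A dozen lines of checking, and the worst case becomes an error message instead of an empty home directory.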
But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
You think so?
A year or so ago our sysadmins started getting calls from users (this in an office with 100+ Unix developers) about missing files. The calls quickly turned into a queue outside the admin's offices. Running a "df" on the main home directory server showed the free space rapidly climbing...
Some hasty network analysis eventually led to a test system running a QA test script with pretty much the bug described here. It was running as a test user, so could only delete the files that had suitable "other" permissions, but it was starting at a root that encompassed the whole NFS-automounted user home directory tree. The script was effectively working its way through the users in alphabetical order, deleting every world-accessible file in each user's home directory tree.
Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done. Fortunately our admins are competent at keeping offline backups.
And the problem here is typified by your statement 'could only delete the files that had suitable "other" permissions'.
Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".
With regard to running the script as root. You're not that familiar with NFS are you?
If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root. Anybody who sets up a test server to have root permissions over any mounted production filesystem deserves every problem that they get!
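For the record, root squashing is set per export on the server; a minimal /etc/exports sketch (the path and network are examples):

```
# /etc/exports on the file server -- root_squash (the default on Linux
# NFS servers) maps client uid 0 to the anonymous user; you would have
# to write no_root_squash explicitly to disable it.
/export/home  192.168.1.0/24(rw,sync,root_squash)
```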
There are people who have been using NFS in enterprise environments for in excess of quarter of a century. Do you not think that these problems have not been addressed before now?
Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".
They're not my users, they (and I) are, for the most part, senior developers who are well aware of how umask works, and may (or may not) choose to share files.
With regard to running the script as root. You're not that familiar with NFS are you?
I am, as it happens. At the kernel level.
If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root.
And that is, of course, exactly how our systems are configured.
It is also why I said that running the script as root would have been less serious, since not only would it have been potentially less serious for the NFS-mounted files, it would have permitted the test server to destroy itself fairly quickly as it wiped out /usr and /etc. Instead the faulty script (running as a QA user) didn't destroy the critical system files, it only destroyed those files that people had left accessible. The server remained up.
There are people who have been using NFS in enterprise environments for in excess of quarter of a century
True. I'm one of them.
Do you not think that these problems have not been addressed before now?
Indeed they have, and fortunately by people who read and understood the problem before making comments.
I stand by every word I said. I do not think that your post is as clear as you think it is.
You cannot protect from stupidity, and setting world write on both the files and the directories (necessary to delete a file) is something that you only do if you can accept the scenario you outlined. Just because you have "experienced" developers does not mean that they don't follow bad practice ("developers" often play fast and loose with both good practice and security, claiming that both "get in the way" of being productive). And giving world write permissions to files and directories is in almost all cases overkill. Restrict the access by group if you want to share files, and give all the users appropriate group membership. It's been good practice for decades.
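The group-based alternative, as commands (the wrapper is mine, for illustration):

```shell
# Illustrative sketch of group-based sharing instead of world access:
# give the group read/write (X grants execute only on directories and
# files that are already executable), and strip all "other" permissions.
share_with_group() {
    dir="$1"
    chmod -R g+rwX,o-rwx "$dir"
    chmod g+s "$dir"    # new files created in the dir inherit its group
}
```

In practice you would first `chgrp -R devs "$dir"` for some shared group and add the relevant users to it; then nothing is left world-writable for a runaway script to eat.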
You did say "Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done", but this is probably not true. You did not actually point out that root would not traverse the mount point of the NFS mounted files, but you did say "starting at a root that encompassed the whole NFS-automounted user home directory", implying that it was not the root directory of the system that was being deleted, but just the NFS mounted filesystems.
From personal experience, I have actually seen UNIX systems continue to run damaging processes even after significant parts of their filesystems have been deleted. This is especially true if the command that is doing the damage is running as a monolithic process (like being written in a compiled language or an inclusive interpreted one like Perl, Python or many others) and using direct calls to the OS rather than calling the external utilities with "system".
Many sites have home directories mounted somewhere under /home, so if it were doing a ftw in collating sequence order from the system root, it would come across and traverse /home before it would /usr (the most likely place for missing files to affect a system), so even if it did run from the system root, enough of the system would continue to run whilst /home was traversed. Not so safe.
Run all executables which are supplied without source code in a chroot environment which is on its own disk partition. Such a location is secure against a program misbehaving, because no file anywhere else on the filesystem can be linked into or out of it. Hard links cannot cross disk partitions, and symbolic links cannot escape a chroot.
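As a sketch, that setup might look like this (the device, mount point and binary names are illustrative, and all of it requires root):

```
# mount a dedicated partition reserved for untrusted code
mount /dev/sdb1 /untrusted
# copy in the binary (plus whatever libraries it needs)
cp ./vendor-blob /untrusted/blob
# run it with /untrusted as its root: hard links can't cross the
# partition boundary, and path lookups can't resolve outside the chroot
chroot /untrusted /blob
```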
Run all executables which are supplied without Source Code in a chroot environment
Whether you have the source code is only relevant if your theorem prover can prove that shit won't hit the fan (however defined) when the program corresponding to said source is run. This is unlikely to be in the realm of attainable feats even in best conditions and even if said open sourced program has been written in Mercury.
And don't keep your backup volumes attached for longer than is necessary to run a backup.
Exactly! Plug the media in or otherwise make the connection to it available first, then do the backup, and finally disconnect from that media... and no, rsync can also kill ya when it sees that the files it should be keeping updated are gone and removes them from the remote...
If you're running Steam on Linux, it's probably best to make sure you have your files backed up and avoid moving your Steam directory, even if you symlink to the new location, for the time being.
Better advice might be to hold off on using Steam until the programmer responsible has been hunted down and re-educated.