
Don't touch that mail! London uni fears '0-day' used to cram network with ransomware

Peter Gathercole Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Prst. V. Jeltz

Here is an on-the-back-of-a-napkin solution for you.

Each user can only access their own files, which are stored in a small number of well defined locations (like a proper home directory).
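
For instance, on a Linux box (just a sketch, user name made up), that's private home directories plus a tight default umask:

    useradd -m -s /bin/bash alice
    chmod 700 /home/alice            # nobody but alice (and root) can even list it
    # and in /etc/login.defs:  UMASK 077   so new files start out private too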

Keep the OS completely inviolate to write access by 'normal' users. Train your System Administrators to run with the least privileges they need to perform a particular piece of work.
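
sudo already gets you a long way towards least privilege for the admins; a sketch of the idea (group and command names made up):

    # /etc/sudoers.d/patching -- these admins get root only for patching,
    # not a general-purpose root shell
    %patch-admins  ALL = (root) /usr/bin/apt-get update, /usr/bin/apt-get upgrade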

Any shared data will be stored in additional locations, which can only be accessed when you've gained additional credentials to access just the data that is needed. Make this access read-only by default, and make write permission an additional credential. This should affect OS maintenance operations as well (admins need to gain additional credentials to alter the OS).
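
POSIX groups and ACLs give you the read-by-default / write-as-an-extra-credential split; a sketch (group names made up):

    # one group for readers, a smaller one for the few who may write
    install -d -m 2750 -o root -g proj-readers /srv/shared/project
    setfacl -m g:proj-writers:rwx -m d:g:proj-writers:rwx /srv/shared/project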

Force users to drop credentials when they've finished a particular piece of work.
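
Kerberos does this rather well, because the credential has a lifetime anyway and can be thrown away explicitly (principal name made up):

    kinit -l 2h alice/projdata@EXAMPLE.COM   # extra credential, dies of old age after two hours
    # ... do the work ...
    kdestroy                                 # or throw it away the moment the job is done

(In practice you'd give the extra credential its own cache, via KRB5CCNAME, so that kdestroy doesn't take the login ticket with it.)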

If possible, make the files sit in a versioned filesystem, where writing a file does not overwrite the previous version.
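
ZFS (or anything else with cheap snapshots) gets close enough if you snapshot often; a sketch assuming a 'tank/home' dataset:

    zfs set snapdir=visible tank/home              # users can browse .zfs/snapshot themselves
    zfs snapshot tank/home@$(date +%Y%m%d-%H%M)    # run this from cron as often as you like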

Make sure that you have a backup system separate from normal access. Copying files to another place on the generally accessible filetree is not a backup. Make it a generational backup, keeping multiple versions over a significant time. Allow users access to recover data from the backups themselves, without compromising the backup system.
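
The old rsync-with-hard-links trick does generational backups cheaply, and the backup tree can be exported read-only so users can fish their own files out (host and path names made up):

    # run on the backup server, which pulls from the fileserver
    today=$(date +%Y-%m-%d)
    rsync -a --link-dest=/backup/latest fileserver:/home/ /backup/$today/
    ln -sfn /backup/$today /backup/latest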

Make your MUA dumb. I mean, really dumb. Processing attachments should be under user control, not left to the system to choose the application. The interface that allows attachments to be run should be locked down to control what can be run. Mail can be used to disseminate information, but by default it should be text only, possibly with some safe method of displaying images.
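
With mutt, for instance, most of the cleverness can simply be switched off; a sketch of the relevant .muttrc lines:

    unauto_view *                                    # never render any MIME type automatically
    alternative_order text/plain text/enriched text/html
    set mailcap_path = ""                            # don't let mailcap pick an application for attachments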

Run your browser (and anything processing HTML or other web-related code) and your MUA in a sand-box. There needs to be some work done here to allow downloaded information to be safely exported from the sandbox. Put boundary protection between the sand-box and the rest of the user's own environment.
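
firejail (or bubblewrap) will do the sand-boxing part on Linux today; the checked export is the bit needing the work. A sketch, path names made up:

    firejail --private=/home/alice/sandbox firefox   # browser sees a throwaway home, not alice's real one
    # the only way out of the sandbox is an explicit, scanned copy:
    clamscan /home/alice/sandbox/Downloads/report.pdf &&
        cp /home/alice/sandbox/Downloads/report.pdf /home/alice/imports/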

Applications should be written such that all the files needed for the application to function, including libraries, are encapsulated in a single location and protected from ordinary users. The applications should be stored centrally, not deployed to individual workstations, and run across the network, with credentials used to control the ability to run them. The default location that users save data to in all applications should be unique to the user (not a shared directory), although storage to another location should be allowed, provided that the access requirements are met.
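
The plumbing for the central, read-only application store already exists; something like an NFS export mounted read-only by every workstation, with one self-contained tree per application (names made up):

    # /etc/exports on the application server
    /srv/apps    10.0.0.0/24(ro,root_squash)
    # on the share, one tree per application, e.g.
    #   /srv/apps/ledger-9.2/{bin,lib,share}   -- everything it needs, sharing nothing with other apps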

Use of applications should be controlled by the additional credential system described for file access.
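
At its crudest that's just group permissions on the application tree, with membership of the group tied to the extra credential (group name made up):

    chgrp -R ledger-users /srv/apps/ledger-9.2
    chmod -R o-rwx /srv/apps/ledger-9.2      # nobody outside ledger-users can even read it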

Distributed systems should not allow storage of local files except where temporary files are needed for performance reasons, or they are running detached from the main environment. These systems should be largely identical, and controlled by single-image deployment, possibly loaded at each start-up. This allows rapid deployment of new system images. The running system should be completely immune to any change by normal users, and revert to the saved image on reboot.
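
One way of getting the revert-on-reboot behaviour is a read-only image with a tmpfs overlay on top, so every change lands in RAM and evaporates at shutdown (paths made up; a real deployment does this from the initramfs):

    mkdir -p /mnt/ro /mnt/rw /mnt/root
    mount -o loop,ro /srv/images/workstation.img /mnt/ro
    mount -t tmpfs tmpfs /mnt/rw
    mkdir -p /mnt/rw/upper /mnt/rw/work
    mount -t overlay overlay -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/root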

For systems running detached (remote) from the main environment, allow a local OS image to be installed. Implement a local read-only cache of the application directories which can be primed or sync'd when they are attached to home. Store any new files in a write-cache, and make it so these files will be sync'd with the proper locations when they are attached to home. Make the sync process run the files through a boundary protection system to check files as they are imported.
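
The re-attach step then boils down to two rsyncs, one pulling the application cache and one pushing the write-cache through the boundary check (host and path names made up):

    # refresh the local read-only application cache from the central copy
    rsync -a --delete apps.example.com:/srv/apps/ /var/cache/apps/
    # push locally created files back, but only if they pass the scanner
    clamscan -r /var/spool/write-cache/ &&
        rsync -a /var/spool/write-cache/ files.example.com:/srv/import/alice/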

OK, that's a 10-minute design. Implementing it using Windows would be problematic, because of all of the historical crap that Windows has allowed. A Unix-like OS with a Kerberos credential system would be a much easier platform to implement this model on (I've seen the bare bones of this type of deployment on Unix-like systems already, using technologies such as diskless network boot and AFS).

Not having shared libraries would impact system maintenance a bit, because each application would be responsible for patching code that is currently shared, but because the application location is itself shared, each patching operation only needs to be done once, rather than once per workstation. OS image load at start-up means that you can deploy a new image almost immediately once you're satisfied that it's correct.

Users would complain like buggery, because the environment would be awkward to use, but make it consistent and train them, and they would accept it.

BTW. How's the poetry going?
