Procedures
"... some ransomware will quietly encrypt and decrypt data on-the-fly for months in a bid to spoil backups."
Regular sample recoveries are a good idea.
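One caveat worth spelling out: a sample recovery only proves anything if you recorded what "good" looked like before the infection. A minimal sketch of that idea (function names are mine, not from any real backup tool) is to hash files when they are known-good and compare the test-restored copy against that hash:

```python
import hashlib
from pathlib import Path

def sha256(path: str) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_restore(restored_path: str, known_good_hash: str) -> bool:
    """Compare a test-restored file against a hash recorded when the file
    was known to be good. A restore that merely 'opens fine' proves nothing
    if the live copy was already silently tampered with."""
    return sha256(restored_path) == known_good_hash
```

Of course, if the ransomware was already silently encrypting when you recorded the hashes, you've baselined the wrong thing, which is exactly the worry raised below.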
Security bod Jada Cyrus has compiled a ransomware rescue kit to help victims decrypt locked files and avoid paying off crooks. The kit sports removal tools for common ransomware variants along with guides for how to perform the necessary tasks. Cyrus recommends users not pay ransoms as doing so sustains the criminal business …
I'm not sure how a sample recovery would tell you anything. If the file is getting encrypted but remaining usable for months, how will restoring a backup change anything?
Admittedly I don't know how the malware could encrypt quietly - i.e. have the file still open normally, yet still be encrypted when it wants it to be - unless there's an active transparent decrypter running. But this would only work if the files are only accessed by the one infected machine, which is not that likely in a commercial environment.
They are - and yet can still be, in some circumstances, impractical.
For a business user? One would hope so - though all too often the backup is carried out religiously and trial restores, um, not so much. But for the domestic, Jill (or Joe) Public user?
Even if they do backups (or if their helpful IT relative set them up with a backup), how many have 'spare' systems to restore to (I'm sure I don't have to explain why a test restore onto the source machine has potential problems)? How many would know _how_ to restore? How many would know if their backups are all or nothing in restore terms, or more granular (specific files)? How many would know the difference between an absolute and a relative file path, and how to make sure a test restore _doesn't_ get copied over the current 'live' version? And that's before we get into the complexities of whether just restoring a file here or there lets you know if things still actually work in terms of a full system restore, in the absence of a second system to restore the test to.
Someone in my company managed to infect one of the less important network drives with one of these today. They're right in the middle of restoration of last night's backup.
I'm fairly surprised something like this hasn't happened sooner. Everyone here has full access to every file on almost every drive (although cross-office access is more restrictive). I've accidentally moved entire project directories into other project directories before. It wouldn't take much for someone to accidentally lean on the delete key and wipe out everything that wasn't nailed down.
Damned if you do, damned if you don't.
Amazing how many businesses are left open like this.
Lock it down and all the users do is complain how much it "stops them doing their job". Infection causes a service outage and all you hear from the users is how much it "stops them doing their job and why weren't we protected against this".
The 'test your restores often' idea is obvious, but could this not be made easier by a regular scan with a utility that can test whether your online files are encrypted and flag them up accordingly? Such a utility would need to detect and bypass any silent decryption being done by malware.
How does one detect that a file is encrypted? It is just a sequence of 1s and 0s until an application decides how to process it. Detection online just moves the problem further down the stack. Take an xlsx file as an example: it is just a zip file holding a set of XML documents and other artifacts. What makes it valid? Valid to an online scanner? Is a valid zip file header enough? If so, you can expect the encrypted XML document to be added to a valid zip file. It is a seriously hard problem to solve. Regular test restores to clean VMs are the best we have at the minute.
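To illustrate why this is hard: the obvious heuristic is to measure how random a file's bytes look, since well-encrypted data approaches 8 bits of entropy per byte. A minimal sketch (the 7.5 threshold is an arbitrary illustration, not a tuned value):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted data approaches 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Naive heuristic: flag contents that look near-random."""
    return shannon_entropy(data) > threshold
```

The catch, as the comment above says, is that compressed formats - zips, jpegs, and therefore xlsx files - also sit near 8 bits per byte, so this heuristic false-positives on exactly the files ransomware likes to target.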
Yes, "don't panic" and "remove machine from network and image it" are great tips.
What this story isn't telling us is how you decrypt your files. Or is the intention just to prevent more files from getting encrypted?
Also, what I'd like to know is: if you see files getting encrypted on shared network drives, how do you know where the infected machine is? My company's response recently was to run around unplugging the PC of anyone who was kind enough to alert IT to the issue!
"Also, what I'd like to know is: if you see files getting encrypted on shared network drives, how do you know where the infected machine is?"
What I do is to check the properties of the file for the last person that modified it, chances are that the virus is on the device that they're logged in to.
Then unplug the network and check the local drives, as they're likely encrypted too.
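That "check who last modified it" trick can be scripted. A rough POSIX sketch (on a Windows file server you would read the equivalent from the file's security properties or audit log; the function name and one-hour window are my own choices):

```python
import pwd
import time
from pathlib import Path

def recent_modifiers(share_root: str, window_secs: int = 3600):
    """Map each owning user to the files they touched recently on a share.

    On a POSIX share the stat owner is a rough stand-in for the
    'last modified by' field you'd read from file properties on a
    Windows server. Users with unusually long lists are suspects.
    """
    cutoff = time.time() - window_secs
    by_user: dict[str, list[str]] = {}
    for path in Path(share_root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        if st.st_mtime >= cutoff:
            user = pwd.getpwuid(st.st_uid).pw_name
            by_user.setdefault(user, []).append(str(path))
    return by_user
```

Whichever account shows up against the freshly scrambled files points you at the machine to go and unplug.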
All the ones I've dealt with so far have scanned the drives/folders alphabetically. Maybe a user-based quota system, with a network E: drive for all users containing some very large files of different types, so that when infected the user gets bumped over their quota limit, stopping the virus from modifying any more files?
Either that, or something that can monitor open files per user and, if too many get modified too quickly, alert the user.
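The second idea - watch the rate of change rather than the content - is easy to sketch. A toy version (the class name and the threshold of 50 are made up for illustration; a real deployment would need per-user attribution and tuning):

```python
from pathlib import Path

class ModificationRateMonitor:
    """Flag when too many files under a root change between two scans."""

    def __init__(self, root: str, max_changes: int = 50):
        self.root = Path(root)
        self.max_changes = max_changes
        self.snapshot = self._scan()

    def _scan(self) -> dict:
        # Record (mtime, size) per file as a cheap change fingerprint.
        return {
            p: (p.stat().st_mtime, p.stat().st_size)
            for p in self.root.rglob("*")
            if p.is_file()
        }

    def check(self) -> bool:
        """True if more files changed since the last scan than the threshold."""
        current = self._scan()
        changed = sum(
            1 for p, fp in current.items() if self.snapshot.get(p) != fp
        )
        self.snapshot = current
        return changed > self.max_changes
```

Ransomware rewriting a share alphabetically would blow through any sane threshold in one scan interval; a user saving the odd document would not.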
ISTM that it's time to rethink the whole architecture of applications and OS.
What I have in mind is that permissions would be based on a combination of user ID and application ID. For instance only Twitbook would be able to write to Twitbook storage. If Facegram needed to read something from Twitbook's storage it would have had to have been given permission as to what it could read, it would only be able to read from a specific user's storage and it wouldn't be allowed to write back.
A way of implementing this would be to separate applications into front-end and back-end with back-end being something along the lines of a kernel module. The actual kernel itself would have much reduced facilities; it might be able to enforce quotas but it wouldn't be able to duplicate or over-ride the back-end kernel modules' reading and writing privileges. In some respects a micro-kernel architecture would fit but any existing micro-kernel would have to be enhanced with the extended permissions.
Ideally this should prevent any rogue app getting in and over-writing everything. At worst if, for instance, a rogue managed to pass itself off as Instanter it wouldn't be able to encrypt Twitbook & Facegram data.
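The permission model being proposed - rights keyed on the combination of user and application, not the user alone - can be shown in a few lines. A toy sketch (all class, app, and user names here are made up, following the commenter's Twitbook/Facegram/Instanter examples):

```python
READ, WRITE = "read", "write"

class AppScopedACL:
    """Toy ACL where rights attach to (user, requesting app, owning app)."""

    def __init__(self):
        self.grants: dict[tuple, set] = {}

    def grant(self, user, app, store, *ops):
        """Explicitly allow `app`, acting for `user`, to do `ops` on `store`'s data."""
        self.grants.setdefault((user, app, store), set()).update(ops)

    def allowed(self, user, app, store, op) -> bool:
        # An app always has full access to its own storage...
        if app == store:
            return True
        # ...anything else needs an explicit grant.
        return op in self.grants.get((user, app, store), set())

acl = AppScopedACL()
acl.grant("alice", "Facegram", "Twitbook", READ)

assert acl.allowed("alice", "Twitbook", "Twitbook", WRITE)      # own storage
assert acl.allowed("alice", "Facegram", "Twitbook", READ)       # granted read
assert not acl.allowed("alice", "Facegram", "Twitbook", WRITE)  # no write-back
assert not acl.allowed("alice", "Instanter", "Twitbook", WRITE) # rogue app blocked
```

The last line is the ransomware case: a rogue process passing itself off as one app gets no write access to any other app's data, whatever the user's own privileges are.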
Well, in *nix space there is copy-on-write. It is probably ancient tech - NetApp had a really good version of it, and they did not invent it.
I'm using ZFS on Linux. Perfect? No. But it works. There is Btrfs, but... well, I'm not touching it for a while...
And yes, you need to back up offsite to tapes, or you really don't care about your data *enough*...
P.
"And yes, you need to back up offsite to tapes, or you really don't care about your data *enough*..."
Or at least off-site. Using tape for backups is more of a corporate approach, but many people being targeted by this malware are home users. There are plenty of free and commercial options available for regular folks, so it is still good advice.
sigh
Fond memories of the days when blank CDs cost 20c and you could fit all of your important stuff on one or two or three of them. Use write-once CD blanks (never re-writables) and every time you make a fresh backup, throw the older set into a shoe-box. When the shoe-box is full, put it in the shed and buy some new shoes. Result: an endless set of incorruptible backups, proof against anything bar fire, a maniac with a hammer, or your girlfriend having a little tidy-up.
sigh