...I had expected better from my alma mater.
University College London is tonight tackling a serious ransomware outbreak that has scrambled academics' files. It is feared the software nasty may be exploiting a zero-day vulnerability, or is a previously unseen strain of malware as antivirus defenses did not spot it in time, we're told. Eggheads at the UK uni are urged to …
But thinking back to when I was there ('87-'90), all the academic work was done on various UNIX boxes, and the only PCs were a couple of horrible Olivettis kept up in the attic for transferring files for personal use. These were so ridden with viruses that you had to bring your own DOS floppy with you.
Remembering even more, the UNIX boxes for undergraduate use were so oversubscribed as to be unusable (mainly Sun 3/50s with no local discs), so I ended up porting my third-year project to RISC OS to get it finished and then porting it back to UNIX. They also expected the dissertation to be written in nroff, but my pre-release copy of the Impression desktop publishing software was a lot easier and produced much better results when it came out of their PostScript printers.
I know university research systems well, at many places. Once something is in, they have no defences at all.
Yup. AFAIK, most of them are still based on the "hard shell, soft centre" principle, and with email and messaging every terminal is now a potential gateway for Internet malware.
I'll give them a ring later and see if they could do with some help (and, more importantly, whether they want help, because outsiders trying to tell you how to do your job can be irritating, however well intended).
"The AV didn't catch it so it must be a 0day!"... Very funny!
Duh, obviously malware authors check their stuff against all relevant AVs before sending out a spam campaign or serving up drive-by downloads. And if there are any detections, they make sure those go away before proceeding - I'd expect there are very well-established procedures for this.
The college doesn't seem to know the difference between the terms "zero-day exploit" and "malware".
Either that or I don't.
A 0-day, as I understand it, is a vulnerability that has only just been discovered - a security hole.
That is totally different from the payload delivered.
And "Hey, click on this email - it's fun" is not a 0-day exploit; it's probably the oldest exploit of them all.
It's an email with a URL download link to a malicious JavaScript file (DON'T execute it, obviously).
I have uploaded it to Kaspersky, Microsoft and Sophos.
Title of email in my case was "Copy of K9b Form assessed by : James Eley-Gaunt"
With blurb in the email telling me to click on the link.
Mind you, I would have thought that "Copy of K9b Form assessed by : James Eley-Gaunt" would pretty much flag this as suspicious in most intelligent people's minds. Eggheads my arse.
"Mind you, I would have thought that "Copy of K9b Form assessed by : James Eley-Gaunt" would pretty much flag this as suspicious in most intelligent people's minds. Eggheads my arse."
To be fair, there doesn't appear to be any suggestion that it was actually anything to do with any "eggheads". Indeed, they specifically mention "students and staff", which means this could have been caused by literally any person on campus with an email account, including drunken teenagers and the janitorial staff. That said, the stereotype of the eccentric professor exists for good reason; just because someone is intelligent in some respects doesn't mean they're not a complete idiot in other ways. Given that many academic staff can be quite elderly and in non-technical disciplines, there's no reason to expect them to be any more competent with computers than the general population - if you struggle to explain these kinds of issues to your grandmother, why expect a 70-year-old lecturer on Abyssinian pottery to be any different?
" if you struggle to explain these kinds of issues to your grandmother, why expect a 70 year old lecturer on Abyssinian pottery to be any different?"
Usual ageism - apart from the fact that a lecturer would most likely have retired before reaching 70.
http://www.bbc.co.uk/news/business-40286280 should give you food for thought: the biggest rise in victims was in the under 21s.
I got this one several times yesterday (several different email accounts) and I've had a lot of similar ones over the last month or so, all from (apparently) compromised Australian Sharepoint Online accounts....
I didn't click on it, obviously, because I'm not a total fucking retard. All the ones I received yesterday had the same name, James Eley-Gaunt. Did all these 'eggheads' think that sounded like someone they knew?
The only solution is to ban that socialist Linux software and only allow the use of the industry standard Microsoft Windows.
Substandard trolling. Too obvious and no capital letters, spelling mistakes or swear words. Worse, it's not even funny.
That's a downvote from me.
If AV is your primary defense against this type of attack, then you've got a problem.
There will always be a lead time between the appearance of this type of attack and AV systems identifying and blocking it. That lead time is unlikely to be less than 24 hours, and probably much longer, as organizations rarely deploy daily AV updates.
It really surprises me that we have not seen more sophisticated malware, with constantly changing content and delivery vectors. I know that AV systems are adding heuristics to counter that type of threat - attempting to programmatically identify suspicious traffic - but this can lead to false positives.
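To illustrate why constantly changing content defeats pure signature matching, here's a toy sketch (entirely hypothetical, nothing like a real AV engine): a naive scanner that blocklists file hashes is beaten by changing a single byte in the payload.

```python
import hashlib

# Toy illustration only: a naive "signature" is just a hash of the file,
# so any one-byte change produces a brand-new variant that no longer matches.
SIGNATURE_DB = set()

def signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes) -> bool:
    return signature(data) in SIGNATURE_DB

original = b"MZ...pretend-malware-payload..."
SIGNATURE_DB.add(signature(original))        # "AV vendor" adds a signature

mutated = original.replace(b"payload", b"pay1oad")  # trivial repack

print(is_known_malware(original))  # True  - matched by signature
print(is_known_malware(mutated))   # False - same behaviour, new hash
```

Real engines hash sections, unpack archives and match byte patterns rather than whole-file hashes, but the arms race is the same shape.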
OS and application writers (of any flavor) should make sure that easily exploited vulnerabilities (like allowing mail attachments to execute code) are either not present (preferably) or patched very quickly, and administrators should make sure that access to data is controlled and segregated to limit the scope of any encryption attack (at this point, running your MUA in a sandbox looks good!).
Whenever I see "Avoid messages with a subject line of..." then it is clear that the malware writers just aren't really trying very hard. Fortunately. Maybe they don't have to because the attack surface is so large.
You're not offering any solutions there, just restating the eternal battle lines.
Mail attachments cannot execute code - it takes a person to get the file out of the email and run it.
Administrators *do* make sure access to data is limited.
Unfortunately, many of the organizations I've worked at recently have nearly wide-open file-shares, such that my account would have been able to damage a significant proportion of the data.
As a long-term UNIX admin, I'm used to having files locked down by individual user ID, with group permissions to allow individuals to access those extra files they need, at the appropriate access level. With some skill, it is possible to devise a model where, by default, you have minimal access and acquire additional access as and when you need it, with additional access checks along the way (think RBAC, with you having to add roles to your account as you need them).
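A minimal sketch of that "locked down by default" idea using nothing but standard POSIX permission bits (the temp file and the modes are illustrative only):

```python
import os
import stat
import tempfile

# Sketch: a file starts owner-only (0600); group read is granted
# explicitly, and only when a colleague's group genuinely needs it.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)                               # default: owner rw only
mode_before = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode_before))                             # 0o600

os.chmod(path, mode_before | stat.S_IRGRP)          # grant group read on request
mode_after = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode_after))                              # 0o640

os.remove(path)
```

The same shape scales up via groups and, where the filesystem supports them, ACLs.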
The Windows permissions model is much more flexible than UNIX's, so not using it properly to protect information is almost criminal. Too many organizations (but not all, I admit) do not use it to its fullest capabilities.
There have been several published vulnerabilities where merely displaying an HTML mail can execute code. In addition, launching an application to handle an attachment is just one click in many mail systems, especially when the actual attachment type can be obscured. Thus, building a sandbox for the mail system and the applications that handle attachments (what I was aiming at) is doable. History indicates that vulnerabilities like this have happened before, and I do not have confidence that there are not more to find. Ease of use always seems to have triumphed over security in much software.
The recent attacks appear to hinge on being able to launch client-side code without sufficient control, in an environment where the user's credentials are sufficient to do significant harm. The results suggest that sufficient care had not been taken to segregate data access, contrary to your assertion that administrators do. If it had, the results would not have been nearly as bad as reported.
IMHO, security should be paramount in this day and age, and usability should always be secondary.
UNIX != Linux, just in case you can't read. Plus, there is no one ACL system that spans all UNIX-like OSes.
What I wrote is totally true. You've just responded to a different statement, one that I did not make. The original UNIX permission model is weaker than current Windows, without any question.
Even on Linux, ACL support largely depends on the underlying filesystem, and both AppArmor and SELinux can be, and often are, disabled.
Oh, and because I am a long-term AIX system admin, I've actually been aware of filesystem ACLs since before Linux went mainstream (JFS implemented them on AIX 3.1, which was released in 1990), and RBAC since AIX 5.1 (sometime in 1999 or 2000). I've also used AFS and DCE/DFS, both of which have ACL support, and have used Kerberos to manage credentials since about 1993.
At the risk of being confrontational, when did you start using computers?
Here is an on-the-back-of-a-napkin solution for you.
Each user can only access their own files, which are stored in a small number of well defined locations (like a proper home directory).
Keep the OS completely inviolable to write access by 'normal' users. Train your system administrators to run with the least privileges they need to perform a particular piece of work.
Any shared data will be stored in additional locations, which can only be accessed when you've gained additional credentials to access just the data that is needed. Make this access read-only by default, and make write permission an additional credential. This should affect OS maintenance operations as well (admins need to gain additional credentials to alter the OS).
Force users to drop credentials when they've finished a particular piece of work.
If possible, make the files sit in a versioned filesystem, where writing a file does not overwrite the previous version.
Make sure that you have a backup system separate from normal access. Copying files to another place on the generally accessible filetree is not a backup. Make it a generational backup, keeping multiple versions over a significant time. Allow users access to recover data from the backups themselves, without compromising the backup system.
Make your MUA dumb. I mean, really dumb. Processing attachments should be under user control, not left to the system to choose the application. The interface that allows attachments to run should be secured to control what is run. Mail can be used to disseminate information, but by default it should be text only, possibly with some safe method of displaying images.
Run your browser (and anything processing HTML or other web-related code) and your MUA in a sand-box. There needs to be some work done here to allow downloaded information to be safely exported from the sandbox. Put boundary protection between the sand-box and the rest of the users own environment.
Applications should be written such that all the files needed for the application to function, including libraries should be encapsulated in a single location, and protected from ordinary users. The applications should be stored centrally, not deployed to individual workstations and run across the network with credentials used to control the ability to run the applications. The default location that users will save data to in all applications should be unique to the user (not a shared directory), although storage to another location should be allowed, provided that the access requirements are met.
Use of applications should be controlled by the additional credential system described for file access.
Distributed systems should not allow storage of local files except where temporary files are needed for performance reasons, or they are running detached from the main environment. These systems should be largely identical, and controlled by single-image deployment, possibly loaded at each start-up. This allows rapid deployment of new system images. The image should be completely immune to any change by normal users, and revert back to the saved image on reboot.
For systems running detached (remote) from the main environment, allow a local OS image to be installed. Implement a local read-only cache of the application directories which can be primed or sync'd when they are attached to home. Store any new files in a write-cache, and make it so these files will be sync'd with the proper locations when they are attached to home. Make the sync process run the files through a boundary protection system to check files as they are imported.
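The "additional credentials" steps above could be sketched roughly as follows - a purely hypothetical toy model, not any real RBAC product: a session starts with no access at all, read access to a share must be acquired as a role, and write access is a separate role that is dropped again once the piece of work is finished.

```python
# Hypothetical sketch of the credential-escalation model described above.
# A real system would re-authenticate on acquire() and time roles out.
class Session:
    def __init__(self, user: str):
        self.user = user
        self.roles: set[str] = set()     # default: minimal access

    def acquire(self, role: str) -> None:
        self.roles.add(role)             # extra access check would go here

    def drop(self, role: str) -> None:
        self.roles.discard(role)         # forced when the work is finished

    def can(self, share: str, action: str) -> bool:
        # write is an additional credential on top of read
        if action == "write":
            return {f"{share}:read", f"{share}:write"} <= self.roles
        return f"{share}:{action}" in self.roles

s = Session("alice")
print(s.can("projects", "read"))    # False - nothing by default

s.acquire("projects:read")
print(s.can("projects", "read"))    # True
print(s.can("projects", "write"))   # False - write is a separate credential

s.acquire("projects:write")
print(s.can("projects", "write"))   # True

s.drop("projects:write")            # drop credentials when done
print(s.can("projects", "write"))   # False
```

The point of the design is the default, not the mechanism: ransomware running as alice can only encrypt what alice's currently-held roles reach.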
OK, that's a 10-minute design. Implementing it on Windows would be problematic, because of all the historical crap that Windows has allowed. A Unix-like OS with a Kerberos credential system would make this model much easier to implement (I've seen the bare bones of this type of deployment on Unix-like systems already, using technologies such as diskless network boot and AFS).
Not having shared libraries would impact system maintenance a bit, because each application would be responsible for patching code that is currently shared, but because the application location is shared, each patching operation only needs to be done once, not for all workstations. OS image load at start-up means that you can deploy an image almost immediately once you're satisfied that it's correct.
Users would complain like buggery, because the environment would be awkward to use, but make it consistent and train them, and they would accept it.
BTW. How's the poetry going?
There is lots and lots of constantly changing malware - down to new variants generated every couple of minutes.
As for heuristics, they're basically as useless as signature scanning when applied at the AV/file-scanning level. The same principle applies - the malware author tests his code against the AVs and, if it's detected, follows various procedures until it's no longer detected. At most you can hope to increase the time taken to get rid of detections by making them less deterministic during testing.
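A toy example of that test-until-undetected loop against a heuristic scanner (the markers, weights, and samples below are all made up for illustration; real heuristics are far richer, but the loop is the same):

```python
# Toy heuristic scanner: score a sample on suspicious markers and flag it
# when the score crosses a threshold. The author simply keeps tweaking the
# sample until the score drops below the threshold.
SUSPICIOUS = {b"eval(": 2, b"unescape(": 2, b"ActiveXObject": 3}
THRESHOLD = 4

def score(sample: bytes) -> int:
    return sum(w for marker, w in SUSPICIOUS.items() if marker in sample)

v1 = b'eval(unescape("%75%72%6c")); new ActiveXObject("x");'
print(score(v1) >= THRESHOLD)   # True - detected, back to the workbench

# Same behaviour, markers split or renamed so nothing matches:
v2 = b'window["ev"+"al"](dec("%75%72%6c"));'
print(score(v2) >= THRESHOLD)   # False - ships in the next campaign
```

Making the scanner's verdicts less deterministic (randomised sampling, cloud-side scoring) only stretches out this loop; it doesn't break it.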
At my first uni (Warwick, '98), in order to get access to your email, you first rebooted the Windows PC in the lab into a DOS terminal emulator, which then allowed you to log in to one of the servers and run pine.
This wasn't just CompSci students - everyone had to get email like this. Very weird seeing all these 'regular' people tapping away in consoles, and suddenly being slightly in demand as the guy who can get your email back working again by typing "pine" in the shell after they quit accidentally.
If only people today would reward me for fixing IT issues with alcohol and vague promises of sexual encounters. Actually, considering the people I work with now, I'd be OK with just the alcohol.
Pfft. Youngster. When I was a Warwick student (94 graduate), there was a single room of Windows PCs in the basement of CSV. Unless you had access to the small number of Sun workstations, you read your email the same way as almost everyone else: on a VT220 (or on an ADM3e if you were in DCS).
At university and in my first job (also at a uni) it was normal for "normal" undergraduates and staff to log in via telnet* and use Pine. I suspect that if you showed them a terminal now they'd refuse to use it as it's "too hard".
*Showing my age here, wasn't it nice to not have to care about security as much
I spent far too many years of my life working on email at London's top engineering and science uni... One of the biggest problems was that we could put in all the security we wanted, but academics are the very definition of special snowflakes. We could not dictate which clients they used (we had fellows who refused to upgrade from pine to alpine, and this was 4 or 5 years after the final release of pine...). We also had the ridiculous situation where every system had to have a corresponding MX record, because academics liked to run their own mail servers (which we had 0% control over).
I'm more shocked that this hasn't happened more frequently.
You'll be happy to know that department mail servers have mainly been shut down and all UCL email now goes through outlook.com. Which is fun on the days when it doesn't work. Why it should matter which client someone uses (universities being large organisations with users with quite a variety of needs), so long as it can talk IMAP/SMTP, is less clear.
The client mattered when it came to integrating other services and maintaining support, which led to headaches in other projects (hello, archiving and stubbing of mails...). Not to mention the people complaining when Eudora wasn't formatting mails correctly or kept crashing, like it was my problem they were still using a buggy client two years after support and development stopped (the answer to which was: "they are academics, of course it was my problem!"). Not to mention the inconsistent handling of attachments, especially the way that Mac Mail dealt with inline attachments at the time... I'm just glad I no longer have to touch clients with a bargepole :)
"Client mattered when it came to integrating other services and maintaining support."
Possibly in more ways than you realise.
One of my clients (that's client as in customer) had a system where files were emailed for processing and I had a specific client configured to feed into the remainder of the processing pipeline. You could have had a similar situation where one of your users was receiving files from a remote telescope or particle physics experiment. Universities are apt to use computing to much more varied ends than a commercial business.
It could also, of course, have been the case that your users didn't trust you. I'm sure anyone on KCL who didn't trust their computer services to store data felt vindicated.
"We could not dictate to them which clients they used (we had fellows who refused to upgrade from pine to alpine and this was 4 or 5 years after the final release of pine....) We also had the ridiculous situation where every system had to have a corresponding MX record because academics liked to run to their own mail servers (which we had 0% control over)"
I wonder who develops some of the clients and servers. It could even have been some of your users.
The End User Services made a special point of sharing the pain with anyone who had survived their datapocalypse. You'd think they'd be more sensitive about things if they had, say, come across a box of backup tapes in a hidden corner of a store room during a clear-out.
Biting the hand that feeds IT © 1998–2019