Microsoft on Monday warned of a vulnerability in Windows applications made by third-party developers that allows remote attackers to execute malicious code on end-user PCs. The company's security team is still investigating whether any Microsoft programs are susceptible to the so-called binary planting or DLL preloading attacks …
"Microsoft's advisory repeated Moore's previous guidance that admins disable WebDAV and block outgoing SMB connections on ports 445 and 139."
Assuming the malicious DLL is somewhere outside the immediate network, surely these ports will be blocked at the LAN firewall anyway. No?
Single point security?
If you are relying on the firewall to protect you why bother with domain logins and passwords?
Security is multi-tiered for a reason.
Yes, those ports should be blocked, incoming and outgoing, everywhere. I know I do. But I also know it's not the default, not even a default, just like egress filtering too often simply isn't being done, and therefore you can't count on it.
As a sidenote: Those ports are pretty necessary to make the "Common Internet File System" work. What's "Common" and "Internet" about it? Because it's proprietary and needs to be blocked everywhere on the internet? This apparently makes sense to a certain vendor's marketeering department.
The mind boggles
"Loading dynamic libraries is basic behavior for Windows.."
And they still can't get it right!
I'm still confused as to how an application can affect the rest of the operating system in a well designed environment.
Oh, hang on, it's Windows isn't it :-S
Though the NSA told them about the problem twelve years ago .....
> And they still can't get it right!
You are right, see pages 105 ff. in http://packetstormsecurity.org/NT/audit/NSAGuidePlus.PDF
best regards, Hans Adams
An old issue since the seventies of the last century
When I am asked to execute an order, I have to limit myself to orders given by those whom I trust or have to trust (instructors, some [computer] programs, I myself?). I may never follow those who want me to commit a crime ;->
A software system running under and exposing a certain identity may only trust its own components (libs, tasks etc...) or components from other identities that the executing identity trusts.
Commonly a software system only needs its own libraries (and subtasks) and those offered by the identity "system". Why should an unidentified component be trusted?
Microsoft itself has built up a whole PKI and forces vendors to sign their components digitally, precisely to avoid execution of untrusted components. Microsoft's statement, "Hence, this issue cannot directly be addressed in Windows without breaking expected functionality. Instead, it requires developers to ensure they code secure library loads.", foils the security policy of Microsoft itself.
The dangers of executing untrusted components have been known since the early sixties of the last century; IBM and BULL published basic results around 1966. Some of these results informed the security design of the AS/400 under OS/400.
RBAC/MAC enforces a linking policy for processes / tasks running under certain and/or predefined identities. Given this failure, Microsoft cannot meet the basic requirements of the higher protection profiles.
Solaris 10 never allows linking of untrusted libraries against setuid programs or programs executed under uid 0, see http://docs.sun.com/app/docs/doc/816-5165/ld.so.1-1?l=all&a=view, keyword "security".
In summary, Microsoft did not learn its lessons from the last half century, and even foils its own security policies: once again, an epic fail.
I doubt I am the first commentard to point and laugh, but sadly this is all so predictable. While the fault is not directly due to MS in most cases, they share a lot of the blame for the architecture and generally lax security approach.
I run a w2k PC occasionally which, of course, is no longer supported, but at least I have always blocked all but essential outgoing ports (e.g. DNS, http/https, NTP) at my router, so it is not such a worry. But it is something to ponder: a long-standing vulnerability that is never going to be fixed, as with the link parsing one uncovered recently. For XP without SP3, there is no support. No doubt to the wailing and gnashing of some companies' IT folk's teeth.
Tux, since, if it matters, I could back-port (or pay someone more competent to back-port) any necessary open source fixes.
Problem affects probably 90% of applications
> The attack works because many applications ignore best security practices and search for the library based only on the file name, rather than the full directory path, the advisory said. When the current working directory is set to one controlled by the attacker, it's possible to cause a malicious file to load.
This is a bit disingenuous. This is the way Windows loads DLLs for you. So the chances are that *any* program which:
a) has dlls
b) has an associated file type
is vulnerable to this are very high. Since it's the way Windows works and has worked for years, I can see why people didn't change it. I would put the blame on MS, although they can do nothing to fix it now, unless they want to break a lot of applications.
Technically, unix is also vulnerable if you have set up your LD_LIBRARY_PATH to include the current directory before the /var/lib paths, however here you can customize it and no sane person would do this. Maybe MS could give out a patch that lets you turn off this behaviour as an option, so that people can test whether their applications still work after applying it.
As far as I can imagine scenarios, I cannot see one where an application would prefer to use a DLL in the current working directory over a DLL in the directory where its own EXE is. However programmers are inherently devious, so there will probably be lots of programs doing it anyways.
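To make the above concrete, here is a rough sketch (in Python, purely illustrative; the directory names are made up and nothing is actually loaded) of how the Windows DLL search order treats the current working directory, both in the legacy order and with SafeDllSearchMode, which has been the default since XP SP2:

```python
def dll_search_order(app_dir, system_dirs, cwd, path_dirs, safe_mode=True):
    """Return the directories searched, in order, for a DLL named without a path."""
    if safe_mode:
        # SafeDllSearchMode (default since XP SP2): cwd demoted below system dirs,
        # but still searched before PATH -- which is exactly what binary planting abuses.
        return [app_dir] + system_dirs + [cwd] + path_dirs
    # Legacy order: the attacker-controlled cwd is searched even before System32.
    return [app_dir, cwd] + system_dirs + path_dirs

# Example: double-clicking a document on a hostile share sets cwd to that share.
order = dll_search_order(
    app_dir=r"C:\Program Files\App",
    system_dirs=[r"C:\Windows\System32", r"C:\Windows"],
    cwd=r"\\attacker\share",
    path_dirs=[r"C:\Tools"],
)
```

Either way, a DLL that is not in the application directory or the system directories gets picked up from the attacker's share before PATH is ever consulted.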
Not quite true. A bug of Linux, seldom seen on UNIX (tm)
> Technically, unix is also vulnerable if you have set up your LD_LIBRARY_PATH to include the
> current directory before the /var/lib paths, however here you can customize it and no sane person
> would do this...
No! This is a bug / feature of GNU/Linux, NOT necessarily one of UNIX, as UNIX is a trademark of a specification formerly by USL. Solaris is the best known implementation of UNIX (as defined by USL), therefore one should follow the precautions Solaris has taken.
See http://docs.sun.com/app/docs/doc/816-0210/6m6nb7md6?l=all&a=view ...
A setuid-program should never follow LD_LIBRARY_PATH .....
Solaris allows applications that are regarded as not fully trustworthy to be jailed in zones. Applications requiring untrustworthy libs would end up in jails called zones. Period.
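The rule about setuid programs can be sketched in a few lines. This is a toy model of the policy (not the real runtime linker): a "secure" process, i.e. one whose effective and real uids differ, must ignore user-settable search paths outright rather than merely deprioritise them:

```python
def effective_library_path(trusted_dirs, ld_library_path, euid, ruid):
    """Sketch of the secure-process rule applied by runtime linkers such as
    Solaris ld.so.1: a setuid process must not honour LD_LIBRARY_PATH."""
    secure = euid != ruid  # ids differ at exec time => setuid-style process
    if secure:
        return list(trusted_dirs)  # user-supplied path ignored entirely
    return list(ld_library_path) + list(trusted_dirs)
```

A real linker applies further restrictions (setgid, trusted directories for auxiliary filters, and so on); the point is only that the user-controlled variable never reaches a privileged process's search path.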
100% Microsoft's fault, a transparent solution is possible!
Microsoft should have provided a trusted component service which can generate and securely store a signature for unsigned *.exe files, DLLs, and other executable components. A command line program should be provided to manually approve files, and an auto-signing facility added to standard Windows Installers; both limited to Administrator users, and supporting Anti-malware software.
When an application attempts to load a component, like a DLL, the component would first be checked to see if it was signed by an approved signer (not just Microsoft!) with a correct signature, or has a stored and matching trusted component signature; if not, the load would fail with an appropriate error for untrusted files, show an approval prompt with relevant DLL details (to shame the developers), or log an appropriate event to the Event Log.
The same Administrator access gateway, or kind of gateway, as in Windows Vista and Windows 7 could be used for this.
This should not be that hard to do, and it would make it even harder for malware to install and run; but it should not rely upon .Net, given .Net can still cause absurd problems, even in Windows 7!
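The core of what I'm proposing is small. A minimal sketch, assuming a plain dict as the trust store (a real service would keep it tamper-proof and tie it to Administrator rights): record a SHA-256 digest when a component is approved, and gate loads on a matching digest:

```python
import hashlib

def approve(store, path):
    """Administrator action: record a component's digest as trusted."""
    with open(path, "rb") as f:
        store[path] = hashlib.sha256(f.read()).hexdigest()

def is_trusted(store, path):
    """Load-time gate: only files whose current digest matches the stored
    one may load; missing or modified files are refused."""
    try:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
    except OSError:
        return False
    return store.get(path) == digest
```

Any tampering with the file after approval changes the digest, so the load fails; signed components would simply bypass this check via their existing signature.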
I call fail on your fail
I can (almost) see a zealous sysadmin spending time authorising and approving hundreds of DLLs. Once. But on tens or hundreds of servers? Or do we then have to share trust information between servers? A new attack vector?
And that's not to mention an end-user who has trouble understanding what a program is, let alone DLLs.
Any well designed OS would look in the working directory for the .dll file, fail to find it, and then say
"OI! .dll file not found".
Any program wanting to use the M$ system .dlls should tell the OS "I want mfc42.dll loaded" and the OS should get it from the system directory.
But then think of all the .dll hell stuff the PC-using world would have missed out on.
Bet some ****er finds a way for us poor linux users to load the wrong library or something.. maybe
Yes, it's mitigated by the fact that on Solaris it's not possible to make anything with elevated privileges load them, but the potential to run something which you didn't want is still there. But in any case, you will not want to change LD_LIBRARY_PATH to include the current directory. You could wreak havoc by replacing e.g. libcmdutils.so with something nasty. And since we're talking about viruses/trojans, as soon as you get execution as one user you can assume there is an exploit which allows privilege escalation.
However, the case is moot since you would never change LD_LIBRARY_PATH to include an untrusted directory or the current directory, just like the current directory is not in the PATH.
Installed the patch and broke Chrome
I installed the patch MS have provided (KB 2264107) and set the registry; so far everything is working fine, with one exception:
Google chrome (installed in per-user mode)
I can't see any use for loading a dll from the current working directory anyway.
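For anyone else trying this: as I understand the KB 2264107 update, it reads a CWDIllegalInDllSearch value under the Session Manager key to decide when the current working directory is dropped from the search. A sketch of the .reg change I mean (double-check the value semantics against the KB article before deploying; 0xFFFFFFFF is the most restrictive setting, removing the CWD from the search entirely):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
"CWDIllegalInDllSearch"=dword:ffffffff
```

Less drastic values exist (blocking only remote or WebDAV working directories), which may be kinder to things like per-user Chrome installs.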
You should be aware that...
Folks should be aware that installing the Microsoft tool mentioned in the article, which changes the way Windows searches for DLL files, will break a lot of 3rd party software, at least on Win XP systems. For example, on both of the Win XP Pro systems I experimented with the tool on, Adobe Flash could not update itself once the DLL search behaviour had been changed. I am sure many other 3rd party software packages will have the same issue.
remote exploit without any user intervention, again
"allows remote attackers to execute malicious code on end-user PCs."
And do I remember, some days ago, a writer at The Register worrying about a local privilege elevation bug in the Linux kernel?
To some people there are bugs in my systems and then there are bugs in other systems and the latter ones are the news.
A remote exploit is about the worst failure you can have. If it happens without _any_ user intervention, like here, double that.