Microsoft put font handling (including Adobe's Type Manager font driver) in the kernel from NT 4.0 onwards to gain speed, at the expense of privilege-based protection and against Dave Cutler's original microkernel plans. What could possibly go wrong?
Oh yes, this...
Get patching: Google Project Zero hacker Mateusz Jurczyk has dropped 15 remote code execution vulnerabilities, including a single devastating hack against Adobe Reader and Windows he reckons beats all exploit defences. The accomplished offensive security researcher (@j00ru) presented findings at the Recon security conference …
Ironically enough, when Office 2013 came out we got rid of Adobe Reader and used Word to open PDFs (mainly so that teachers could edit PDFs as appropriate). Only the drama and music teachers have Adobe Reader on a couple of machines, due to various locked-down PDFs for manuscripts and scores.
"Also because it is opensource, you can fix it yourself"
I am a staunch supporter of Open Source, but I have to say that arguments like this help no one, and only serve to ruin the image of Open Source in people's minds once they find out what "fix it yourself" actually involves. That argument just alienates people who would otherwise love Open Source, because they have neither the time, the skills, nor the inclination to write and/or apply patches to random pieces of software.
The counter to this is that although you (and I, I will admit) may not have the skill to fix problems like this, we do have the ability to aid someone who does, with a formal or informal contribution of either money or equipment, and it does not even have to be the developer with the Open Source software model.
I suppose that you could give Microsoft and Adobe money and ask them to do the same, but I suspect that it would disappear into the general coffers, and not significantly affect the quality of the code.
A "formal or informal contribution of either money or equipment" = payment. No matter how you try to frame it, it comes down to offering something in return for a service.
The bugs in question have been sitting in multiple versions of Windows and Adobe's software for well over a *decade*, yet it took Mateusz Jurczyk—a professional hacker—to find them. Are all programmers the world over supposed to be able to match Mr. Jurczyk's abilities as a matter of routine?
Even if I had the entire source code to Windows, OS X, and the latest spin of Gentoo sitting on my hard disk, I wouldn't have the faintest idea where to even *start* looking for backdoors and the like, let alone how to fix them if I came across one. My current expertise lies in writing tutorials about flinging sprites around a screen using C# and Unity, not in untangling the source code to OpenSSL. And I once spent 15 solid months doing nothing *but* debugging other people's code, so thanks, but no thanks; I'll leave that to hardcore masochists.
Yes, it means trusting businesses, but how is that any different to trusting random folk on the Internet? I'm not wealthy enough to have money to throw at random strangers whose CVs could be complete and utter fabrications for all I know. When it comes to programmers with security skills, performing due diligence isn't optional.
So, I can just as easily have Apple, Microsoft, etc. "fix their source code" instead, with no need for me to have access to it. Better still, I actually get free, and (usually) timely patches and updates from both companies without having to lift so much as a finger!
I believe the current fashion among young whippersnappers is to add a "W3wt!" at this point, or something equally vacuous.
You've missed the point - but to be fair, so did the OP.
Finding exploits doesn't require the source code, but fixing exploits does.
It's also much easier to fix an exploit than to find one. E.g. a use-after-free can often be patched in a line or two once someone has pinpointed it.
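To make that concrete, here's a hypothetical sketch (invented names, not any of the bugs discussed in this thread) of why a use-after-free is usually trivial to fix once located: the repair is often just reordering two lines.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example: report a message's length, then release it.
 * The buggy version called free(buf) first and then strlen(buf) on
 * the freed memory -- a classic use-after-free. The fix is simply
 * swapping the order of the two statements. */
size_t consume_message(char *buf) {
    size_t n = strlen(buf); /* read while the buffer is still live */
    free(buf);              /* the bug was doing this line first */
    return n;
}
```

Finding which call site frees the buffer too early is the hard part; the diff itself is tiny.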
Once an exploit is found, there are two scenarios:
A) Closed-source software. Only the organisation that owns the software can choose to spend the resources to fix it.
B) Open-source software. Any entity can choose to spend the resources needed to fix it.
If you depend on that software, then under (A) you can request that the owner fixes it. If they do not, then you can either stop using the software or live with the consequences of the exploit.
Under (B), you can request that the organisation that made it fixes it. If they do not, then you can arrange for somebody else to fix it.
Under (A), if the entity that owns it has lost the source code or closed down, you are done for.
Good try, but expecting amateurs to fix industrial strength cryptography code is a bit much. I understand the principles involved, but none of the maths.
The only maths needed to understand or fix Heartbleed is basic arithmetic. It's a read past the end of an array.
The hard part about Heartbleed was finding it - and even that shouldn't have been hard, if the commit had been reviewed in the first place, or if anyone was fuzzing new OpenSSL features as they were added.
Heartbleed happened because:
1) The code in question was written by a typical C programmer, i.e. one who prefers ad hoc, terse, poorly-structured code to the carefully considered and properly-designed sort. In that it matches the rest of the OpenSSL source base. I have much respect for Eric Young and Steve Henson, for their technical accomplishments and knowledge, but the fact is that their code is an ugly mess. As is most of the C I've seen (and I've been working with the language since the mid-80s).
2) The DTLS Heartbeat code wasn't properly reviewed when it was submitted. That may be partly because it was written by the author of the spec; it's probably mostly because the OpenSSL team was badly understaffed and undercompensated at the time. But this is what happens when you accept patches without thorough review.
3) Despite OpenSSL's widespread use, no one tested the feature thoroughly when it was added - at least no one interested in publishing the vulnerability. OpenSSL is widely used, but mostly because people need to tick off a "secure communications" checkbox. It's used grudgingly, not because it makes anyone's life easier. And so people don't want to test it. They just hope it works.
Once Heartbleed was announced, it was quite easy to identify the mistake, and fixing it was trivial.
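The bug class is easy to sketch. The following is a hedged, simplified illustration in C (function and parameter names are invented; this is not the actual OpenSSL code): the heartbeat handler trusted the length claimed inside the request rather than the number of payload bytes actually received.

```c
#include <string.h>

/* Simplified sketch of the Heartbleed bug class (names invented).
 * claimed_len comes from the attacker-controlled packet header;
 * actual_len is how many payload bytes really arrived. */
int build_heartbeat_response(unsigned char *out,
                             const unsigned char *payload,
                             size_t claimed_len, size_t actual_len) {
    if (claimed_len > actual_len) /* the fix: one bounds check */
        return -1;                /* discard the malformed request */
    /* Without the check above, this memcpy reads past the end of
     * the received payload, leaking adjacent heap memory back to
     * the peer in the response. */
    memcpy(out, payload, claimed_len);
    return (int)claimed_len;
}
```

The real fix in OpenSSL amounted to the same thing: compare the claimed payload length against the record actually received, and silently drop the request if they disagree.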
I have yet to see a _SINGLE_ large corporation where the "reliability and security" of a developer's code is fed back into their rating.
It is actually trivial: the source code control system can trace a particular commit to a particular person, and that could automatically feed into their current perf review regardless of how old the code in question is. In reality, it never does.
To be effective, you would also need the chain of decisions that led to that particular "problem." Typically that only gets traced when a truly catastrophic event occurs (aircraft crashes, Shuttle blows up, bridge collapses, ...). You can fire everyone over a period of time and still find the root of the problem on the management side that forced the situation in the first place.
BTW, I like including tolerances (sanity checks) even in software. If something unexpected is passed into my design, there's certainly a problem: notify the operator and make damned sure that this is really the intent. It's the same idea as not crossing a bridge under certain conditions (wind, earthquake, ...).
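A toy sketch of that kind of tolerance check, sticking with the bridge analogy (the limit and names here are invented purely for illustration):

```c
#include <stdbool.h>

#define MAX_SAFE_WIND_MS 30.0 /* hypothetical design limit, in m/s */

/* Reject out-of-range (or NaN) readings so the operator is asked
 * to confirm the input really is intended before acting on it. */
bool wind_within_tolerance(double wind_ms) {
    /* A NaN reading fails both comparisons, so it is rejected too */
    return wind_ms >= 0.0 && wind_ms <= MAX_SAFE_WIND_MS;
}
```

The point isn't the specific numbers: it's that the check rejects anything outside the envelope the design was actually validated for, rather than trusting the caller.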
Should be secure, but aren't necessarily. There's been a slew of security patches for various bitmap loader libs this year. (PNG anyone?) Much better odds than PDF though.
Open-src font libs are also potentially vulnerable to similar attacks, and the PDF readers on Linux... yeah they've got major problems too.
While X is safely outside the kernel where it should be, I think the Linux kernel still has some font handling for the console; RHEL certainly passes font arguments into the kernel by default, and I guess the driver for the graphics card needs some.
I tend to rip this out as graphics heads are well into the realm of fishes with bicycles IMHO when it comes to servers, but it seems I'm in a minority.
Not for the first time do I wish that Windows had ceased support for Adobe fonts when TrueType was introduced.
I suppose that, with the benefit of hindsight, embedding Adobe's crapware in your O/S and giving it kernel privilege always was a one-way ticket to fuckup city.
That's the problem with deciding between "increase security just a bit" and "substantially improve performance"... While the performance hit is fairly trivial nowadays, there was never a reason to change it. Users demand things be fast and pretty; they don't care about security...
MS have completely re-written all of Windows from the ground up at least twice since this bug came in and they've managed to inadvertently re-introduce the flaw on each occasion.
No they haven't. They've significantly rewritten large parts of Windows, but they haven't "completely re-written all" of it. There's still plenty of old code. It's absurd to believe that even the big Windows rearchitecting moments involved rewriting every single line of code.
That's why ATMFD.DLL still has a copyright date that starts with 1993.
And, of course, ATMFD.DLL has a copyright notice that says it belongs to Adobe. They wrote it. Microsoft just sticks it in Windows.