Could we fucking kill it already?
And rewrite from scratch. It cannot possibly become worse.
Developers behind the widely used OpenSSL encryption library have warned that they will issue fixes for a batch of bugs next Tuesday (1 March). The patches will land right in the middle of the RSA Conference, infosec marketing's version of the Super Bowl. It's understood the bugs are significant (as in, patch as soon as you can …
LibreSSL is still forced to use OpenSSL's broken-ass API for compatibility reasons. At least there is NSS (among many others), and I do believe the LibreSSL folks also released a library with a sane TLS API (libtls). Which is good for new apps, but sadly the trojan-like OpenSSL is in enough products that you will more than likely be downstream patching for years to come.
It's already happening: LibreSSL.
LibreSSL was not "written from scratch". And it has problems of its own, including bugs (e.g. CVE-2015-5333), lack of FIPS 140 validation (which makes it useless for businesses that have to sell to the US Federal government), somewhat complicated licensing, and source code that suffers from the disease known as KNF.
OpenSSL remains by far the most complete open-source SSL/TLS implementation available. Many people can get by with an alternative; others cannot. These calls to "just replace OpenSSL" are ignorant grandstanding.
"lack of FIPS 140 validation"
The NIST website shows that the vast majority of OpenSSL installations in the wild are not validated either; the validation applies to a very small set of hardware + software configurations. Judging by the short list of valid configurations, it looks like vendors paid to have specific configurations validated - is there anything stopping vendors from submitting LibreSSL for validation?
>>Many eyes, all rubbish at spotting security vulnerabilities.
>.. is used by huge amounts of servers and is at the centre of many security systems, it is in the spotlight of all the researchers
This finally answers the question of whether a big enough hairball of spaghetti code from hell (which OpenSSL is) can be patched sufficiently even with the whole internet trying. Perhaps, but you are going to be seeing critical/severe OpenSSL CVEs for years to come, and the vast majority won't be in code written in the last few years or going forward.
Do you believe most of these bugs are found by looking at the code? Vulnerability researchers have a plethora of other methods - fuzzing, binary analysis and the like - for finding vulnerable code, even in closed-source software (and there are people outside Microsoft who have access to the Windows source; you just need to be eligible for it). The only real drawback is that you have to wait for a fix - you can't fix it yourself (even if you're able to do it properly).
And these bugs had been in OpenSSL for years and years, until Heartbleed opened the can of worms...
"Many eyes, all rubbish at spotting security vulnerabilities."
Vulnerabilities were spotted, only for the OpenSSL maintainers to ignore or reject both the reports and the patches.
Now tell me, how long does it take for a multi-billion-dollar company to stop rendering fonts in ring 0?
Keep in mind this company has over 30 years of experience writing OSes and attracts the very best talent (according to the folks who work there), they were advised not to do this, and their font rendering code has yielded multiple drive-by priv escalation exploits.
"Many eyes, all rubbish at spotting security vulnerabilities."
Actually, as shown here, it works quite well, when someone does look.
Probably one of the happier fallouts of the whole NSA/Snowden affair, instead of just passively assuming that someone's looking, more are actually getting into the code looking.
(Now, what do you say about closed source software, where you can't look, and there's TLA pressure to put backdoors in?)
Really, why ?
Long after kingdom come, when all the cows have returned home, there will still be terrible bugs in this, affecting every platform, thanks to a memory allocator compatible with Win16, MPE 6, VMS 2 and probably ENIAC as well.
It needs to die, really, and be forgotten.
"due to a memory allocator compatible with Win16, MPE 6, VMS 2 and probably ENIAC as well."
The motivation for OpenSSL was to stop programmers from trying to roll their own crypto - and then they go and write their own memory allocator. Seems like they really need a big ol' dose of self-awareness...
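For the curious: the hazard with a roll-your-own allocator is that recycled buffers keep their old contents, so a later over-read can hand back stale secrets - part of what made Heartbleed so nasty. A toy Python sketch of the idea (illustrative only, not OpenSSL's actual code):

```python
# Toy freelist allocator: freed buffers go on a per-size freelist and are
# handed back uncleared, so the next owner can read the previous owner's data.
class FreelistAllocator:
    def __init__(self):
        self.freelist = {}  # size -> list of recycled bytearrays

    def alloc(self, size):
        bucket = self.freelist.get(size)
        if bucket:
            return bucket.pop()      # recycled: old contents intact
        return bytearray(size)       # fresh: zero-filled

    def free(self, buf):
        # A hardened system allocator might scrub or unmap here; this doesn't.
        self.freelist.setdefault(len(buf), []).append(buf)

alloc = FreelistAllocator()
secret = alloc.alloc(16)
secret[0:11] = b"private key"
alloc.free(secret)

reused = alloc.alloc(16)             # same buffer, never cleared
print(bytes(reused[0:11]))           # prints b'private key'
```

This is exactly the kind of thing LibreSSL ripped out: go back to the system allocator (which can scrub, poison and guard-page), instead of a private freelist that quietly defeats all of that.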
... all these tards coming here moaning about how shit it is when it was written by a bunch of guys in their spare time as a hobby thing. It's not their fault the entire web decided to use it without checking they knew what they were doing.
You think it's shit, you do better, in your spare time, for no pay. Go on, I fucking dare you.
Fair enough. Don't blow your own trumpet when you're surrounded by Miles Davis... On the other hand, don't rely on something written by people who were likely drunk at the time to secure your website.
Honestly... people rely on this for security and nobody checked the code until now? It's written by people IN THEIR SPARE TIME, for fuck's sake. Oh yeah, I just bought this amazing sports car. The guys who designed it did it in a shed in Whitstable... it hasn't actually got an MOT yet, but I'm hopeful...
I write an open source application. My code is shit and I know it but nobody relies on it for security. What makes people think that open source is somehow good? Usually it's people like me who haven't a fucking clue what they're doing. We write stuff we want because we can't get paid for doing it.
@FatGerman - What makes you think the professionals are writing anything better? Sure, the sports car from the shed in Whitstable doesn't have an MOT, but neither does the Ferrari, because there aren't any MOTs.
Writing security software is hard, like consistently getting a hole-in-one every time you hit a golf ball, in a hurricane, with a large army on the course trying to stop you.
how the OpenSSL developers kept shouting about how great their software was
Citations, please. Prior to the massive cash infusion after Heartbleed, OpenSSL had one full-time developer: Steve Henson, who is not inclined to make public pronouncements, and generally restricted his comments on openssl-users and openssl-dev to technical matters.
As I recall, Eric Young sometimes debated broader questions on the lists (I remember his weighing in on an exchange about whether OpenSSL needed a gather-write API to help apps avoid Nagle / Delayed ACK interaction), but I can't think of any occasions where he claimed the OpenSSL source code was "great".
By that logic, if I complain about (say) the sausage I bought at my local butcher, he can say "You think it's shit, you do better!" (I certainly cannot).
Given the importance of SSL I do not find it unreasonable to ask for a new attempt. Have read the comments up to here, thanks for pointing at LibreSSL.
... all these tards coming here moaning about how shit it is when it was written by a bunch of guys in their spare time as a hobby thing.
Be that as it may (yes, it was a dismal state of affairs), the project has now had money thrown at it and it still sucks. Version names like 0.9.8zg, FFS.
Still, poor design is poor design. LibreSSL wasn't forked for fun, but after a thorough code review which determined that a fresh start on a less ambitious project would be better.
the project has now had money thrown at it and it still sucks.
The post-Heartbleed OpenSSL releases have seen a wide range of less-severe issues corrected. The 1.1 branch has removed a ton of old code and greatly cleaned up the API, with extensive consultation with users (via openssl-users, and if you use OpenSSL and don't subscribe to that list, then you deserve what you get).
"still sucks" is, of course, a meaningless subjective evaluation. By various objective metrics, OpenSSL has gotten much, much better over the past two years.
Version names like 0.9.8zg FFS
That is the stupidest objection I've ever seen to a software project. And 0.9.8 is no longer being maintained, so it's also irrelevant.
How about borrowing the 'allele' concept from the biological sphere? As in, you implement multiple versions of the same functionality in differing languages and mix-and-match on the end system. That way a bug in one process on one particular machine won't lead to a potential global infestation.
If we continue your analogy, this is like one version not only not functioning correctly, but producing poison that seeps into the rest of the body. With software security, the more you have running, the bigger the chance of a serious failure happening.
As in, you implement multiple versions of the same functionality in differing languages and mix-and-match on the end system. That way a bug in one process on one particular machine won't lead to a potential global infestation.
Ironically, this was a key argument for us to stop using Windows everywhere (we call it the domino effect, and it was demonstrated during the I Love You virus attack). The only issue we have now is that our backbone is based on one type of tech, because it's better to control one type of tech well than multiple types halfway; but by using Open Standards, it no longer matters what desktops we use. Thus, the graphics people use Macs, most of operations and security use a mixture of OS X and Linux, and the admin staff still seem most comfortable with Windows (that is, W7; I think we may give them a W8.1 upgrade, but that's as far as we dare to go, because we really, really, really do not like W10, and neither do the lawyers who actually read the licenses we have to agree to).
It might work... if you've got time for a few million generations of evolution, and you don't mind a high failure rate. Remember, every individual that doesn't survive to reproduce is a failure, and in some species each individual produces millions of eggs with, on average, one surviving.
Bruce Schneier notes "Complexity is the enemy of security". He's right, as usual.
and HTTPS / SSL is a perfect example: the session key HAS to be generated by the CLIENT. when the session starts you have only half of a PGP-style secure link: the client has the public key for the server, and the server holds the corresponding private key. what this means is: the client can authenticate a message from the server, but the server cannot authenticate the client except by use of a user ID and password.
for this reason the session must start with the server sending a signed copy of its letterhead to the client. the client can authenticate this using the X.509 certificate it thinks* belongs to the server it is attempting to connect to.
if the letterhead authenticates, then the CLIENT can generate a session key, encrypt it for the SERVER and send it. it cannot be done in reverse, because the server does not have a public key for the client. end of story.
simplicity is the answer.
* x.509 certificates are printed and broadcast like losing lotto tickets. we must develop a process wherein the CLIENT has a PGP key and is able to SIGN for TRUSTED x.509 certificates. this will require the development and deployment of a KEK device: you cannot use smart phones for this; you must use a single-purpose device so that updates can be STRICTLY controlled.
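The flow described above - verify the server's public key, then have the client generate the session key and encrypt it for the server - can be sketched with textbook toy-RSA numbers (illustrative only: real TLS uses large keys, padding, and these days ephemeral Diffie-Hellman rather than RSA key transport):

```python
import secrets

# Toy RSA key pair (textbook-sized numbers; never use anything like this for real).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent (server's public key: e, n)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (held only by the server)

# CLIENT: generate a random session key and encrypt it with the server's
# public key. Only the server, holding d, can recover it.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = pow(session_key, e, n)

# SERVER: decrypt with the private key.
recovered = pow(ciphertext, d, n)
assert recovered == session_key

# The reverse direction is impossible in this scheme: the server has no
# public key for the client, so it cannot encrypt anything only the
# client could read.
print("session key agreed:", recovered == session_key)
```

That matches the commenter's point: the encrypt-to-server step only works in one direction, which is why the client is the side that must generate the session key.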
"You can either make something so simple that there are obviously no bugs, or so complex that there are no obvious bugs"
OpenSSL falls into the latter category. The biggest issue is that they don't deprecate, so the SSLv2 code is still in there (and that's where the most serious bugs are this time). What is changing is that default installs no longer include weak ciphers and old transport protocols.
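Applications don't have to wait for the library's defaults to improve, either: most TLS stacks let you set a protocol floor yourself. A sketch using Python's stdlib ssl module (names as documented in the Python ssl docs; other bindings will differ):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2.
# PROTOCOL_TLS_CLIENT also turns on certificate and hostname verification.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.check_hostname                  # hostname checks on by default
assert ctx.verify_mode == ssl.CERT_REQUIRED
print("floor:", ctx.minimum_version.name)  # floor: TLSv1_2
```

SSLv2 and SSLv3 are compiled out of modern OpenSSL builds entirely; the explicit floor just makes sure a legacy build can't silently negotiate down.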
Really old bugs (like Heartbleed) are likely to be found eventually, and this isn't the end of it. When people complain "all those eyes, and these things are not found", they are wrong: they *have* been found - that is the point - they have just only been found now. The problem is that the code was already released, out there and running, which is why it looks so bad.
But closed source "appears" safer, because a patch gets released and silently fixes stuff, and people don't have the source in front of them to go "Look, what an idiot... that's baaaaaaad code". Do you think every MS patch isn't fixing *exactly* the same sort of mistakes? In fact it's only the reverse-engineering crowd who are highlighting the overflows etc. (without the benefit of the source) and building exploits, and by then it's too late: it's patched.
More code will be patched, more bugs found, and the bigger it gets, the more eyes will find issues. There *is* a case for security by obscurity (which MS does well), but not as a substitute for testing and going through the code - which of course MS does: when code crashes or acts unpredictably, that's those bugs manifesting. The difference is that Joe Schmoe doesn't have the code in front of him to work out why; MS does, and its motivation is to fix it, not to exploit it. Get OpenSSL to do something unpredictable or crash, and you have the code, and it's more fun to find an exploit.
"Joe Schmoe doesn't have the code in front of him to work out why; MS does, and its motivation is to fix it, not to exploit it. Get OpenSSL to do something unpredictable or crash, and you have the code, and it's more fun to find an exploit."
At best the lack of source code deters skiddies looking for a quick hit. Folks who are skilled in the art of computing really don't need it, and modern disassemblers make life very easy for the folks who can be arsed to use them. It really isn't rocket science folks.
Withholding source code from customers just makes it harder for them to help the vendor.
Biting the hand that feeds IT © 1998–2019