Time to switch from Windows to Linux... oh wait.
Kerberos bypass, login theft bug slain by Microsoft, Linux slingers
A vulnerability hidden in Kerberos code for more than 20 years met its end in patches issued this week by Microsoft and several Linux vendors. Having found the flaw three months ago in Heimdal, an open-source implementation of Kerberos, Jeffrey Altman, founder of AuriStor, and Viktor Dukhovni and Nicolas Williams from Two …
COMMENTS
-
-
Friday 14th July 2017 18:40 GMT Alistair
@AC:
Actually, in some cases, YES.
https://access.redhat.com/security/cve/cve-2017-11103
Statement
This issue does not affect the version of MIT Kerberos implementation as shipped with Red Hat Enterprise Linux. This issue also does not affect the version of Samba as shipped with Red Hat Enterprise Linux.
-
-
-
-
Friday 14th July 2017 10:38 GMT anothercynic
@Anonymous
Regarding your question about whether MIT Kerberos is affected:
The Orpheus' Lyre bug arose independently in multiple different Kerberos 5 implementations, including one by KTH Royal Institute of Technology in Sweden (Heimdal) and one by Microsoft.
The current release on the Kerberos consortium page is dated March. So it may not necessarily be affected, or it is affected and the release is not out yet. CentOS last saw an update of its KRB5 packages in December.
-
Friday 14th July 2017 15:18 GMT Anonymous Coward
Re: @Anonymous
https://bugzilla.samba.org/show_bug.cgi?id=12894
If your Samba is built using MIT Kerberos, the upstream Heimdal advisory says: "The MIT implementation is not vulnerable, and looking through its version history, never had been."
That text is from the https://www.orpheus-lyre.info/ page. It goes on to say:
On the other hand, Heimdal has been vulnerable since late 1996. We checked Darwin (OS X), and found it to be vulnerable. At that point we decided to start a disclosure process. Jeff managed all the disclosures, and just in case, disclosed to Microsoft so that they could check their implementation. Surely enough, Microsoft's implementation turned out to be vulnerable, also since the beginning. (Microsoft first started using Kerberos in Windows NT5, aka, Windows 2000.)
-
-
-
Saturday 15th July 2017 10:56 GMT Anonymous Coward
Re: MIT is not vulnerable
Kerberos tickets can be extended with extra attributes. The issue is that the data in the extended part of the ticket is not encrypted by default.
Base MIT Kerberos supports the extension but does not put data in it, so it is not vulnerable. Microsoft and others do use the extension; as they don't explicitly encrypt the extension data, everything in it is potentially vulnerable.
When you look at the source code you don't spot it because it's not a coding bug: it's a protocol design error.
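In code terms, the failure mode looks something like this: the reply carries the same metadata twice, once in unauthenticated plaintext and once inside the encrypted, integrity-protected part, and the bug is reading the copy the KDC's crypto never covered. A toy sketch in Python; the field names are invented and bear no resemblance to the real ASN.1 message format:

```python
# Toy model of the failure mode: a KDC reply carries ticket metadata twice,
# once in unauthenticated plaintext and once in an encrypted, authenticated
# part. All names are illustrative, not real Kerberos structures.

def make_reply(realm, server):
    """The KDC sends the same metadata in both halves of the reply."""
    return {
        "plaintext": {"realm": realm, "sname": server},  # attacker-modifiable
        "encrypted": {"realm": realm, "sname": server},  # protected by the session key
    }

def tamper(reply, fake_server):
    """An on-path attacker can rewrite only the unauthenticated half."""
    reply["plaintext"]["sname"] = fake_server
    return reply

def vulnerable_client(reply):
    # Bug: trusts the unauthenticated plaintext copy.
    return reply["plaintext"]["sname"]

def fixed_client(reply):
    # Fix: only use metadata from the encrypted, authenticated part.
    return reply["encrypted"]["sname"]

reply = tamper(make_reply("EXAMPLE.COM", "host/real.example.com"),
               "host/evil.example.com")
print(vulnerable_client(reply))  # host/evil.example.com -- attacker wins
print(fixed_client(reply))       # host/real.example.com
```

Which is exactly why it reads fine in a code review: each client does a perfectly reasonable field lookup; only the protocol-level question "which copy is authenticated?" reveals the problem.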
-
-
-
-
Friday 14th July 2017 06:43 GMT John Smith 19
What an interesting set of comments.
"Microsoft had more money and more automated tools, and they could not find it,"
Did they go looking for it?
At Microsoft, he said, "They were very proud of the fact that they wanted everyone to reinvent the wheel."
So probably not, then. After all, users don't usually buy Windows; the PC mfgs do.
"The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming."
OTOH if you were really worried about the security of this section of code with a FOSS implementation you could eyeball the code yourself. Obviously everyone trusted everyone else and no one was responsible for checking it, so that's who checked it. He doesn't like this free software idea much, does he?
"That suggests the specification provided insufficient guidance. "
So who developed Kerberos again?
Lots of fail to go round on this one, by everyone.
-
Friday 14th July 2017 08:38 GMT Updraft102
Re: What an interesting set of comments.
"The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming."
That's a logical fallacy. Open source is a defense against security flaws, but the protection it offers is not absolute (life's funny that way, not offering very many absolutes).
Of course there will be examples where non-absolute protection measures don't work, but that doesn't mean the protective effect doesn't exist at all. By the same logic, you could cite examples where police body armor failed to prevent the injury or death of officers, and then claim that body armor doesn't have any protective effect for police officers at all. Door locks don't stop all break-ins, seat belts and airbags don't stop all auto accident deaths, eating healthy and exercising doesn't prevent all heart attacks... there's no end to the examples you could think up.
-
Saturday 15th July 2017 07:48 GMT h4rm0ny
Re: What an interesting set of comments.
>>"That's a logical fallacy. Open source is a defense against security flaws, but the protection it offers is not absolute (life's funny that way, not offering very many absolutes)."
I really don't think that in practice it is a defence. Proprietary / Open Source are as likely or unlikely to be bug-free as each other, because the deciding factors are the number of developers, age of code base, pace of development, code review practices... And none of these are determined or even significantly influenced by the Proprietary / Open Source split.
What Open Source protects against is not bugs but deliberate subversion. It is a great deal harder to hide backdoors in Open Source than in Closed Source. Massively so. THAT is what Open Source provides (well, along with surety of future availability), not protection against bugs. The latter is just a sales pitch by the over-enthusiastic.
-
Sunday 16th July 2017 21:53 GMT Updraft102
Re: What an interesting set of comments.
That is a pretty good point, h4rm0ny, and there is probably a good bit of truth in what you wrote, though I still think there is an inherent level of protection in having the code open. By definition, all code is open to its own developers, whether open or closed source, so that would serve as the baseline for bug-finding. Closed code goes no further than that, generally speaking (there are exceptions, of course, but auditing projects and the like don't tend to favor one side or the other). The goal of the closed-source people is to have as few people outside of their employ looking at the code as possible.
With open source, there are people who are not devs who get to see that code-- anyone who wants to, in fact. How many that actually amounts to would naturally vary by project and from day to day, but certainly it would have to reduce the risk of undiscovered bugs or vulnerabilities compared to the baseline (having the code seen only by its devs).
If there are bug bounties, it gets even better for open source. While hackers have done quite well in discovering exploits in Windows without having the source, they'd do even better if they did have it, as bug-bounty participants generally do (do closed-source outfits even have them?).
Bug bounties give otherwise uninterested parties who have an interest in coding and a level of knowledge to match a reason to scrutinize the code.
That's not the same as saying that open source software has fewer bugs by any objective measure. That would be something to be determined with statistics, and I suspect that answering the question as to which one has fewer bugs per unit of (lines of code, compiled binary size, developer-hours, or whatever else they may consider) would generate as many questions as it would answer. While I do like a lot of things about FOSS (I am using a Firefox-based browser now, and I use Linux part time), there is no doubt in my mind that the closed-source method does some things better, but this does not appear to be one of them.
Still, the basic reason for my post, to skewer the assertion from the closed-source person that this Kerberos bug shows that open-source offers no protection against security bugs, I think, has been accomplished. As others have noted, the vulnerability was discovered in the open-source implementation first, then extrapolated from there to closed-source. It kind of proves the opposite of what the closed-source guy said. It may have taken 20 years, but it did get discovered; how long would it have remained in Microsoft's products if not for this discovery? Maybe a week, maybe another 20 years.
-
-
Sunday 16th July 2017 00:37 GMT Anonymous Coward
Re: What an interesting set of comments.
"Open source is a defense against security flaws"
It really isn't. We have had numerous major flaws in Open Source code recently that have a) apparently been obvious and b) have been there for many years.
Open Source just makes it easier for a well funded attacker to find new undiscovered holes.
-
Sunday 16th July 2017 21:57 GMT Updraft102
Re: What an interesting set of comments.
"Open Source just makes it easier for a well funded attacker to find new undiscovered holes."
Yes, but often that well-funded attacker is a white hat trying to earn a bug bounty.
There are security holes in any sufficiently complex software, but the really big ones, like EternalBlue (the basis for WannaCry), tend to flourish in the darkness of "security by obscurity." It has never really worked.
-
-
-
Friday 14th July 2017 10:58 GMT Ian Michael Gumby
@John Smith,,, Re: What an interesting set of comments.
Whoa!
You have a lot of misconceptions about how software is developed. Especially these days...
Altman believes that the longevity of this particular vulnerability challenges the notion that open source code is magically more secure than closed source code. "The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming."
This is a very telling and very significant statement because the myth of superiority of FOSS has been promoted with no counter example. Now you have one.
You seem to think that anyone can just open up and look at some goop (you call code) and immediately understand what is going on and what the author's intentions are? There are two fallacies here. One is that the person attempting to debug the code knows what to look for and is familiar with the underlying problem that he or she is trying to solve. The second is that the coder actually took the time to write clean code that is easy to read, understand and debug.
Back in the 90's I stopped taking on projects written in C++. Not because of the language, but that I got sick and tired trying to figure out and fix poorly written and documented code that was full of bugs.
Many concepts of software engineering are not being taught properly, if taught at all.
Seriously, I doubt you've ever really worked with Kerberos and could walk through the code. Or have the free time to do so.
-
Friday 14th July 2017 18:59 GMT Alistair
Re: @John Smith,,, What an interesting set of comments.
@IanMichaelGumby:
Altman believes that the longevity of this particular vulnerability challenges the notion that open source code is magically more secure than closed source code. "The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming."
This is a very telling and very significant statement because the myth of superiority of FOSS has been promoted with no counter example. Now you have one.
--- Just because it took a while is neither here nor there, since ... Ummm... The bug has been found....
<who'da thunk it>
While I'll agree with and argue that it takes someone with more than just "coding ability" to chase down some of the more exotic errors in programs out there, one of the issues I have with some of these recent "OMG TERRIBLE HOLE, been in OPENSOURCE for EVARRR" cases we've seen is that for pretty much *every one* of them, there have been "No known cases where it has been exploited" --- and the folks that are reporting them and making so much of them seem to be implying that BOTH:
a) it was only because they and they alone are capable of finding the problem
AND
b) anyone and their brother in the skiddiot world could have exploited them at any time.
Now. Can we start putting some *logic* to the terrorizing of the population and stop this shit please?
-
Friday 14th July 2017 23:18 GMT Tomato42
Re: @John Smith,,, What an interesting set of comments.
> This is a very telling and very significant statement because the myth of superiority of FOSS has been promoted with no counter example. Now you have one.
except by the very nature of close source software we don't have the full picture, in turn leading to
https://youarenotsosmart.com/2013/05/23/survivorship-bias/
So sorry, but because these kinds of bugs are found regularly (people are actually looking for the bugs) and fixed quickly (not after months and months, if not years, of inactivity from the vendor), FLOSS is more secure.
-
-
-
-
Friday 14th July 2017 11:07 GMT Ian Michael Gumby
@ One who crashes in flames...
And you've missed the point.
I've had several clients in the past say that they want to use FOSS and shunned the use of proprietary systems. I asked why and I got the same blathering that it was more secure and better code because you can see the code...
The fallacy is the following:
1) It assumes that you or your staff are skilled enough to actually read the code to understand what is going on.
2) It assumes that if there is a problem, you could fix it. Meaning that you actually have the skills to understand the problem and fix it. (95% of those who use the software don't.)
3) It assumes that you can deploy the fix... meaning that if there is a problem and you fix the code, you may violate your support contract, because you're now running a modified version and the company that is selling support has no way to support you or your code.
One of the reasons some companies looked to FOSS and supported FOSS is that it reduced their cost of development and support because the cost of supporting those developers is split among several companies that used the tools but did not get revenue from the tools. (e.g. Google, Facebook, Amazon, etc ...) It also lowered their overall IT budget for staffing.
-
-
Friday 14th July 2017 07:32 GMT Anonymous Coward
Open Source is necessary, but not sufficient
For code to be secure, someone with sufficient skill and motivation needs to be looking for vulnerabilities.
I'd say that open source access is a necessary condition, but you still need the person with skill and motivation. Maybe we need more funded (government funded?) bounty programmes?
-
-
Friday 14th July 2017 09:57 GMT Doctor Syntax
Re: Open Source is necessary, but not sufficient
Sorry, access to the code is a necessary condition. No need for the "access" to be open to everybody.
It's a statistical thing. The more eyeballs that can access it, the greater the probability that the right pair (or single eyeball, let's not be binocularist about this) will come along.
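To put rough numbers on the "statistical thing" (purely illustrative, assuming each reviewer has the same small, independent chance p of spotting a given flaw), the probability that at least one of n reviewers finds it is 1 - (1 - p)^n:

```python
# Chance that at least one of n independent reviewers spots a flaw,
# assuming each has the same (small) probability p of finding it.
def p_found(p, n):
    return 1 - (1 - p) ** n

# Illustrative only: p = 1% per reviewer.
for n in (1, 10, 100, 1000):
    print(n, round(p_found(0.01, n), 3))
```

The curve rises quickly with n, which is the pro-eyeballs argument; the counter-argument in the replies below amounts to saying that p is effectively zero for most readers, so n barely matters.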
-
Friday 14th July 2017 11:16 GMT Anonymous Coward
Re: Open Source is necessary, but not sufficient
You do realize that many experts who have the 'right eyes' are prohibited from looking at and commenting on the code, or don't because there's no personal benefit for them to do so.
If you've ever read the Apache contributor agreement, you're certifying that the code being submitted is your own, that you own all of the rights to it, and that you are giving those rights to Apache.
Suppose you're an expert on databases. You are exposed to a company's IP. You then write something for an Apache project that exposes that IP. Most companies will just fire you. Others may sue you and Apache for damages where Apache will be forced to back the code out, and you will end up in a very expensive hole.
This is why many experts who work for software houses don't write or contribute to Apache projects because they have no upside and unlimited downside. Instead they lecture and write books.
If you're a contractor and you're exposed to a client's IP, You won't just get fired but you will get sued.
-
Friday 14th July 2017 11:37 GMT Anonymous Coward
"It's a statistical thing."
If, and only if, open access to the code meant more eyeballs - which probably is not happening, for several reasons. The fact that code is open access doesn't mean more and more people will read it. They could, but do they, really? How many fall asleep, or spend time on a beach or in the garden, reading Kerberos code? Even most open source developers are more interested in developing new code than in reviewing old code.
You need specific reasons to look at code *and* to spot bugs, especially when they're not obvious. And you also need specific skills when the code and the underlying requirements are complex and not so obvious. So, statistically, how many good and competent eyeballs happen to look at open source code?
Sure, sometimes you may stumble upon a bug while perusing some code, but it happens less often than many think. Most vulnerabilities today are found with different approaches, e.g. fuzzing. Reading code and fully understanding its behaviour when it is executed, maybe with unexpected inputs, is not so easy.
Sure, having the code helps to check exactly what happens *after* - but Windows code is accessible - you just need to be approved. And reliable security researchers have that access.
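The fuzzing point is easy to demonstrate: it needs no source access at all, only the ability to throw inputs at the target and watch for crashes. A minimal sketch against a deliberately buggy toy parser (everything here is invented for illustration):

```python
import random

# A toy parser with a planted bug: it mishandles zero-length input.
def parse_record(data: bytes):
    length = data[0]          # IndexError on empty input -- the planted bug
    return data[1:1 + length]

def fuzz(target, trials=2000, seed=1234):
    """Throw random byte strings at the target and collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(len(crashes), "crashing inputs found")
```

A real fuzzer (AFL, libFuzzer, etc.) adds coverage feedback and input mutation, but the principle is the same - which is why it works just as well against closed binaries as against open source.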
-
Friday 14th July 2017 14:22 GMT Ian Michael Gumby
@LDS ... Re: "It's a statistical thing."
It's not the quantity but the quality of the eyes...
Many moons ago, I wrote some code for a client. I was pulled off the project to work on another client's systems. A couple of months later, I got called back into the office because the rest of the team had 'fixed' my code and in fact broke it.
Why? Because they didn't read the comments and understand what I did and why I did it.
BTW, the original code worked as designed and was sound. (Someone wanted an enhancement.)
And it wasn't just one guy, but half a dozen people looking over the code. I then had to go back to the original code, spend a long day walking them through it, showing them why it worked, and then showing them how to make the changes for the new feature request.
The point is that unless you have a set of eyes attached to a brain of someone who knows what they are doing... you will end up with a mess.
-
Friday 14th July 2017 14:35 GMT Ian Michael Gumby
@LDS ... Re: "It's a statistical thing."
Here's a better example...
Suppose I can show you a Tablet of Ancient Sumerian text. I then show you a translation of the text.
Then we have hundreds of people view both the text and the translation. They find nothing wrong.
By contrast, I show the text to two individuals who then say that the translation is wrong.
By your reasoning, 100 vs 2, you'd go with the 100 people. Yet the group of two actually know Sumerian. So what good is a number of eyes when they don't know Sumerian?
And that's the point. Your code is no better off when your set of eyes lacks the knowledge and skills to comprehend the material.
The danger is the simple fallacy of assuming that just because you know how to crank out code, regardless of quality, you are capable of doing a proper code review.
-
Friday 14th July 2017 18:33 GMT Mark 110
Re: @LDS ... "It's a statistical thing."
Doesn't the fact that the code is open present a better chance that the 'wrong person' can find the vulnerabilities? They are the ones with the motivation to go looking.
Something like Kerberos, that's always been assumed to be secure, the 'right' people have never looked at.
I agree it's better that the right people can look, but is it good that the wrong people can also look?
-
-
Sunday 16th July 2017 22:02 GMT Updraft102
Re: "It's a statistical thing."
"And reliable security researches have that access."
They may have that access, but closed-source shops tend to believe in security by obscurity, so an undiscovered bug isn't a bug at all... it's nothing. Even if the security researchers can be trusted and NDA'd not to disclose what they see, it still means that things which were 'non-problems' get turned into actual problems.
-
-
Saturday 15th July 2017 07:34 GMT JimC
Re:It's a statistical thing.
No. The more eyeballs that *do* access it the greater the probability.... A million people with my level of skills could have looked at that code and it would have made not a blind bit of difference. The only eyes that count are those with enough expertise in the particular area to make a useful contribution.
-
-
-
Friday 14th July 2017 10:14 GMT phuzz
Re: Open Source is necessary, but not sufficient
If "open source access is a necessary condition" for finding bugs in code, surely that would mean nobody could ever find bugs in closed source code, and thus it must be totally secure?
Open source code makes it easier to find bugs (and much easier to fix them), but they can be found by a sufficiently motivated individual with no access to the code.
-
Sunday 16th July 2017 16:48 GMT Wensleydale Cheese
Re: Open Source is necessary, but not sufficient
"Open source code makes it easier to find bugs (and much easier to fix them), but they can be found by a sufficiently motivated individual with no access to the code."
There are plenty of bugs which manifest themselves as incorrect program behaviour; you don't need access to the source code to identify these.
Your best bet in the closed source world is to devise a simple reproducer, and I would argue that that is often the best bet for an open source project.
Unless you have the time, inclination and skillset, with open source you are possibly going to get a fix faster by submitting your reproducer to someone who is already familiar with that code. They will quite probably get to the problem far faster than you would.
-
-
Friday 14th July 2017 07:53 GMT sitta_europea
This has nothing to do with open source, nor closed sauce, nor tomato source.
Kerberos implementations are buggy because Kerberos is a ridiculously complicated solution to a rather simple problem. It's really only when you start talking SMB that you need to bite that particular bullet, and SMB itself is a whole nother can of security worms anyway.
Now who do we have to blame for this?
-
Friday 14th July 2017 08:50 GMT Anonymous Coward
And what would you replace it with?
Also, Kerberos is in no way tied to SMB - SMB can use it just as it can use any other authentication/authorization protocol - and it uses it only in an Active Directory domain, because Kerberos is the protocol used by AD. Remove the domain, and SMB falls back to NTLM.
Any time you're going to use SSO in an AD domain (and not only there) you're going to use Kerberos, although probably through a higher-level API like GSSAPI.
Anyway, authentication/authorization may look like a "simple problem", but building a strong and reliable protocol and implementing it is not.
-
-
Friday 14th July 2017 10:38 GMT Anonymous Coward
Using <> Auditing
What percentage of users ever audit? You need more than "Next, Next" skills to 1) understand the Kerberos protocol rather than just use it, 2) recognize a flaw when it's staring at you from the source and 3) actually LOOK at the source code in the first place. Add to that a certain amount of "it's been around for years so it must be secure" thinking, and you're here.
-
Friday 14th July 2017 21:29 GMT John 104
Dick?
"We will never be reimbursed for the cost to our lives and the lost time to our companies for having done this favor to the world," he said. "As a society, we need to understand what the costs of this work are."
I'm sorry, but if you don't enjoy doing this sort of thing, that is your own fault. No one is asking you to do it (but we are glad you did). Find something else to do with your time and get paid for it, or quit bitching about not getting compensated for what you do in your spare time.
I spent 5 years restoring a vintage boat. It cost me a lot of money and time. In the end, I have a beautiful boat that I enjoy motoring about with the family in. When I got done with it, I didn't bother crying to the classic boat community about how I'll never get reimbursed for my time on the project. Or how the world needs to be informed about the cost of restoring old shit. I did it because I enjoyed the work and the reward.
Perhaps a lesson in humility is in order here...
-
Saturday 15th July 2017 07:44 GMT JimC
Re: Dick?
The huge difference is that society needs this work to be done. If I don't restore the classic boat in *my* garage (struggling to get the right wood), society is no worse off. The point he's making is that if things need to be done within a timescale, society needs to figure out a way to pay people for them, because few things alter the priority folk put on a project like the prospect of being paid. If society offered to pay me for restoring my boat, you can bet it would be finished a whole lot sooner.
-
-
Friday 14th July 2017 21:29 GMT Hans 1
The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming.
adding that some affected code may never be fixed because the vendors no longer exist.
Who has enough brainpower to understand that the second quote COMPLETELY NEGATES the first? If it is open source, a window cleaner can patch it, provided he knows where to download the diff, how to apply it, and how to recompile. There will be Microsoft Cleaner and Surface Expert-proof howtos on the web minutes after the patch has been released.
If it is closed source and the vendor is dead, you will have to invest in another solution.
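The artifact the window cleaner downloads is just a unified diff. A toy illustration of what such a one-line fix looks like, generated here with Python's difflib (the file name and code are invented):

```python
import difflib

# A one-line security fix expressed as a unified diff -- the kind of patch
# a vendor or distro publishes. The code and filename are invented.
buggy = [
    "def check_ticket(reply):\n",
    "    return reply['plaintext']['sname']\n",
]
fixed = [
    "def check_ticket(reply):\n",
    "    return reply['encrypted']['sname']\n",
]
patch = "".join(difflib.unified_diff(buggy, fixed, "a/ticket.py", "b/ticket.py"))
print(patch)
```

Applying it to a source tree is then a matter of `patch -p1 < fix.diff` and a rebuild - which only works, of course, if you have the source tree in the first place.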
-
Saturday 15th July 2017 13:37 GMT JimC
re COMPLETELY NEGATES
Meanwhile, back in the real world...
With closed source, if a vendor goes toes up and their product range has any value, it will be sold to another company and you have a support path - provided you trust the new company.
If it hasn't any value then you should be ditching it anyway.
Whereas
With open source, if a development team loses interest then, if you are lucky, maybe someone else will take it on, and if they turn out to be any good you have a support path - should you wish to wait to find out whether they are in fact adequately competent.
If not then you should be ditching it anyway.
As has been stated numerous times, forming your own development team to take on development of an open source product is simply not a sensible option for 99% of companies out there.
-
Sunday 16th July 2017 16:58 GMT Wensleydale Cheese
Re: re COMPLETELY NEGATES
"With closed source if a vendor goes toes up then if their product range has any value then it will be sold to another company and you have a support path provided you trust the new company."
Not necessarily. There may be bits of software (e.g. library routines) in those products which were originally licenced from a third party. Any software licencing agreement I have ever seen has had a clause stating that the licence is terminated when either company goes bankrupt.
-
-
-
Sunday 16th July 2017 03:35 GMT Joe Montana
Open source
// Altman believes that the longevity of this particular vulnerability challenges the notion that open source code is magically more secure than closed source code. "The fact that this has been around for as long as it has been in open source, I think, is just one more case that should debunk the theory that open source programming is in some way more secure than closed source programming."
Only that's exactly what happened: someone unconnected with the developers was able to view the open source code and identify the flaw. The only problem is how long it took.
This vulnerability may never have been found in the similarly affected closed source implementations without the source code, meaning only those organisations that have the source (criminals, the NSA, etc.) would have the advantage. Open source puts everyone on the same level.