FreeBSD bug gives untrusted root access

A security bug in the latest version of FreeBSD can be exploited to grant unprivileged users complete control over the operating system, a German researcher said Monday. The flaw is present in FreeBSD 8.0 and is known to affect versions 7.1 and 7.2 of the open-source OS, Nikolaos Rangos told The Register. He said it was " …

COMMENTS

  1. ElReg!comments!Pierre
    Linux

    Ah-ah! {points}

    Hey it's not that often that a penguin lover gets to tease Beastie on a security issue; I'll go get my coat in due time, just let me savor my few CPU cycles of glory...

  2. Anonymous Coward
    Flame

    Best interests?

    I am all for full disclosure (crikey, imagine what it would be like if all systems were locked up like OSX and Windows), but this need to bleat from the rooftops about every little security issue is getting tedious now. Yes, you need to ensure people know about it; yes, it's good to ensure that the core dev teams get to know about these things ASAP; but it just seems like these "security researchers" are more interested in making a name for themselves than in fixing serious problems.

    1. Anonymous Coward
      Linux

      umm

      Making it widely known is often the only way to get it fixed and get people to patch their systems; otherwise you get the MS-with-IE approach of leaving remote root holes in the wild for a year.

  3. amanfromMars 1 Silver badge

    In IT Processing one can fully expect Change to be Employed and Unfurled at an Exponential Pace

    "FreeBSD's Percival has issued an advisory here. It includes a patch that he says may or may not be final. He writes: "It is even possible (although highly doubtful) that this patch does not fully fix the issue or introduces new issues -- in short, use at your own risk (even more than usual).""

    The real truth of the matter is that it is most probable (and therefore also guaranteed likely) that such a patch does not fully fix the issue and introduces new issues.

  4. Chronos

    Patch for 7.1

    Just in case you're not subscribed to the security@ list, Eygene has just posted a link for 7.1 (and 7.0, which is EOL):

    http://codelabs.ru/fbsd/patches/vulns/freebsd-7.0-rtld-unsetenv.diff

    And yes, Pierre, it's a nasty that caught us all with our pants down, mainly because it was disclosed to a public list first. There are established methods to report vulns like this directly to secteam@ which are clearly laid out on the www. When someone discloses in this manner, we're all playing catch-up, users and project alike. It's irresponsible, but I suppose we should expect it from time to time.

    Kudos to cperciva & secteam for responding to this so quickly.
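
    For those wondering what the hole actually is: as I understand it, rtld scrubs dangerous variables such as LD_PRELOAD from the environment before running set-user-ID binaries, but it ignored unsetenv() failing, so a suitably corrupted environment could leave LD_PRELOAD in force. A minimal sketch of that class of fix, in plain C and illustrative only (the variable list and messages are mine, not the actual rtld code):

        #include <stdio.h>
        #include <stdlib.h>

        /* Variables that must never survive into a set-user-ID binary.
         * These mirror the kind of names rtld scrubs; the program around
         * them is purely illustrative. */
        static const char *dangerous[] = {
            "LD_PRELOAD",
            "LD_LIBRARY_PATH",
            "LD_LIBMAP",
            NULL
        };

        int
        main(void)
        {
            for (const char **v = dangerous; *v != NULL; v++) {
                /* The bug class: ignoring this return value. If the
                 * environment is corrupt, unsetenv() fails, the variable
                 * survives and gets honoured later. The fix: bail out. */
                if (unsetenv(*v) != 0) {
                    fprintf(stderr, "environment corrupt; aborting\n");
                    exit(1);
                }
            }
            printf("environment scrubbed; safe to proceed\n");
            return 0;
        }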

  5. Mage Silver badge
    Black Helicopters

    Local Access needed?

    If someone has local access, all bets are off anyway...

    1. Chronos

      Not really

      All you need is access to a local *account*, not physical access. This is in the same league as the (now fixed) NULL pointer dereference vulnerabilities of a couple of months ago. Coupled with another vulnerability in, say, Apache (this is just exempli gratia) that can inject executable code, you can get owned remotely. We really need to come up with a better term than "local" for these types of vulnerability. IMHO, "potential remote" is a more accurate fit.

      What I can tell you is that the emergency patch does indeed stop the published exploit code dead (frantically tested on my testbox at 6AM with a very quick and dirty regression test set). Whether it patches all cases of this vulnerability remains to be seen, but at least it guards against cut/paste s'kiddies for now.

    2. Anonymous Coward

      there's a difference

      ..between local and physical access. The former should be fine on any sensible system.

  6. Anonymous Coward
    Unhappy

    Ho hum.

    As a FreeBSD user and admin, I don't mind the teasing. The FreeBSD security officers have a pretty good track record of fixing such things quickly, and they're keeping it up. Kudos to them. It's not even the local root break that bothers me, as there are numerous ways to mitigate this sort of thing.

    If there is anything to gripe about it's about these ``security researchers'': I have no quibble with full disclosure, yet I have a quibble with these people: I think it a poor show to not contact the security team first. Especially since they're known to be responsive to inquiries. A poor show indeed.

    1. Anonymous Coward
      Linux

      The teasing is all par for the course..

      ..as is the undoubtedly rapid and comprehensive fix that the FreeBSD security team will put out. It's not like FBSD devs and users have any need to be crazy or defensive, it's a great *nix, with great developers. I suspect the people involved are smart enough to know that nothing's perfect, and make good rapidly when they do make a stupid cock-up like this.

      Note that it's far from frequent, this sort of thing in FBSD.

      (Disclaimer, I am a penguinista, a mac owner and a doze user, but I do still have an FBSD box ticking away in the background, and it never misses a beat)

  7. Tom 7

    would need local access to the vulnerable machine.

    So I can compromise a machine I can carry off?

    Scary!

    1. Baying Lynch Mob
      Stop

      local access != physical access

      Why is there always somebody who thinks "local" is nothing to worry about if the machine is /physically/ secured?

      It doesn't mean that you have to be sitting in front of the machine. It only means that you have to have logged in somehow - this can be a malicious user on a multiuser system, someone using a weak or sniffed password, or an attacker who has exploited something else (most often a poorly-written PHP script running on the webserver) to get shell access.

  8. Jason Bloomberg Silver badge
    Linux

    Won't affect me

    I expect we'll hear a chorus from Linux users about how this is minor and won't affect them as they don't allow rogue code which could exploit the bug to get on their systems in the first place.

    Fine words, but I doubt many can actually prove that everything they install isn't potentially rogue. Sure, they can look at the open source code, but just how many do, or can understand it fully?

    Many use apt-get and similar and trust it to always be safe.

    I know Linux users who would swear blind their firewalling is 100% secure as they only allow connections out of their boxes or in response to such outward initiated connections. Obviously rogue code which initiates communications over port 80 will only ever be written to target Windows boxes.

    There's a level of denial, smugness and complacency which I find worrying from some Linux users.

    1. Anonymous Coward

      RE: Whatever I'm replying to

      Bollocks mate.

      I'm a Linux stalwart and have always had the highest respect for *BSD. I have several pfSense boxes deployed because I can't get the same functionality easily via Linux (reliable multi-link internet routing with a wizzo firewall).

      However, I'm not a *BSD desktop/general purpose user/developer/deployer because I prefer the Linux pace of development.

      There's a level of flinching ... some BSD users 8)

      Personally, I rejoice in the choice that is available via the open source route. I have the highest regard for the demonstrable stability and security of those varieties of *BSD I have come into contact with (OpenBSD and pfSense mainly).

      I call foul on the bird brains who announced a "flaw" without going through the established route of declare to "vendor", wait, announce.

      Their method was clearly karma whoring.

  9. Blake St. Claire

    @Anon Coward 10:094

    > As a FreeBSD user and admin, I don't mind the teasing.

    I second that. I've been using BSD since 386BSD 0.1, and FreeBSD as my broadband firewall/gateway since the 3.x or 4.x days, and have never had my system compromised.

    I know plenty of Linux users who cannot say the same thing.

  10. Neal 5

    Oh my oh my

    How can the tears of laughter turn so sour so quickly?

    Now stop crying, you Linux admins or whatever you call yourselves. Now you're complaining it wasn't reported in the right fashion, and that you need local access for the vuln to work. The same is true for almost all Windows and Apple vulnerabilities.

    The truth is, any improperly secured box, running any flavour OS, will be vulnerable at some point.

    To be fair, who cares how the bug is reported? You certainly don't complain about the way MS bugs are reported. What matters is fixing it; turn your focus in the correct direction, please.

    1. Chronos

      Who cares?

      We do. The responsible and established method of reporting vulnerabilities that potentially affect a large installed base of machines is to report via the relevant project's security contact. In this case, all the information you could possibly need to report in a responsible manner is here:

      http://security.freebsd.org/

      The FreeBSD secteam always credits researchers who report finds responsibly, so I can see no reason why the researcher in this case chose to make full exploit code public without giving the project a chance to mitigate the issue first. Ego or mischief? You decide.

      "Har har, you're not a secure as you thought you were" isn't very constructive, especially when the FreeBSD community is much more amenable to the idea that bugs exist, nobody is perfect and the best direction to expend effort is in finding and fixing them.

    2. CD001

      Linux !== FreeBSD

      Linux !== FreeBSD

      Local !== Physical

      Reporting !== Responsible Reporting

      And finally:

      > The truth is, any improperly secured box, running any flavour OS, will be vulnerable at some point.

      Whilst that may be true to a certain extent, "Vulnerability" is not only not equivalent to "Severity", it's not even really related... even Microsoft have learned that running as root at all times is actually a really bad idea.

      1. Anonymous Coward

        I agree...

        I agree with what you said above, although I would point out that MS have been trying for years to make people use user accounts for user work; it's been the OEMs, shipping badly configured installs of Windows that make the default accounts admin users because it's easier to support, who are to blame.

        I have yet to come across a piece of Windows software which really requires you to run as administrator; every time someone has come up with an example, it's just because the installer script is rubbish rather than the software itself (which is still a problem, but far easier to sort out than a re-code). Usually this can be overcome using Regmon.

  11. Peter Kay

    @Jason Bloomberg

    If Linux users have any sense, they'll be keeping very quiet following the recent Fedora fiasco...

  12. Stevie

    Bah!

    So the problem is in the run time link editor? Simple fix then.

    Static link everything and disable the RTLE. Q.E.2.

    Not only will your FreeBSD machine be unhackable, but all your warez and pron-servers will run faster too.
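
    (For anyone tempted to take that half-seriously: a fully static executable never invokes the run-time link editor at all, so LD_PRELOAD games against it are moot. A sketch, assuming a stock cc:

        /* hello.c -- build with: cc -static -o hello hello.c
         * `file hello` should then report "statically linked", i.e. the
         * run-time link editor never runs for this executable. */
        #include <stdio.h>

        int
        main(void)
        {
            printf("no rtld here\n");
            return 0;
        }

    Mind, static linking everything means rebuilding the world every time a shared library gets a security fix.)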

  13. Anonymous Coward
    Thumb Down

    Kaspersky got hacked again

    http://www.hackersblog.org/2009/12/03/kaspersky-com-pt-hacked

  14. Arthur Coston

    BSD is good. OpenVMS better. All we can do now is preserve our past

    Sorry, too long. A request to fellow oldtimers.

    In a typical day I probably use 10-15 different operating systems and HW architectures, and have been doing this professionally for over 40 years. Today I have needed to use several flavors of MS Windows (always assume they are insecure and hacked), several BSD and Linux systems of various flavors (assume relatively secure, still wary and careful), routers and firewalls (very wary since they face the net), an iPhone (Apple overconfident, likely safer than alternatives), and OpenVMS (highly secure out-of-the-box, assume secure and safe against almost any attack short of stealing the machine and replacing some of its hardware).

    Even with OpenVMS, as soon as you allow any type of executable or scripting you greatly increase your risk of being compromised somehow, though VMS makes it very difficult to escape beyond the initial user account. Almost all browsers are very insecure, even when we think they have been secured. Add-ins from Adobe et al. for Flash, PDF, etc. are among the most commonly attacked because of ActionScript, and JavaScript is much the same; disabling these functions or using tools like FlashBlock or NoScript only addresses the obvious issues. AV & FW product suites (e.g. Norton) help protect against additional attacks, but you are still vulnerable.

    If there is ANY dynamic content, ANY executable component on the web page (e.g. XML, CSS, XHTML), ANY item included by reference (e.g. CSS, image) or implicitly (e.g. font), ANY way to trigger substitution or chaining of URL/URIs, ANY access to the environment/system beyond the current browser subwindow, or ANY possibility that an unsafe object will be "rendered" (e.g. XML/JS embedded in an image, maybe sneaking in as a color match or a playlist) -- if you allow anything beyond simple HTML implemented in lynx or w3m, you cannot be secure. Period. (BTW I am currently using w3m on Linux from a restricted account and posting this with the vi text editor.)

    Finally, a couple of comments about what makes OpenVMS and a few other systems so much more secure than MS Windows, Linux, BSD, or any of the other Unix-type systems, even when using additional security applications. By the time any of these opsys were designed, we knew a lot about what to do, and even more about what not to do, when building systems with a reasonable level of security. DoD was a driving force and funding source for a lot of R&D projects (e.g. Multics), with many best practices reflected in the "Orange Book" and related documents.

    These efforts plus the widespread use of timeshared systems and terminal networks in the 1960's and 1970's had exposed a majority of the problems still plaguing us today. We also knew how to mitigate and minimize these problems. So, what went wrong? How did we fail so badly as an industry?

    A couple of basic issues, then and now, prevent widespread adoption of secure systems, and probably preclude anything ever being secure in the future. First, almost no one is willing to pay for security until it is too late; and it is too late after the first insecure design decision. The gov/mil markets and a few others would pay for a while, but even they were eventually forced to use commercial off-the-shelf (COTS) products for most applications, losing any prospect of secure systems. As a result, we now have ships, planes, and critical national infrastructure controlled by systems running MS Windows on hardware made in insecure foundries in India, Malaysia, China, or who knows where.

    The second component was the rise of Unix and the C language at research and university sites. Unix, like the earliest computers of the 1950s and like the later PCs, was designed for a single user or a small number of trusted users, with almost no regard for issues of security. Very simple versions of root/user (all privs vs. limited) were included mostly to protect the user from himself, possibly from the next user, and maybe from a hardware failure.

    They also implemented almost no validation of arguments in calls for system services; if you crashed the system, you were the one hurt, you fixed the problem, and you tried to be more careful next time. Even worse, the C programming language lacks even the most basic features required for any useful application: character strings, I/O, error checking and handling. Sure, with enough skill, care, and time almost anything is possible -- but not for most people most of the time. Since most implementations of Unix/C library and system calls don't validate their arguments, particularly those zero-terminated strings and various pointers to data structures, any chance of security is lost before you begin.
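
    The zero-terminated string complaint is easy to illustrate; here is a minimal sketch of my own (not from any particular codebase). Both calls compile without complaint, and the first, left commented out, would silently write past the end of the buffer:

        #include <stdio.h>
        #include <string.h>

        int
        main(void)
        {
            char buf[8];
            const char *input = "rather longer than eight bytes";

            /* strcpy() trusts the caller completely; overrunning buf is
             * undefined behaviour and the historic root of stack-smashing
             * exploits. Commented out so the sketch runs safely. */
            /* strcpy(buf, input); */

            /* A bounded copy fails safely (it truncates), but remembering
             * to use it every single time is left to the programmer. */
            snprintf(buf, sizeof(buf), "%s", input);
            printf("%s\n", buf);
            return 0;
        }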

    Unix/C became "hot" at many of the newly founded computer science departments because it was "free" to universities from AT&T, it was somewhat portable and could run on relatively inexpensive hardware, and it came with source code, allowing almost anyone with minimal training to immediately begin doing operating system R&D, something almost none of the new generation of CS professors had done, most lacking any work experience outside grad school and almost none knowing real-time, process control, or security.

    So while the relatively small group of first (1950's) and second (1960's) gen programmers were using hard-earned lessons to design HW/SW with best practices, our labors would eventually be supplanted by a much larger market fueled by lower prices and quicker results (even if wrong), and unprepared to comprehend the problems and the true costs that would be paid. Workstation vendors (Sun, Apollo, HP, even part of DEC) were happy to push the merits of Unix-based solutions versus the user being tied to any proprietary system (happier still to avoid most of the SW development costs for an opsys or for COBOL, PL/I, and FORTRAN compilers). The Unix vendors seemed oblivious to the PC market using exactly the same low-cost, commodity HW and SW arguments to destroy most of their markets, even with still worse quality software: poorly tested, badly designed, and barely usable. But it was cheaper, and you could even do some things yourself. So what if your spreadsheet was full of errors and the cross-foots never matched?

    What is really sad is that most of the arguments used by each successive gen of HW/SW against the previous one or two gens are mostly lies. For example: OpenVMS sources were always available, though most only got the microfiche; OpenVMS was portable enough that versions for three radically different HW designs were sold commercially; OpenVMS was one of the first systems certified as meeting the "UNIX"/OSF standards; while the initial cost of OpenVMS systems was higher than comparable UNIX systems, most of this was artificial, from failed attempts at market segmentation, and operational costs were much lower than for similar UNIX or Windows systems (I think those required 3 and 10 times as many people, respectively); and most of the innovations in Unix/Linux/Win desktops and systems (e.g. I18N, clusters, transactions, file system, remote desktop) had been in OpenVMS for many years. Better technology rarely beats lower price, aggressive marketing, and the next generation of trendy users.

    At this point, I am not really bitter about how things turned out in the industry; it is probably more-or-less how the world now works. I do get frustrated when I have to spend several hours trying to isolate the cause of a functional or security issue, only to find it caused by a design that should never have been allowed in the first place. First rule of automation: "Does this really need to be automated (computerized)?", alternate version "Why does this need a web interface?". By default, the answer should be an emphatic NO!

    I worry when I look at my customers, their applications, and the prospects for the future when I and others with critical KB are no longer available. Some of their systems control critical infrastructure (power, water, financial transactions, oil and gas distribution, manufacturing, etc.), have been operational for 15-20 years, and might have planned operational lives even longer into the future. What will happen when we and other specialty suppliers are no longer around? We caught a small glimpse of this when dealing with the Y2K problem, but that was easier because there were still many people around with the technical and institutional knowledge.

    We see this increasingly in many industries as revenue from new sales and support contracts eventually falls below the level needed to sustain a small business able to continue maintaining software and specialized data collections. Who will pay to support the archive of a web site or publication that has ceased operation? Who will respond the next time a hurricane destroys many of the oil platforms if those like us are no longer around?

    I see it as a duty to make sure that my unique and extensive collection of HW/SW/docs is placed in the trust of an appropriate university entity as soon as possible, since one never knows how much time remains. I had one 1950's system I thought safely stored get sold by the government for scrap metal, and vowed never to allow that to happen again. As a result, I now have a significant collection to protect properly while I still can.

    Some of you have similar responsibilities; think about what might be lost forever if you fail to document your unique knowledge of what really happened before about 1980 and how things operated in the "olden" days, and to ensure that all the programs and documentation from before PCs are preserved. I was recently surprised at how much effort was required to get a working IBM 1401 system, and there were over 20,000 of them made. I was really happy when I was able to confirm that a 1403 printer could still play "Anchors Aweigh". That was by far the most widely used computer of its time, and it was still barely possible. Act before it's too late.
