ASLR-security-busting JavaScript hack demo'd by university boffins

Researchers in Europe have developed a way to exploit a common computer processor feature to bypass a crucial security defense provided by modern operating systems. By abusing the way today's CPUs manage system memory, an attacker can discover where software components, such as libraries and RAM-mapped files, are located in …

  1. Notas Badoff
    Terminator

    Spooky!

    I thought I was buying a motherboard, not an Ouija board...

  2. chuckufarley Silver badge
    Mushroom

    Java*.*

    Nuke it from orbit... it's the only way to be sure!

    1. bazza Silver badge

      Re: Java*.*

      That is certainly becoming increasingly necessary.

      Articles like this show why the browser "promise" that it is safe to download and run arbitrary executable JavaScript code simply by viewing a website is actually more of a "myth". Arguably it's far more dangerous than a Java plug-in, because it's baked into the browser. NoScript and things like it are themselves a poor substitute for removing JavaScript altogether.

      If the edifice of browser security really does crumble completely because of things like this, it is going to be an almighty mess to sort out. Almost the whole of the Internet assumes that JavaScript is available. If that is shown to be hideously dangerous, it would be impossible to rectify quickly without breaking almost everything. For example, all of Google's services would stop working if you removed JavaScript.

      Massive Strategic Cock-up

      When the world decided that it was just fine to go down the route of client side execution, it rashly assumed that this could be made secure. Well, it cannot.

      The proper answer is server side execution, with standardised remote display protocols being the only thing that the browser has. Things like X come to mind.

      The need for client side execution has come to an end given that we all have broadband Internet.

      It's far easier to prevent arbitrary code execution on a client if the only attack vector is a dumb remote display protocol instead of a full execution environment like JavaScript. A protocol can be checked and validated message by message, but who can tell whether a piece of JavaScript is malicious or not?
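
      To make that "checked and validated" point concrete, here's a minimal sketch (the message types and size limits are invented for illustration; this isn't any real remote display protocol): a fixed protocol is a closed set of message shapes, so the client can reject anything that doesn't match before acting on it, which is a far narrower contract than "run whatever script arrives".

      ```javascript
      // Hypothetical display-protocol validator: every incoming message must match
      // one of a small set of known shapes, or it is dropped. Nothing is "executed".
      const MAX_DIM = 8192;

      function isValidDisplayMessage(msg) {
        if (typeof msg !== "object" || msg === null) return false;
        switch (msg.type) {
          case "drawText":
            return Number.isInteger(msg.x) && Number.isInteger(msg.y) &&
                   typeof msg.text === "string" && msg.text.length <= 4096;
          case "drawImage":
            return Number.isInteger(msg.width) && Number.isInteger(msg.height) &&
                   msg.width > 0 && msg.width <= MAX_DIM &&
                   msg.height > 0 && msg.height <= MAX_DIM &&
                   msg.pixels instanceof Uint8Array &&
                   msg.pixels.length === msg.width * msg.height * 4;
          default:
            return false; // unknown message types are rejected outright
        }
      }

      // A well-formed message passes; anything resembling "code" simply fails validation.
      console.log(isValidDisplayMessage({ type: "drawText", x: 10, y: 20, text: "hello" })); // true
      console.log(isValidDisplayMessage({ type: "eval", code: "alert(1)" }));                // false
      ```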

      Honestly now, what's wrong with the idea of having an HTML frame in which one has an X window (well, a thoroughly modernised equivalent) dishing up an application display from the server, instead of having that application running as JavaScript in the browser? Sure, it means the server might have to do a bit more work, but that's becoming less and less of a problem. The JavaScript code on some websites is getting huge nowadays; surely it'd be quicker to run the application server side and push the display out to the client?

      You'd also not be giving away your application source code to each and every client.

      1. Charles 9

        Re: Java*.*

        Not true. Not everyone has broadband. Plenty are stuck on dialup, satellite, or low-end wireless. Plus what's stopping X servers from being attacked, not to mention servers full of juicy information? Frankly, I'd say the horse of privacy has bolted and will never return. Even if consumers abandon the Internet en masse, high-speed private and government networks will continue.

        1. bazza Silver badge

          Re: Java*.*

          "Not true. Not everyone has broadband. Plenty are stuck on dialup, satellite, or low-end wireless."

          Well I wish someone would tell the world's website developers that.

          "Plus what's stopping X servers from being attacked"

          It's easier to secure a protocol (which is all that X has) than to guard against malicious code run by a browser that will happily run arbitrary code (JavaScript). Besides, I was advocating a modernised take on X with at least some security built in (unlike the original X).

          "not to mention servers full of juicy information"

          They do regardless. But an X server isn't in the same category as a server that stores information (such as a website), and in any case it runs at the client end, not the server end.

          "Frankly, I'd say the horse of privacy has bolted and will never return. Even if consumers abandon the Internet en masse, high speed private and government network will continue."

          This isn't about privacy.

      2. MacroRodent

        Re: Java*.*

        > Honestly now, what's wrong with the idea of having an HTML frame in which one has an X window (well, a thoroughly modernised equivalent) dishing up an application display from the server, instead of having that application running as JavaScript in the browser?

        Then the black hats will simply proceed to crack your instance of the server side application. That may or may not be more difficult, depending on the competence of the application developers with respect to security (usually dismal), and the competence of the server managers (yeah, right...).

        Also you need bigger servers. Client-side computing distributes some of the load.

        1. bazza Silver badge

          Re: Java*.*

          "Then the black hats will simply proceed to crack your instance of the server side application."

          Er, the point is that a client user wouldn't have an instance of the app at all on their own hardware. The app would remain on the server.

          It's far easier to secure a fully defined remote display protocol than to guard against arbitrary code that may or may not be malicious. If all that's flowing between server and client is information to be displayed and mouse click events, then the attack surface is significantly smaller. There would be no arbitrary code running on either server or client.

          The problem for a web browser running arbitrary JavaScript is that the browser developer has no control over what code actually gets run in the browser. This is a much bigger attack surface, as we are witnessing right now.

          "Also you need bigger servers. Client-side computing distributes some of the load."

          That kinda depends on what's being served up. If a website has some enormous database and most of its workload is running and querying that (e.g. Google's search), then hosting the application too is small beer in comparison. If someone visits something like Google Docs, a vast pile of JavaScript code is piped from the server to the browser over an encrypted connection. But if that someone then clicks out of Google Docs without doing much, the amount of JavaScript served far outweighs the amount of data it would have taken to convey the display instead. Similarly for Google Maps, etc. As for content websites, like YouTube, they're serving up streams of video data, which would be the same amount of server-side work regardless, or adverts (in which case it's someone else's problem).

          Client-side computing does distribute the load, but for the client, who these days generally has a battery-powered mobile, that's a bad thing. Hence functionality-deficient mobile websites. Hence native apps. Hence separate iOS and Android native apps, and the two teams needed to develop and maintain both.

          It would be far easier if you only had to develop the app once and have it displayed to a remote client through a standard protocol.

      3. coconuthead

        Re: Java*.*

        "The need for client side execution has come to an end given that we all have broadband Internet."

        No. Unless you've found a way to send a signal faster than light, you still have the latency problem. For example, many text boxes, such as the one I'm typing into, are JavaScript, and it would be intolerable to wait for a ~1s round trip for each keystroke to be echoed.
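
        As a rough back-of-the-envelope illustration of that latency point (the typing speed and round-trip times below are assumptions, not measurements):

        ```javascript
        // How far a server-echoed text box trails your typing if every keystroke
        // needs a round trip before it appears on screen.
        const typingRateCps = 5;                 // ~60 words per minute, roughly 5 characters/second (assumption)
        const roundTripsMs = [5, 50, 200, 1000]; // local echo vs. LAN, decent broadband, and a bad day

        for (const rtt of roundTripsMs) {
          const charsBehind = (rtt / 1000) * typingRateCps; // keystrokes typed before the first one is echoed
          console.log(`RTT ${rtt} ms: each character appears ${rtt} ms late, ` +
                      `~${charsBehind.toFixed(1)} characters behind the cursor`);
        }
        ```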

        I'm not saying I agree with the wholesale move to client-side - what has effectively happened is that a general mechanism was let loose on the world instead of the standards designers thinking about the actual real-world problems - but all that standards work that was never done would still need to be done. And there would be a lot of it, with standards for many different application domains, and therefore a large attack surface.

        1. patrickstar

          Re: Java*.*

          You solve this by having somewhat intelligent protocols, so text input and other basic building blocks of UI interaction are handled on the client.
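
          A minimal sketch of what "handled on the client" could look like (the class and message names are invented for illustration, not taken from any existing protocol): editing stays local, and only a small, fixed commit message ever crosses the wire.

          ```javascript
          // Hypothetical client-side text widget for a remote-display model:
          // keystrokes are echoed locally with no round trip, and the server only
          // ever sees one simple, well-defined message per committed field.
          class RemoteTextField {
            constructor(fieldId, sendToServer) {
              this.fieldId = fieldId;
              this.buffer = "";
              this.send = sendToServer; // e.g. a WebSocket-style channel (assumption)
            }

            onKeystroke(char) { this.buffer += char; }                    // local only
            onBackspace()     { this.buffer = this.buffer.slice(0, -1); } // local only

            commit() {
              this.send({ type: "fieldCommit", fieldId: this.fieldId, value: this.buffer });
            }
          }

          // Usage: four keystrokes, zero round trips, one message to the server.
          const field = new RemoteTextField("search", msg => console.log("to server:", msg));
          "aslr".split("").forEach(c => field.onKeystroke(c));
          field.commit();
          ```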

          Sure - there's still an attack surface, but it's significantly smaller, and any bugs are much less likely to be exploitable when you don't have a Turing complete language to work with.

          As to users on dialup and similar - today's web drunk on JS and media is already all but useless for those. Just the JS to drive a fancy "modern" site is frequently several megabytes. Not to mention the ads (and sites that refuse to work if you don't load them).

          So the choice is either JS monsters that are useless on a crappy connection AND expose everyone to great security risks, or another solution which is also useless on a crappy connection but doesn't hurt everyone's security.

          Fallback mechanism in either case is the same - basic HTML.

          1. Charles 9

            Re: Java*.*

            "You solve this by having somewhat intelligent protocols, so text input and other basic building blocks of UI interaction are handled on the client."

            Anything *smart* you put in would be targeted by the malware writers, such as with malformed input. And then there's the matter of things like video players that MUST be on the client for performance reasons (Don't believe me? Try VNCing a video player...)

            And while you can harden a protocol, protocols are useless without implementations, and it's the implementations the malware writers will target. Remember, there are times when browsers get pwned by malformed HTML: no JavaScript necessary.

            1. Charles 9

              Re: Java*.*

              And BTW, due to the architecture involved, X servers actually reside on the clients. So when I say malware writers target the X servers, I'm referring to the X servers residing on the clients.

            2. patrickstar

              Re: Java*.*

              My point is that JavaScript is basically the reason browsers are still being owned despite all the mitigations in place. If you look at any browser exploits caught in the wild recently, you will see a near 100% dependency on JS even when the bugs have nothing to do with the JS engine itself. Even when they target stuff like the Tor Browser, where a pretty significant % of users run without JS enabled.

              The reason is that exploiting a memory corruption vulnerability is no longer a matter of blindly firing off the right payload. You need a multi-step process that involves leaking/reading out addresses (to defeat ASLR) and often doing very precise manipulation of the memory to gain usable control. And you basically only have one chance to get it right, so no bruteforcing (like the good remote memcorrupt exploits of old).

              There are exceptions to this of course, but we are talking reducing the exploitable vulnerabilities by several orders of magnitude, as well as removing one of the most significant attack vectors - the JS engine - entirely.

              Plus, with a more "remote desktop"-oriented model, you could keep the client a lot more stable. Not as much need for new fancy features every week when the client is simply doing the presentation. So the amount of vulnerabilities would actually go down over time, instead of basically staying constant as it is today.

              Yes, there are vulns in things like X servers and RDP implementations. But there are significantly fewer of them than in browsers, and they are much less likely to be successfully exploited in the wild.

              1. Charles 9

                Re: Java*.*

                "Plus, with a more "remote desktop"-oriented model, you could keep the client a lot more stable. Not as much need for new fancy features every week when the client is simply doing the presentation. So the amount of vulnerabilities would actually go down over time, instead of basically staying constant as it is today."

                But the client will ALWAYS be a target. They'll just target the rendering engine and send it malformed inputs and so on. Hoping for perfect code when it's open to the outside is like wishing for unicorns.

            3. MacroRodent
              Angel

              Re: Java*.*

              Try VNCing a video player...

              Actually, this works if you have a VNC client and server capable of "tight" compression (like TightVNC or TigerVNC): the video (and other photo-like image parts) gets sent compressed with JPEG, so it is somewhat equivalent to streaming "Motion JPEG". Oh, you want to hear the sound as well? Well... I think TigerVNC has some solution for this, but I have not tried that part in practice.

              1. Charles 9

                Re: Java*.*

                I use TightVNC, but the problem is that even on a home LAN setup, on-the-fly MJPEG doesn't get up to 30fps and involves a lot of tearing, plus, as you said, no sound. Plainly and simply, network bandwidth issues mean you really need the originally-compressed data stream (the optimal way to send the data) sent down the pipe and then decompressed locally to get the most performance. Some things, like video and audio rendering, are simply best done at the render point.
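
                Rough numbers make the point (the compression ratio and bitrate below are assumptions for illustration, not measurements):

                ```javascript
                // Why re-encoding video frame-by-frame over a remote display protocol costs
                // far more bandwidth than shipping the original compressed stream to the client.
                const width = 1920, height = 1080, fps = 30;
                const rawBitsPerSec   = width * height * 3 * 8 * fps; // uncompressed 24-bit RGB frames
                const mjpegBitsPerSec = rawBitsPerSec / 10;           // ~10:1 per-frame JPEG (assumption)
                const h264BitsPerSec  = 5e6;                          // typical 1080p30 stream bitrate (assumption)

                const mbit = b => (b / 1e6).toFixed(0) + " Mbit/s";
                console.log("raw frames:     ", mbit(rawBitsPerSec));   // ~1493 Mbit/s
                console.log("MJPEG-style:    ", mbit(mjpegBitsPerSec)); // ~149 Mbit/s
                console.log("original stream:", mbit(h264BitsPerSec));  // ~5 Mbit/s
                ```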

      4. Anonymous Coward
        Anonymous Coward

        Re: Java*.*

        There is only one answer, and the future is Lynx

        http://lynx.invisible-island.net/current/index.html

        Exposes the amount of real content on the interwebs

        1. Charles 9

          Re: Java*.*

          I believe Lynx has been targeted, too. Plus that defeats things like gallery sites.

  3. John Smith 19 Gold badge
    Unhappy

    Interesting enabler for other stuff.

    Removing your security slice by slice.

    BTW "27 bits of entropy?" Does that mean this mean ASLR in Windows can randomize the start location anywhere in 2^24 addresses but this SW can identify a marker location in a specific data structure even if the structure start address was anywhere in 2^27?

    Good work by the team.

    BTW, as a core security technique of multiple OSes, you can bet this has been under attack by all known TLAs from the time it was announced. "Security by delusion" is no more security than "security by obscurity."

    It's good for people to realize this.

  4. Paul Crawford Silver badge

    Timing attacks?

    Why not modify web browsers to reduce and randomise the time-measuring functions available to any script?

    I mean, when does a web page really need microsecond resolution? If the timing is jittered by a millisecond or so by some pseudo-random process, would it really break stuff that is talking to the web server via a TCP/IP link with delays typically of the order of 10s of milliseconds?
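
    As an illustration of the idea only (a page-level shim, not how a browser would actually implement it internally), coarsening the timer and adding pseudo-random jitter might look something like this:

    ```javascript
    // Wrap performance.now() so scripts only see ~1 ms resolution plus jitter,
    // drowning out the sub-microsecond differences cache-timing attacks rely on.
    const realNow = performance.now.bind(performance);
    const RESOLUTION_MS = 1; // reported resolution (assumption)
    const JITTER_MS = 1;     // added pseudo-random noise (assumption)

    performance.now = function fuzzedNow() {
      const coarse = Math.floor(realNow() / RESOLUTION_MS) * RESOLUTION_MS;
      return coarse + Math.random() * JITTER_MS;
    };

    // A script timing something genuinely fast now mostly measures the jitter:
    const t0 = performance.now();
    for (let i = 0; i < 100000; i++) {} // takes microseconds, not milliseconds
    console.log(performance.now() - t0); // dominated by the noise (can even be slightly negative)
    ```

    (A real implementation would also need to keep the reported clock monotonic.)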

    1. Charles 9

      Re: Timing attacks?

      An auction site, for starters. Especially when it gets towards the end and there's a rush of bids.

      1. Claptrap314 Silver badge

        Re: Timing attacks?

        Auction sites are random number generators at the end. Just because it is USEFUL to participants who wish to time their bids does not mean it is NEEDED. And human beings can't time beyond 0.01 seconds anyway, so this is really the same thing as the nonsense in the stock markets.

        0 reasons there. Got anything better?

        And don't forget, if you ever do have a true need, you can always ship an independent application to handle the entire interaction. This is a discussion about a general use tool.

        1. Charles 9

          Re: Timing attacks?

          "And don't forget, if you ever do have a true need, you can always ship an independent application to handle the entire interaction. This is a discussion about a general use tool."

          Because you're catering to John Q. Public who doesn't want to get saddled with yet another piece of software. You're talking the Facebook generation here.

    2. Anonymous Coward
      Anonymous Coward

      Re: Timing attacks?

      "when does a web page really need microsecond resolution?"

      I'm still trying to work out why websites need baked-in hardware 3D rendering, much less why they need timers with resolution so fine they can time operations on components deep inside the CPU.

      1. Charles 9

        Re: Timing attacks?

        If you want faster rendering, especially with vector graphics like SVGs, which is being demanded, you need to get close to the metal. 3D is easier for modern GPUs to grok.
