Cross-site hacks and the art of self defence

Hackers can force your browser to send requests to any site they want. It's not even hard - all they have to do is get you to view an email or a web page. Unless the site is specifically protected against this - and almost none are - attackers can make your browser do anything you can do, and they can use your credentials …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Alert

    Good article

    A per-session token submitted with each request, combined with a sessionID cookie, works well. Also expire sessions after a set amount of time (easily done with a session cookie), 'cos you know your users won't log out.
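
    For illustration, here's a minimal sketch of that pattern in Python (the session object, and how the token gets into your forms, are assumptions - adapt it to whatever framework you actually use):

        import hmac, secrets

        def issue_csrf_token(session):
            # Issue one random token per session, stored server-side; embed it
            # in every form as a hidden field.
            if 'csrf_token' not in session:
                session['csrf_token'] = secrets.token_hex(32)
            return session['csrf_token']

        def csrf_token_valid(session, submitted):
            # Constant-time comparison of the stored token against the copy
            # that came back with the request.
            expected = session.get('csrf_token', '')
            return bool(submitted) and hmac.compare_digest(expected, submitted)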

    Icon, I've seen way too many CSRF holes on popular sites, you've been warned

  2. Lyle
    Flame

    Security

    Can't you also write your form-processing so that if the form request has come from anywhere except your own site, it won't run?

  3. Anonymous Coward
    Alert

    Barclays online bank - Just Yesterday!

    http://singlecell.angryamoeba.co.uk/post/47754707/barclays-online-banking-uses-sketchy-external-stats

  4. Jon
    Paris Hilton

    Referer field?

    Could this be detected and stopped by checking the old-fashioned HTTP Referer field? E.g. the only site that should have Netflix "add to my queue" links is netflix.com; if the Referer specifies a different host then the request is being forged. Netflix could detect this and deny the request.

    Or is it possible to fake the Referer field using Javascript?

  5. Anonymous Bastard
    Boffin

    @ Security By Lyle

    Do you mean the http referrer header?

  6. Ke

    Re: Referer field?

    Agreed - a simple solution is to require a same-domain referrer when performing tasks.

    AFAIA this cannot be forged by an untampered browser and should be a de facto condition in any sensitive app or task.

    Great work bringing it into the open - but surely this should be developer common sense?
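
    For what it's worth, a minimal same-domain referrer check might look like the Python sketch below (illustrative only - as later comments point out, some browsers, proxies and security suites strip the header, so a missing Referer needs careful handling and this shouldn't be your only defence):

        from urllib.parse import urlparse

        ALLOWED_HOST = 'www.example.com'   # assumption: your own site's host name

        def referer_is_same_site(referer_header):
            # Reject requests whose Referer points at a different host. A missing
            # header is treated as suspect here; some sites choose to allow it
            # instead, to accommodate privacy-conscious browsers.
            if not referer_header:
                return False
            return urlparse(referer_header).hostname == ALLOWED_HOST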

  7. Geoff Mackenzie

    @Anonymous Bastard

    Most pedantic, I like it!

    The problem with verifying using the referrer is that not all browsers set it. It's certainly fakeable, though I'm not aware of any way to fake it with JavaScript - and browsers that look after their users' privacy shouldn't be sending it anyway.

  8. Lyle
    Happy

    @Anonymous Bastard

    Well, technically I meant HTTP_REFERER (Don't you just love specs that can't spell?)

    Regarding the follow-up posts, I wasn't aware that some browsers don't send the referer; I thought that was a standard part of the HTTP request, to show where the request had come from.

    Glad others thought it was a good idea too!

  9. Saul Dobney

    Responsibility of web-apps to avoid this

    For a web developer: don't perform actions on GET variables without a check token. Otherwise, GET should be for display only. (A rough sketch of that guard follows.)
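
    A rough Python sketch of that guard (the 'do' parameter and the action names are made up for illustration, and the token check assumes a per-session token has already been issued, as in the sketch under the first comment):

        import hmac

        STATE_CHANGING_ACTIONS = {'delete', 'transfer', 'update'}   # illustrative names

        def request_allowed(method, params, session):
            # Refuse state-changing work unless it arrives as a POST carrying
            # the per-session anti-CSRF token the server issued earlier.
            if params.get('do') in STATE_CHANGING_ACTIONS:
                if method != 'POST':
                    return False   # GET is for display only
                expected = session.get('csrf_token', '')
                supplied = params.get('csrf_token', '')
                if not expected or not hmac.compare_digest(expected, supplied):
                    return False   # token missing or wrong
            return True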

    Secondly, for CMS builders: don't allow anyone to include <FORM> or <INPUT> elements in a user-defined content area. One big reason is that it makes it simple for a rogue user to fake the login page for the CMS and harvest usernames and passwords. A second reason is that it allows people to build fake Google Checkout or PayPal pages that redirect to harvester sites. And, naturally, never allow the user to include JS in any way.

  10. Kier
    Alert

    Can't rely solely on HTTP_REFERER

    HTTP_REFERER is a good defence, but cannot be the sole defence, because certain systems remove the referrer from requests (yes, I'm looking at you, Norton Internet Security).

    GET should never do anything but fetch content for viewing. http://example.com/script?do=delete&what=all_my_stuff is going to be bad news.

    POST should always be used for data-altering actions, but as forms can be forged and auto-submitted using javascript, this should not be relied upon as the sole defence either.

    If you avoid auto-submission of forms by disabling javascript, you're still not safe from form forgery. Consider this:

    <form action="http://bank.com/" method="post">
    <input type="hidden" name="do" value="transfer" />
    <input type="hidden" name="account" value="12345678" />
    <input type="hidden" name="value" value="5000.00" />
    Rate this image:
    <input type="radio" name="rating" value="3" />Great
    <input type="radio" name="rating" value="2" />Mediocre
    <input type="radio" name="rating" value="1" />Terrible
    <input type="submit" value="Go" />
    </form>

    The only secure defence right now is the security token.

  11. Anonymous Coward
    Unhappy

    http_referer

    We wanted to use HTTP_REFERER in a number of our apps, but it gets clobbered by the reverse proxy servers we make liberal use of.

  12. Jeremy

    No no no

    Web security 101. Never implicitly trust anything that comes from the browser. Ever. That includes the referer (sic) header...

    By all means use it as part of your system - churn requests from a bad referer - but don't automatically accept a request just because the referer looks right.

    The randomly generated, expiring token concept is much stronger because your own server generates the check value and the client has to mirror it back to you. If something is interfering with the process, it probably won't know the correct check value and you can churn the request. In the case of the referer string however, the client always knows what the 'correct' value is supposed to be (the URI of the page you're attacking), leaving it open to manipulation by the user, malware, browser addons, and so on...

  13. James Butler
    IT Angle

    Huh?

    This is a phishing attack, right?

    The article is very confusing ... it seems to imply that any old server can host files that automagically grab your bank credentials (from where, it is not indicated) and initiate a transfer, or that any old server can host a specially crafted IMG tag that goes on to hijack your browser and all logged in sessions (how it could do that based on some script is not indicated).

    I just don't see how this could be anything other than a typical phishing scenario. What am I missing? Is this supposed to be something new, or just a reminder to validate all user input all the time?

  14. ziggyfish

    Web programmers' fault

    As Jeremy stated, you should never trust input data from outside your system. As for severity, it's only hackable if the incompetent web programmer doesn't escape or validate the input.

  15. Sitaram Chamarty
    Alert

    Here's a firefox plugin that will help

    There's a somewhat (sadly) unknown Firefox addon called SecureBrowse (disclosure: my idea, and my colleagues developed it; I am too old to learn JS now!) that can really help.

    https://addons.mozilla.org/en-US/firefox/addon/5967

    SecureBrowse simply enforces the kind of paranoid "delete cookies etc" and "don't stay logged in to bank sites" behaviour that I found myself doing anyway.

    We released an FF3 version, but it was sent back to the sandbox because it clashed with some other extension. We're looking into it, but meanwhile you can get the FF3-compatible version from the sandbox (easy enough to find, but I'll happily provide instructions if someone asks... sitaramc at gmail dot com, or I'll even send it to you)

  16. amanfromMars Silver badge
    Alien

    Phishing for Tiddlers...... and Landing a Shark ....

    ....Don't Panic or you'll Capsize the Craft and Lose Everything to Nature.

    "The article is very confusing ... it seems to imply that any old server can host files that automagically grab your bank credentials (from where, it is not indicated) and initiate a transfer,..." ... By James Butler Posted Friday 29th August 2008 20:17 GMT

    Not just any old server, James, only those IntelAIgently ReDesigned.

    "What am I missing? Is this supposed to be something new,..." You have missed very little and it is surely something new. It is also Beautifully Plausibly Deniable and/or Transparently Admissible, dependent upon Audience and their Need to Know. Some would be just Too Confused if they be Told and thus is IT a Kindness to Hold One's Counsel if One is Privy to Automagic Information.

    :-) Go on, I dare you........ Check your Bank Account, Online, 42 See if you are an Asset for Automagical Support or a Liability to be Artificially Controlled.

  17. Chewy

    HTTP_REFERER

    Most internet security packages allow the referrer to be wiped, as most web developers know.

  18. Nick Clarke
    Boffin

    Cookie + request parameter

    A useful trick is to use a random-id cookie, and have javascript add a copy of the cookie value to the page submission as a form field or GET url parameter. Then have the server reject the request if either the cookie or form field is missing, or they do not match.

    This blocks a lot of CSRF attacks, because although the browser will happily include your server's cookie for requests sent to your server from a page on another domain, javascript on that page cannot access the cookie for your domain due to the same origin policy, hence it cannot set the extra form field.
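
    A minimal server-side sketch of that double-submit check, in Python (the names are illustrative, and it assumes the cookie was set to a random value at login and copied into the form by javascript on your own page):

        import hmac, secrets

        def new_double_submit_value():
            # Set this as a cookie when the user logs in; javascript on your own
            # pages then copies it into a hidden form field before submitting.
            return secrets.token_hex(32)

        def double_submit_ok(cookie_value, form_value):
            # Reject the request if either half is missing or the two don't match.
            if not cookie_value or not form_value:
                return False
            return hmac.compare_digest(cookie_value, form_value)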

  19. Anonymous Coward
    Anonymous Coward

    What a misleading title

    There is no defence: if the site is not secure, then your account on the remote host can be abused. You cannot control that.

    Telling people to log out doesn't solve the problem if the logout system is faulty, either.

    Cookie deletion is probably the best form of SELF defence; the point is you cannot rely on an insecure site to do it right. Logout is part of the remote system, something you cannot affect or verify has been done correctly.

    There are other hooks that systems use as well, so you cannot really have a silver bullet here; it is all about trusting that the remote end is doing everything right.

    I know these articles are meant primarily as infomercials, but a bit more effort and less sensationalism would help. The problem is that this type of article seems to suggest there is something the user can do, and therefore that users who don't do it share in the liability. But no - it is the fault of the remote server with your account on it, so let's not confuse that point.

  20. Danny
    Boffin

    xhr

    Everything can be faked. See http://en.wikipedia.org/wiki/XHR

    Headers, GET, POST, refer[r]ers, fetch, response - javascript and XHR can do pretty much everything. As the article says the request comes from *your* browser, at your IP addr and using your (re)login credentials stored in permanent cookies. Add a 1x1 pixel iframe and the hosting site (hosting the attack code - the one you are visiting) can do pretty much anything it wants. The target website cannot know if this is a genuine request from you or a forgery. Hence the problem. Hurrah for Web 2.0

    When browsing:

    (1) Disable JS, Flash and other random downloaded-code executors to mitigate the XHR problem.

    (2) Log out when done and delete cookies - tell the browser to only use session cookies (Mozilla allows this).

    (3) Only keep one browsing window open when visiting important sites to limit cookie exposure (and delete them when done)

    (4) As suggested, use multiple browsers. I use Konqueror (no JS, no plugins, cookies) and Mozilla (JS, Flash, session cookies only) this way.

    For web dev:

    (1) Don't use JS, and recommend users disable JS (a long shot, but hey, it's you that's being ripped off).

    (2) Add the following iframe breakout script to every page

    <script type="text/javascript">if (top!=self) top.location.replace(location.href);</script>

    If anyone tries to put your site in an (invisible or otherwise) iframe, it will become pretty obvious to the user, who will then (hopefully) contact the hosting site to say that something is horribly wrong, and the webmaster can go and fix the XSS in whatever they are hosting.

    location.href could be replaced with a redirect to some other site (so as not to use your cookies) or to a page on your site that deletes your cookies on the browser to make them safe.

    (3) Reconfirm credentials often. Keep sessions short to limit the time at risk, and ask for passwords to authorise actions on really sensitive pages (a rough sketch of this follows the list).

    (4) As already suggested, random tokens returned to the site as GET or POST parameters will stop the less sophisticated attacks that do not scan for tokens. An advanced JS script with XHR can load a page, scan for the token value, insert it and post it back. @Nick Clarke: a neat idea, but a preprogrammed script could search for the JS rather than the hidden input tags. It would need a big XSS hole, but it's possible.
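
    For point (3), a rough sketch of a session-age check in Python (the session dictionary and the ten-minute threshold are assumptions for illustration):

        import time

        REAUTH_AFTER_SECONDS = 10 * 60   # assumption: re-ask for the password after ten minutes

        def needs_reauth(session):
            # Require a fresh password for sensitive actions once the last
            # successful login is older than the threshold.
            last_auth = session.get('last_auth_time', 0)
            return (time.time() - last_auth) > REAUTH_AFTER_SECONDS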

  21. Anomalous Cowherd Silver badge
    Thumb Down

    @ Danny

    It annoys the buggery out of me when people suggest disabling JavaScript. It's 2008: JavaScript is a core part of the web, and disabling it is going to instantly reduce anything more than the simplest brochureware site to a big bag of fail.

    This advice is worse than useless, doubly so as it's typically given to people who know next to nothing about the intertubes. All it does is reduce the web to one large error.

  22. Danny

    @Anomalous Cowherd

    Go and read the forums at the hacker sites and consider whether one should be doing everyday browsing with JS enabled.

    Yes, it annoys me that in 2008 some sites resort to "enable JS or fuck off" for stuff that doesn't even need it. JS links, anyone? JS breaks the Web for the blind, the disabled or anyone using a stylus. You know, that accessibility thing. Web 2.0 be damned. Does El Reg demand its users be JS-enabled? Even Gmail has a non-Web-2.0 interface that works really well.

    JS is necessary for media-type stuff, hence my keeping Mozilla JS+Flash enabled to visit utoob. Actually, the new-look Beeb site is a good example of providing accessibility and (semi-nagging) relevant alternatives to embedded Flash: the JS provides additional functionality, not critical functionality. Web 2.0 stuff is pretty groovy and the slick interface seductive, but there is nothing JS can do that cannot be done by server-side scripting (I do it every day).

    JS has been given too much power and now it's being abused by those who want to steal your money. It's your choice: keep your accounts secure or create another million zombies.

  23. Jon

    @Danny

    Even better, why not pull the network cable out the back as well...

  24. amanfromMars Silver badge
    Pirate

    RAIR Force Blue Skies Thinking ... Per Ardua ad MetaAstra

    "It annoys the buggery out of me when people suggest disabling JavaScript. It's 2008, JavaScript is a core part of the web " ... By Anomalous Cowherd Posted Monday 1st September 2008 12:37 GMT

    I agree, Anomalous Cowherd.

    The Web is as IT is, with many Offering Gadgets. May the Best Gadgets Win so everyone Wins with Win Win. They are not Competitive Gifts they are Complementary and Complimentary Services which Big Business Thinks to Charge you for. And All they Really Only Want is Money. Let them go to the Banks and let the People Enjoy Themselves with their Money ..... Pretty Good Paper......... for IT is Only a Perverting Control Invention ..... an Artificial Intelligent Design and thus Oxymoronic.

    After All, whenever IT is Spent IT Just goes right back to Source for ReSpending........ so why all the Taps that Turn the Flow Off? It is not Logical.

  25. Danny

    @jon

    If that's how you ensure your own security, knock yourself out.

    I notice that you've not addressed the charge that running untrusted code on your own machine is dangerous. Do you wonder why noscript is the most popular add-on for Firefox? Or why IE is now copying it?

    As hinted by the article, CSRF is likely to be the next big thing, and the article provides some solutions. And, much like disabling VBScript in Word for those that have discovered the joys of finding Trojans where they least expected them, disabling javascript is a solution to CSRF.

    Sure, we could outlaw cookies (another solution), but then we have the session-ID problem. Embedded tokens alone lead to either session fixation or a broken back button (the most-used button on a browser) and do not fully fix the CSRF vulnerability. SSL IDs for session IDs are a good solution but have their own penalties. It's a tricksy problem and the devil is in the detail.

    It is an imperfect universe filled with Windows and bad people who just won't play cricket. This isn't helped by client-side scripting languages with too much power and users who opt for convenience over security. I know, guns don't kill people - but until there is sandboxing or virtual browsers or whatever, disabling JS is a solution that works 100%.

  26. Anonymous Coward
    Happy

    It might seem overkill, but...

    For my money, the absolute best way to keep this sort of crud out of corporate networks is to not give users Web access at their corporate desktops.

    It seems like an absolute pain for the first week or two, but trust me, once you get into the habit of "I need to check something on El Reg, so I'm popping round the corner to use the Web" you really don't miss it.

    Plus, you don't spend all day soaking up bandwidth on Facebook.

    Put 1 Internet terminal out for every 5 or so regular desktops. They don't need *any* apps on them other than a virus scanner, a recent browser, Adobe PDF reader, and the drivers for a local printer (yes, users will want to print those PDFs out). Having USB ports for those of us that need to download drivers etc will be a help too. Other than that, you really don't need a lot more. If you want to put Linux on them instead of Windows, fill your boots. Connect them to their own *completely separate* Internet LAN that doesn't touch your corporate LAN at all. Have a plan to flatten and reimage them every so often (how often depends on the sort of users you have).

    I know your servers will still need Internet access for email, patch downloads, and such, but that's what DMZs were invented for.

    Trust me, I work in just such an environment, and it really isn't that big a deal.

  27. amanfromMars Silver badge

    Trust me ...... he's a pretty straight kind of corporate guy

    " It might seem overkill, but ... once you get into the habit of "I need to check something on El Reg, so I'm popping round the corner to use the Web" you really don't miss it. Trust me, I work in just such an environment, and it really isn't that big a deal." ..... By Anonymous Coward Posted Monday 1st September 2008 21:03 GMT

    What a good little programmed robot you are , AC.

    Ps Where is the Google Chrome Download site hiding ITself? Or is it an AutoMagical Upgrade through Mozilla Firefox. Although that would be Real Cheeky Dominant Bonobo ProAction and QuITe Refreshingly Innovative in the Novel Virtual Leadership Stakes/AIReality SweepStakes.

  28. Anomalous Cowherd Silver badge

    @Danny again

    Might be overflogging this particular horse, but:

    "Web 2.0 stuff is pretty groovy and the slick interface seductive but there is nothing JS can do that cannot be done by a scripting server (I do it everyday)."

    Technically you're correct, but then by that argument we'd all be dashing out CGI in Perl, using Gopher or working on Wyse terminals. UI evolves, and users want slick sites that verify their form submissions (shock, horror) *without* having to do a round trip to the server. It's quicker, simpler, involves a much cleaner architecture and is no less secure if competently programmed.

    Start viewing a website as an application to be developed by a programmer, rather than a sequence of pages with forms hacked together by a designer and you might see my point.

    Incidentally take a look at http://www.extjs.com/deploy/dev/docs/. It's slick, functional, cross-browser and looks great.

  29. James Butler

    So...

    This *is* a simple phishing issue? While I appreciate amanfromMars' response, it does little to answer my question. Beyond strenuous validation of user input, what's left to do? It really seems like a lot of fuss over nothing, as long as user input is validated.

    I mean, it's my form working with processing on my server ... I don't care if someone tries to host the page in an iframe on their site ... their site still isn't going to be able to read my server's sessions or my server's cookies. I fail to see how the user's credentials could be snagged without (a) a successful phishing excursion and (b) the user then entering said info into the phished form.

    In the absence of (a), this doesn't work. Am I right?

    If the phishing is unsuccessful, then this is a non-issue, right?

  30. Danny

    csrf is subtle

    @Anomalous Cowherd

    Nothing technical about it. It can be coded either way; the server has all the same information. With broadband, conventional page reloading is not the tedium it once was with dial-up. I'm in agreement about the slick interface for an app feel - there are some very good ones out there. For the discussion in the article, however, the server-side approach does not demand JS, while the slick-interface approach cannot function without it. And JS opens up the prospect of CSRF...

    @James Butler

    It has nothing to do with phishing or input validation. It is impersonation and forged requests. My website can be perfect but can still be the target of attacks from another site with XSS holes. But (and it's a big but) the requests come from my customers. I cannot distinguish real requests from forged ones unless I make user access such a pain that nobody wants to visit. With CSRF you'll realise there is almost no defence that is both easy /and/ keeps all your users. Hence the doom and gloom of the article.

    To recap, it happens like this. We have four agents in this scenario.

    - Bob, at home, using his browser.

    - SiteA, perfect, with no XSS holes or anything. Perhaps a bank.

    - Sid, the bad guy who wants to rip off Bob's account at SiteA.

    - SiteB that hosts the malware created by Sid. A forum or blog perhaps. Could be a site owned or 0wn3d by Sid and/or has exploitable XSS holes. The malware is a combination of html (iframes etc) and javascript as needed to execute the CSRF.

    (1) Sid puts malware on SiteB.

    (2) Bob visits SiteA and logs into his account. SiteA sends Bob a cookie (a random token or nonce) so that further requests from his browser do not require him to re-enter his password for every requested page (a convenience feature).

    (3) While still browsing SiteA, Bob opens up a new tab and surfs into SiteB.

    (4) His browser downloads a page from SiteB along with Sid's malware. Remember, the browser will not allow SiteB to access SiteA's cookies. Not now. Not ever (bugs in IE excepted). His browser displays the page. The html part of the malware opens a 1x1 pixel iframe to http://SiteA/... Bob can't see it as it's so tiny and tucked away. The javascript part of the malware has access to the cookie for SiteA and any request from within the iframe to SiteA will include this cookie (same-domain rules). The malware builds a request and sends it to SiteA, say a form POST. (GET attacks that do damage should be a thing of the past now.)

    (5) SiteA receives this request. It is coming from Bob's browser at his IP address along with Bob's cookie for SiteA. The server checks the cookie, sees that it's a valid cookie for Bob's current session, and executes the request. Fill in the blanks of what the request could be. SiteA logs the request, IP address etc.

    (6) Much later, Bob complains to SiteA that his bank account is now empty. SiteA examines the logs, sees that it was Bob who made the request, and tells him tough bananas.

    Hopefully it will be clear now why the various solutions in the article have been given. It's the result of using iframes, xhr and javascript to sidestep same-domain rules set up to protect cookies (themselves used to overcome the limitations of stateless pages).
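
    To make step (5) concrete, here is a rough Python sketch of the vulnerable check next to a token-based one (the handlers, the session store and do_transfer() are placeholders for illustration, not anything SiteA or the article actually specifies):

        def do_transfer(user, params):
            # Placeholder for the real money-moving logic.
            return 'transferred'

        def handle_transfer_vulnerable(session_cookie, params, sessions):
            # Step (5) as described: the session cookie is valid, so the request
            # is executed - whether Bob or Sid's page on SiteB built it.
            session = sessions.get(session_cookie)
            if session is None:
                return 'error: not logged in'
            return do_transfer(session['user'], params)

        def handle_transfer_with_token(session_cookie, params, sessions):
            # The defence discussed in this thread: also demand a secret the
            # attacker's page cannot know - a per-session token embedded only
            # in SiteA's own forms.
            session = sessions.get(session_cookie)
            if session is None:
                return 'error: not logged in'
            token = session.get('csrf_token')
            if not token or params.get('csrf_token') != token:
                return 'error: request did not come from one of our pages'
            return do_transfer(session['user'], params)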

    While it is up to SiteA to do all they can to thwart this (they don't want customers being ripped off), you need to be aware of what might be going on while browsing. If you browse one site at a time, and log out and delete cookies before going somewhere else, then leaving JS on is just fine. If you like to have many tabs open (I do) then you need to make the necessary security adjustments. You could keep one browser strictly for online banking.

    A long post, but I hope it clears up why this is important. It's not that I hate JS (or Web 2.0); it's that it's too dangerous to just give it free rein for no really good reason.

  31. James Butler

    Still Smells Like Phish

    "The html part of the malware open a 1x1 pixel iframe to http://SiteA..."

    Fairly tricky, as drawing from the history is quite problematic in this case, and how else would Site A's URL be available to Site B?

    "The javascript part of the malware has access to the cookie for SiteA and any request from within the iframe to SiteA will include this cookie (same-domain rules)."

    Hold on. How can Site B access the cookie set by Site A (IE bugs aside)? And if Site B cannot access the cookie, how does the iframe'd page pass that info over to the parent hosted by Site B? Any request from within the iframe that holds Site A would necessarily originate from Site A, even if it were included in an iframe on a Site B page. Site B can't use Javascript to access Site A's cookie, and if Site B were to try to get the cookie contents, Javascript would refuse to give it.

    This is where it breaks down, for me. Site A's cookies would be unavailable to Site B at all times, so unless the user is inputting data into a Site B page (phishing), how would said data migrate? Not from the iframe to the iframe ... that data's held on Site A's servers and in the cookie. Unless Site B can cause Site A to give up that info, it can't get and use it to send that "malware request" through the iframe.

    From Site A to Site A is where the protection is.

    From Site B to Site A doesn't accomplish this hack without data from Site A.

    How is Site B getting that data? Not from Site A's cookie.

    Are we talking about XHR's open() method? If so, how's the username and password getting in there without reading a cookie? Does it use getAllResponseHeaders()? Then somehow Site B has to send a request to Site A first in order to observe those headers ... without credentials.

    Gotta be a phish. No?
