Hackers can force your browser to send requests to any site they want. It's not even hard - all they have to do is get you to view an email or a web page. Unless the site is specifically protected against this - and almost none are - then attackers can make your browser do anything you can do, and they can use your credentials …
A per-session token submitted with each request, combined with a session-ID cookie, works well. Also expire sessions after a set period (easily done with a session cookie), 'cos you know your users won't log out.
Icon: I've seen way too many CSRF holes on popular sites. You've been warned.
Can't you also write your form-processing so that if the form request has come from anywhere except your own site, it won't run?
Barclays online bank - Just Yesterday!
Could this be detected and stopped by checking the old-fashioned HTTP Referer field? E.g. the only site that should have Netflix "add to my queue" links is netflix.com; if the Referer specifies a different host then the request is being forged. Netflix could detect this and deny the request.
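A check along those lines is only a few lines server-side. A sketch (the function name and the strict reject-if-missing policy are my assumptions; as others note below, some software strips the header entirely):

```javascript
// Sketch: reject state-changing requests whose Referer host differs from
// our own. Some proxies and security suites strip the header, so a
// missing Referer is treated as a rejection here, not a pass.
function referrerLooksOk(refererHeader, ourHost) {
  if (!refererHeader) return false; // stripped or absent
  try {
    return new URL(refererHeader).hostname === ourHost;
  } catch (e) {
    return false; // malformed header
  }
}
```

Worth doing as one layer, but not as the only layer, for exactly the header-stripping reason.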
@ Security By Lyle
Do you mean the http referrer header?
Re: Referer field?
Agreed - a simple solution is to require a same-domain referrer when performing tasks.
AFAIA this cannot be forged by an un-tampered browser and should be a de facto condition in any sensitive app or task.
Great work bringing it into the open - but surely this should be developer common sense?
Most pedantic, I like it!
Well, technically I meant HTTP_REFERER (Don't you just love specs that can't spell?)
Regarding followup posts, I wasn't aware that some browsers don't send the referer; I thought that was a standard part of the HTTP request, to tell where the request had come from.
Glad others thought it was a good idea too!
Responsibility of web-apps to avoid this
For a web-developer, don't perform actions on GET variables without a check token. GET should be for display only otherwise.
Secondly, for CMS builders: don't allow anyone to include <FORM> or <INPUT> elements in a user-defined content area. One big reason is that it makes it simple for a rogue user to fake the login page for the CMS and harvest usernames and passwords. A second reason is that it allows people to build fake Google Checkout or PayPal pages that redirect to harvester sites. And, naturally, never allow the user to include JS in any way.
Can't rely solely on HTTP_REFERER
HTTP_REFERER is a good defence, but cannot be the sole defence, because certain systems strip the referrer from requests (yes, I'm looking at you, Norton Internet Security).
GET should never do anything but fetch content for viewing. http://example.com/script?do=delete&what=all_my_stuff is going to be bad news.
But POST is no protection either - a hostile page can dress a forged POST up as something innocent:

<form action="bank.com" method="post">
  <input type="hidden" name="do" value="transfer" />
  <input type="hidden" name="account" value="12345678" />
  <input type="hidden" name="value" value="5000.00" />
  Rate this image:
  <input type="radio" name="rating" value="3" />Great
  <input type="radio" name="rating" value="2" />Mediocre
  <input type="radio" name="rating" value="1" />Terrible
  <input type="submit" value="Go" />
</form>
The only secure defence right now is the security token.
We wanted to use http_referer in a number of our apps, but they get clobbered by reverse proxy servers, which we make liberal use of.
No no no
Web security 101. Never implicitly trust anything that comes from the browser. Ever. That includes the referer (sic) header...
By all means use it as a part of your system: reject requests with a bad referer, but don't automatically accept a request just because the referer looks right.
The randomly generated, expiring token concept is much stronger because your own server generates the check value and the client has to mirror it back to you. If something is interfering with the process, it probably won't know the correct check value and you can reject the request. In the case of the referer string, however, the client always knows what the 'correct' value is supposed to be (the URI of the page being attacked), leaving it open to manipulation by the user, malware, browser addons, and so on...
This is a phishing attack, right?
The article is very confusing ... it seems to imply that any old server can host files that automagically grab your bank credentials (from where, it is not indicated) and initiate a transfer, or that any old server can host a specially crafted IMG tag that goes on to hijack your browser and all logged in sessions (how it could do that based on some script is not indicated).
I just don't see how this could be anything other than a typical phishing scenario. What am I missing? Is this supposed to be something new, or just a reminder to validate all user input all the time?
Web programmers fault.
As Jeremy stated, you should never trust input from outside your system. As for severity, it's only exploitable if an incompetent web programmer doesn't escape or validate the input.
Here's a firefox plugin that will help
There's a somewhat (sadly) unknown Firefox addon called SecureBrowse (disclosure: my idea, and my colleagues developed it; I am too old to learn JS now!) that can really help.
SecureBrowse simply enforces the kind of paranoid "delete cookies etc", and "don't stay logged in to bank sites" and so on behaviour that I found myself doing anyway.
We released an FF3 version, but it was sent back to the sandbox because it clashed with some other extension. We're looking into it but meanwhile you can get the FF3 compatible version from the sandbox (easy enough to find but I'll happily provide instructions if someone asks... sitaramc at gmail dot com or will even send it to you)
Phishing for Tiddlers...... and Landing a Shark ....
....Don't Panic or you'll Capsize the Craft and Lose Everything to Nature.
"The article is very confusing ... it seems to imply that any old server can host files that automagically grab your bank credentials (from where, it is not indicated) and initiate a transfer,..." ... By James Butler Posted Friday 29th August 2008 20:17 GMT
No just any old server, James, only those IntelAIgently ReDesigned.
"What am I missing? Is this supposed to be something new,..." You have missed very little and it is surely something new. It is also Beautifully Plausibly Deniable and/or Transparently Admissible, dependent upon Audience and their Need to Know. Some would be just Too Confused if they be Told and thus is IT a Kindness to Hold One's Counsel if One is Privy to Automagic Information.
:-) Go on, I dare you........ Check your Bank Account, Online, 42 See if you are an Asset for Automagical Support or a Liability to be Artificially Controlled.
Most internet security packages allow the referrer to be wiped as most web developers know.
Cookie + request parameter
What a misleading title
There is no defence: if the site is not secure, then your account on the remote host can be abused. You cannot control that.
Telling people to logout doesn't solve the problem if the logout system is faulty either.
Cookie deletion is probably the best means of SELF-defence; the point is you cannot rely on an insecure site to do it right. Logout is part of the remote system, something you cannot affect or ensure they have done correctly.
There are other hooks that systems use as well, so you cannot really have a silver bullet here, it is all about trusting the remote is doing everything right.
I know these articles are meant primarily as infomercials, but a bit more effort with less sensationalism could be used. The problem is this type of article seems to suggest there is something the user can do, and therefore if they don't they share in the liability, but no, it is the fault of the remote server with your account on it, so let's not confuse that point.
Everything can be faked. See http://en.wikipedia.org/wiki/XHR
(1) Disable JS, Flash and other random downloaded-code executors to mitigate the XHR problem.
(2) Log out when done and delete cookies - tell the browser to only use session cookies (Mozilla allows this).
(3) Only keep one browsing window open when visiting important sites to limit cookie exposure (and delete them when done)
(4) As suggested, use multiple browsers. I use Konqueror (no JS, no plugins, cookies) and Mozilla (JS, Flash, session cookies only) this way.
For web dev:
(1) Don't use JS, and recommend users disable JS (a long shot, but hey, it's you that's being ripped off).
(2) Add the following iframe breakout script to every page
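(The script itself seems to have been stripped from the post; the classic frame-breaker of the era looks roughly like this - `breakOutOfFrame` is my naming, and it's written as a function purely so it can be tested outside a browser:)

```javascript
// Frame-breaker sketch: if this page has been loaded inside a frame,
// force the top-level window to navigate to our own URL, making any
// invisible 1x1 iframe attack visibly obvious to the user.
function breakOutOfFrame(win) {
  if (win.top !== win.self) {
    win.top.location.href = win.self.location.href;
  }
}

// In the page itself, run it on load: breakOutOfFrame(window);
```

As noted below, the target of the redirect can be swapped for a cookie-clearing page instead of the page's own URL.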
If anyone tries to put your site in an (invisible or otherwise) iframe it will become pretty obvious to the user, who will then (hopefully) contact the hosting site to say that something is horribly wrong, and the webmaster can go and fix the XSS hole in whatever they are hosting.
location.href could be replaced with a redirect to some other site (so as not to use your cookies) or to a page on your site that deletes your cookies on the browser to make them safe.
(3) Reconfirm credentials often. Keep sessions short to limit time at risk, and ask for passwords to authorise actions on really sensitive pages.
(4) As already suggested, random tokens returned to the site as GET or POST parameters will stop the less sophisticated attacks that do not scan for tokens. An advanced JS script with XHR can load a page, scan for the token value, insert it and post it back. @Nick Clarke: a neat idea, but a preprogrammed script could search for the JS rather than hidden input tags. It would need to be a big XSS hole, but it's possible.
This advice is worse than useless, doubly so as it's typically given to people who know next to nothing about the intertubes. All it does is reduce the web to one large error.
Go and read the forums at the hacker sites and consider whether one should be doing everyday browsing with JS enabled?
Yes, it annoys me that in 2008 some sites resort to "enable JS or fuck off" for stuff that doesn't even need it. JS links, anyone? JS breaks the Web for the blind, the disabled, or anyone using a stylus. You know, that accessibility thing. Web 2.0 be damned. Does El Reg demand its users be JS-enabled? Even Gmail has a non-Web-2.0 interface that works really well.
JS is necessary for media-type stuff, hence my keeping Mozilla JS+Flash enabled to visit utoob. Actually, the new-look Beeb site is a good example of providing accessibility and (semi-nagging) relevant alternatives to embedded Flash. The JS provides additional functionality, not critical functionality. Web 2.0 stuff is pretty groovy and the slick interface seductive, but there is nothing JS can do that cannot be done by a server-side script (I do it every day).
JS has been given too much power and now it's being abused by those who want to steal your money. It's your choice: keep your accounts secure or create another million zombies.
Even better, why not pull the network cable out the back as well...
RAIR Force Blue Skies Thinking ... Per Ardua ad MetaAstra
I agree, Anomalous Cowherd.
The Web is as IT is, with many Offering Gadgets. May the Best Gadgets Win so everyone Wins with Win Win. They are not Competitive Gifts they are Complementary and Complimentary Services which Big Business Thinks to Charge you for. And All they Really Only Want is Money. Let them go to the Banks and let the People Enjoy Themselves with their Money ..... Pretty Good Paper......... for IT is Only a Perverting Control Invention ..... an Artificial Intelligent Design and thus Oxymoronic.
After All, whenever IT is Spent IT Just goes right back to Source for ReSpending........ so why all the Taps that Turn the Flow Off? It is not Logical.
If that's how you ensure your own security, knock yourself out.
I notice that you've not addressed the charge that running untrusted code on your own machine is dangerous. Do you wonder why noscript is the most popular add-on for Firefox? Or why IE is now copying it?
Sure, we could outlaw cookies (another solution), but then we have the session-ID problem. Embedded tokens only lead to either session fixation or a broken back button (the most-used button on a browser) and do not fully fix the CSRF vulnerability. SSL IDs for session IDs are a good solution but have their own penalties. It's a tricksy problem and the devil is in the detail.
It is an imperfect universe filled with Windows and bad people who just won't play cricket. This isn't helped by client-side scripting languages with too much power and users who opt for convenience over security. I know, guns don't kill people, but until there is sandboxing or virtual browsers or whatever, disabling JS is a solution that works 100%.
It might seem overkill, but...
For my money, the absolute best way to keep this sort of crud out of corporate networks is to not give users Web access at their corporate desktops.
It seems like an absolute pain for the first week or two, but trust me, once you get into the habit of "I need to check something on El Reg, so I'm popping round the corner to use the Web" you really don't miss it.
Plus, you don't spend all day soaking up bandwidth on Facebook.
Put 1 Internet terminal out for every 5 or so regular desktops. They don't need *any* apps on them other than a virus scanner, a recent browser, Adobe PDF reader, and the drivers for a local printer (yes, users will want to print those PDFs out). Having USB ports for those of us that need to download drivers etc will be a help too. Other than that, you really don't need a lot more. If you want to put Linux on them instead of Windows, fill your boots. Connect them to their own *completely separate* Internet LAN that doesn't touch your corporate LAN at all. Have a plan to flatten and reimage them every so often (how often depends on the sort of users you have).
I know your servers will still need Internet access for email, patch downloads, and such, but that's what DMZs were invented for.
Trust me, I work in just such an environment, and it really isn't that big a deal.
Trust me ...... he's a pretty straight kind of corporate guy
" It might seem overkill, but ... once you get into the habit of "I need to check something on El Reg, so I'm popping round the corner to use the Web" you really don't miss it. Trust me, I work in just such an environment, and it really isn't that big a deal." ..... By Anonymous Coward Posted Monday 1st September 2008 21:03 GMT
What a good little programmed robot you are , AC.
Ps Where is the Google Chrome Download site hiding ITself? Or is it an AutoMagical Upgrade through Mozilla Firefox. Although that would be Real Cheeky Dominant Bonobo ProAction and QuITe Refreshingly Innovative in the Novel Virtual Leadership Stakes/AIReality SweepStakes.
Might be overflogging this particular horse, but:
"Web 2.0 stuff is pretty groovy and the slick interface seductive but there is nothing JS can do that cannot be done by a scripting server (I do it everyday)."
Technically you're correct, but then by that argument we'd all be dashing out CGI in Perl, using Gopher or working on Wyse terminals. UI evolves, and users want slick sites that verify their form submissions (shock, horror) *without* having to do a round trip to the server. It's quicker, simpler, involves a much cleaner architecture and is no less secure if competently programmed.
Start viewing a website as an application to be developed by a programmer, rather than a sequence of pages with forms hacked together by a designer and you might see my point.
Incidentally take a look at http://www.extjs.com/deploy/dev/docs/. It's slick, functional, cross-browser and looks great.
This *is* a simple phishing issue? While I appreciate amanfromMars' response, it does little to answer my question. Beyond strenuous validation of user input, what's left to do? It really seems like a lot of fuss over nothing, as long as user input is validated.
I mean, it's my form working with processing on my server ... I don't care if someone tries to host the page in an iframe on their site ... their site still isn't going to be able to read my server's sessions or my server's cookies. I fail to see how the user's credentials could be snagged without (a) a successful phishing excursion and (b) the user then entering said info into the phished form.
In the absence of (a), this doesn't work. Am I right?
If the phishing is unsuccessful, then this is a non-issue, right?
CSRF is subtle
Nothing technical about it. It can be coded either way; the server has all the same information. With broadband, conventional page reloading is not the tedium it once was with dial-up. I'm in agreement about the slick interface for an app feel - there are some very good ones out there. For the discussion of the article, however, conventional pages do not demand JS, while the app-style interface cannot function without it. And JS opens up the prospect of CSRF...
It has nothing to do with phishing or input validation. It is impersonation and forged requests. My website can be perfect and still be the target of attacks launched from another site with XSS holes. But (and it's a big but) the requests come from my customers. I cannot distinguish real requests from forged ones unless I make user access such a pain that nobody wants to visit. With CSRF you'll realise there is almost no way to defend against it /and/ keep all your users. Hence the doom and gloom of the article.
To recap, it happens like this. We have four agents in this scenario.
- Bob, at home, using his browser.
- SiteA, perfect, with no XSS holes or anything. Perhaps a bank.
- SiteB, another site, either run by Sid or carrying his malware through an XSS hole.
- Sid, the bad guy who wants to rip off Bob's account at SiteA.
(1) Sid puts malware on SiteB.
(2) Bob visits SiteA and logs into his account. SiteA sends Bob a cookie (a random token or nonce) so that further requests from his browser do not require him to re-enter his password for every requested page (a convenience feature).
(3) While still browsing SiteA, Bob opens up a new tab and surfs into SiteB.
(4) Sid's malware on SiteB silently makes Bob's browser send a request to SiteA, for example via a hidden form or a 1x1 pixel iframe.
(5) SiteA receives this request. It is coming from Bob's browser at his IP address, along with Bob's cookie for SiteA. The server checks the cookie, sees that it's a valid cookie for Bob's current session, and executes the request. Fill in the blanks of what the request could be. SiteA logs the request, IP address etc.
(6) Much later, Bob complains to SiteA that his bank account is now empty. SiteA examines the logs, sees that it was Bob who made the request, and tells him tough bananas.
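For concreteness, the request SiteA receives in the scenario above could be produced by something as small as this on SiteB (the URL and field names here are invented for illustration):

```html
<!-- Hypothetical SiteB page fragment: the form posts into an invisible
     iframe so Bob sees nothing, and his browser attaches his SiteA
     cookie automatically. -->
<iframe style="width:1px;height:1px;border:0" name="sink"></iframe>
<form action="https://sitea.example/transfer" method="post" target="sink" id="forged">
  <input type="hidden" name="do" value="transfer" />
  <input type="hidden" name="to" value="sids-account" />
  <input type="hidden" name="amount" value="5000.00" />
</form>
<script>document.getElementById('forged').submit();</script>
```

No credentials are stolen at any point; the browser supplies them on Bob's behalf, which is exactly why this is not phishing.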
While it is up to SiteA to do all they can to thwart this (they don't want customers being ripped off) you need to be aware of what might be going on while browsing. If you browse one site at a time, logout and delete cookies before going somewhere else then leaving JS on is just fine. If you like to have many tabs open (I do) then you need to make the necessary security adjustments. You could keep one browser strictly for online banking.
A long post, but I hope it clears up why this is important. It's not that I hate JS (or Web 2.0); it's that it's too dangerous to give it free rein for no really good reason.
Still Smells Like Phish
"The html part of the malware open a 1x1 pixel iframe to http://SiteA..."
Fairly tricky, as drawing from the history is quite problematic in this case, and how else would Site A's URL be available to Site B?
This is where it breaks down, for me. Site A's cookies would be unavailable to Site B at all times, so unless the user is inputting data into a Site B page (phishing), how would said data migrate? Not from the iframe to the iframe ... that data's held on Site A's servers and in the cookie. Unless Site B can cause Site A to give up that info, it can't get and use it to send that "malware request" through the iframe.
From Site A to Site A is where the protection is.
From Site B to Site A doesn't accomplish this hack without data from Site A.
How is Site B getting that data? Not from Site A's cookie.
Are we talking about XHR's open() method? If so, how's the username and password getting in there without reading a cookie? Does it use getAllResponseHeaders()? Then somehow Site B has to send a request to Site A first in order to observe those headers ... without credentials.
Gotta be a phish. No?