
#1 CSRF Is A Vulnerability In All Browsers

183 points | homakov | 14 years ago | homakov.blogspot.com

First article that makes the point.

238 comments

[+] elisee|14 years ago|reply
Just in case it might be a problem for anyone: The article uses the CSRF vulnerability to log you out of all Google services (and says so in a PS at the bottom).

Don't open the article if you don't want to have to log in to Google again afterwards (might be a problem if you're using two-factor auth and you don't have your phone handy for instance).

[+] spindritf|14 years ago|reply
There's an info now at the top of the post

> To stir up your interest - check any google service e.g. gmail, you are logged out.

Great hook btw. Even more impressively, I have all js on his blog blocked through NoScript and it still worked.

[+] hanbam|14 years ago|reply
Looks like Google detects the type of logout and doesn't ask for two-factor authentication in this case.
[+] homakov|14 years ago|reply
hm yep. should I hide that thing? hm.. Sorry guys in advance.
[+] radicalbyte|14 years ago|reply
It didn't log me out. Must be down to Chrome Adblock, or Facebook Disconnect.
[+] marquis|14 years ago|reply
I'm still logged into Gmail on Chrome, so is this a browser-specific issue?
[+] tptacek|14 years ago|reply
CSRF isn't a browser vulnerability. It's a serverside application vulnerability.

To say otherwise is to say that there is some trivial policy, just an HTTP header away, that would allow IE, Firefox, and WebKit to coherently express cross-domain request policy for every conceivable application --- or to say that no FORM element on any website should be able to POST off-site (which, for the non-developers on HN, is an extremely common pattern).

There is a list (I am not particularly fond of it) managed by OWASP of the Top Ten vulnerabilities in application security. CSRF has been on it since at least 2007. For at least five years, the appsec community has been trying to educate application developers about CSRF.

Applications already have fine-grained controls for preventing CSRF. Homakov calls these controls "an ugly workaround". I can't argue about ugliness or elegance, but forgery tokens are fundamentally no less elegant than cryptographically secure cookies, which form the basis for virtually all application security on the entire Internet. The difference between browser-based CSRF protections (which don't exist) and token-based protections is the End to End Argument In System Design (also worth a Google). E2E suggests that when there are many options for implementing something, the best long-term solution is the one that pushes logic as far out to the edges as possible. Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.
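The token-based protection described above can be sketched in a few lines. This is a generic illustration with made-up helper names and a plain dict standing in for a session store, not any particular framework's API:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate an unguessable token and stash it in the user's session;
    the application embeds it in each form as a hidden field."""
    token = secrets.token_hex(32)  # 256 bits of randomness: infeasible to guess
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    """Reject a state-changing request unless the submitted token matches."""
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids leaking the token through timing
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

Because an attacking page can't read the victim's copy of the form (same-origin policy), it can't supply a matching token.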

This blog post seems to suppose that most readers aren't even familiar with CSRF. From the comments on this thread, he may be right! But he's naive if he thinks Google wasn't aware of the logout CSRF, since it's been discussed ad nauseam on the Internet since at least 2008 (as the top of the first search result for [Google logout CSRF] would tell you). Presumably, the reason this hasn't been addressed is that Google is willing to accept the extremely low impact of users having to re-enter their passwords to get to Google.

Incidentally, I am, like Egor, a fan of Rails. But to suggest that Rails is the most advanced framework with respect to CSRF is to betray a lack of attention to every other popular framework in the field. ASP.NET has protected against CSRF for as long as there's been a MAC'd VIEWSTATE. Struts has a token. The Zend PHP framework provides a form authentication system; check out Stefan Esser's secure PHP development deck on their site. Django, of course, provides CSRF protection as a middleware module.

[+] javajosh|14 years ago|reply
It's clear that developers need a simple way to specify that a piece of API should not be accessible from third-party sites.

I propose a new set of HTTP verbs, "SECPOST", "SECGET", etc., that come with the implication that they are never intended to be called by third-party sites or even navigated to from third-party sites: such a resource can only be called from the same origin. Application developers (and framework authors) could make sure to implement their destructive/sensitive APIs behind those verbs, and browser vendors could make sure to prevent any and all CSRF on those verbs (including links and redirects).
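Browsers don't ship such verbs, but a server can approximate "same-origin only" endpoints by checking the Origin header (or, failing that, Referer) on state-changing requests. A rough sketch, with a hypothetical allowed origin:

```python
from urllib.parse import urlparse

ALLOWED_ORIGIN = "https://example.com"  # hypothetical: your own site's origin

def is_same_origin(headers):
    """Allow a state-changing request only if the browser-supplied
    Origin (or, failing that, Referer) matches our own origin."""
    origin = headers.get("Origin")
    if origin is not None:
        return origin == ALLOWED_ORIGIN
    referer = headers.get("Referer")
    if referer:
        p = urlparse(referer)
        return f"{p.scheme}://{p.netloc}" == ALLOWED_ORIGIN
    return False  # no provenance information at all: reject destructive actions
```

Note this rejects legitimate clients that strip both headers, which is exactly the trade-off discussed elsewhere in this thread.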

[+] euroclydon|14 years ago|reply
Is the following accurate:

If a form is served from domain A (via GET) into an iframe on a page that was served from domain B, then the JS on the page from domain B is prevented from reading or writing data on the page from domain A (unless a cross-domain policy is in place), though it may be able to post it.

[+] homakov|14 years ago|reply
I see some points, but

> CSRF isn't a browser vulnerability. It's a serverside application vulnerability.

you didn't prove this one. CSRF is a browser vulnerability. And I don't care about the other stuff you said further; you're probably right that most popular frameworks have the protection out of the box, I know it, no surprise here :). But I did a pretty wide audit, and only Rails' protection looks really elegant. Hm... probably I'm too much of a Rails fan, true.

And, please:

> Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.

Surely, I don't mean "stop securing your apps against CSRF, it's not your problem". I just want browsers to think about the issue the way millions of developers have to. Because it is their issue; they are in charge. But we are fixing it on the backend (and we will have to for the next 10 years, definitely).

[+] majke|14 years ago|reply
> The difference between browser-based CSRF protections (which don't exist)

What about the X-Frame-Options and Origin headers? They are browser-based mechanisms that hint server side, right?

(not for the classic POST case though...)

[+] vectorpush|14 years ago|reply
> it took me a long time to understand the point behind CSR (cross-site requests) and CSRF fully enough to find them EXTREMELY malicious.

I think this is a very important line. The sense I get around most of my colleagues is that CSRF exploits are only something "bad programmers" get wrong. Of course, they're all rockstars who've never been exploited (yet/AFATK) so it's not like they need to spend a weekend or five paging through droll security papers. A little modesty would do us all well.

> 90% of developers just don't care and don't spend time on that.

Indeed. It takes time to learn, time to code, and unless you're working at a big shop, there's little pressure (or even acknowledgement of the need) to get this stuff right.

Keep up the good work OP.

[+] divtxt|14 years ago|reply
CSRF is like a kafka-esque joke.

Here's my take away from every CSRF article:

A malicious site will load your site in an iframe, fill in your form and post it. Fixing it requires a token in your form, but I can see you don't understand how an extra hidden field in your form will make a difference, so you're clearly not going to handle it correctly. You're screwed. Go home.

As far as I can tell, CSRF has existed ever since JavaScript & frames. How have the browser vendors not fixed such a huge insecure-by-design flaw?

[+] Zirro|14 years ago|reply
Am I correct in interpreting that the proposed fix would be the same as the functionality provided by RequestPolicy (which he mentions in the post)? I've used it for quite a while now, and although it works well for me as a power-user (who is concerned about security), I can't imagine the confusion and pain a regular user would feel, despite the suggested message.

Blocking resources loaded from separate domains breaks a lot of sites today. Few popular sites keep everything under the same domain (CDNs, comment systems, captchas and Facebook/Google/Twitter resources, for example). http://www.memebase.com is probably the worst "offender" I've come across. Hacker News isn't one of them, which I'm happy to see.

Although if this was implemented I could see a lot of sites moving quickly to remedy this, reducing the alerts. It'd still be a pretty hard transition-period, though.

Want to see how much would break today (and if the fix would work for the average user)? Try: https://www.requestpolicy.com

[+] nbpoole|14 years ago|reply
"Am I correct in interpreting that the fix would be the same as the functionality provided by RequestPolicy (which he mentions in the post)? I've used it for quite a while now, and although it works well for me as a power-user (who is concerned about security), I can't imagine the confusion and pain a user will feel despite the message suggested."

That was my interpretation as well and I reached the same conclusion. Having the average user make application-level security decisions is a very bad idea.

RequestPolicy is a wonderful extension and I think its use should be encouraged. But the average user does not understand enough about an application and how it interacts with third-party websites to make informed decisions about whether a particular interaction is good or not. False positives (where the user flags a good interaction) will lead to loss of functionality while false negatives (where the user fails to flag a bad interaction) will lead to security vulnerabilities that website owners can't prevent.

[+] jaylevitt|14 years ago|reply
In fact, if you're using Amazon's CloudFront CDN, and you're using HTTPS, you have NO way to keep everything under the same TLD; CloudFront can only serve its own SSL cert, not yours.
[+] driverdan|14 years ago|reply
As my other comment highlighted, disabling 3rd party cookies will prevent most CSRF. As an added bonus it will also increase your privacy by preventing some (but not all) cross domain tracking.
[+] ricardobeat|14 years ago|reply
No need to go that far. The X-Frame-Options: SAMEORIGIN header, supported by all major browsers, can prevent the majority of these attacks (unwanted GET requests in the background).

https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_respons...

Other than that, it should be hammered into developers' heads that GET should not have side effects.
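Sending that header is cheap in any stack. As an illustration only (not tied to any particular framework), a generic WSGI middleware that stamps it on every response might look like:

```python
def add_frame_options(app):
    """WSGI middleware that adds X-Frame-Options: SAMEORIGIN to every
    response, so browsers refuse to render the site in third-party frames."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # copy the header list and append our anti-framing header
            headers = list(headers) + [("X-Frame-Options", "SAMEORIGIN")]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

Any WSGI app can be wrapped as `app = add_frame_options(app)` at startup.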

[+] seanalltogether|14 years ago|reply
Is a GET request in an iframe now considered a CSRF vulnerability? As far as I know, he hasn't actually done any cross-site scripting. If I submit this as a link on Hacker News and get a bunch of people to click it, have I forged a cross-domain request as well?

https://mail.google.com/mail/u/0/?logout

[+] citricsquid|14 years ago|reply
Maybe this is a good time to ask:

I found an XSS vulnerability in a website that can be used to cause noticeable problems (enough that fixing it should be a priority), so I contacted the developers behind the site and told them what caused it, how to fix it, and gave an example of it in practice and why it's bad. They've done nothing in over a month. What do I do?

I guess the answer is "forget it", but I feel like if I don't do anything someone malicious will discover the issue and cause harm to users of the website...

[+] akavlie|14 years ago|reply
For those who didn't see the recent kerfuffle: This guy recently found and demonstrated a major Rails exploit on github. He seems to know a thing or two about security exploits.
[+] davepeck|14 years ago|reply
My app's web site is built with Django. I use the built-in CSRF tools. (I should emphasize that my site is strictly HTTPS.)

In theory, no normal user will ever fail CSRF checks. In practice, tons of people have complained that they see Django's (very confusing) CSRF error page when they try to sign up for my service.

This was surprising to me; I thought we were _way_ past this point. Digging into it, I've learned that tons of people use extensions that muck about with cookies in ways that break Django's CSRF feature. I don't really know a way around it.

How common is this, in your experience?

[+] jasonkeene|14 years ago|reply
Yeah, this is something I run into often, as I don't accept cookies from sites by default and don't send the Referer header (both are required for Django's CSRF middleware over HTTPS). This is a good read if you are interested in the rationale behind these decisions: https://code.djangoproject.com/wiki/CsrfProtection

As far as a solution for your users goes, I'd just let them know that you require cookies to log in (obviously), and, if they are posting over HTTPS, that they must send the Referer header, which can be forged to be just the domain rather than the entire URL if they prefer. I use https://addons.mozilla.org/en-US/firefox/addon/refcontrol/ set to forge for Django sites.

[+] huxley|14 years ago|reply
You can change the CSRF error page by setting CSRF_FAILURE_VIEW:

https://docs.djangoproject.com/en/dev/ref/settings/#std:sett...

Update: I noticed that later in the thread you mention that you already provide a custom error page. I'll leave this for others who might not be familiar with custom CSRF error pages.
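For anyone curious what such a view might contain, here is a minimal sketch of a friendlier failure page. The wording and helper name are illustrative; in a real project, settings.py would point CSRF_FAILURE_VIEW at a Django view that wraps this message in an HttpResponseForbidden:

```python
# Hypothetical message builder for a custom CSRF failure page.
# In settings.py you would point at the wrapping view, e.g.:
#   CSRF_FAILURE_VIEW = "myapp.views.csrf_failure"

def csrf_failure_message(reason=""):
    """Build a human-readable explanation instead of a terse default error."""
    return (
        "<h1>We couldn't verify your request</h1>"
        "<p>This usually means cookies are blocked or a browser extension "
        "interfered. Please enable cookies for this site and try again.</p>"
        f"<!-- internal reason: {reason} -->"
    )

# In Django, the actual view would be roughly:
# def csrf_failure(request, reason=""):
#     return HttpResponseForbidden(csrf_failure_message(reason))
```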

[+] gibybo|14 years ago|reply
I am not familiar with Django's CSRF tools, but you could write your own that don't depend on cookies. Initialize a JS var with a random token in the HTML somewhere, then require that the browser include it with any state-changing actions.
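One way to make that idea verifiable without a cookie is to derive the token from the logged-in user's id with a server-side HMAC key, so the server can re-check it statelessly. A sketch under those assumptions (the key and function names are made up, not Django API):

```python
import hashlib
import hmac

SERVER_SECRET = b"change-me-in-production"  # hypothetical server-side key

def make_token(user_id):
    """Token the page embeds in a JS variable; derived from the user id,
    so the server can re-verify it without storing per-session state."""
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def check_token(user_id, submitted):
    """Verify that a state-changing request carried the right token."""
    return hmac.compare_digest(make_token(user_id), submitted)
```

An attacking page still can't read the token out of the victim's HTML, so the check holds for the same reason session-stored tokens do.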
[+] homakov|14 years ago|reply
I've been examining Django sites too. Confirmed.
[+] javajosh|14 years ago|reply
This attack vector requires:

1) previous authentication to a service.

2) a service which exposes destructive actions at guessable URLs.

3) "third-party cookie" support in the user agent. [1]

4) a visit to a page with a malicious resource construct (an image, script, iframe, external style sheet, or object). Note that this resource could be generated by JavaScript, although this is not necessary.

Sadly, the first three criteria are widely met. If we are to systematically remove this threat, then we have to look at removing each in turn:

1) Previous authentication to a service can be mitigated by simply logging out when you are done, but this is inconvenient and requires manual user intervention. However, there is an interesting possibility: limit "important" services to a separate process, such as a different browser or an "incognito" window.

2) Services should be built with an unguessable component that is provided just prior to calling by a known-good API, probably with additional referrer verification.

3) It is my belief that disabling third-party cookies is the right solution here: users rarely, if ever, get value from third-party cookies. Denying them would allow API authors to write simpler APIs that do not have a secret component, and would allow users to maintain the same behavior and login to all their services from the same browser.

4) While it seems that little can be done on this front apart from releasing some chemical agent into the atmosphere that made people trustworthy and good, actually it may be possible for browser makers to do some simple analysis of resource URLs to detect possible hanky-panky.

[1] https://en.wikipedia.org/wiki/HTTP_cookie#Privacy_and_third-...

[+] spamizbad|14 years ago|reply
I'm having a little trouble parsing this post. Is he saying he's discovered a variant of CSRF that cannot be stopped by using the Synchronizer Token Pattern? Or has he found something that a lot of sites' protection patterns don't cover?
[+] huhtenberg|14 years ago|reply
[ repost from below ]

I just read up on CSRF and its mitigation with synchronizer tokens at [1], and there's one thing I don't seem to understand. What prevents an attacker from opening an original site's page in an iframe and then having a script fill in and submit the form on it? In other words, say I am logged in to my bank's site. I then open a malicious page that has an iframe pointing at http://bank/move-funds that contains a fund transfer form. Wouldn't this page include a correct CSRFToken, making the form readily submittable by a malicious script?

Can anyone comment? It damn sure looks like a big gaping hole that is virtually impossible to plug.

[1] https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%...

[+] nbpoole|14 years ago|reply
Because a script (I assume you're referring to JavaScript) can't fill in a form on or read the contents of a third-party website. That's a violation of the same-origin policy.

CSRF tokens are a well-understood solution to this issue. In order to submit a valid request, you must include what is essentially a secret token that is on the page (although the secret token can just be your session ID). For an attacker to get that token, they would need to be able to do at least one of the following:

A. Guess it, by having you make multiple requests. (so you make the token long enough that it's infeasible to guess)

B. Be able to read it by intercepting the HTTP response or reading it in some way, in which case you have much larger security issues.

C. Be able to read the token in the HTTP request that the browser makes. Again, if an attacker can do this, your session is already compromised.

[+] shimon|14 years ago|reply
Doesn't work, because of cross-domain security policies. Javascript running in http://malicious-site wouldn't be able to read the CSRF protection token in the fund transfer form on http://bank. So the submission wouldn't have the correct token value and the bank would reject the attempt.
[+] brian_peiris|14 years ago|reply
He logs you out of Google with a simple

<img style="display: none;" src="https://mail.google.com/mail/u/0/?logout">

[+] jiggy2011|14 years ago|reply
CSRF is a bit of a pain to work around but how much of a problem is it in the wild?

Most sites where this could do real damage (and have real gains for the attacker), banks etc are going to be well protected.

You could use it to comment-spam a blog, but that's going to be a crapshoot: you'd have to guess which blog people are logged into, etc., so you would need very targeted attacks.

Sure, signing out of Google is annoying, but if you have LastPass or similar, signing back in is pretty frictionless.

[+] homakov|14 years ago|reply
>Most sites where this could do real damage (and have real gains for the attacker), banks etc are going to be well protected.

You think so. In "the wild" even serious systems are vulnerable #OpApril1

[+] gibybo|14 years ago|reply
Gmail has had more serious CSRF vulnerabilities in the past - you could use it to download the entire address book of anyone who visited your site.
[+] dfc|14 years ago|reply
RequestPolicy + NoScript are the big reason I haven't switched to Chromium.

In order for RequestPolicy to block this, it needs to be in a fairly locked-down state, too...

[+] someone13|14 years ago|reply
A bit of a note regarding REST:

RESTful services are as vulnerable to CSRF as anything else. See [1] for more information (and I'm really sad that there's no second post, as mentioned). However, since RESTful services imply no state on the server (i.e. no token), the question is: how do you prevent CSRF attacks?

One really simple method is to deny all requests (on the server) with the application/x-www-form-urlencoded content type, and deny all multipart/form-data requests that include non-file parameters; those are the only two content types that can be sent from an HTML form. For your own application's requests, XMLHttpRequest can set a different content type, and XHR is restricted by the same-origin policy, so it isn't usable for CSRF.
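A sketch of that check, assuming a generic framework-agnostic request object whose Content-Type header is passed in as a string. It simplifies the multipart case by refusing it outright rather than inspecting for non-file parameters:

```python
# Content types a plain HTML form can produce, per the commenter's rule.
FORM_TYPES = ("application/x-www-form-urlencoded", "multipart/form-data")

def reject_form_encoded(content_type):
    """Return True if a request should be refused because its body could
    have been produced by a plain HTML form (hence by a cross-site POST).
    JSON/XML bodies require XHR, which the same-origin policy restricts."""
    if content_type is None:
        return False
    # strip parameters like "; charset=utf-8" before comparing
    base = content_type.split(";")[0].strip().lower()
    return base in FORM_TYPES
```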

EDIT: Also, sort-of-related: I recommend you set the X-Frame-Options header too, in order to prevent clickjacking. Info at [2].

[1]: http://blogs.msdn.com/b/bryansul/archive/2008/08/15/rest-and... [2]: https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_respons...

[+] jim_lawless|14 years ago|reply
I had envisioned what I think is a more solid defense against CSRF ... I just haven't had time to build a proof.

Earlier commenters have noted that each request back to the server should include an unguessable token that cannot be derived by mining other pages on the site with cross-site AJAX requests.

My hypothetical solution is to embed that token in a prefix to the hostname after logging into the given site. The token would then be sent in the Host: header with all dynamic requests.

Step 1: You log in to www.somesite.kom.

Step 2: You are then forwarded to dynXXXXXXXX.somesite.kom where XXXXXXXX represents a unique, dynamically-generated token tied to your session.

The attacker must now know XXXXXXXX to properly form up a GET or POST request to attack your account.

The site itself could then use relative URLs for dynamic content, or could use the appropriate templating system to ensure that any dynamic URLs (either in HTML markup or script text) contain the generated hostname.
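A sketch of how minting and checking such a per-session hostname might look, reusing the author's placeholder domain (the helper names and dict session store are hypothetical):

```python
import secrets

BASE_DOMAIN = "somesite.kom"  # the author's placeholder domain

def session_hostname(session):
    """After login, mint a per-session subdomain: dynXXXXXXXX.somesite.kom."""
    token = secrets.token_hex(4)  # the unguessable XXXXXXXX component
    session["host_token"] = token
    return f"dyn{token}.{BASE_DOMAIN}"

def host_matches_session(session, host_header):
    """Check the Host: header of a dynamic request against the session."""
    token = session.get("host_token")
    return token is not None and host_header == f"dyn{token}.{BASE_DOMAIN}"
```

The attacker would have to know the token to address a forged request at all, which is the scheme's whole point; the cost is wildcard DNS and a wildcard certificate for HTTPS.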

[+] slurgfest|14 years ago|reply
There is a HUGE vested commercial interest in the CONTINUATION of the insecure status quo, in which all the control is on the server side (hence, with the companies doing tracking and advertising using third party requests, rather than with the end user).

Furthermore, the players funding browser development all share strongly in that vested interest. (Even for Firefox, follow the money - and if Firefox did try to lock down without industry agreement, it would lose, which Mozilla knows).

So you will not see any change. This also explains the degree of heat directed at the suggestion that client behavior could be less insecure by default, with regard to third party requests.

This is not new. Much of HTTP as originally conceived actually dictated a great deal more user control over what happened. Those standards had to be compromised from the word go in order to reach the present state.

[+] txt|14 years ago|reply
Adding an extra token for protection against CSRF attacks will only work if it is changed on each request. Some of the biggest sites out there do not do this. I know of one site in particular (I won't name it, but it's HUGE) that generates a unique token every time a user logs in. The token doesn't change until the user logs out; even if the user closes the browser and doesn't go back to the site for a week, the token will be the same.

So it does its job, until somebody like me pokes around and finds a hole that will parse out that token and generate a form that can make any request on behalf of that user in an iframe without that user knowing a thing. Evil, yes, but I found this months ago and it still works... and I haven't used it in any way, besides a proof of concept.
[+] eurleif|14 years ago|reply
You shouldn't be able to get the token from another domain, regardless of how long it lasts. How are you able to?
[+] yuliyp|14 years ago|reply
Did you consider reporting it? Many such "huge" sites have bug bounty/white hat programs.