Be very suspicious of articles like these. I do not believe any entity in the world has statistics strong enough to make predictions like the expected percentage change in XSS year over year. Everyone claiming to have those statistics has thoroughly confounded their analyses by relying heavily on applications that have been made available to specific tools and companies. But the modal web application deployed on the Internet is the one that has had no security testing whatsoever.
> Lastly, “DOM-based XSS” attacks occur purely in the browser when client-side JavaScript echoes back a portion of the URL onto the page.
This Google Doc has tracked almost all "sinks" and "sources" for DOM-based XSS[1]. They aren't by any means limited to the URL (usually accessed by the `document.location` object).
[1] https://docs.google.com/spreadsheets/d/1Mnuqkbs9L-s3QpQtUrOk...
You're right - I tried to keep this section as brief as I could. DOM-based XSS can happen from any source, but the hardest-to-detect (and very common) variant uses the fragment (the part after the #) to inject the payload, which is never sent to the server.
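A minimal sketch of that fragment-based pattern; `renderGreeting` and the payload are hypothetical stand-ins for any sink along the lines of `element.innerHTML = location.hash.slice(1)`:

```javascript
// Hypothetical sketch of fragment-based DOM XSS. In a real page the sink
// would be something like: element.innerHTML = location.hash.slice(1);
// The fragment never reaches the server, so server logs and WAFs never see it.
function renderGreeting(fragment) {
  // VULNERABLE: untrusted fragment concatenated straight into markup
  return "<div>Hello, " + decodeURIComponent(fragment) + "</div>";
}

// An attacker lures the victim to https://victim.example/#<payload>
const payload = "%3Cimg%20src%3Dx%20onerror%3Dalert(1)%3E";
console.log(renderGreeting(payload));
// → "<div>Hello, <img src=x onerror=alert(1)></div>"
// The markup now contains an executable onerror handler.
```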
Why can't we tackle XSS in the browser, by preventing JavaScript from executing in the <body> (or anywhere other than <head>, for that matter)? There is an old memory-protection technique of designating the stack and heap (the data portions of memory) as non-executable. It seems like a similar idea should apply to the web: the DOM is effectively a "data" portion, so separate out all executable JavaScript into its own section. I know this breaks things like `onclick=` attributes, but can't those be replaced with event listeners? Of course, it would be opt-in by setting an attribute somewhere in the DOM (e.g. <body non-executable="true">).
This seems like a fairly obvious idea to me, but I'm not a frontend developer, so I'm looking for someone to tell me why this doesn't already exist :)
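For what it's worth, something close to this does ship in browsers today: a Content-Security-Policy response header can forbid inline script entirely, so only code loaded from explicitly allowed script files runs. A minimal sketch (not a complete production policy):

```
Content-Security-Policy: script-src 'self'
```

With this header set, inline <script> bodies and handler attributes like `onclick=` are refused, and handlers must instead be registered via addEventListener from a same-origin script file. The main difference from the proposal above is that it is opted into per response header rather than per DOM attribute.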
> Single Page Apps increase the amount of client side logic and user input processing. This makes them more likely to be vulnerable to DOM-based XSS, which, as previously mentioned, is very difficult for website owners to detect.
Hmmm...assuming your back end has all the requisite validation and other security in place, how can a SPA cause an XSS? Are there any purely client side attack vectors (XSS or otherwise) that need to be considered if your back end is fully protected?
Yes, a very common DOM-based XSS vector is against location.hash, which is never passed to the server. Versions of Adobe RoboHelp keep getting pwned by this. The article is somewhat wrong that attacks against the URL won't be detected by the server, since a decent WAF will detect this (though not payloads in the fragment, which never leaves the browser).
https://example.com/login?vulernable-param=evilcredentialste...
If I can convince a user to click that, and then login, I can steal their username, password or anything else. Basically anything they do in that window after clicking that link can be compromised.
I'm interested in the topic, but found the article quite disappointing. It doesn't really go into the technical details of why we have a new wave of XSS vulns.
What I learned only recently: with many modern JavaScript frameworks, many of the assumptions you may have had about XSS in the past are obsolete. The strategies that worked in the past - proper escaping of untrusted input - don't necessarily work any more if you're using something like AngularJS.
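A concrete illustration of that point (a hypothetical sketch, not from the article): classic HTML escaping can pass an AngularJS 1.x template expression through untouched, because the probe needs no HTML metacharacters at all:

```javascript
// A minimal HTML escaper of the kind that defeats classic reflected XSS.
const escapeHtml = (s) =>
  s.replace(/[&<>"']/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]));

// Classic template-injection probe: no <, >, &, or quotes,
// so the escaper has nothing to do.
const probe = "{{7*7}}";
console.log(escapeHtml(probe) === probe); // → true

// If this string lands inside an AngularJS 1.x template and the page renders
// "49", the framework evaluated attacker-controlled input as an expression;
// real payloads escalate from there, despite "correct" output escaping.
```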
This article was very much about the data we've collected and our analysis of it, as opposed to our opinions as to why - we had to keep it to a reasonable length!
So we kept that section short in the end. I do plan follow-up posts that provide my theories as to why it's happening, and I think a best-practices guide that discusses template-related XSS is a good idea. In the meantime, you can check out this related post: https://snyk.io/blog/type-manipulation/
IIRC the GitHub Open Source Survey noted that the people surveyed were more likely to trust OSS software in terms of security because of the transparency with vulnerabilities and the community surrounding it.
This article mentions increased use of OSS libs as a rising source of XSS. I'm really not sure what's worse - OSS that can be fixed and audited easily or proprietary software that's closed and lacking visibility.
OSS is no silver bullet - you still have to do your due diligence to have a secure system. OSS just gives you the option to "fix it yourself".
Just recently I was reading a library and stumbled upon this interesting crypto tidbit [0] ("XXX get some random bytes instead"). Maybe a paid engineer would've designed it better but history is full of counter-examples (see CVE-2017-5689 [1]).
[0]: https://github.com/nitram509/macaroons.js/blob/master/src/ma...
[1]: https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2017-5689
What a closed-source development team provides over OSS is some control over the quality and training of the developers allowed to commit to the codebase (e.g. the company can mandate that all developers have had training in how to avoid common XSS issues), control over the processes to be followed when committing code, and control over the security tests to be carried out.
Of course as a consumer of software that doesn't help too much 'cause you don't know which companies do a good job and which ones just say they do a good job...
Open source is better in that you can audit it easily. However, let's be honest: how many users of open source software are actually able to audit the libraries they use?
So neither option is particularly great at the moment (IMO).
The rise in the client doing heavy lifting via libraries such as React is driving an increase in vulnerabilities.
Developers getting into React don't always realize that all the code is executed on the client, and any input validation and authentication they come up with also has to exist on the server storing that data.
While those kinds of "junior developer confused by client vs server" vulnerabilities may be more common, the XSS vulnerabilities described in the article are likely being reduced by libraries like React. You really have to go through some contortions (including manipulating a property called dangerouslySetInnerHTML) to create the kind of insidious XSS vulnerabilities that were commonplace in server-rendered code a few years ago.
It used to be very easy for even experienced developers to accidentally forget to escape a variable somewhere. It took framework developers a while to realize that "escape" should be the default, and now we're at "escape by default and make the developer sign forms in triplicate to override". Which is healthy, I think.
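That "escape by default, opt out loudly" pattern can be sketched without any framework; `trusted` here is a made-up stand-in for an explicit opt-out marker like React's dangerouslySetInnerHTML:

```javascript
// Minimal sketch of "escape by default, opt out explicitly".
const escapeHtml = (s) =>
  String(s).replace(/[&<>"']/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]));

// Values are escaped unless explicitly wrapped in a "trusted" marker
// (the developer's equivalent of signing forms in triplicate).
const trusted = (html) => ({ __html: html });

function render(template, value) {
  const body =
    value && value.__html !== undefined
      ? value.__html       // developer explicitly opted out of escaping
      : escapeHtml(value); // the safe default
  return template.replace("{}", body);
}

console.log(render("<p>{}</p>", "<img src=x onerror=alert(1)>"));
// → "<p>&lt;img src=x onerror=alert(1)&gt;</p>"  (payload neutralized)
```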
React by default has pretty good XSS protection. That being said, "don't trust the client" has been something developers have struggled with ever since we started writing client/server software.
This has been the case since the advent of client side JavaScript. Validation can/should occur on the client side, but it MUST occur on the server side. These aren't new issues due to the use of new JavaScript libraries - they are problems that might be new to some developers.
It seems to me to be the exact opposite of this. If all of the data going from server to client arrives as JSON consumed by JavaScript, which usually means a JSON serializer that correctly escapes the data (since you're not generating the JSON by hand), then there is little chance for traditional XSS attacks: the only remaining vector would be manual DOM building by concatenating strings, which you generally don't do in React. Now, CSRF attacks I would believe, but not XSS with React.
Completely agree! The post actually alludes to that a bit towards the end.
> Single Page Apps increase the amount of client side logic and user input processing. This makes them more likely to be vulnerable to DOM-based XSS, which, as previously mentioned, is very difficult for website owners to detect.
The more significant work we do on the client, the more interesting it becomes as an attack vector.