First of all, thanks for the nice writeup. I hate that comments tend to home in on nitpicking, but so it goes. My apologies in advance.
> If you're just starting out with a new web application, it should probably be an SPA.
Your reasoning for this seems to be performance (reloading assets), but IMHO the only good reason for using a single-page app is when your application requires a high level of interactivity.
In nearly every case where an existing app I know and love transitions to a single-page app (seemingly just for the sake of transitioning to a single-page app), performance and usability have suffered. For example, I cannot comprehend why Reddit chose a single-page app for their new mobile site.
It's a lot harder to get a single-page app right than a traditional app that uses all the usability advantages baked into the standard web.
I fully agree with this. SPAs are for webapps, not websites.
For websites, you should always use progressive enhancement - there is no reason why you couldn't obtain the same performance gains by progressively enhancing your site with reload-less navigation. That's what AJAX and the HTML5 History API are for.
Especially don't forget that no, not all your clients support JS. And there's no reason why they should need to, for a website.
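To make the progressive-enhancement point concrete, here is a minimal sketch of reload-less navigation with `fetch` and the History API. The `X-Partial` header and `#content` container are assumptions about how the server and page are structured; if the script never runs, the links still work as plain page loads.

```javascript
// Pure helper: should we intercept this link? (same-origin, http(s) only)
function isLocalLink(href, origin) {
  try {
    const url = new URL(href, origin);
    return url.origin === origin && /^https?:$/.test(url.protocol);
  } catch {
    return false;
  }
}

// Browser-only wiring, skipped outside a DOM environment.
if (typeof document !== 'undefined') {
  document.addEventListener('click', async (e) => {
    const a = e.target.closest('a');
    if (!a || !isLocalLink(a.href, location.origin)) return; // let the browser handle it
    e.preventDefault();
    const res = await fetch(a.href, { headers: { 'X-Partial': '1' } });
    document.querySelector('#content').innerHTML = await res.text();
    history.pushState({}, '', a.href); // the URL stays shareable
  });
  // Back/forward buttons: re-render the page for the restored URL.
  window.addEventListener('popstate', () => location.reload());
}
```

Because every intercepted link is also a real URL, clients without JS fall back to ordinary full-page navigation.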
This was one of the points that stood out for me in the article too, also because I strongly disagreed with it. There is nothing inherently wonderful about doing everything client-side.
You get a much more limited range of languages and libraries to work with. You get to use overcomplicated build and deployment processes with ever-changing tools. You get to reinvent the wheel if you do want to use things like URI-based routing and browser history in a sensible way. In many cases you are going to need most of the same back-end infrastructure to supply the underlying data anyway.
Also, it's tough to argue the SPA approach is significantly more efficient if it's being compared with a traditional web app built using a server-side framework where switching contexts requests exactly one uncached HTML file, no uncached CSS or JS in many cases, and any specific resources for the new page that you would have had to download anyway.
Of course some web apps are sufficiently interactive that you do need to move more of the code client-side, and beyond a certain point you might find it's then easier to do everything there instead of splitting responsibilities. I'm not saying everything should be done server-side; I'm saying different choices work for different projects and it is unwise to assume that SPA will be a good choice for all new projects.
If you're just starting out, chances are you are not (or should not be) making any sort of application where the performance increase from operating as an SPA will even be noticeable compared to a standard server app.
Plus, I'd argue that you won't really understand what a SPA adds (or takes away) unless you are thoroughly familiar with the traditional model.
Finally, at the end of the day traditional apps are just a lot easier to put together, even compared to the latest SPA frameworks, especially if your server-side tech is something like Ruby or C#. A beginner will be better served by getting something nice up quickly, before attempting to do it the 'purist' way and possibly getting discouraged by the difficulty.
> I cannot comprehend why Reddit chose a single-page app for their new mobile site
They should have one, but not built the way they are building it.
It should server-side render the initial page (not just the home page, but any page), then change content through AJAX as you navigate between pages.
SPAs are hard, especially when it comes to usability. One of the biggest issues I see with SPAs is going back. Browsers handle back history pretty well for non-SPAs; replicating similar behavior in JS is not easy.
If you need a backend, starting with an SPA has one strong point: decoupling. You can leave your backend unmodified and start writing that native iOS, Android, or desktop client when you need it.
I often find many ajaxy effects on websites don't actually work. For example, when I click to expand a comment in Quora, it often fails, and I find it much quicker to just open it in a new page.
If you're new to web application development and security, don't blindly follow the advice of someone else who is also new to web application security.
You should instead have a security audit done by people who have experience in security, so they can help you identify where and why your system is vulnerable. If no one on your team or at your company does, then hire a consultant.
Security is a hairy issue, and no single blog post/article is going to distill the nuances down in an easy-to-digest manner.
If you are a business, then definitely yes. But the average self-taught developer will not have the resources available to hire a security consultant.
Instead of throwing money at the problem, you can instead choose to teach yourself more about the subject. We maintain a curated list on GitHub for people interested in learning about application security for this very reason.
> You should instead have a security audit with people who have experience in security, so they can help you identify where and why you're system is vulnerable. If no one exists on your team/company that does, then hire a consultant.
It is easy to write that, and on the face of it, it's hard to argue against.
The trouble is, those audits and consultants don't come cheap, and if you're new at web apps and working on your first one that no one has ever heard of yet, there is little essential advice you'd get that you wouldn't find by investing the same time in reading the usual beginners' guides to security online. It's all risk management, and if you make even that effort you'll already be a significantly harder target than many established sites.
As a corporate lawyer once told me when I was getting the very first contract drawn up for a new business, for a simple supplier relationship, he could certainly charge me five figures and write an extensive document protecting the business against every conceivable threat he could imagine involving that supplier, but until the business had actual revenues worth protecting and the deal with that particular supplier was worth a lot more than the legal fees, he wouldn't advise doing it.
Security is never perfect. It is a deterrent, not impenetrable prevention. So sure, to security people it is never good enough. To everyone else, an easy-to-digest blog post might give them food for thought that makes their work one step better than it was before, resulting in security that is still flawed, but better. So why not just accept the post for what it is: some basic advice for taking that one better step.
This is something we help with a lot at Tinfoil (https://www.tinfoilsecurity.com). You can read our blog for useful tips and info, but we always recommend actually running our web application scans against your app in order to actually look for vulnerabilities. Is it as good as having 'tptacek or someone else from Matasano looking at it as a human? Not quite, since humans have more ingenuity. Is it better than reading a blog post and trying to follow 'best practices'? Infinitely.
In general, you may be right, but the security suggestions in this particular post are the same ones I hear from people "who have experience in security." Also, they often encourage readers to basically go out and find the thing everyone says is the best thing (i.e. "When storing passwords, salt and hash them first, using an existing, widely used crypto library.")
I challenge you to point out specific suggestions in this article which are wrong or misleading, or to point out glaring omissions.
If you can afford it, buy a proven security solution. For example, use an IBM DataPower or ISAM appliance (or something similar from F5). Enterprises will choose something like this to secure their many internal web applications.
This is a bit of a pet peeve of mine, but that banner image is 10 MB; it can be compressed down to 2 MB without any perceptible loss of quality. Heck, it could probably be shrunk further if you can accept a bit more loss, because most of the image is blurry and noisy anyway.
> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.
The thing that everybody seems to overlook here: this has serious legal consequences.
You are demanding of your users that they agree to a set of TOS from a third party, that does not have either their or your best interests at heart, and that could have rather disturbing things in their TOS - such as permission to track you using widgets on third-party sites.
Not to mention the inability to remove an account with a third-party service without breaking their authentication to your site as well.
Always, always offer an independent login method as well - whether it be username/password, a provider-independent key authentication solution, or anything else.
> When storing passwords, salt and hash them first, using an existing, widely used crypto library.
"Widely used" in and of itself is a poor metric. Use scrypt or bcrypt. The latter has a 72-byte input limit, which is a problem for some passphrases, as anything past 72 bytes is silently truncated.
Question about JavaScript and CDNs for mobile devices: should I use a CDN for standard libraries, or should I just concat and minify all my JavaScript?
Concat and minify seems better, as it reduces the JavaScript libraries and application code load to a single HTTP request.
A CDN seems nice in theory. Reality is:
Does the browser have the library cached?
Is the library cached from the CDN that I'm using?
The browser is making more HTTP requests, which sometimes takes more time to request than to download the library.
I agree that using CDNs can be a good speed boost. I'm trying to figure out if hoping for a library cache hit outweighs the cost of a library cache miss.
Just to clarify: general CDNs tend to be a good idea if you are having latency issues.
Standard libraries for major JavaScript projects, all served from a single shared CDN (like Google, MaxCDN, cdnjs, etc.), also tend to be called "CDNs", but this is a little confusing. Yes, these shared files are often stored on a CDN, but that's not the supposed major benefit of these shared hosts. The main benefit is supposed to be that, if everyone references the same copy of jQuery on one of these shared hosts, then when visitors hit other sites, jQuery will already be in their browser cache.
This rarely works in practice. There are many URLs for these shared libraries: multiple shared services, multiple version numbers, HTTPS vs. HTTP. The net result is that the probability that someone visiting your site already has a copy of the exact same resource, referenced via the exact same URL, is very low.
With the overhead of having to do a DNS lookup, a TCP connection, and TCP slow start, it's rarely worth it. Just concat/minify into your own block of JS served from your own systems. Shared JS hosts/CDNs are a terrible and annoying hack, all in an attempt to save 50 KB or so.
A CDN is the way to go unless you have some very specific circumstances, like increased security requirements or a lack of CDN edge locations near the majority of your users.
The jQuery CDN has something like 99.8% cache hits. And even if the browser doesn't have the library cached, it will have it cached on all subsequent requests; additional roundtrips are needed on the first page load only. Take into consideration that as soon as you make even a small change to your JS files, the whole minified and bundled JavaScript will need to be redownloaded.
Also, pulling anything from a CDN basically means that the CDN operator (or anyone who manages to hack it) can spy on or alter communication between your users and your server.
I suspect, given the reference to sending verification emails, that hashing was what was intended here, just as "identity management" was used where authorization was meant. To be clear, encryption implies you can retrieve the stored value later, while hashing is intended to be one-way.
"When storing passwords, encrypt them first, using an existing, widely used crypto library. If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow."
Can you elaborate on what's so "nope" about that advice? Are you saying one shouldn't encrypt passwords?
I hope that at least some services will eventually consider that, for sites that aren't storing valuable data, passwordless login (i.e. an emailed one-time token) and long-lived session tokens are better than even touching passwords.
Personally, I still prefer Persona's privacy-oriented approach to id management, but since Mozilla stopped pushing it, development has slowed quite a bit and widespread adoption will probably never happen.
One thing to note is the login redirect. Be sure that the redirect parameter is a local URI, and don't redirect the user to another site.
Maybe even append an HMAC signature to that parameter, with the user's IP and a timestamp. Might be overkill, but still, be careful with craftable redirects; they might become a vulnerability one day.
As a web application developer in 2015+ I would argue that developing with mobile in mind should be required.
At least taken into consideration.
At bare minimum, have a pre-deployment test: is my app unusable / does it look terrible on the most popular iPhone/Android devices?
For mobile apps that use a WebView and/or have the capability to execute JavaScript or any other language provided by a network-available resource, I'd like to add:
ALWAYS USE CRYPTOGRAPHY for communication! Simply doing HTTP-to-HTTPS redirects is not sufficient; the original request must be made via HTTPS. Also make sure the app is properly validating the HTTPS connection.
Sorry I had to shout, but I'm growing tired of downloading the latest cool app that is marketed as secure only to find that it doesn't use HTTPS and as a result I can hijack the application UI to ask users for things like their password, credit-card number, etc., all without them having any way to tell if they are being asked by some bad guy.
I think this is more of a question of what kind of project or team you are working on, not one of experience in web development. Because it seems that you're suggesting beginners use Sails or Meteor (if focusing on JS), which are great and allow for rapid prototyping, but they and other 'high-level frameworks' that implement these methods for you tend to be very opinionated with important details of developing for the web abstracted away.
If you're a student or are serious about learning web development (and want to focus on developing in JS), it would make a lot of sense to dedicate your time to actually learning Node and Express, figuring out all of these hairy details and 'manually' implementing the items in Venantius' list.
Or don't figure out the hairy details, because many of his items have proven and documented solutions in the Node context, and learning how to properly use bcrypt and passport isn't too difficult. These libs are a good middle-ground between low-level details and something more out of the box.
>> When users sign up, you should e-mail them with a link that they need to follow to confirm their email
I'm curious, why is this good? Sure, sending an email to them so they confirm they have the correct email, but what is the benefit of the verification step? Is it to prevent them from proceeding in case they got the wrong email? It would be nice if this was justified in the article.
I would also add that changing a password should send an email to the account holder to notify them, and when changing the email address, the old email address should be notified. This is so a hijacked account can be detected by the account owner.
> The key advantage to an SPA is fewer full page loads - you only load resources as you need them, and you don't re-load the same resources over and over.
I don't know much about web development, but shouldn't those resources get cached? Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?
> Forms: When submitting a form, the user should receive some feedback on the submission. If submitting doesn't send the user to a different page, there should be a popup or alert of some sort that lets them know if the submission succeeded or failed.
I signed up for an Oracle MOOC the other day and got an obscure "ORA-XXXXX" error and had no idea if I should do anything or if my form submission worked. My suggestion would be to chaos monkey your forms because it seems that whatever can go wrong can. Make it so that even if there is an error the user is informed of what is going on and if there's something they can do about it.
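In that spirit, here is a sketch that maps every outcome, including network failure, to a message the user can act on. The `#signup`/`#status` ids and `/signup` endpoint are made up for illustration.

```javascript
// Turn any submission outcome into a human-readable message.
function submissionMessage(outcome) {
  if (outcome.networkError) return 'Could not reach the server. Check your connection and try again.';
  if (outcome.status >= 500) return 'Something went wrong on our end. Please try again in a moment.';
  if (outcome.status >= 400) return outcome.detail || 'Please check the form and try again.';
  return 'Saved!';
}

// Browser-only wiring (skipped outside a DOM environment).
if (typeof document !== 'undefined') {
  document.querySelector('#signup').addEventListener('submit', async (e) => {
    e.preventDefault();
    let outcome;
    try {
      const res = await fetch('/signup', { method: 'POST', body: new FormData(e.target) });
      outcome = { status: res.status, detail: res.ok ? null : await res.text() };
    } catch {
      outcome = { networkError: true }; // fetch rejects on network failure
    }
    document.querySelector('#status').textContent = submissionMessage(outcome);
  });
}
```

Because the error branches are exhaustive, even an internal "ORA-XXXXX"-style failure surfaces as a message telling the user whether anything was saved and what to do next.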
> Avoid lazy transition calculations, and if you must use them, be sure to use specific properties (e.g., "transition: opacity 250ms ease-in" as opposed to "transition: all 250ms ease-in")
thekingshorses | 10 years ago
Reddit's SPA should be like this, with server-side rendering: http://reddit.premii.com/
paragon_init | 10 years ago
https://github.com/paragonie/awesome-appsec
But if you're a company and your operating budget is in the millions of dollars, hire a security consultant!
borski | 10 years ago
Don't try to do it yourself.
HOLYCOWBATMAN | 10 years ago
brb compiling linux to JS to render my blog post.
quadrature | 10 years ago
Here's a compressed version: https://www.dropbox.com/s/bw606t7znouxpj1/photo-141847963101...
lewisl9029 | 10 years ago
That is ironic on so many levels.
I mean there is even a section on "UX: Bandwidth"...
Maybe the author should brush up on image compression best practices and consider adding a subsection on images and other media.
EDIT: Realized my previous wording was probably a bit too harsh considering the author is still relatively new to web development.
exodust | 10 years ago
4k is all the rage now on mobile.
You've just taken a chunk out of someone's download quota with that nice background!
shiggerino | 10 years ago
Nopenopenopenopenope!
This is terrible advice. Don't do this. Remember what happened when Adobe did this?
balls187 | 10 years ago
OAuth isn't identity management, it's for authorization.
Each of those platforms does provide its own identity management, but that isn't OAuth.
lewisl9029 | 10 years ago
http://openid.net/connect/
https://www.mozilla.org/en-US/persona/
romaniv | 10 years ago
> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.
Questionable advice. At the very least, neither of those is some kind of automatic "best practice" that everyone should just follow.
> it can be helpful to rename all those user.email vars to u.e to reduce your file size
Or maybe you should use less JavaScript, so the length of your variable names doesn't matter.
jameshart | 10 years ago
... well, no. Technically you don't have to. But you almost certainly should.
davnicwil | 10 years ago
If anything, the advice should be inverted by replacing 'mobile' with 'desktop'.
Domenic_S | 10 years ago
1. Use a widely-accepted framework.
2. Implement your application using that framework's methods.
Why a beginner would implement even 1/3 of this list manually is beyond me.
Kiro | 10 years ago
Why is it better to be specific?