Please note that this describes something you would do for a toy project. If you do this for work you probably want to:
* Install NPM in a way that you can upgrade easily. Your package manager was designed to do this, so use it. Never ever put untracked files in the system directories.
* Don't git clone into production. You need to know which version was deployed when and where. At the very least, set a tag. Better yet, roll a package from that tag (see above) which you can then sign and store. It's very easy and there are tools to help.
* Schedule when and how you upgrade your operating system, NPM, and your application. To do this you need a way to take an application out of production, which brings us to...
* You generally want a load balancer or some sort of web server between the world and your application. This could be as simple as an Apache or nginx. Don't muck about with port forwards!
* And most importantly, document this down to every command in the internal wiki! Even better, write a Puppet/Salt/Ansible file and put it under version control.
In short: Tools exist for a reason. Use them. Don't hack files manually until you master the tools and know why they exist.
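As a sketch, the tag-and-package flow from the second bullet might look like this (names and versions are illustrative, and fpm is just one of several packaging tools):

```shell
# Tag the exact commit you intend to deploy, and push the tag
git tag -a v1.2.0 -m "Release 1.2.0"
git push origin v1.2.0

# Roll a versioned artifact from that tag instead of git-cloning in
# production; the tarball can then be signed, stored, and turned into
# a system package (e.g. with fpm)
git archive --prefix=myapp/ -o myapp-1.2.0.tar.gz v1.2.0
fpm -s tar -t deb -n myapp -v 1.2.0 myapp-1.2.0.tar.gz
```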
> * Install NPM in a way that you can upgrade easily. Your package manager was designed to do this, so use it. Never ever put untracked files in the system directories.
The article uses the package manager - specifically, it directs people to install Node using NodeSource's RPM and deb packages (npm is included with Node), and only mentions tarballs if these don't exist for your OS.
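For reference, the NodeSource route on a Debian/Ubuntu box looks roughly like this (the setup script's version suffix is illustrative - pick the Node line you need):

```shell
# Add the NodeSource apt repository and install Node (npm is bundled);
# the package manager now tracks the files, so upgrades are clean
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

# Later upgrades are ordinary package operations
sudo apt-get update && sudo apt-get install --only-upgrade nodejs
```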
> * Schedule when and how you upgrade your operating system, NPM, and your application. To do this you need a way to take an application out of production, which brings us to...
> You generally want a load balancer or some sort of web server between the world and your application.
Hence covering that and load balancing with HAProxy in the article. This is mentioned in the introduction.
> * And most importantly, document this down to every command in the internal wiki! Even better, write a Puppet/Salt/Ansible file and put it under version control.
Hence mentioning exactly that in the opening few paragraphs.
It's very clear you didn't read the article before commenting.
> Tools exist for a reason. Use them. Don't hack files manually until you master the tools and know why they exist.
I feel it's often the other way around: don't hack around with big powerful tools until you know what the underlying problem is and what steps are actually needed to solve it.
For hassle-free Node app deployment, I use and recommend PM2, a [free and open source tool](https://github.com/Unitech/pm2).
As its docs state: "PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks."
It also integrates very nicely with keymetrics.io, the paid service that finances PM2's development (they do provide a free tier).
PS: Apart from using it and loving it, I am not affiliated with this product's team.
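A minimal PM2 session, assuming an entry point of `app.js` (the app name is a placeholder):

```shell
npm install -g pm2

# Start one worker per CPU core in cluster mode, behind PM2's
# built-in load balancer
pm2 start app.js -i max --name myapp

# Zero-downtime reload after deploying new code
pm2 reload myapp

# Generate an init script and save the process list so the app
# comes back after a reboot
pm2 startup
pm2 save
```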
Surprised that this is missing a recommendation to run `npm shrinkwrap` and include `npm-shrinkwrap.json` in your deployment package, so that the dependencies you have been testing against (and, more importantly, your dependencies' dependencies) don't shift unintentionally under your feet. Although maybe this isn't a problem with npm v3?
I'd also recommend including your dependencies in your deployment package anyway to avoid a deployment being held up by npmjs.org downtime.
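The shrinkwrap flow is short - after testing against a known-good tree, something like:

```shell
# Install and test against the exact dependency tree you trust
npm install && npm test

# Pin every dependency (and transitive dependency) at its current version
npm shrinkwrap

# Ship the lockfile so production installs replay the same tree
git add npm-shrinkwrap.json
git commit -m "Pin dependency tree for deployment"
```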
Author here! You're totally right, I've just added a shrinkwrap section.
PS. npm v3 still needs it - in fact v3 makes it better: `npm install --save` updates the shrinkwrap file.
Re: checking in modules, I used to do this during 2013 when npm was up and down every few days, but stopped a couple of years ago after npm Inc stabilised everything. It's been solid so far and the smaller repo sizes (and faster deploys) have been worth it.
As someone who has never deployed a Node app, I was a bit surprised to see it doesn't require a server like Apache or nginx as a reverse proxy. Is it common practice to deploy Node apps like this?
For most cases you want load balancing, hence HAProxy. But if you don't, Node has event-driven IO like nginx does, so it's quite capable of things like static file serving and HTTPS (like nginx, Node uses OpenSSL for all the hard work).
Can I shamelessly plug [node-deb](https://github.com/ehartsuyker/node-deb) and say: maybe do all this and stick it in a Debian package, if you're running Debian or, like most of the cloud world, Ubuntu?
As far as I know, using nginx in front helps with serving static files, which is a moot point on a REST API.
Well, that depends on the API; for example, a product inventory API might need to serve many product images, a document management API might have to serve PDFs and such, etc.
After a few years of deploying apps in a reasonably similar way to this, I recently switched to dokku for deploying Node applications, and the experience so far has been extremely nice.
I get `git push dokku master` deployment for free for any application, I don't have to worry about conflicting Node versions between applications, and it took a lot less setup for all of my applications than this process describes for one.
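For anyone curious, the whole dokku flow is roughly this (host and app names are placeholders):

```shell
# One-time: point a git remote at the dokku host
git remote add dokku dokku@my-server.example.com:myapp

# Every deploy after that is just a push; dokku builds and restarts the app
git push dokku master
```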
Sure - as discussed in the article, the steps of deploying a server should only be performed once, and customised to your needs.
From then on you can - and should - deploy everything with Ansible playbooks, AWS AMIs / Digital Ocean images, Dockerfiles, or whatever else. You've already been doing this for years and have the experience - this is written for developers who haven't done a lot of Linux before and want to take control of the process.
Dokku is great if you want to host all apps on a single server. I used it for a while.
Switched to pushing to GitHub, which fires off a build on a CI server, which pushes an image to a Docker registry; my server pulls that and rolls the versions.
This setup allows me to have everything on a single server until an app starts needing its own; then I deploy a new CoreOS server and transfer the file system to that one.
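The pull-and-roll step that CI triggers on the server might look like this (registry, image, and port are placeholders):

```shell
# Pull the image the CI build just pushed, then swap the running container
docker pull registry.example.com/myapp:latest
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 3000:3000 registry.example.com/myapp:latest
```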
If the Node app uses Express and has multiple processes, then a Redis server may be required to handle non-sticky sessions. It would be nice if the article described the Redis server setup.
If you are deploying to a single server, consider using [`pm2 deploy`](http://pm2.keymetrics.io/docs/usage/deployment/), which makes rolling new releases and deploying them to the server much easier.
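With `pm2 deploy`, a JSON file (`ecosystem.json`) describes the target hosts and pm2 drives releases over SSH - roughly (the environment name is a placeholder):

```shell
# One-time: create the remote directory structure described in ecosystem.json
pm2 deploy ecosystem.json production setup

# Each release: fetch the configured git ref, install, and reload on the server
pm2 deploy ecosystem.json production update
```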
Mainly because I didn't want it to be a Docker/Kubernetes tutorial. At some point I'll publish an Ansible playbook you can modify for your environment and call from a Dockerfile.
Exactly my thoughts... this is for toy projects.
It's 2016 and we have proper tools like PM2 [0], StrongPM [1], or even better, Serverless [2] (AWS-only, but very cool)...
[0] https://github.com/Unitech/pm2
[1] http://strong-pm.io/
[2] https://github.com/serverless/serverless