
The sad state of web app deployment

267 points | cespare | 10 years ago | eev.ee

160 comments

[+] wpietri|10 years ago|reply
As another old-school type, I really enjoy this rant. And my understanding is that the piece of software in question is a nest of snakes, so I can well believe that there is no good way to install it, only bad ways and worse ways.

But having tried out Docker for some production deployments, I think that it (or some less goofy successor) is the way forward. You get a sealed box with all the necessary dependencies, and the box has just enough holes poked in it that you can connect up the bits you care about. It turns apps into appliances, not things you need expert craftspeople to lovingly install.

As much as I have enjoyed 25 years of doing everything on an OS whose conceptual model is a 70s university department minicomputer, this era is coming to a close. We already know it's a poor match for existing hardware and use cases because we now mostly run it in virtual servers. But "virtual server" is the new "horseless carriage". It's a phrase that tells us we are living in a future we don't yet fully understand.
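
That sealed-box model is concrete enough to sketch. A hypothetical Dockerfile - base image, port, and paths are my assumptions, not anything from this thread:

```dockerfile
# Everything the app needs is baked into the image at build time.
# Base image, paths, and port here are hypothetical.
FROM ruby:2.2
COPY . /app
WORKDIR /app
RUN bundle install
# The only holes poked in the box are declared explicitly:
# one network port and one data directory.
EXPOSE 3000
VOLUME /data
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

The host decides how to wire the holes up at run time, e.g. `docker run -p 80:3000 -v /srv/myapp:/data myapp` - everything else stays inside the appliance.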

[+] csirac2|10 years ago|reply
I love Docker; it's solved a lot of problems for me. But this article highlights the laziness that it can enable: there has to be a middle ground between traditional package management and all of the curation/QA that goes into getting stable releases out to multiple distributions, versus chucking shit over the wall into a git repo with a Dockerfile and calling it done.

It's a better, more isolated mess - but for anyone trying to enforce configuration policy across all of the running services in their environment, untangling grotesquely basic shit - like granting superuser privs on a database to your webapps, which would never fly in a traditional distro package - becomes even more work than the old source tarballs with INSTALL.txt.
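
For contrast, the bar a distro package is held to here is low: the install step creates a dedicated, unprivileged role instead of handing the webapp superuser. An install-time sketch, with hypothetical role and database names:

```sql
-- Run once as the postgres superuser at install time (names hypothetical).
CREATE ROLE webapp LOGIN PASSWORD 'changeme';
CREATE DATABASE appdb OWNER webapp;
-- The app can connect to its own database and nothing else.
REVOKE ALL ON DATABASE appdb FROM PUBLIC;
GRANT CONNECT ON DATABASE appdb TO webapp;
```

A Dockerfile that instead runs everything as superuser still "works", which is exactly the laziness being enabled.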

[+] ihsw|10 years ago|reply
I cannot agree more, and this is coming from a young whipper-snapper who tries to stay on the leading edge of technology.

There's a vast tectonic shift occurring and it can't come soon enough -- "tinkering" with an increasingly complex environment of microservices, message queues, RPC servers, databases, caches, and so forth is headache-inducing and everybody knows it, as it detracts from actual application development.

I like the nest of snakes analogy, it's quite accurate. The twelve-factor application architecture[1] is here and many are moving towards it in one way or another, be it simply starting out writing unit tests or refactoring large applications into something easier to grok.

http://12factor.net/

[+] nicklaf|10 years ago|reply
> As much as I have enjoyed 25 years of doing everything on an OS whose conceptual model is a 70s university department minicomputer, this era is coming to a close. We already know it's a poor match for existing hardware and use cases because we now mostly run it in virtual servers. But "virtual server" is the new "horseless carriage". It's a phrase that tells us we are living in a future we don't yet fully understand.

https://mirage.io/

You'll need to learn OCaml (though in my mind, that's actually a major plus).

Caveat emptor: AFAIK, MirageOS is still undergoing heavy development. That said, I believe it is essentially functional at this point (pun not intended).

[+] ajsharp|10 years ago|reply
The problem with docker is that it's Yet Another Tool you have to learn how to install, configure and use. Once you grok what it's doing, how to install it, how to run it on OS X, etc, it's powerful, but being forced to learn a new tool to install another, is a bit of a yak shave, and quite frustrating.
[+] archimedespi|10 years ago|reply
I heartily agree with a successor to Docker being the way forward - Docker is large and embroiled in lots of "ooh cool look at this shiny thing".
[+] drcongo|10 years ago|reply
Your last two sentences there are superb.
[+] wpietri|10 years ago|reply
Huh. Why the downvotes?
[+] quaunaut|10 years ago|reply
Wow. This is ridiculous.

"I've got a weirdly set-up OS for a reason, but this software should work even though no one else would ever run this."

"Whatever - this database should work, even though I'm using a 4-year-old version of the free software (that the docs specifically say needs to be a newer version)."

"Also, I'm extremely inexperienced installing Rails and Rails apps, and despite the fact that it's a language and server that we literally teach the newest of the new, it's just impossible to do anything with."

This isn't a story about the state of web app deployment, this is about the state of the server from hell. This sounds like the kind of thing you hear out of megacorps with 20 year old Mainframes.

Should open source be supporting such convoluted, needlessly minor cases of awful environments?

[+] eevee|10 years ago|reply
Yes, my OS is slightly "weird" for reasons I did not choose. Funny story: while I was trying to figure out what the hell RVM was doing, I mostly ran across people running OS X who had the same setup: 32-bit OS on a 64-bit chip. (And for what it's worth, Python native extensions build for the architecture of the Python executable — you know, the thing that has to load them — rather than the architecture of the machine.)

I never complained that it didn't work with an old database. Ubuntu's versioning was just a fun surprise. I don't see why it wouldn't work with Postgres 9.1; I just figured I'd do the upgrade while I already had my foot in the door.

Maybe if installing Rails apps is still a huge headache after the fourth or fifth time I've done it, something is wrong with Rails. Maybe.

I have a fairly mundane server running the latest Ubuntu LTS with all stock vendor packages. If your app is such brittle crap that this is the "server from hell", well, it's no wonder everyone is using Docker.

[+] vaxgeek|10 years ago|reply
I agree... Docker's 64-bit preference is called out ALL OVER the place... it's true that some people are running on ARM or in 32-bit environments but that is definitely not a normal setup for 99% of Docker users.

RVM itself is kind of a hack. Of course, if you have a working Docker environment, no reason to worry about RVM, as each container can have a full Ruby stack with whatever gems you'd like.

In an alternate universe where the author had a 64-bit virtual machine (takes about 55 seconds to set one up on Digital Ocean, with Docker pre-installed) I can subtract at least 8 hours from this story, as a prebuilt Ruby image and a prebuilt PostgreSQL image are both available.

Docker, like most kool-aid, is best if you buy in completely.

* Docker on 32-bit: weird, non-standard deployment
* RVM by itself: not the best to begin with
* RVM on 32-bit Linux: not tested

Although I agree with the lack of dependency management on modern stacks and a few other points from this essay -- it seems like the core idea which led to this whole situation:

"I have a non-standard environment and it was tough to roll out things to it"

[+] x0x0|10 years ago|reply
I tuned out at

    I actually have a 64-bit kernel, but a 32-bit userspace
Eevee right there decided to swim upstream. If you play computers on hard mode, it's gonna be hard.
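
For what it's worth, the kernel/userspace split is easy to check - and easy to miss, because different tools give different answers. On a Debian-ish system:

```shell
# Architecture the kernel reports:
uname -m                    # e.g. x86_64
# Word size of the userspace - what binaries actually run:
getconf LONG_BIT            # e.g. 32
# What flavor of packages apt/dpkg will install, if dpkg is present:
{ command -v dpkg >/dev/null && dpkg --print-architecture; } || true   # e.g. i386
```

If the first says x86_64 while the others say 32/i386, you have exactly the author's setup, and amd64-only packages like docker-engine won't resolve.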
[+] friendzis|10 years ago|reply
I will not stop repeating this: the fact that we can have multiple "brands" of the same software, possibly forked from each other, is both the strength and weakness of open source and we just cannot cover our eyes and pretend only mod_php exists. Open source is a community first. Quoting the same eevee:

> If two features exist, someday, someone will find a reason to use them together.

I strongly believe that if you find a valid reason to use one specific brand of software, this has to be both clearly documented and explained. From my own experience, Ruby apps tend to be written with rather hard dependencies and are actually hell to deploy if any of the components you have preinstalled (be that process supervisor, message/job queue, web server) differ from recommended ones. Last time I needed to deploy a Rails app, I've gone down the route "bring up a new vm, secure it and deploy on that" and that was a breeze in comparison with attempts to integrate with already installed software.

[+] sklogic|10 years ago|reply
In the old days things were supposed to just work. `./configure && make && make install` should always have worked around any local peculiarities.
[+] dikaiosune|10 years ago|reply
Looking back, open source teams could support those varied environments partly because a) the tools were often small and composable, and b) huge numbers of people worked to get them nicely packaged in distro package managers. I'm sure this oversimplifies a complex issue, but with the huge proliferation of open source projects that are quite complex, and the fact that packaging for a bunch of distros is a huge chore, I can totally understand why projects have only one or two supported installation methods, like Docker.

That said, it represents a significant departure from being able to

  apt-get install foobar
for almost anything you'd want. I can understand being nostalgic for that, but the ecosystem was generally smaller then (at least that's how it seems to me).
[+] hackerboos|10 years ago|reply
"Also, I'm extremely inexperienced installing Rails and Rails apps, and despite the fact that it's a language and server that we literally teach the newest of the new, it's just impossible to do anything with."

That's why nearly every newbie tutorial recommends Heroku.

[+] pliu|10 years ago|reply
The tone of this article is very upsetting. It begins with "I like to think I’m pretty alright at computers", but frankly I don't really agree that this is the case. Good at programming, maybe, but clearly not very good at ops.

Rather than accept that there are skills and experience they don't have, the author instead chooses to blame the entire world for their ignorance. The frustration on display is understandable and real and I totally get it, but the attitude is less forgivable.

Faced with a problem, instead of stopping for a hot second to read some documentation, the author concludes that Docker is garbage, does some insane shit, and then claims that the software industry has failed them. What incredible arrogance.

This article says way more about the character of this person than it does about the state of web app deployment.

[+] eevee|10 years ago|reply
Sorry, I forgot to painstakingly document the innumerable hours I spent reading documentation (or trying to find critical documentation that didn't exist). I didn't think that would be a compelling read.
[+] dmix|10 years ago|reply
Indeed, not to make this a distro vs distro comment, but with a modern Arch install a few AUR scripts would eliminate (and automate) 50% of the problems here. The other 50% could easily be solved by investing time in becoming familiar with the technology he is working with, or not choosing newish unstable tech when his apparent preference is more suited towards established stable software that has been built (or evolved via usage) to run cleanly on a broad spectrum of systems.
[+] cwyers|10 years ago|reply
> See, I actually have a 64-bit kernel, but a 32-bit userspace. (There’s a perfectly good reason for this.)

Well, maybe there is, but a lot of this article seems like "the sad state of nobody has this web app set up to be installed on my frankenlinux."

[+] mkozlows|10 years ago|reply
This is a weak article. The author had never worked with Docker before and got tripped up by a few gotchas (though seriously, installing a third-party PPA isn't that hard or weird), but it doesn't take much googling or trial and error to figure out how to do it right.

And the point of that is, once you know Docker, now installing just about any set of dependencies becomes a skill you know. I've never worked with Discourse. I've never worked with Rails. But because I've done Docker-based deployments, I bet I could indeed get it up and working in 30 minutes.

[+] pki|10 years ago|reply
I've used docker before and my install was to a clean 64-bit Ubuntu 14.04 - discourse installed and up and running in about 3.5 minutes. I have never worked with Rails or Ruby, or had to touch rvm for this case.

Incidentally, my hosting machine is basically clean - I don't use it as a personal or dev box, it does not have a ton of random userspace stuff in 32bit, it just hosts things.

[+] markbnj|10 years ago|reply
Ok, I get it. I've felt like writing this article many times. I didn't, but I give the author props for letting off some steam in a constructive way. However... this statement drove me a little bonkers...

>> The 30-minute claim came because the software only officially supports being installed via Docker, the shiny new container gizmo that everyone loves because it’s shiny and new, which already set off some red flags. I managed to install an entire interoperating desktop environment without needing Docker for any of it, but a web forum is so complex that it needs its own quasivirtualized OS? Hmm.

I understand the reflexive dismissal of things that become popular topics. I'm guilty of it myself repeatedly, most recently with "microservices." But sometimes technologies become popular because they are good and useful.

I haven't run into Docker's 32-bit install problem. They should fix that. But to dismiss the whole technology as some sort of obviously useless quasi-virtualized OS mumbo jumbo is taking the rant too far. Dependency isolation is a good thing. Deploying server applications with one command is a good thing. Knowing that your runtime environment is always the same is a good thing. Having a simple source controlled script that completely describes that runtime environment is a good thing. Some people try to use containers for unreasonably complicated things, or to hide unreasonably complicated software, but that is not an indictment of the technology. It's popular for a reason.

[+] csirac2|10 years ago|reply
As a happy Docker user with a lot of respect for what Docker has achieved, what concerns me is that Docker is clearly an enabler for people to totally abandon proper release and dependency management along with sane sysadmin friendly configuration.

I can't tell you the number of times I've hoped to use a public docker image or just the Dockerfile, only to spend hours futzing with it because I was unhappy with the grotesquely insecure configuration or because I needed to work around a bunch of assumptions that are invalid once I've tuned the Dockerfile for my environment.

[+] baddox|10 years ago|reply
Also, the author explicitly makes the assumption that Docker is only appropriate once an application surpasses a certain level of complexity. I don't think that's a very good assumption. I think it's perfectly reasonable for any open source web app with any dependencies to support or even encourage development and installation via Docker. I don't see why the choice to use Docker has any relevance to the complexity of the application.
[+] dikaiosune|10 years ago|reply
Agreed in general, but I'm not so sure Docker should be expected to support installing a complex container manager into a 32-bit userspace on a 64-bit OS. They've done a lot of good work (doing the things that you've called out above), and I personally think it's fine to say that the tool is mostly for the kind of commodity just-imaged OS that typically runs in the cloud. Putting Docker on a snowflake server doesn't seem like a great idea even if it works.
[+] sytse|10 years ago|reply
GitLab is the other large open source Rails project. We chose to package below the container level with Omnibus packages. These run without all the Docker dependencies and install in 2 minutes. We did this because many of our users could not run a recent/custom kernel with Docker. We're very happy with the choice and are able to run the Omnibus packages on top of our images: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/docker

That being said, it is still a bit silly how hard a Rails app is to install.

[+] dheera|10 years ago|reply
There actually is a point. I'm not a rails developer but as a Python developer I can definitely say deploying applications is far more difficult than it should be. To deploy a simple Flask app as FastCGI I had to download some nasty .fcgi file from MIT's 6.170 course website to detect code changes and reload the app. To deploy with nginx+uwsgi I spent ages configuring it and with poor documentation.

Seriously, things like uwsgi should have an "auto-configure" feature where when an error is encountered with a dependency, it searches the hell out of your /usr filesystem and caches the resulting configuration. If a module is missing, automatically try to apt-get and pip install it. Nginx should portscan the hell out of localhost and detect uwsgi servers. Install uwsgi if a wsgi app is detected but uwsgi isn't installed. Write the .ini file automatically. This process is so automatable that there really should be a "magic deploy" feature where I can just drop a Flask app as /var/www/html/some_app/index.py and it should be instantly up and running at http://localhost/some_app/ with zero questions asked.
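
The nginx+uwsgi dance described above boils down to two small config files that are maddening to discover but simple once written. A sketch, with hypothetical paths and assuming a Flask app object named `app` in `index.py`:

```ini
; /etc/uwsgi/some_app.ini (hypothetical path)
[uwsgi]
chdir = /var/www/html/some_app
module = index:app
socket = /run/uwsgi/some_app.sock
master = true
processes = 2
; pick up code changes without a manual restart
touch-reload = /var/www/html/some_app/index.py
```

```nginx
# inside an existing server { } block in the nginx config
location /some_app/ {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/some_app.sock;
}
```

That's still two hand-written files more than "zero questions asked", which is rather the point.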

[+] 3princip|10 years ago|reply
Nice article; I can empathize, having recently inherited a PHP project which included a puphpet (vagrant/puppet) VM. It hadn't been touched for some time, so the configuration was a few months old.

After installing the required dependencies (downgrade VirtualBox etc.) all that was left was: vagrant up. My lord, what had I gotten myself into. Cue problems with the configuration, the ruby version during provisioning, paths; puphpet had gone through multiple breaking changes in the meantime, and the documentation was unhelpful. The only inspiration was GitHub issues on tangentially related projects ... all I know is it was 3am, having started 6 hours earlier.

Then, in a moment of madness, I deleted everything and created a completely new/fresh puphpet configuration using the puphpet site. Again during provisioning I was met with a problem: there were non-ascii characters in the php-fpm upstart configuration file (the author's name!). Luckily, this was an issue already discovered a couple of hours earlier, so a small change to that and the app was up and running. It was 4am.

Needless to say, I was very frustrated with this state of affairs of these supposed aids to setting up the dev environment. Granted, my mistake was going in the wrong direction - trying to fix an outdated configuration and all the problems it generated rather than just generating the config again - but I had never used this tool before, and hadn't looked at the PHP ecosystem for the past few years. It seemed crazy at first that a config which had worked in the past, presumably, would not work on my very mundane Ubuntu dev environment.

[+] mschuster91|10 years ago|reply
Oh gosh, puphpet and vagrant. Had a similar architecture with a Silverstripe framework recently inherited, it was well documented and all, but due to new versions everything was broken.

I ended up doing deployment with git pull.

[+] dasil003|10 years ago|reply
I've been doing Rails now for a decade, and I love ruby, but Rails is not suitable for installable open source software. It is one of the worst stacks for that use case. The whole mentality of the Rails community runs counter to the goals of providing easily deployable packages. There is certainly a lot of low-hanging fruit to work on these issues, but I don't see it becoming a priority anytime soon, because if you value these things you're probably already using some language other than ruby.

The sweet spot for Rails is custom apps that are continuously maintained over a long period, or discardable prototypes. It is not a good choice for deploy-and-forget, or for organizations without any in-house programmers.

[+] bhuga|10 years ago|reply
I was going to come comment that there must be a good way to solve this, since the Ruby community has such a rich ecosystem of tools.

Heroku, for example, has a nice one-click deploy button for Rails (and many more languages/frameworks). It works straight from the source code, such as with this open-source rails app, and it's really quite impressive:

https://github.com/heroku/starboard#deploy-the-app

The author also calls out error reporting as being terrible. And there are also great tools for managing that, such as newrelic and airbrake.

So surely this author was just unused to the tool ecosystem, I thought. What a perfect opportunity for a constructive yet snarky comment! But lo and behold, discourse has deprecated all non-docker installs:

https://github.com/discourse/discourse/blob/master/docs/inst...

I was, and am, completely baffled at that decision. And I learned a valuable lesson about trying to out-snark snarky blog posts.

[+] dikaiosune|10 years ago|reply
Supporting multiple deployment options introduces a lot of overhead, and Docker (in particular) seems really good at reducing deployment overhead for both developers and admins. I'm not sure why it's a bad thing when it means that the team can focus resources on a single canonical deployment method that just works. Docker containers can also be run on a variety of public cloud services, so while Heroku may not be available, pointing your Dockerfile at EC2 or GCE shouldn't be too hard?
[+] riking|10 years ago|reply
> The author also calls out error reporting as being terrible.

Actually, he never got far enough to see the error reporting tool, Logster.

[+] jpgvm|10 years ago|reply
If you use 32bit Linux in 2015 you deserve the pain you just endured.

The problem is not the state of deployment (which is still admittedly quite bad), it's the state of your system.

Install a fully 64bit version of 14.04 and watch all your problems just disappear.

[+] plaguuuuuu|10 years ago|reply
Two things come to mind

1. Tinfoil hat time! "Pisshorse" make money off of hosting their own software. So they are financially incentivized to make it as difficult as possible to install yourself, but at the same time they get to go "woooo, open source" as much as they like.

I'm convinced Oracle did the same sort of thing by making their DB product impossible for normal people to understand, in order to charge outrageously expensive consulting fees to companies. Or so I hear; I've never used it so I could be wrong.

2. Your users aren't always who you think they are. I learned this one switching from back-end/n-tier work over to front-end CMS based web dev. Yes, the users are the people browsing the website, but the users are also the people trying to use the damn CMS, so the website UX and design extends to those people as well.

Meaning, instead of forcing your hapless marketers to use some crazy admin panel with thousands of options and checkboxes, or even try and edit XML configuration (I've actually seen this), any time one extends the functionality of a CMS, creating some custom front-end UI to control it is a basic necessity.

In the same vein, any sort of software (and hardware! printers, tools, cars, whatever) has to consider the installation and maintenance of itself as a UX/design concern and the fact that it has multiple domains of users.

[+] Myrmornis|10 years ago|reply
Really, 90% of this article is an argument for docker. His docker installation annoyance was nothing compared to the pain ensuing from his decision to chuck docker out of the window.
[+] ajhit406|10 years ago|reply
I had the same trouble setting up discourse, so I set up a template on Nitrous using Docker that you can definitely use to get Discourse up and running in 30 seconds. (I just confirmed: I went from no environment to running discourse in less than 30 seconds.) Just `cd code/discourse && ./start-app`.

This, IMO, is where Docker shines. It shouldn't matter if it's set up with a microservice 12-factor architecture or if everything is set up in a monolithic VM-like container. I don't have the patience for ops -- I just want something that works. That's the point of having isolated, replicable containers.

In any case, I encourage you to try out the discourse container on Nitrous. I was actually surprised it happens to be the least popular container for us. I assumed because it's such a pain in the ass to get started, that it would be more popular =p

[+] lewisl9029|10 years ago|reply
So I actually remember running into the "Unable to locate package docker-engine" issue a while ago, and it seemed like an issue on Docker's end because for me, the issue actually only lasted about half a day before it started working again without any changes on my part.

So I think in the end this was just a case of really unfortunate timing, because if it weren't for the Docker installation issue, the only real complaint left in this post would have been the fact that the official method of installing Docker was to curl a script and pipe into sh[1]. And the rest of the post would have been singing praises of how amazing Docker is to be able to take setting up a Rails app along with all its dependencies and turn it into such a simple, painless process.

[1] Which is a perfectly valid criticism, by the way. They should really document the much saner method of installing through their official repos:

https://blog.docker.com/2015/07/new-apt-and-yum-repos/
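
For reference, the repo-based install from that 2015 post amounted to a few lines of system configuration. A sketch - the key ID and suite name ("ubuntu-trusty") were correct for Ubuntu 14.04 at the time, but check the current docs before trusting them:

```shell
# Add Docker's signing key and apt repo, then install through apt as usual.
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
     --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" \
  | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install -y docker-engine
```

Unlike curl-pipe-sh, this leaves the package under apt's control for upgrades and removal.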

[+] mkozlows|10 years ago|reply
Yeah, the really weird thing about this article is that it's all snarky about Docker, and then it goes off into weird manual-install land (which makes about as much sense as saying "I couldn't get the .msi installer to run, so I started copying files around and registering DLLs by hand" -- maybe you'll get that to work eventually, but it's never going to be pleasant), and then makes a call for something that... solves the problems Docker solves.

The whole article is basically a "there has to be an easier way!" infomercial for Docker, only it doesn't realize it.

[+] raspasov|10 years ago|reply
I have to say that Docker used to be very annoying to work with on Mac OS X but with the latest release of Docker Toolbox it has a much better "works out of the box" experience.

The article is pretty ranty but I can agree with the author that many things nowadays are way more complex than they have to be. As a plug, I'm going to say that this is one of the reasons why we started the CloudMonkey.io project. It lets you deploy a docker container with no fuss to production. It's up to you, however, to ensure that you don't over complicate your system unnecessarily.

I've made the mistake in the past where, very early on in the project, I started using a web server, Redis, ElasticSearch, MySQL, memcached, RabbitMQ, etc.

In most cases, more than three moving pieces only bring you headaches. Now I always try very hard to keep things simple to at most a web server and a database, plus maybe a memcached caching layer. If you need to have a queue or full text search functionality, I'd try to bring it in as an outside service.

[+] mschuster91|10 years ago|reply
What's most worrying with all that Docker bullshit is updating. We're going to end up with physical hosts with dozens of fire-and-forget VMs on them and each one filled with security holes.
[+] reacweb|10 years ago|reply
I agree with most of this rant, but not with this all-too-common myth: "Only one thing can bind to port 80 and it has to run as root". I generally use the following command: `setcap 'cap_net_bind_service=+ep' /usr/bin/nodejs`

Just learned the trick to become root when you belong to docker group. Awesome

[+] Jerry2|10 years ago|reply
> Let’s just say it rhymes with “piss horse”

I think he's talking about Discourse [1]. I tried Discourse a few years back when it was released, but it was too bare-bones at that time. Haven't tried it recently.

[1]: https://github.com/discourse/discourse

[+] voltagex_|10 years ago|reply
I really don't understand why a forum needs 2GB of RAM.