bguthrie | 4 months ago
More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.
timr | 4 months ago
Well, OK, so you remember a setup that was bad for whatever reason. My point is that there's nothing about your remembered system that was inherent to Rails, and there were (and are) tons of ways to deploy that didn't work that way (just like any other framework).
Capistrano can do whatever you want it to do, of course, so maybe someone wrote a deployment script that rsynced a tarball, touched a file, etc., to restart a server, but it's not standard. The plain vanilla Cap deploy script, IIRC, does a git pull from your repo to a versioned directory, runs the asset build, and restarts the webserver via signal.
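For anyone who hasn't seen it, that stock flow can be sketched in plain Ruby. This is a minimal illustration of the versioned-release layout Capistrano uses (`releases/<timestamp>` plus a `current` symlink), not Capistrano itself: the fetch, asset build, and restart steps are stubbed out, and the `deadbeef` revision is a placeholder.

```ruby
# Sketch of a Capistrano-style deploy: each deploy goes into a fresh
# releases/<timestamp> directory, and `current` is an atomically-updated
# symlink pointing at the live release.
require "fileutils"

DEPLOY_ROOT = File.expand_path("deploy_demo")

def deploy!
  timestamp = Time.now.utc.strftime("%Y%m%d%H%M%S")
  release   = File.join(DEPLOY_ROOT, "releases", timestamp)
  FileUtils.mkdir_p(release)

  # 1. Fetch the code into the new release directory
  #    (real Capistrano checks out the repo here).
  File.write(File.join(release, "REVISION"), "deadbeef\n") # placeholder SHA

  # 2. Build assets inside the release (stubbed).
  FileUtils.mkdir_p(File.join(release, "public", "assets"))

  # 3. Atomically repoint `current` at the new release: create the
  #    symlink under a temp name, then rename over the old one.
  current  = File.join(DEPLOY_ROOT, "current")
  tmp_link = "#{current}.new"
  FileUtils.ln_s(release, tmp_link, force: true)
  File.rename(tmp_link, current)

  # 4. Restart the app server via signal (stubbed).
  release
end

release = deploy!
```

Because the cutover is a single `rename`, a request never sees a half-deployed tree, and rolling back is just repointing `current` at the previous release directory.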
bguthrie | 4 months ago
The main issue that, while not unique to Rails, plagued the early interpreted-language webapps I worked on was that the tail end of early CI pipelines didn't spit out a unified binary, just a bag of blessed files. Generating a tarball helped, but you still needed to pair it with some sort of unpack-and-deploy mechanism in environments that wouldn't or couldn't work with stock cap deploy, like the enterprise. (I maintained CC.rb for several years.) Docker was a big step up IMV because all of a sudden the output could be a relatively standardized binary artifact.
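The tarball-plus-unpack pattern described here can be sketched with Ruby's bundled rubygems tar support. The `build_artifact` and `unpack` names are illustrative, not from any real CI tool; the point is that the "artifact" is just a gzipped bag of files that still needs a matching unpack step on the other end.

```ruby
# Pack a release directory into one .tar.gz artifact, then unpack it at
# a deploy target -- the pre-Docker "pair the tarball with an
# unpack-and-deploy mechanism" approach.
require "rubygems/package"
require "zlib"
require "fileutils"

def build_artifact(src_dir, tar_path)
  File.open(tar_path, "wb") do |file|
    Zlib::GzipWriter.wrap(file) do |gz|
      Gem::Package::TarWriter.new(gz) do |tar|
        Dir.glob("#{src_dir}/**/*").select { |f| File.file?(f) }.each do |f|
          rel  = f.sub("#{src_dir}/", "")
          data = File.binread(f)
          tar.add_file_simple(rel, 0o644, data.bytesize) { |io| io.write(data) }
        end
      end
    end
  end
end

def unpack(tar_path, dest_dir)
  File.open(tar_path, "rb") do |file|
    Zlib::GzipReader.wrap(file) do |gz|
      Gem::Package::TarReader.new(gz).each do |entry|
        path = File.join(dest_dir, entry.full_name)
        FileUtils.mkdir_p(File.dirname(path))
        File.binwrite(path, entry.read)
      end
    end
  end
end

# Demo: pack a tiny fake release, then unpack it elsewhere.
src  = File.expand_path("demo_src")
dest = File.expand_path("demo_dest")
FileUtils.mkdir_p(File.join(src, "app"))
File.write(File.join(src, "app", "main.rb"), "puts 'hi'\n")
build_artifact(src, "release.tar.gz")
unpack("release.tar.gz", dest)
```

Note what's missing compared to a container image: file permissions, the interpreter version, and gem dependencies all still have to match on the target host, which is exactly the gap Docker closed.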
This is fun. We should grab a beer and swap war stories.
misiek08 | 4 months ago