COPY drkiq/Gemfile Gemfile
WORKDIR /opt/app/drkiq
RUN bundle install
This appears to leave out copying Gemfile.lock, so you have no real idea which versions of the dependencies are being installed. That is not safe. It's also installing a bunch of dependencies that aren't needed in production; a better bundle command would be something like: `RUN bundle install --no-cache --jobs 4 --without development test`
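A minimal sketch of what the corrected section might look like (the paths mirror the snippet above; everything else is an assumption, not from the original Dockerfile):

```dockerfile
WORKDIR /opt/app/drkiq

# Copy the lock file too, so `bundle install` installs the exact
# versions recorded in it rather than re-resolving on every build
COPY drkiq/Gemfile drkiq/Gemfile.lock ./

# Skip development/test gems and the local gem cache in the production image
RUN bundle install --no-cache --jobs 4 --without development test
```

Note that on newer Bundler versions the `--without` flag form is deprecated in favor of running `bundle config set without 'development test'` before the install; the flag shown matches the suggestion above.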
RUN rails webpacker:install
RUN rails assets:precompile
Why is `rails webpacker:install` being run here? That is very odd. As a general tip, I would also not compile assets as part of the Docker build, but compile them externally and then only copy over the `public` directory afterward. This removes any need for `node` or related development tools/packages in the production image.
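One way to get that separation inside Docker itself is a multi-stage build, sketched below; the base images, package names, and paths are assumptions for illustration, not from the article:

```dockerfile
# Build stage: node/yarn live here, only for asset compilation
FROM ruby:2.7 AS assets
RUN apt-get update && apt-get install -y nodejs yarnpkg
WORKDIR /opt/app/drkiq
COPY drkiq/ .
RUN bundle install --jobs 4 --without development test
RUN rails assets:precompile

# Runtime stage: no node, no build tools; only the compiled output comes over
FROM ruby:2.7-slim
WORKDIR /opt/app/drkiq
COPY drkiq/ .
COPY --from=assets /opt/app/drkiq/public ./public
RUN bundle install --jobs 4 --without development test
```

The same effect can be had with no build stage at all: run `rails assets:precompile` on the CI host and `COPY` only the resulting `public` directory into the image.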
> This appears to be leaving out copying Gemfile.lock so you have no real idea what versions of dependencies are being installed here. This is not safe.
As someone who's only touched Ruby for a few years (and not even Ruby on Rails): I wouldn't have even known to look for a Gemfile.lock.
Yes, we do it at Basecamp for all of our legacy apps (and our newest one, HEY), and we've run Basecamp 2 in production via Kubernetes (we no longer do, but we did for a lengthy period of time).
GitLab has a pretty nice image for running their Community Edition in Docker.
I've investigated it a few times for several Rails applications that I maintain, but I've never completed a rollout. Running Rails on Docker isn't a huge hurdle, but Capistrano's default git-based deploys are just so _easy and fast_ once the server is set up. I also have to balance deploying Docker with ops' expertise.
I do recommend getting a copy of _Docker for Rails Developers_ to avoid needing to read a bunch of disparate blogs, though it covers Docker Swarm instead of Kubernetes.
We have a number of Dockerized Rails apps hosted on AWS Elastic Beanstalk. It isn't quite as smooth as Heroku, but once set up it has been quite solid. The parity between development and prod is decent; we use docker-compose and localstack to simulate backing services.
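A minimal sketch of that kind of setup (the service names, image tag, and environment variables are assumptions; check the current localstack docs for the ports and `SERVICES` values your app actually needs):

```yaml
# docker-compose.yml (sketch)
version: "3"
services:
  web:
    build: .
    depends_on: [localstack]
    environment:
      # Point AWS SDK calls at the local fake instead of real AWS
      AWS_ENDPOINT_URL: http://localstack:4566
  localstack:
    image: localstack/localstack
    environment:
      SERVICES: s3,sqs   # only start the backing services you use
    ports:
      - "4566:4566"      # localstack's single edge port
```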
We've had our Rails-like apps (Sinatra, Padrino) Dockerized for 5 years now, running on container orchestration systems: first Mesos, now Kubernetes, with some running as Lambdas as well. We do not have a single production app that isn't in a container, and we are mostly Ruby.
I use Docker as a build server. At the end of the project it spits out a Debian package that I can install on a machine. It doesn't need bundler, only Ruby and the `gem` command (up to a certain version). Quite a nice setup if you're looking for bare-metal performance.
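A hedged sketch of that build-server pattern, using `fpm` to produce the package (this is a generic illustration, not the commenter's setup: the base image, app name, version, and paths are all made up):

```dockerfile
FROM ruby:2.7 AS build
WORKDIR /src
COPY . .

# Vendor the app's gems into the source tree so the target
# machine needs only ruby, not a network-connected bundler run
RUN bundle install --deployment --without development test

# fpm turns a directory into a .deb in one step
RUN gem install fpm \
 && mkdir -p /out \
 && fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp -p /out/myapp.deb .
```

After building, the package can be pulled out of a container made from this image with `docker cp` and installed on bare metal with `dpkg -i`.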
I run my side project in Docker on Kubernetes, works well and wasn’t too annoying to set up. Compared to getting my Go deployments to run it was kind of a pain though.
Any suggestions on how to do zero-downtime deploys with a Rails app running on Docker?
I guess an obvious one is to use two running images, pull incoming requests from one of them, take it down, update, bring it up, start traffic flow again. Repeat for the other.
You don't need to take your old app down or update it in place during your cutover. Your new app will be a new Docker image, so just run an instance of it ahead of time.
When it comes to deploy, just repoint your load balancer/proxy.
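Sketched with Kubernetes, since several commenters here run it: one way to do that repoint is a Service selector flip between two Deployments; the names and labels below are made up for illustration:

```yaml
# Both the "blue" (old) and "green" (new) Deployments run at once;
# the Service's selector decides which one receives traffic.
apiVersion: v1
kind: Service
metadata:
  name: rails-app
spec:
  selector:
    app: rails
    release: green   # flip blue -> green to cut over; flip back to roll back
  ports:
    - port: 80
      targetPort: 3000
```

Because the new pods are already up and passing readiness checks before the flip, no request ever hits a cold or half-updated instance.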
mbell | 6 years ago
LukaD | 6 years ago
inetknght | 6 years ago
Perhaps the author doesn't know either.
thrownaway954 | 6 years ago
gem 'byebug', '1.1.0'
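An exact pin like that covers one gem; the lock file covers the whole resolved tree. A short contrast, with made-up version numbers:

```ruby
# Gemfile: constraints are ranges unless you pin exactly
gem 'byebug', '1.1.0'     # exact: only 1.1.0 ever installs
gem 'sidekiq', '~> 6.0'   # pessimistic: any 6.x, but never 7.0
gem 'rails'               # unconstrained: whatever resolves at install time

# Gemfile.lock (generated by bundler) then records the single exact version
# chosen for every gem *and* its transitive dependencies -- which is why
# copying it into the image matters even when the Gemfile pins versions.
```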
MattyMc | 6 years ago
I've been using Rails for years and haven't yet experienced much pain from language/library versioning (etc). Python is a different story...
t3rabytes | 6 years ago
breckenedge | 6 years ago
[1] https://www.amazon.com/Docker-Rails-Developers-Applications-...
gnarco | 6 years ago
specialp | 6 years ago
Fire-Dragon-DoL | 6 years ago
dewey | 6 years ago
xfalcox | 6 years ago
bdcravens | 6 years ago
sandstrom | 6 years ago
But is there any other method?
turtlebits | 6 years ago
mrinterweb | 6 years ago
sealthedeal | 6 years ago
ArturT | 6 years ago
james-imitative | 6 years ago
[deleted]