top | item 5147465

scotje | 13 years ago

Steve, could you maybe expand a little bit on your reasons for not wanting to have the gems cached in your repo? Have you run into practical issues (deployment speed, etc.) or is it more of a philosophical thing?

steveklabnik | 13 years ago

Philosophical, mostly. Any non-philosophical justification I could give you would really be me just porting over my philosophical justification and pretending it's objective.

Example: "I don't want to wait forever while I transfer the extra 38mb over the network." (That's the size of a vendor/bundle for a new Rails app.) I have never actually compared transfer speed in each instance, so that'd just be a backport. ;)

That said, as far as philosophical objections go:

1. Checking in generated files is not a best practice. This feels the same to me.

2. I do a lot of development:

    $ ls src | wc -l
         107
Not all of those are Ruby projects, mind you, but I'm a member of 25 GitHub organizations and have ~100 repos on my personal account. That's a LOT of duplicate gem data.

3. Updating gem files in the repo obscures diffs. If I'm working on a feature branch, and I have 3 commits, and one of them is updating 3 gems, I have a few dozen or hundred files changed. I just want to see my changes, dammit! I guess this one can be construed as practical.
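For readers unfamiliar with the workflow being debated, here's a rough sketch of what "vendoring" gems looks like with Bundler. The first two commands are standard Bundler; the `.gitignore` line reflects Steve's preference (keeping the cache out of the repo), not scotje's:

```shell
# Copy every gem the app depends on into vendor/cache/:
bundle package

# Later installs can resolve entirely from the local cache, offline:
bundle install --local

# Steve's approach: keep the cache out of version control entirely.
echo 'vendor/cache/' >> .gitignore
```

With the cache ignored, gem updates show up in diffs only as `Gemfile`/`Gemfile.lock` changes rather than dozens of added or modified `.gem` files.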

scotje | 13 years ago

Thanks, those are all fair points.

The reason I went with "vendor everything" several years ago was that I had to make some changes to a legacy codebase and discovered that one of the gems it depended on was no longer available. It wasn't the end of the world to refactor around it, but it motivated me to find a way to ensure I would always have a local copy of all the dependencies for each app.

Someone in the comments of the rubygems.org story yesterday mentioned using a submodule for vendor/cache, which seems like an interesting idea to me. That could at least partially address #3.
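A sketch of what that submodule arrangement might look like. The repo URL and commit messages are placeholders, and this is one plausible setup, not a prescribed recipe:

```shell
# A second repo holds only the cached .gem files; mount it at vendor/cache.
git submodule add git@example.com:myapp-gem-cache.git vendor/cache

# Populate the cache as usual; Bundler writes into the submodule's tree.
bundle package

# Commit the gem churn inside the submodule...
(cd vendor/cache && git add -A && git commit -m "Update cached gems")

# ...so the app repo's diff is a single submodule-pointer change.
git add vendor/cache
git commit -m "Bump gem cache"
```

Feature-branch diffs in the main repo would then show one changed submodule SHA instead of hundreds of `.gem` files, which speaks directly to objection #3 above.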