cesarb | 2 months ago
> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.
This article is mixing two separate issues. One is using git as the master database storing the index of packages and their versions. The other is fetching the code of each package through git. They are orthogonal: you can have a package index stored in git while the packages themselves are zip/tar/etc. archives; a package index not using git while each package is cloned from a git repository; both the index and the packages in git repositories; neither using git; or no package index at all (AFAIK that's the case for Go).
kpcyrd | 2 months ago
It then digresses into implementation details of GitHub's backend (how are 20k forks relevant?), then complains about default settings of the "standard" git implementation. You don't need to check out a git working tree to have efficient key-value lookups. Without a working tree you don't need to worry about filesystem directory limits, case sensitivity, or path length limits.
I was surprised the author believes the git-equivalent of a database migration is a git history rewrite.
What do you want me to do, invent my own database? Run Postgres on a $5 VPS and have everybody accept it as a single point of failure?
Spivak | 2 months ago
Oh how times have changed. Yes, maybe run two $5 VPSs behind a load balancer for HA so you can patch without downtime, then put a CDN in front to serve the repository content globally to everyone. Sign the packages cryptographically so you can invite people in your community to become mirrors.
How do people think PyPI, RubyGems, CPAN, Maven Central, or distro package repositories work?