matthewbauer | 8 months ago

Not sure on the exact take of the OP, but:

Package maintainers often think in terms of constraints like "I need 1.0.0 <= pkg1 < 2.0.0 and 2.5.0 <= pkg2 < 3.0.0". This tends to make total sense in the micro context of a single package, but IMO it always falls apart in the macro context. The problem is:

- constraints are not always right (say pkg1==1.9.0 actually breaks things)

- the combined constraints of all your dependencies end up leaving very few degrees of freedom for the constraint solver, so in practice you can’t just take any pkg1 and use it

- even if you can use a given version, your package may have a hidden dependency on one of pkg1’s dependencies that only becomes apparent once you start changing pkg1’s version

Constraint solving is really difficult, and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you. So while you can’t, say, take a version of pkg1 from 2015 and use it with a version of pkg2 from 2025, you can take the whole 2015 Nixpkgs and get pkg1 & pkg2 from 2015.
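
For concreteness, a minimal sketch of what that looks like in practice (the revision, hash, and the pkg1/pkg2 names are placeholders, not real values):

    let
      # Pin the entire package set to one historical Nixpkgs commit.
      nixpkgs2015 = builtins.fetchTarball {
        url = "https://github.com/NixOS/nixpkgs/archive/<2015-rev>.tar.gz";
        sha256 = "<hash>";
      };
      pkgs = import nixpkgs2015 { };
    in
      # pkg1 and pkg2 both come from the same 2015 snapshot, so their
      # mutual constraints were already resolved together when that
      # snapshot was assembled; no per-package solving happens here.
      [ pkgs.pkg1 pkgs.pkg2 ]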

jonhohle | 8 months ago

There’s no clear definition, in most languages, of major/minor/patch versioning. Amazon did this reasonably well when I was there, though the patch version was implicitly assigned and the major and minor required humans to follow the convention:

You could not depend on a patch version directly in source. You could force a patch version in other ways, but each package would depend on a specific major/minor, and the patch version was decided at build time. Differences in the patch version were expected to be binary compatible.

Minor version changes were typically source compatible, but not necessarily binary compatible. You couldn’t just arbitrarily choose a new minor version for deployment (well, you could, but you shouldn’t expect it to go well).

Major versions were reserved for source- or logic-breaking changes. Together, the major and minor versions were considered the interface version.

There was none of this pinning to arbitrary versions or hashes (though, you could absolutely lock that in at build time).

Any concept of package (version) set was managed by metadata at a higher level. For something like your last example, we would “import” pkg2 from 2025, bringing in its dependency graph. The 2025 graph is known to work, so only packages that declare dependencies on any of those versions would be rebuilt. At the end of the operation you’d have a hybrid graph of 2015, 2025, and whatever new unique versions were created during the merge, and no individual package dependencies were ever touched.

The rules were also clear. There were no arbitrary expressions describing version ranges.

booniepepper | 8 months ago

For the record, Amazon's Builder Tools org (or ASBX or whatever) built a replacement system years ago, because this absolutely doesn't work for a lot of projects and is unsustainable. They have been struggling for years to figure out how to move people off it.

Speaking at an even higher level, their system has been a blocker to innovation and introduces unique challenges to solving software supply chain issues.

Not saying there aren't good things about the system (I like cascading builds, reproducibility, and buffering from 3p volatility), but I wouldn't hype this up too much.

0xbadcafebee | 8 months ago

> Constraint solving is really difficult and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you.

Thank you, I was looking for an explanation of exactly why I hate Nix so much. It takes a complicated use case and tries to "solve" it by making your use case invalid.

It's like the Soylent of software. "It's hard to cook, and I don't want to take time to eat. I'll just slurp down a bland milkshake. Now I don't have to deal with the complexities of food. I've solved the problem!"

lkjdsklf | 8 months ago

It’s not an invalid use case in nixpkgs. It’s kind of the point of package overlays.

It removes the “magic” constraint solving that seemingly never works and pushes it to the user to make it work.
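
For example, a minimal overlay sketch that pins pkg1 to a specific version (the version, URL, and hash are made up for illustration):

    # overlay.nix
    self: super: {
      pkg1 = super.pkg1.overrideAttrs (old: {
        version = "1.8.3";
        src = super.fetchurl {
          url = "https://example.org/pkg1-1.8.3.tar.gz";
          sha256 = "<hash>";
        };
      });
    }

Applied via import <nixpkgs> { overlays = [ (import ./overlay.nix) ]; }, everything that depends on pkg1 is rebuilt against the pinned version, so it’s the user, not a solver, who owns making that combination work.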

chriswarbo | 8 months ago

> I was looking for an explanation of exactly why I hate Nix so much

Note that the parent said "I think Nixpkgs takes the right approach in mostly avoiding it". As others have already said, Nix != Nixpkgs.

If you want to go down the "solving dependency version ranges" route, then Nix won't stop you. The usual approach is to use your normal language/ecosystem tooling (cabal, npm, cargo, maven, etc.) to create a "lock file"; then convert that into something Nix can import (if it's JSON that might just be a Nixlang function; if it's more complicated then there's probably a tool to convert it, like cabal2nix, npm2nix, cargo2nix, etc.). I personally prefer to run the latter within a Nix derivation, and use it via "import from derivation"; but others don't like importing from derivations, since it breaks the separation between evaluation and building. Either way, this is a very common way to use Nix.
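
As a rough sketch of the JSON case (the lock file layout here is hypothetical; real formats like package-lock.json need more massaging, hence the dedicated converters):

    { pkgs ? import <nixpkgs> { } }:
    let
      # Read a lock file with the assumed shape:
      #   { "deps": { "<name>": { "url": ..., "sha256": ... } } }
      lock = builtins.fromJSON (builtins.readFile ./deps.lock.json);
    in
      # Turn each pinned entry into a fixed-output fetch.
      builtins.mapAttrs (name: dep: pkgs.fetchurl { inherit (dep) url sha256; })
        lock.deps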

(If you want to be even more hardcore, you could have Nix run the language tooling too; but that tends to require a bunch of workarounds, since language tooling tends to be wildly unreproducible! e.g. see http://www.chriswarbo.net/projects/nixos/nix_dependencies.ht... )

matthewbauer | 8 months ago

I mean you can do it in Nix using overlays and overrides. But it won’t be cached for you, and there’s a lot of extra fiddling required. I think it’s pretty much the same as how Bazel and Buck work. This is the future, like it or not.
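
For instance, a sketch of the override side (pkg2 and pkg1_19 are hypothetical attribute names):

    { pkgs ? import <nixpkgs> { } }:
    # Build pkg2 against a different pkg1. Nix happily does this, but the
    # resulting derivation is unique to you, so it won’t be in the public
    # binary cache and has to be rebuilt locally.
    pkgs.pkg2.override {
      pkg1 = pkgs.pkg1_19;
    }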