andrewrothman | 5 years ago
I think the way Go and ES Modules (Deno / browsers) represent module naming and resolution by URLs has some nice benefits:
* decentralized module hosting
* could be extended beyond HTTP / Git (e.g. importing modules via "ipfs://")
* re-uses existing web infrastructure for namespace ownership (e.g. via DNS)
* possibility of hosting backend and frontend modules for multiple languages in a single HTTP registry
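The list above hinges on a URL specifier bundling three things into one string: the scheme (transport), the host (the DNS-backed namespace), and the path to the module. As a minimal sketch of that decomposition (the function name and shape are illustrative, not any real resolver's API):

```rust
// A minimal sketch (not any real resolver's API) of how a URL-style
// module specifier decomposes: the scheme selects the transport, the
// host is the DNS-backed namespace, and the path locates the module.
fn parse_specifier(spec: &str) -> Option<(&str, &str, &str)> {
    // "ipfs://example.com/my-module/lib.rs"
    //   -> ("ipfs", "example.com", "my-module/lib.rs")
    let (scheme, rest) = spec.split_once("://")?;
    let (host, path) = rest.split_once('/')?;
    Some((scheme, host, path))
}
```

Under this scheme, supporting a new transport like "ipfs://" is just a new handler keyed on the scheme, while namespace ownership piggybacks on whoever controls the host's DNS entry.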
However, I do see some important tradeoffs to that approach:
* modules may become unresolvable from their original URL (link rot)
* immutability cannot be guaranteed at the original URL
* semver support becomes a concern of the registry, and some registries may not support it (e.g. the logic to resolve "https://example.com/my-module/1.x.x/lib.rs")
* all available modules are more difficult to discover (browse and search)
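The semver point in the list above means each registry would have to implement pattern resolution itself. A hypothetical sketch of that registry-side logic, handling only exact components and "x" wildcards (a real registry would need a full semver implementation):

```rust
// Hypothetical sketch of registry-side resolution for a pattern like
// "1.x.x": each dotted component must either be "x" or match exactly.
fn matches(pattern: &str, version: &str) -> bool {
    let p: Vec<&str> = pattern.split('.').collect();
    let v: Vec<&str> = version.split('.').collect();
    p.len() == v.len() && p.iter().zip(&v).all(|(pc, vc)| *pc == "x" || pc == vc)
}

fn resolve<'a>(pattern: &str, versions: &[&'a str]) -> Option<&'a str> {
    // Assumes `versions` is sorted ascending; picks the highest match.
    versions.iter().copied().filter(|v| matches(pattern, v)).last()
}
```

If a registry lacks this logic, a request for ".../1.x.x/lib.rs" simply 404s, which is exactly why the comment flags it as a per-registry concern rather than a property of the URL scheme itself.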
I see some proposed solutions to those tradeoffs:
* proxies and dependency lockfiles of file hashes, to help prevent link rot and mutation
* open source registry implementations with built-in semver support
* a central index of modules which can be searched and browsed, if modules are submitted to it
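The lockfile idea in the list above amounts to: pin a hash per URL at first fetch, then reject any later fetch whose body hashes differently. A toy sketch of that check (FNV-1a stands in for a real cryptographic digest such as SHA-256, and all names here are illustrative):

```rust
use std::collections::HashMap;

// Toy lockfile check: a fetched module body is accepted only if its hash
// matches the pinned entry, so mutation at the original URL is detected.
// FNV-1a is used for brevity; a real lockfile would pin SHA-256 digests.
fn fnv1a(data: &[u8]) -> u64 {
    data.iter().fold(0xcbf2_9ce4_8422_2325u64, |h, &b| {
        (h ^ b as u64).wrapping_mul(0x0000_0100_0000_01b3)
    })
}

fn verify(lock: &HashMap<String, u64>, url: &str, body: &[u8]) -> bool {
    lock.get(url).map_or(false, |&pinned| pinned == fnv1a(body))
}
```

Note what this does and doesn't buy: it detects mutation at the original URL, but it cannot make an unreachable URL resolvable again, which is why proxies are listed alongside it.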
However, those solutions don't guarantee fixes for every tradeoff.
I'd be very interested in learning more about the costs and benefits of the various approaches, and about the Rust team's discussions on modules. I really appreciate the team's thorough approach to design decisions, as it has manifested in a really cleanly designed language.
Thanks!
steveklabnik | 5 years ago
1. The only part of this that's been in recent discussions is using the URL as a namespace. This has led to questions around how those would be represented in code, given that URLs are not valid identifiers. Also, using DNS for namespaces is mutable and costs money, which is a huge barrier to adoption that we did not want. Publishing should be free and easy.
2. I think you've already identified some of the tradeoffs that made this be rejected: for reproducible build reasons, referring to external services is considered unacceptable. This is due to stuff like downtime of services we don't control, but also the mutability of those services. In theory, you could proxy them, but it's not as clear what this actually buys you, because to get those advantages back, you'd want to always refer to the proxy, and now you've re-centralized everything.
Another way to look at this is that all of the solutions you've proposed are significantly more complicated than what we've built. That complexity exists for good reasons, but given point 2, it's not clear that those reasons are good enough to justify the costs. The current situation is significantly simpler for all parties, and the end experience ends up the same anyway.
I hope that helps!
andrewrothman | 5 years ago