ilikebits's comments
ilikebits | 2 months ago | on: Show HN: Hurry – Fast Rust build caching
ilikebits | 5 months ago | on: Monads are too powerful: The expressiveness spectrum
ilikebits | 9 months ago | on: Launch HN: Relace (YC W23) – Models for fast and reliable codegen
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
It would be a fun thing to do if we had the resources to get equally good cross-compilation in Rust, but we're focused on building functionality right now.
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
We haven't found another provider that supports both of these requirements. It's not some amazing technical innovation, but it is one of those annoying paper cuts that builds up with all the others.
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
We're also working on a hosted service! If you'd like a sneak peek, send us a message at [email protected] (or email me directly at [email protected]). I'm happy to talk about your specific needs and see if we can build something for them.
(And yes, it is Rust. I keep trying to find projects where I get to stretch my Haskell wings again, but unfortunately I keep working on things with silly requirements such as "easily understandable performance" and "a cross-compilation story" and "not making my collaborators sit through another monad tutorial".)
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
Most of these scripts were designed for a world where there was A Blessed Deployment Machine that acted as its own de facto centralized control plane. We're designed for a newer world where publishing is just another piece of your CI, so you need more features to handle concurrency control, distributed signing, incremental index rebuilds, etc.
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
1. Aptly needs to rebuild the entire repository before it can make any changes. One of our customers builds packages for every Linux OS and every architecture, each release contains 10-something separately packaged tools, and they do regular releases. This means their repository of packages is _huge_. When they try to do small maintenance actions like removing a package that has a bug, they need to rebuild the entire repository, which takes upwards of 30 minutes. On Attune, since we have a centralized control plane, it takes about 10 seconds.
2. Aptly doesn't have concurrency controls. This makes it annoying to run in CI. Imagine you're setting up a CI pipeline that runs on every commit and publishes a package. You need to make sure two instances of Aptly never run at the same time, because concurrent publishes will clobber each other's updates. Can you ensure that your CI only ever runs one publish at a time? This is easy in some setups, but surprisingly hard in GitHub Actions: concurrency keys don't quite work the way you'd expect them to.
3. Aptly uses its storage as its source of truth for the current state of packages. This is a great design choice for simplicity! However, if your CI job dies halfway through during an Aptly publish, this can leave your repository in a corrupted (in this case, half-updated) state. To recover from this, you need to rebuild the repository state... which you can't, because the ground truth is contained in the storage which was just corrupted! We mitigate this issue by tracking metadata in a separate transactional database so CI crashes don't cause corruption.
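To make point 2 concrete, here's a minimal (hypothetical) workflow fragment using GitHub Actions' `concurrency` key; the group name `apt-publish` is made up for illustration. The surprise is that GitHub Actions keeps at most one *pending* run per concurrency group, so with this setup older queued publishes get superseded rather than run in order:

```yaml
# Hypothetical workflow fragment: this serializes publishes, but note
# that GitHub Actions keeps only the newest pending run per group, so
# intermediate commits' publish jobs can be cancelled, not queued.
concurrency:
  group: apt-publish
  cancel-in-progress: false
```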
These are some examples of the little cuts that accumulate with all of the tools we've found so far. Lots of the open source tooling is great at low scale, but the paper cuts pile up once you're pushing a meaningful number of packages to production.
ilikebits | 11 months ago | on: Show HN: Attune - Build and publish APT repositories in seconds
To answer your direct question: on self-hosted repositories, the URI right now is just a string identifier. We are working right now on a cloud hosted service, and there the URI will help us figure out which repository to actually serve.
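For context, this is the URI in question as it appears in a client's APT source entry; the hostname and keyring path below are hypothetical, for illustration only:

```
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://apt.example.com/ stable main
```

Today that URI is just matched as a string; on the hosted service it will be used to route requests to the right repository.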
ilikebits | 1 year ago | on: Launch HN: Modernbanc (YC W20) – Modern and fast accounting software
ilikebits | 1 year ago | on: Show HN: A submarine combat game in the browser
ilikebits | 2 years ago | on: Unison Cloud
There are two specific things here that make me reluctant to use Unison Cloud in my own work:
1. It doesn't look like there's any FFI or way to shell out to other tools within Unison Cloud. I understand that this is necessary to provide the desired static guarantees, but the lack of an escape hatch for using pre-existing code makes this a really hard sell.
2. Typed storage is excellent! What are its performance characteristics? In my experience, gaining expressiveness in storage systems often requires trading away performance because being able to store more kinds of values means we have fewer invariants to enable performance optimizations. How do migrations work? I've always found online migrations to be a major pain point, especially because data rapidly becomes very heavy. (At a glance, it looks like storage is key-value with some DIY indexing primitives, and I couldn't find anything about migration.)
The article asks, "Why is it so complicated, anyway?" My guess would be that:
1. For small projects where you can toss together NextJS and SQLite and throw it onto Hetzner, it really _isn't_ that complicated.
2. For large projects with large amounts of data, high availability requirements, and very large scale, performance and operability matter a lot, and none of these all-in-one systems has yet demonstrated good performance and operability at scale.
3. There really is not that much demand for projects between these two sizes.