top | item 36962191


workingjubilee | 2 years ago

The answer to that is simple: We do.

We try to catch things before public release.

We run UNSPEAKABLE amounts of regression tests.

We ask people to test unstable things.

Almost no one actually does test the unstable things. There's too much friction.

Almost no one actually gives us feedback on them. There's too much friction.

Things land on stable, people FINALLY actually mass-use them, the stable thing turns out to be busted, and THEN people overcome the friction to complain because they (rightly) fear it is going to be unfixable. Sometimes it is. Most of these have been very minor so far, and only affected things easily rolled back, but there will come a day when it is not so minor and not so easy to roll back. And nothing we can do in simulation or stress-testing on our end can replace contact with actual programs.

We're not making a typical "product" here. You can't think of it in typical product terms. We're making a programming language here. A contract, effectively, though not a legal one, as it is itself a system of expressing contracts. We cannot simply roll back things that are part of the language's contract in our stable releases if something is flawed, unless the flaw is SO dire as to result in one of the handful of situations that allows us to break it anyways.


LightFog | 2 years ago

Thanks for the extra context, and just in case my comment came across as very negative or blanket anti-telemetry: I think you are building an awesome tool, and the sensitive approach to actually handling the metrics seems totally reasonable. Hopefully they end up being useful here. I've just seen telemetry not move the needle much on release quality relative to the various nuisances of handling it, though admittedly in a different domain.