gmfawcett | 2 months ago
While the later AIs quickly understood many aspects of the spec, they struggled with certain constraints whose intuitive meaning was concealed behind too much math. Matters I had assumed were completely settled, because a precise constraint existed in the spec, had to be re-explained to the AI after implementation errors were found. Eventually, I added more spec comments to explain the motivation for some of the constraints, which helped somewhat. (While it's an untested idea, my next step was going to be to capture traces of the TLA+ spec being tested against some toy models, and include those traces as inputs when producing the implementations, e.g. to construct unit tests. Reasoning about traces seemed to be a fairly strong suit for the AI helper.)
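A minimal sketch of that trace-replay idea, assuming hypothetical names throughout -- in practice the trace would come from the model checker's output, and `step` would be the real implementation's transition function rather than this toy:

```python
def replay(trace, step):
    """Check that `step` reproduces each transition in a recorded trace."""
    for before, after in zip(trace, trace[1:]):
        got = step(before)
        assert got == after, f"implementation diverged: {got!r} != {after!r}"

# Toy transition for illustration: promote the highest-ordinal candidate
# to be the new target, removing it from the candidate pool.
def step(state):
    candidates = set(state["candidates"])
    target = max(candidates)
    candidates.discard(target)
    return {"target": target, "candidates": candidates}

# A hand-written stand-in for a captured spec trace (sequence of states).
trace = [
    {"target": None, "candidates": {1, 2, 3}},
    {"target": 3, "candidates": {1, 2}},
    {"target": 2, "candidates": {1}},
]

replay(trace, step)  # raises AssertionError on the first divergent step
```

The appeal is that each trace is a concrete, checkable story about the spec, which sidesteps the "math hidden behind math" problem: the AI (or a human) only has to match observable state transitions, not re-derive the constraints.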
In hindsight, I feel I set my sights a little too high. A human reader would have had similar comprehension problems with my spec, and they probably would have taken longer to prime themselves than the AI did. Perhaps my takeaway is that TLA+ is a great way to model certain systems mathematically, since precision of meaning is invaluable; but you still have to show sympathy to your reader.
hamiecod | 2 months ago
gmfawcett | 2 months ago
Many of my rules seemed pretty basic to me -- a lot of things like, "there is no target-node candidate whose ordinal is higher than the current target's and also satisfies the Good_Match predicate." But if I had been writing it for a human reader, rather than just to document constraints, I would have put in more effort to explain why the constraints existed in the first place (what's a Good match? Are there Poor matches? Why do we care? etc.). I didn't skip this step altogether but I didn't invest much time into it.
I did take care to separate "physics" rules from "strategy" rules (i.e., explicitly separating core actions and limits from objectives and approaches). That seemed to help the AI, and I'm sure it would have helped people too.