“Formulas that update backwards” is the main idea behind how neural networks such as LLMs are trained: the network computes a value, the error in that value is measured, and the error is then pushed backward through the network; this relies on the function computed at each node being differentiable.
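A minimal sketch of that forward-then-backward flow, on a tiny two-parameter network (all numbers and names here are illustrative):

    import math

    # Tiny "network": y = w2 * tanh(w1 * x)
    w1, w2 = 0.5, -0.3
    x, target = 1.0, 0.8
    lr = 0.1

    for step in range(100):
        # forward: compute a value and its error
        h = math.tanh(w1 * x)
        y = w2 * h
        err = y - target
        loss = 0.5 * err * err
        # backward: push the error through each node via its
        # local derivative, in reverse order (the chain rule)
        dy = err                      # dloss/dy
        dw2 = dy * h                  # dloss/dw2
        dh = dy * w2                  # dloss/dh
        dw1 = dh * (1 - h * h) * x    # tanh'(z) = 1 - tanh(z)^2
        w2 -= lr * dw2
        w1 -= lr * dw1

    print(loss)  # approaches 0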
big-chungus4|2 months ago
I think you should be able to use bidi-calc to train a neural net, although I haven't tried. You'd define a neural net and then change its random output to what you want it to output. However, as I understand it, it won't find a good solution: it might find a least-squares solution for the last layer, but then you'd want the previous layer to output something that reduces the last layer's error, and bidi-calc will no longer consider the last layer at all.
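Roughly what I mean by the last-layer least-squares behaviour, sketched in numpy (this is my guess at what such a solver would effectively do, not bidi-calc's actual API; the shapes and names are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))    # inputs
    Y = rng.normal(size=(100, 3))    # desired outputs

    W1 = rng.normal(size=(8, 16))    # earlier layer, left untouched
    H = np.tanh(X @ W1)              # its fixed activations

    # Solving only the last (linear) layer: min ||H @ W2 - Y||^2
    W2, *_ = np.linalg.lstsq(H, Y, rcond=None)

    # The last layer's error is now minimized, but nothing tells W1
    # to produce activations H that would make that error smaller.
    print(np.linalg.norm(H @ W2 - Y))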
uoaei|2 months ago
The term of interest is "backpropagation".
Towaway69|2 months ago
Wasn’t Prolog invented to formalise these kinds of problems, where you make the inputs match the desired output? [1]
[1] https://en.wikipedia.org/wiki/Declarative_programming