davmre | 1 year ago

Backprop itself doesn't invert the computation, but it does give you the direction for an incremental move toward the inverse (a 'nudge' as the article puts it). That is, given a sufficiently nice function f and an appropriate loss ||f(x) - y*||^2, gradient descent w.r.t. x will indeed recover the inverse x* = f^{-1}(y*), since that is what minimizes the loss. I assume this is what the article is pointing at.
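
A minimal sketch of this in JAX (the particular f, target, step size, and iteration count here are illustrative assumptions, not anything from the article):

    import jax
    import jax.numpy as jnp

    def f(x):
        # a "sufficiently nice" function: smooth and monotone, hence invertible
        return jnp.tanh(x) + 0.1 * x

    y_star = f(jnp.array(1.3))   # target output whose preimage we want

    def loss(x):
        # squared error ||f(x) - y*||^2
        return jnp.sum((f(x) - y_star) ** 2)

    grad_loss = jax.grad(loss)   # backprop gives the direction of the 'nudge'

    x = jnp.array(0.0)           # initial guess
    for _ in range(500):
        x = x - 0.1 * grad_loss(x)   # incremental move toward the inverse

    print(x)   # ~1.3, i.e. approximately f^{-1}(y_star)

Each individual step uses only the local gradient, but iterating the nudges recovers the inverse.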

If you want to be picky, it's true that the direct analogue of continuous optimization would be discrete optimization (integer programming, TSP, etc.) rather than decision problems like SAT. But there are straightforward reductions between the two (e.g., you can binary-search the optimal objective value with repeated calls to the decision version), so it's common to speak of optimization problems as being in P or NP even though that's not entirely accurate.
