lindig | 2 years ago

The argument for annealing in the original paper is that accepting regressions is essential to escape from local minima.

https://www2.stat.duke.edu/~scs/Courses/Stat376/Papers/Tempe...

"Annealing, as implemented by the Metropolis procedure, differs from iterative improvement in that the procedure need not get stuck since transitions out of a local optimum are always possible at nonzero temperature. A second and more important feature is that a sort of adaptive divide-and-conquer occurs. Gross features of the eventual state of the system appear at higher tempera-tures; fine details develop at lower tem-peratures. This will be discussed with specific examples."

_0ffh|2 years ago

Yes, without a temperature and cooling schedule, how can it be annealing? It's in the name. It may sound harsh, but I'd call it an abuse of the term to do hillclimbing and call it annealing. It also seems lazy, since doing it right is a nearly trivial addition to the code. Finding the best cooling schedule might require some experimentation, though.
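To illustrate the parent's point that doing it right is a small addition in code: hillclimbing becomes simulated annealing once you add a temperature, a Metropolis acceptance test, and a cooling schedule. A minimal sketch (the geometric schedule and all parameter values here are illustrative defaults, not from the thread):

```python
import math
import random

def anneal(cost, neighbor, x0, t_start=10.0, t_end=1e-3, alpha=0.95, steps_per_t=100):
    """Simulated annealing: Metropolis acceptance plus a geometric cooling schedule."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis rule: always accept improvements; accept a
            # regression with probability exp(-(fy - fx) / t), so escapes
            # from local minima stay possible at nonzero temperature.
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # geometric cooling; t -> 0 recovers plain hillclimbing
    return best, fbest

# Toy usage: a 1D multimodal objective whose global minimum (~ -7.9 near
# x ~ -1.3) is separated from the start by a local minimum near x ~ 3.85.
f = lambda x: x * x + 10 * math.sin(x)
step = lambda x: x + random.gauss(0, 0.5)
best, fbest = anneal(f, step, 5.0)
```

Dropping the cooling loop and keeping only the `fy <= fx` branch gives exactly the greedy hillclimber the parent objects to calling "annealing".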

WanderPanda|2 years ago

It's strange that in a field as important as optimization we still think in terms of "escaping from local minima". Also (as a total outsider) the progress in general optimization algorithms/implementations appears to be very slow (I was shocked by how old ipopt is). I was wondering whether all the low-hanging inductive biases (for real-world problems) have already been exploited, or whether we just have no good way of expressing them. Maybe learning them from data in a fuzzy way might work?

marcosdumay|2 years ago

Unless you come up with some new take on the P ?= NP problem, there isn't much we can improve on in generic optimization.

There are all kinds of possibilities for specific problems, but if you want something generic, you have to traverse the possibility space and use its topology to get to an optimum. If the topology is chaotic, you are out of luck, and if it's completely random, there's no hope.

mturmon|2 years ago

Yes.

And beyond this intuition (escape from local optima), the reason annealing matters is that you can show that, under conditions, with the right annealing schedule (it's rather slow, T ~ 1/log(N_iter), iirc?) you will converge to the global optimum.

I'm not well-versed enough to recall the conditions, but it wouldn't surprise me if they are quite restrictive, and/or hard to implement (e.g., no explicit guidance for choosing a specific temperature schedule).
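The slow schedule the parent half-remembers is the logarithmic one, T_k = c / log(k + 2), where the guarantee (roughly, Hajek's condition) needs the constant c to be at least the depth of the deepest non-global local minimum. A quick sketch of why it's impractical; the choice c = 1 below is purely illustrative:

```python
import math

def log_schedule(k, c=1.0):
    """Logarithmic cooling: T_k = c / log(k + 2).

    Convergence to a global optimum is guaranteed (under conditions)
    when c is at least the depth of the deepest non-global local
    minimum -- a quantity you rarely know in advance.
    """
    return c / math.log(k + 2)

# The catch: iterations needed to cool below a target temperature T
# grow exponentially, since T_k < T  <=>  k > exp(c / T) - 2.
for T in (1.0, 0.1, 0.01):
    print(f"T < {T}: need k > {math.exp(1.0 / T) - 2:.3g} iterations")
```

Reaching T < 0.01 with c = 1 already takes on the order of e^100 iterations, which is why practical codes use faster (geometric) schedules and give up the formal guarantee.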