lindig|2 years ago
My understanding of simulated annealing is that solutions that are not improvements are still accepted with some probability in early steps but that this probability decreases as "temperature" drops. Looking at your description (but not code) I did not see that aspect but it looked like you would only accept improvements of the cost function. Is this correct or where does your solution accept slight regressions with some probability, too?
stefanpie|2 years ago
I abused the definition of annealing a lot in the post but I briefly touched on the idea:
"At first, you might want to make moves or swaps over large distances or you might want to accept some percent of moves that don't improve the objective, but as time goes on, you want to make smaller moves and be less likely to select moves that don't improve the objective. This is the "annealing" part of simulated annealing in the context of FPGA placement."
I think I might have made the writing confusing because I mixed the original definition of the annealing approach (accepting moves that don't improve the objective) with the use of "annealing" for other things like action parameters (e.g., the swap distance between two nodes). That's something I should edit to clarify.
Note that, yes, the thing I implemented doesn't do any annealing but rather just picks actions that improve the objective. I am working on some extensions to add real annealing, but that turned out to involve a lot more in-depth technical work than is obvious.
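(Not from the post, but for anyone following along: the difference between the greedy variant and "real" annealing comes down to a single acceptance rule. A minimal sketch of the standard Metropolis criterion, assuming `delta` is the change in cost where lower is better:)

```python
import math
import random

def accept(delta, temperature, rng=random):
    """Metropolis criterion: always accept an improving move (delta <= 0);
    accept a worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    if temperature <= 0:
        return False  # zero temperature degenerates to pure greedy
    return rng.random() < math.exp(-delta / temperature)
```

At high temperature nearly every move is accepted; as the temperature drops, the rule smoothly approaches the greedy accept-only-improvements behavior described above.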
kwoff|2 years ago
"At first, you might want to make moves or swaps over large distances or you might want to accept some percent of moves that don't improve the objective, but as time goes on ...
However, as it turns out, you technically don't need this annealing part to make FPGA placement work. You can just randomly try different moves and accept or reject them based on whether they improve the objective function. This is what I did in my toy implementation of an FPGA placer just to keep it simple."
lindig|2 years ago
https://www2.stat.duke.edu/~scs/Courses/Stat376/Papers/Tempe...
"Annealing, as implemented by the Metropolis procedure, differs from iterative improvement in that the procedure need not get stuck since transitions out of a local optimum are always possible at nonzero temperature. A second and more important feature is that a sort of adaptive divide-and-conquer occurs. Gross features of the eventual state of the system appear at higher tempera-tures; fine details develop at lower tem-peratures. This will be discussed with specific examples."