
Modern Optimizers – An Alchemist's Notes on Deep Learning

46 points | maxall4 | 3 months ago | notes.kvfrans.com

7 comments


derbOac|3 months ago

Interesting read and interesting links.

The entry asks "why the square root?"

On seeing it, I immediately noticed that with log-likelihood as the loss function, the whitening metric looks a lot like the Jeffreys prior or an approximation of it (https://en.wikipedia.org/wiki/Jeffreys_prior), which is the reference prior when the CLT holds. The square root can be derived from the reference-prior structure, but in a lot of modeling scenarios it also has the effect of scaling things proportionally to the scale of the parameters (for lack of a better way of putting it; think standard error versus sampling variance).

If you think of the optimization method this way, you're essentially reconstructing a kind of Bayesian criterion with a Jeffreys prior.
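The Fisher/Jeffreys reading of "why the square root?" can be made concrete with a small numpy sketch. This is a hypothetical toy model, not something from the article: for a Gaussian likelihood, Adam's second-moment estimate v approximates the diagonal Fisher information, so its sqrt(v) denominator is the per-coordinate sqrt(I(theta)) that also appears in the Jeffreys prior density p(theta) ∝ sqrt(det I(theta)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (not from the article): per-coordinate
# x ~ N(theta, sigma^2), with deliberately different scales per parameter.
sigma = np.array([0.5, 1.0, 2.0])
theta = np.zeros(3)

# Score (gradient of the log-likelihood w.r.t. theta) for each sample:
#   g = (x - theta) / sigma^2, so E[g^2] = 1/sigma^2 = diagonal Fisher info.
x = rng.normal(theta, sigma, size=(100_000, 3))
g = (x - theta) / sigma**2

# Adam's second moment v is a running average of g^2; in the long-average
# limit it estimates the diagonal Fisher information.
v = np.mean(g**2, axis=0)
fisher_diag = 1.0 / sigma**2

print(np.sqrt(v))            # Adam's denominator sqrt(v) ...
print(np.sqrt(fisher_diag))  # ... matches sqrt(I(theta)), the Jeffreys factor
```

Dividing by sqrt(v) thus rescales each coordinate by roughly sqrt(I(theta)), the same square root that makes the Jeffreys prior invariant to reparameterization.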

big-chungus4|3 months ago

>Likely, there is a method that can use the orthogonalization machinery of Muon while keeping the signal-to-noise estimation of Adam, and this optimizer will be great.

If you take SOAP and set all betas to 0, it still works well, so SOAP is already that optimizer.
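A minimal numpy sketch of what "SOAP with all betas at 0" collapses to, under my reading of the claim (not the reference SOAP implementation): the Shampoo statistics are built from the current gradient alone, and the inner Adam reduces to g / (|g| + eps), i.e. roughly sign(g), applied in the eigenbasis.

```python
import numpy as np

def soap_step_betas0(G, eps=1e-8):
    """One SOAP-like step with all betas set to 0 (a hedged sketch).

    SOAP runs Adam in the eigenbasis of the Shampoo statistics G G^T and
    G^T G. With beta1 = beta2 = 0 those statistics come from the current
    gradient alone, and the inner Adam collapses to g / (|g| + eps).
    """
    _, Q_L = np.linalg.eigh(G @ G.T)  # left eigenbasis (left singular vectors)
    _, Q_R = np.linalg.eigh(G.T @ G)  # right eigenbasis (right singular vectors)
    G_rot = Q_L.T @ G @ Q_R                   # gradient in the eigenbasis
    adam_rot = G_rot / (np.abs(G_rot) + eps)  # Adam with beta1 = beta2 = 0
    return Q_L @ adam_rot @ Q_R.T             # rotate the update back

# Demo on a gradient with known singular values: on a single step this is,
# up to eigenvector sign conventions, the orthogonal polar factor U V^T --
# the same target as Muon's orthogonalization.
U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(4, 4)))
V, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(4, 4)))
G = U @ np.diag([3.0, 2.0, 1.0, 0.5]) @ V.T
step = soap_step_betas0(G)
print(np.round(step @ step.T, 4))  # ~ identity: the update is orthogonal
```

That the single-step update comes out orthogonal is one way to see why the quoted hope ("Muon's orthogonalization plus Adam's signal-to-noise estimation") and SOAP meet here.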

big-chungus4|3 months ago

I personally think we've hit the limit; no better optimizers are likely to be developed.

big-chungus4|3 months ago

The best we can do is something like making SOAP faster, e.g. by replacing the QR step with something cheaper and maybe warm-started.
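One way to read "cheaper and warm started" is the following sketch (my interpretation, not a claim about any existing implementation): since the eigenbases drift slowly between refreshes, a single warm-started orthogonal-iteration step can track them instead of a fresh eigendecomposition, and the re-orthogonalization can use CholeskyQR, which needs only a Gram matrix, a Cholesky factorization, and a triangular solve.

```python
import numpy as np

def cholesky_qr(A):
    """CholeskyQR: the same Q as Householder QR (up to column signs), but
    computed from A^T A -- cheaper per step for tall or well-conditioned A."""
    C = np.linalg.cholesky(A.T @ A)   # lower triangular, C C^T = A^T A
    return np.linalg.solve(C, A.T).T  # Q = A C^{-T}, orthonormal columns

def refresh_eigenbasis(L, Q_prev):
    """One warm-started orthogonal-iteration step: multiply the previous
    basis by the new statistic, then re-orthogonalize cheaply. With a good
    warm start this tracks the slowly moving eigenbasis of L."""
    return cholesky_qr(L @ Q_prev)

# Demo: a fixed SPD statistic with known eigenvalues; repeated refreshes
# converge to the eigenbasis even from a cold (random) start.
rng = np.random.default_rng(0)
Q_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
L = Q_true @ np.diag([3.0, 2.0, 1.0]) @ Q_true.T
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
for _ in range(100):
    Q = refresh_eigenbasis(L, Q)
print(np.round(Q.T @ L @ Q, 4))  # ~ diag(3, 2, 1): basis has converged
```

In an optimizer loop the statistic changes slightly each refresh, so one such step per refresh amortizes the eigendecomposition; CholeskyQR's main caveat is that it loses accuracy when L @ Q_prev is badly conditioned.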

big-chungus4|3 months ago

Which PSGD did you use? Because there are apparently like a million of them.