matteopagli | 1 year ago

We're still working on training the DWA weights on top of a pretrained model, and we're hopeful that this is feasible. The experiments you mention in the appendix don't change the learning rate scheduler. E.g., when the DWA weights start training after 20k iterations, the learning rate is already quite small, which may partly explain the diminishing returns. A completely different learning rate scheduler might make this work.
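To make the scheduler point concrete, here is a minimal sketch (assuming a hypothetical cosine-decay schedule and made-up numbers, not the paper's actual setup): by iteration 20k of a 25k-step schedule the shared learning rate has nearly bottomed out, so one option is to give the newly introduced DWA weights their own schedule that restarts from step 0.

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-3, min_lr=1e-5):
    """Cosine-decay learning rate, a common pretraining schedule."""
    progress = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

total = 25_000  # hypothetical total training length

# By iteration 20k the shared schedule has mostly decayed...
lr_at_20k = cosine_lr(20_000, total)

# ...so DWA weights introduced here would train at a much smaller rate.
# A separate schedule for the DWA parameters restarts at step 0 instead:
dwa_lr_at_intro = cosine_lr(0, total)

print(f"shared-schedule LR at 20k iterations: {lr_at_20k:.2e}")
print(f"fresh-schedule LR for DWA weights:    {dwa_lr_at_intro:.2e}")
```

With these numbers the shared schedule is down to roughly a tenth of the base rate by 20k iterations, while a fresh schedule would let the DWA weights start at the full base rate.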
