top | item 39253705

eggie5 | 2 years ago

ah, so observe the reward and then take a gradient step

gwern | 2 years ago

(Well, not necessarily, which is why I framed it as training from scratch, to make it clearer that it doesn't have anything necessarily to do with SGD or HMC etc. In theory it shouldn't matter, by the likelihood principle, but in practice, taking a gradient step might not give you the same model as you would get if you trained from scratch. You'd like it to be a single gradient step because that would save you a ton of compute, but I don't know how well Bayesian NNs actually manage that. And even if that works OK in supervised problems or the simplest bandit RL, it might not work in full PSRL use-cases because DRL is so unstable.)
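To illustrate the likelihood-principle point in the simplest possible setting: with an exact conjugate posterior, an incremental update after each observed reward and a full refit from scratch over the whole history give identical posteriors, so the cheap path loses nothing. (This is a toy Beta-Bernoulli bandit sketch of my own, not from the thread; a gradient step on an approximate Bayesian NN posterior only hopes to approximate this equivalence.)

```python
import random

class BetaBernoulliArm:
    """Conjugate Beta posterior over a Bernoulli arm's reward probability."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of successes
        self.beta = beta    # pseudo-count of failures

    def update(self, reward):
        # Incremental posterior update: fold in one observation at a time
        # (the cheap analogue of "take a gradient step").
        self.alpha += reward
        self.beta += 1 - reward

    def sample(self):
        # Thompson/posterior sampling: draw a plausible reward probability.
        return random.betavariate(self.alpha, self.beta)

def refit_from_scratch(rewards, alpha0=1.0, beta0=1.0):
    # "Training from scratch": recompute the posterior from the full history.
    s = sum(rewards)
    return BetaBernoulliArm(alpha0 + s, beta0 + len(rewards) - s)

# Because the posterior is exact, only the data matter, not the update path:
arm = BetaBernoulliArm()
history = [1, 0, 1, 1, 0]
for r in history:
    arm.update(r)
batch = refit_from_scratch(history)
assert (arm.alpha, arm.beta) == (batch.alpha, batch.beta)
```

For a Bayesian NN there is no such conjugate shortcut, which is exactly the practical worry above: the incrementally updated model and the retrained-from-scratch model need not coincide.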