top | item 33233024

someguyorother | 3 years ago

>You can do MCMC like AlphaGO and see ten moves ahead.

The existence of adversarial attacks shows that most neural networks have pretty bad worst-case performance. So sticking GPT-3 into alpha-beta or MCTS could just as easily hand you an ungeneralizable optimum: optimizers, by design, seek out extreme responses, and a search procedure will happily exploit the evaluator's worst-case errors. Call it a Campbell's law for neural nets.
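A toy sketch of the mechanism (my illustration, not anything from AlphaZero or GPT-3): if the evaluator's scores are noisy, picking the argmax over many candidate moves systematically selects for the evaluator's errors, so the chosen move looks better than it really is — the winner's curse that search amplifies.

```python
import random

random.seed(0)

def noisy_eval(true_value):
    # toy stand-in for a neural evaluator: true value plus model error
    return true_value + random.gauss(0.0, 1.0)

def search_pick(true_values):
    # "search": pick the move with the highest *estimated* value
    scores = [noisy_eval(v) for v in true_values]
    best = max(range(len(true_values)), key=lambda i: scores[i])
    return best, scores[best]

# Average over many positions where all 50 moves are equally good in truth.
# The selected move's estimated value overstates its true value, because
# argmax over noisy scores preferentially finds the largest errors.
trials = 2000
gap = 0.0
for _ in range(trials):
    true_values = [0.0] * 50
    i, est = search_pick(true_values)
    gap += est - true_values[i]
gap /= trials
print(f"mean optimism of the chosen move: {gap:.2f}")  # well above 0
```

Deeper or wider search makes this worse, not better, unless the evaluator was trained against the same search (as AlphaZero's was).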

The actual AlphaZero nets are probably more robust precisely because they were themselves trained against MCTS, though even they don't generalize especially well out of distribution: IIRC AlphaZero is not a very strong Fischer Random player.
