top | item 41208885

adroniser | 1 year ago

Isn't RL the algorithm we want basically?

HarHarVeryFunny | 1 year ago

Want for what?

RL is one way to implement goal-directed behavior (making decisions now that will hopefully lead to a later reward), but I doubt this is the actual mechanism at play when we exhibit goal-directed behavior ourselves. Something more RL-like may be at work in our cerebellum (not cortex) for learning fine motor skills.

Some of the things clearly needed for human-like AGI are the ability to learn incrementally and continuously (the main ways we learn are by trial and error, and by copying), as opposed to pre-training with SGD; working memory; the ability to think to arbitrary depth before acting; innate qualities like curiosity and boredom to drive learning and exploration; etc.

The Transformer architecture underlying all of today's LLMs has none of the above, which isn't surprising since it was never intended as a cognitive architecture - it was designed for seq2seq uses such as language modeling.

So, no, I don't think RL is the answer to AGI. Note that DeepMind, who had previously believed that, have since largely switched to LLMs in their pursuit of AGI, and now mostly use RL as part of more specialized machine learning applications such as AlphaGo and AlphaFold.

adroniser | 1 year ago

But RL algorithms do implement things like curiosity to drive exploration - see Random Network Distillation: https://arxiv.org/pdf/1810.12894.
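A minimal sketch of the idea in that paper (Random Network Distillation) - all network sizes and constants here are invented for illustration: the "curiosity" bonus is the error of a small predictor network trying to match a fixed, randomly initialized target network, so novel observations score high and familiar ones fade:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, FEAT = 8, 4                              # invented sizes for the demo

W_target = rng.normal(size=(OBS, FEAT))       # fixed random target net (never trained)
W_pred = np.zeros((OBS, FEAT))                # predictor, trained online

def intrinsic_reward(obs, lr=0.05):
    """Curiosity bonus = predictor's error on this observation; then train."""
    global W_pred
    target = obs @ W_target                   # features of the random target
    err = obs @ W_pred - target
    bonus = float(np.mean(err ** 2))          # high error => novel observation
    W_pred -= lr * np.outer(obs, err)         # one squared-error gradient step
    return bonus

obs = rng.normal(size=OBS)
first = intrinsic_reward(obs)
for _ in range(200):                          # keep revisiting the same state
    last = intrinsic_reward(obs)
print(first > last)                           # familiar states earn less bonus
```

The real method uses deep networks and adds the bonus to the environment reward, but the mechanism is the same: novelty is measured as prediction error against a frozen random function.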

Thinking to arbitrary depth sounds like Monte Carlo tree search, which is often used in conjunction with RL. And working memory, I think, is a matter of the architecture you use in conjunction with RL - agreed that transformers aren't very helpful for this.
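To make the MCTS point concrete, here's a toy sketch (the game, the hidden "best" answer, and all constants are made up for illustration): repeated simulations grow a search tree via UCT selection, random rollouts, and backpropagated rewards, and running more simulations lets the search "think" deeper before committing to a move:

```python
import math, random

random.seed(0)
DEPTH, BEST = 5, (1, 0, 1, 1, 0)              # hidden optimal move sequence

class Node:
    def __init__(self, path=()):
        self.path, self.children = path, {}
        self.visits, self.value = 0, 0.0

def rollout(path):
    """Play random moves to the end; reward = fraction of moves matching BEST."""
    while len(path) < DEPTH:
        path += (random.choice((0, 1)),)
    return sum(a == b for a, b in zip(path, BEST)) / DEPTH

def select(node, c=1.4):
    """UCT: trade off a child's average reward against how rarely it's tried."""
    log_n = math.log(node.visits)
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits + c * math.sqrt(log_n / ch.visits))

def mcts(root, n_sims=2000):
    for _ in range(n_sims):
        node, trail = root, [root]
        while len(node.path) < DEPTH and len(node.children) == 2:
            node = select(node)                # descend fully expanded nodes
            trail.append(node)
        if len(node.path) < DEPTH:             # expand one untried move
            move = len(node.children)
            child = Node(node.path + (move,))
            node.children[move] = child
            node = child
            trail.append(node)
        reward = rollout(node.path)
        for n in trail:                        # backpropagate the result
            n.visits += 1
            n.value += reward
    return max(root.children, key=lambda m: root.children[m].visits)

best_move = mcts(Node())
print(best_move)                               # should match BEST[0]
```

The search budget (`n_sims`) is the "thinking depth" knob: unlike a fixed forward pass, you can keep simulating until you're confident.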

I think what you call 'trial and error' is what I intuitively think of RL as doing.

AlphaProof runs an RL algorithm during training AND at inference time. When given an olympiad problem, it generates many variations on that problem, tries to solve them, and then uses RL to effectively fine-tune itself on the particular problem currently being solved. Note again that this process happens at inference time, not just during training.
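That loop can be sketched as a toy (the tactic names, success rates, and multiplicative update rule here are all invented - AlphaProof's actual method updates LLM weights with a far more sophisticated RL objective): generate variations of the given problem, attempt them, and reinforce whatever worked, effectively fine-tuning the policy on this one problem at inference time:

```python
import random

random.seed(0)
STRATEGIES = ["induction", "contradiction", "construction"]   # invented tactics
weights = {s: 1.0 for s in STRATEGIES}        # toy "policy" over proof tactics

def attempt(variant, strategy):
    """Stand-in for running a prover; 'construction' works most often here."""
    success_rate = {"induction": 0.2, "contradiction": 0.3, "construction": 0.8}
    return random.random() < success_rate[strategy]

def solve_with_inference_time_rl(problem, rounds=300):
    """Inference-time RL loop: vary the problem, attempt, reinforce successes."""
    for _ in range(rounds):
        variant = f"{problem} (variation {random.randint(0, 999)})"
        strategy = random.choices(STRATEGIES,
                                  [weights[s] for s in STRATEGIES])[0]
        if attempt(variant, strategy):
            weights[strategy] *= 1.05          # reinforce what succeeded
        else:
            weights[strategy] *= 0.97          # penalize what failed
    return max(weights, key=weights.get)       # best tactic after self-tuning

best = solve_with_inference_time_rl("olympiad problem")
print(best)
```

With these toy success rates the loop should settle on "construction" - the point being that the policy adapts to the specific problem in front of it, not just during pre-training.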

And AlphaProof uses an LLM to generate the Lean proofs, and uses RL to train that LLM. So it kinda strikes me as a type error to say that DeepMind have somehow abandoned RL in favour of LLMs? Note this Demis tweet https://x.com/demishassabis/status/1816596568398545149 where he seems to be saying that they are going to combine some of this RL work with the main Gemini models.