(no title)
adroniser | 1 year ago
Thinking to arbitrary depth sounds like Monte Carlo tree search, which is often implemented in conjunction with RL. And working memory, I think, is a matter of the architecture you use in conjunction with RL; agreed that transformers aren't very helpful for this.
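For concreteness, here's a minimal UCT-style MCTS sketch in Python. It's not any particular system's implementation; step, actions, and rollout are hypothetical stand-ins for an environment model and a value estimate, and the rollout/value function is exactly where an RL-trained network would plug in:

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = {}           # action -> Node
            self.visits, self.value = 0, 0.0

    def uct_score(parent, child, c=1.4):
        # Upper-confidence bound: exploit high average value,
        # explore rarely visited children.
        return (child.value / (child.visits + 1e-9)
                + c * math.sqrt(math.log(parent.visits + 1)
                                / (child.visits + 1e-9)))

    def search(root, actions, step, rollout, n_sims=200):
        for _ in range(n_sims):
            node = root
            # Selection: descend via UCT while fully expanded.
            while node.children and len(node.children) == len(actions(node.state)):
                node = max(node.children.values(),
                           key=lambda ch: uct_score(node, ch))
            # Expansion: try one untried action.
            untried = [a for a in actions(node.state) if a not in node.children]
            if untried:
                a = random.choice(untried)
                next_state, _ = step(node.state, a)
                child = Node(next_state, parent=node)
                node.children[a] = child
                node = child
            # Evaluation + backup: propagate the estimated value to the root.
            v = rollout(node.state)
            while node:
                node.visits += 1
                node.value += v
                node = node.parent
        return max(root.children, key=lambda a: root.children[a].visits)

The search can be run to whatever depth/budget you like, which is the "thinking to arbitrary depth" part.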
I think what you call 'trial and error' is what I intuitively think of RL as doing.
AlphaProof runs an RL algorithm during training, AND at inference time. When given an olympiad problem, it generates many variations on that problem, tries to solve them, and then uses RL to effectively finetune itself on the particular problem currently being solved. Note again that this process happens at inference time, not just during training.
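DeepMind haven't released AlphaProof's code, so purely as a sketch of the loop described above (generate_variants, attempt_proof, and finetune are hypothetical stand-ins, not real APIs):

    # Rough sketch of inference-time RL, not DeepMind's actual implementation.
    def solve_at_inference_time(model, problem, rounds=10):
        for _ in range(rounds):
            variants = generate_variants(model, problem)  # easier/related statements
            experience = []
            for v in variants:
                proof = attempt_proof(model, v)           # model writes Lean; Lean checks it
                reward = 1.0 if proof.verified else 0.0   # the verifier is the RL signal
                experience.append((v, proof, reward))
            model = finetune(model, experience)           # RL update on this problem family
        return attempt_proof(model, problem)

The key point is that the Lean verifier gives a ground-truth reward, so the RL update at inference time can't be fooled by plausible-looking but wrong proofs.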
And AlphaProof uses an LLM to generate the Lean proofs, and uses RL to train this LLM. So it kinda strikes me as a type error to say that DeepMind have somehow abandoned RL in favour of LLMs? Note this Demis tweet https://x.com/demishassabis/status/1816596568398545149 where he seems to be saying that they are going to combine some of this RL stuff with the main Gemini models.
HarHarVeryFunny | 1 year ago
I hadn't read that paper, but yes, using prediction failure as a learning signal (and attention mechanism), same as we do, is what I had in mind. It seems that to be useful it needs to be combined with an online learning ability, so that having explored, one's predictions will be better next time.
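Something like this toy PyTorch sketch, in the spirit of curiosity-driven exploration rather than any specific paper (all names and dimensions are illustrative):

    import torch

    obs_dim, act_dim = 32, 4                      # toy dimensions
    forward_model = torch.nn.Linear(obs_dim + act_dim, obs_dim)
    opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

    def surprise_and_update(obs, act, next_obs):
        # Prediction failure is the learning signal...
        pred = forward_model(torch.cat([obs, act]))
        error = torch.mean((pred - next_obs) ** 2)
        # ...and the online update is what makes the same transition
        # less surprising the next time it's encountered.
        opt.zero_grad()
        error.backward()
        opt.step()
        return error.item()  # hand this to the RL agent as an exploration bonus

Without the online update step, the agent keeps being "surprised" by things it has already explored, which is the gap I'm pointing at in frozen pre-trained models.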
It's easy to imagine LLMs being extended in all sorts of ad-hoc ways, including external prompting/scaffolding such as "think step by step" and tree search, which help mitigate some of the architectural shortcomings, but I think online learning is going to be tough to add in this way, and it also seems that using the model's own output as a substitute for working memory isn't sufficient to support long-term focus and reasoning. You can try to script intelligence by putting the long-term focus and tree search into an agent, but I think that will only get you so far.

At the end of the day a pre-trained transformer really is just a fancy sentence-completion engine, and while it's informative how much "reactive intelligence" emerges from this type of frozen prediction, it seems the architecture has been stretched about as far as it will go.
I wasn't saying that DeepMind have abandoned RL in favor of LLMs, just that they are using RL in narrower applications than AGI. David Silver, at least, still seems to think that "Reward is enough" [for AGI], as of a few years ago, although I think most people disagree.
adroniser | 1 year ago
To be clear, I'm not claiming that you take an LLM, do some RL on it, and suddenly it can do particular tasks. I'm saying that if you train it from scratch using RL it will be able to do certain well-defined formal tasks.
Idk what you mean about the online learning ability tbh. The paper uses it in exactly the way you specify: it uses RL to play Montezuma's Revenge and gets better on the fly.
That's similar to my point about the inference-time RL ability of the AlphaProof LLM. It's why I emphasized that RL is done at inference time: each proof it does is used to make itself better for the next one.
I think you are taking LLM to mean GPT-style models, while I am taking LLM to mean transformers that output text, which can be trained to do any variety of things.