
Reinforcement Learning as a fine-tuning paradigm

22 points | ankeshanand | 4 years ago | ankeshanand.com

7 comments


visarga|4 years ago

The fruits of massive language modeling are coming to RL. I envision such foundation models becoming cheap and standardized, like an AI operating system. If we could have a cheap, compact, multi-modal GPT-3 chip, we could run all sorts of agents on top of it. These RL agents would be like the skill libraries in The Matrix: you can load any skill you want onto the player.

YetAnotherNick|4 years ago

In India without VPN:

"The website has been blocked as per order of Ministry of Electronics and Information Technology under IT Act, 2000."

ankeshanand|4 years ago

Looks like any GitHub Pages site served through Cloudflare is getting blocked; I am trying out a fix.

armanboyaci|4 years ago

> Other learning paradigms are about minimization; reinforcement learning is about maximization.

I don't see why this is important.

Matumio|4 years ago

I think they wanted to express that learning to predict the correct output ("error minimization") puts a ceiling on the achievable performance, while ranking-based objectives (not just RL, really) make it possible to improve beyond the current best-known answer.
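A toy sketch of that distinction, on a hypothetical three-armed bandit (none of this setup is from the article): imitation learning minimizes error against demonstrations and so tops out at the demonstrator's reward, while a REINFORCE-style objective maximizes reward directly and can discover a better arm than the one demonstrated.

```python
import math
import random

random.seed(0)

# Hypothetical 3-armed bandit. The "demonstrations" always pick
# arm 1 (reward 0.5), but arm 2 (reward 1.0) is actually best.
REWARDS = [0.1, 0.5, 1.0]
DEMO_ARM = 1

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_reward(logits):
    return sum(p * r for p, r in zip(softmax(logits), REWARDS))

def imitate(steps=2000, lr=0.5):
    """Error minimization: cross-entropy against the demonstrated arm.
    Converges to the demonstrator's policy, so reward is capped near 0.5."""
    logits = [0.0, 0.0, 0.0]
    for _ in range(steps):
        p = softmax(logits)
        for a in range(3):  # grad of CE wrt logits: p - onehot(DEMO_ARM)
            logits[a] -= lr * (p[a] - (1.0 if a == DEMO_ARM else 0.0))
    return logits

def reinforce(steps=5000, lr=0.1):
    """Reward maximization: REINFORCE with a moving baseline.
    Can exceed the demonstrator by discovering arm 2."""
    logits = [0.0, 0.0, 0.0]
    baseline = 0.0
    for _ in range(steps):
        p = softmax(logits)
        a = random.choices(range(3), weights=p)[0]
        r = REWARDS[a]
        baseline += 0.01 * (r - baseline)
        adv = r - baseline
        for i in range(3):  # grad of log pi(a) wrt logits: onehot(a) - p
            logits[i] += lr * adv * ((1.0 if i == a else 0.0) - p[i])
    return logits
```

The imitation policy's expected reward is pinned to the demonstrator's (about 0.5), while the REINFORCE policy can climb toward 1.0, which is the "no performance ceiling" point in concrete form.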

patresh|4 years ago

Also, the next point

> It should have (and has shown to have) better scaling laws

is based on two anecdotes, and I don't see a compelling reason why it should hold in general.

Active learning approaches are not mentioned, even though they allow incorporating human feedback during the fine-tuning process, and this can be done with a purely supervised approach.
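As a sketch of what that could look like (a hypothetical 1-D task with a stand-in "human" oracle; nothing here comes from the article): an uncertainty-sampling loop asks the human to label whichever example the current model is least sure about, then retrains with ordinary supervised updates, no RL involved.

```python
import math
import random

random.seed(0)

# Hypothetical pool of unlabeled 1-D inputs; oracle() stands in for a
# human labeler (here the true label is simply the sign of x).
pool = [random.uniform(-1.0, 1.0) for _ in range(200)]
oracle = lambda x: 1 if x >= 0 else 0

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # logistic model

def train(data, epochs=200, lr=0.5):
    """Plain supervised logistic regression on the labeled set."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = predict(w, b, x)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Seed with two labeled examples, then the active-learning loop:
# query the "human" on the pooled input whose predicted probability
# is closest to 0.5 (maximum uncertainty), and retrain.
labeled = [(x, oracle(x)) for x in (pool.pop(), pool.pop())]
for _ in range(10):
    w, b = train(labeled)
    x = min(pool, key=lambda u: abs(predict(w, b, u) - 0.5))
    pool.remove(x)
    labeled.append((x, oracle(x)))

w, b = train(labeled)
accuracy = sum(
    (predict(w, b, x) >= 0.5) == (oracle(x) == 1) for x in pool
) / len(pool)
```

The human feedback enters only as labels on queried examples, and every update is a standard supervised step, which is the contrast with RL-based fine-tuning being drawn here.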

IMO the last point is the only compelling one: having, for example, agents that can browse the web during learning could open up a lot of possibilities. It would have been interesting to develop this point further: what are the current difficulties in training such agents?

ankeshanand|4 years ago

It's important because RL does not have a performance ceiling: you are maximizing reward rather than minimizing error against fixed targets, so the policy can keep improving beyond the best answer in the training data.