(no title)
Jack000 | 2 years ago
An LLM could play chess though, all it needs is grounding (by feeding it the current board state) and agency (RF to reward the model for winning games)
famouswaffles|2 years ago
No, it is playing games. And if you're not at about the level I spoke of, you will lose repeatedly over whatever number or stretch of games you imagine.
https://github.com/adamkarvonen/chess_gpt_eval
3.5 Instruct (a different model from regular 3.5, which can't play) can play chess. There's no trick; any other framing seems like a meaningless distinction.
The goal is to model the chess games and there's no better way to do that than to learn to play the game.
>all it needs is grounding (by feeding it the current board state)
The model is already constructing a board state internally to play the game.
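The point about internal board state can be made concrete: the full position is a pure function of the move sequence, so a model trained on move lists has everything it needs to reconstruct the board. A minimal sketch below replays moves given in UCI form onto an 8x8 array (the move format and helper names are illustrative; legality checking, castling, en passant, and promotion are all ignored):

```python
# Starting position, row 0 = rank 8 (Black's back rank).
START = [
    list("rnbqkbnr"), list("pppppppp"),
    *[list("........") for _ in range(4)],
    list("PPPPPPPP"), list("RNBQKBNR"),
]

def square(name):
    # "e2" -> (row, col) with row 0 = rank 8
    col = ord(name[0]) - ord("a")
    row = 8 - int(name[1])
    return row, col

def apply_uci(board, move):
    # Apply a UCI-style move like "e2e4" (no legality check in this sketch).
    fr, fc = square(move[:2])
    tr, tc = square(move[2:4])
    board[tr][tc] = board[fr][fc]
    board[fr][fc] = "."

def replay(moves):
    # The board state is fully determined by the move sequence alone.
    board = [row[:] for row in START]
    for m in moves:
        apply_uci(board, m)
    return board

board = replay(["e2e4", "e7e5", "g1f3"])
```

This is essentially what the Othello probing work linked above found the network doing implicitly: recovering a board representation from nothing but the move stream.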
https://www.neelnanda.io/mechanistic-interpretability/othell...
>agency (RF to reward the model for winning games)
Next-token prediction loss already rewards the model for winning when the side it is predicting wins.
And when the preceding text says side x wins and the model is playing as side x, the loss rewards it for doing everything it can to win.
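The conditioning argument above can be sketched concretely. In standard PGN, the Result tag precedes the move text, so when a game is serialized for training, the outcome is visible before any move token; next-token loss on White's moves in a "1-0" game is loss on moves played by the winner. A minimal illustration (the helper name is hypothetical, but the header-before-moves layout is standard PGN):

```python
def make_training_text(result, move_lines):
    # PGN puts tag pairs (including Result) before the movetext,
    # so the outcome conditions every move token that follows.
    header = f'[Result "{result}"]'
    movetext = " ".join(move_lines) + " " + result
    return header + "\n\n" + movetext

text = make_training_text("1-0", ["1. e4 e5", "2. Nf3 Nc6"])
```

Because the result appears first, predicting the moves of the side marked as the winner is, in effect, predicting winning play conditioned on victory.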
I agree that different goals and primary rewards led to this ability to play, and with it some slight manifestations (GPT can probably modulate its level of play better than any other machine or human), but it is nonetheless playing.