top | item 44215753

no_op | 8 months ago

The material finding of this paper is that reasoning models are better than non-reasoning models at solving puzzles of intermediate complexity (where that's defined, essentially, by how many steps are required), but that performance collapses past a certain threshold. This threshold differs for different puzzle types. It occurs even if a model is explicitly supplied with an algorithm it can use to solve the puzzle, and it's not a consequence of limited context window size.

The authors speculate that this pattern is a consequence of reasoning models actually solving these puzzles by way of pattern-matching to training data, which covers some puzzles at greater depth than others.

Great. That's one possible explanation. How might you support it?

- You could systematically examine the training data, to see if less representation of a puzzle type there reliably correlates with worse LLM performance.

- You could test how successfully LLMs can play novel games that have no representation in the training data, given instructions.

- Ultimately, using mechanistic interpretability techniques, you could look at what's actually going on inside a reasoning model.

This paper, however, doesn't attempt any of these. People are getting way out ahead of the evidence in accepting its speculation as fact.

somethingsome | 8 months ago

While I agree overall, LLMs do pattern-match, just in a very complicated way.

You transform your training data into a very strange, high-dimensional space. Then, when you write an input, you compute the distance between that input and the closest point in that space.

So, in some sense, you pattern-match your input against the training data. Of course, in a way that's very non-intuitive for humans.
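The analogy above can be sketched as a toy nearest-neighbor lookup: "training data" as points in a high-dimensional space, and an input matched to its closest point by cosine distance. The embeddings, dimensions, and random data here are made up purely for illustration; real LLMs do not literally do a nearest-neighbor search, this just makes the geometric intuition concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 hypothetical "training examples", each embedded as a 512-dim vector.
training_points = rng.normal(size=(1000, 512))

def nearest_neighbor(query, points):
    """Return the index of the closest point and its cosine distance."""
    q = query / np.linalg.norm(query)
    p = points / np.linalg.norm(points, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarities, shape (1000,)
    idx = int(np.argmax(sims))
    return idx, 1.0 - sims[idx]       # cosine distance = 1 - similarity

# A "new" input still lands *somewhere* in the space and gets matched.
query = rng.normal(size=512)
idx, dist = nearest_neighbor(query, training_points)
print(idx, dist)
```

Even a random query that was never in the data gets a nearest match, which is the commenter's point: a match in that space need not look related at all to a human.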

Now, that doesn't necessarily imply things like 'models cannot solve problems they haven't seen before'. We don't know whether our problem gets matched to something that looks completely unrelated to us but makes sense in that space.

So with your experiments, if the model is able to solve a new puzzle it has never seen before, you'll never know why; but that doesn't imply the new puzzle wasn't matched, in some sense, to some previous data in the dataset.