top | item 43202645

loki49152 | 1 year ago

LLMs don't do formal reasoning. Not in any sense. They don't do any kind of reasoning: they replay recombinations of the reasoning that was encoded in their training data, "finding" patterns in the relationships between tokens at different scales and then applying those patterns to generate output triggered by the input.

Tainnor | 1 year ago

That's irrelevant. They have an "input tape" and an "output tape", and whatever goes on inside the LLM could in principle be implemented in a TM (of course that wouldn't be efficient, but that's beside the point).
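The input-tape/output-tape framing above can be sketched in a few lines. This is a toy stand-in, not a real LLM: `toy_next_token` (a hypothetical name) is just a fixed lookup table playing the role of the forward pass. The point is only that autoregressive generation is a deterministic function from an input token sequence to an output token sequence, and any such function over finite contexts is the kind of thing a Turing machine could compute.

```python
# Toy sketch: an autoregressive model viewed as a tape-to-tape function.
# The "model" here is a fixed lookup table, not a real LLM; the point is
# that generation is just repeated application of a next-token function,
# which is in principle TM-computable.

def toy_next_token(context: tuple) -> str:
    # Hypothetical stand-in for a model's forward pass: any function
    # from a bounded context window (here, the last 2 tokens) to a token.
    table = {
        (): "the",
        ("the",): "cat",
        ("the", "cat"): "sat",
    }
    return table.get(context[-2:], "<eos>")

def generate(input_tape: list, max_steps: int = 10) -> list:
    # Read the "input tape", repeatedly apply the next-token function,
    # and collect what gets written to the "output tape".
    tape = list(input_tape)
    output = []
    for _ in range(max_steps):
        tok = toy_next_token(tuple(tape))
        if tok == "<eos>":
            break
        tape.append(tok)
        output.append(tok)
    return output

print(generate([]))  # → ['the', 'cat', 'sat']
```

Whether the inner function is a lookup table or billions of matrix multiplications doesn't change the computability argument, only the efficiency, which is the parenthetical caveat above.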