top | item 37476263

haimez | 2 years ago

LLMs are spitting out responses based on their inputs. It is (or was) shockingly effective, but there is no generalized math processing going on. That’s not what LLMs are; that’s not how they work.

dekhn | 2 years ago

And yet, trained on a large corpus of correct math statements, they produce responses that are more often right than wrong (I am taking this as true; it might not be), which simply raises more questions about the nature of math.

haimez | 2 years ago

…or the nature of the question and corpus?