LLMs spit out responses based on their inputs. It is (or was) shockingly effective, but there is no generalized math processing going on. That's not what LLMs are; that's not how they work.
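To make that concrete, here is a minimal sketch of what an LLM does at inference time: repeated next-token prediction. It assumes the Hugging Face transformers library and the public GPT-2 weights; the prompt and the loop length are arbitrary illustrative choices, not anything specific to the models discussed here.

    # Minimal sketch of greedy autoregressive decoding, assuming the
    # Hugging Face `transformers` library and public GPT-2 weights
    # (illustrative assumptions, not the poster's setup).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("2 + 2 =", return_tensors="pt")
    with torch.no_grad():
        for _ in range(4):
            logits = model(ids).logits          # scores over the vocabulary
            next_id = logits[0, -1].argmax()    # greedy: pick the likeliest token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))
    # Whatever prints was produced by repeated next-token prediction;
    # there is no arithmetic engine evaluating "2 + 2".

The answer, right or wrong, comes from whichever continuation was most common in the training text, which is exactly the point: no generalized math processing is involved.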
And yet, trained on a large corpus of correct math statements, they produce responses that are more often right than wrong (I am taking this as true; it might not be), which simply raises more questions about the nature of math.
dekhn|2 years ago
haimez|2 years ago