tmhn2 | 3 months ago
[1] https://mathstodon.xyz/@tao/115420236285085121 [2] https://xcancel.com/wtgowers/status/1984340182351634571
dns_snek | 3 months ago
[1] That does not mean that they can never produce text that describes a valid reasoning process; it means that they can't do so reliably. Sometimes their output can be genius, and other times you're left questioning whether they even have the reasoning skills of a 1st grader.
chimprich | 3 months ago
Humans sometimes make mistakes in reasoning, too; sometimes they reach conclusions that leave me completely bewildered (like somehow reasoning that the Earth is flat).
I think we can all agree that humans are significantly better and more consistent at reasoning than even the best LLMs, but the argument that LLMs cannot reliably reason doesn't seem to match the evidence.