top | item 40540416

aogaili | 1 year ago

I use LLMs daily for coding, and it is very clear to me (as a humble average thinking machine) that they are an approximation of reasoning, and a very close one. But the mistakes they make show that the system does not really understand; they are not very different from the mistakes made by someone who has merely memorized text. Humans, when they really think, think differently. That is why I would never expect the current LLM architecture to come up with something like special relativity or any other genuinely novel idea: it simply does not reason the way deep thinkers and philosophers do. However, most knowledge work does not require that much depth of reasoning, hence the wide adoption of LLMs.
