bloaf | 2 months ago
The contrapositive of "All LLMs are not thinking like humans" is "No humans are thinking like LLMs."
And I do not believe we understand human thinking well enough to make that assertion.
Indeed, it is my deep suspicion that we will eventually achieve AGI not by totally abandoning today's LLMs for some other paradigm, but rather by embedding them in a loop with the right persistence mechanisms.
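The "loop with persistence" idea can be sketched minimally as an agent whose conversation history survives process restarts. Everything here is a placeholder: `llm()` stands in for any real completion API, and `agent_memory.json` is a hypothetical storage location, not a reference to any actual system.

```python
import json
from pathlib import Path

# Hypothetical stand-in for a real LLM call; any completion API could go here.
def llm(prompt: str) -> str:
    return f"(model response to: {prompt[-40:]})"

MEMORY_FILE = Path("agent_memory.json")

def load_memory() -> list:
    # Persistence: prior turns are reloaded across restarts,
    # which is what gives the loop continuity the bare model lacks.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def step(memory: list, observation: str) -> str:
    # The loop body: fold persisted context plus the new
    # observation into the prompt, then persist the exchange.
    prompt = "\n".join(memory + [observation])
    response = llm(prompt)
    memory.extend([observation, response])
    MEMORY_FILE.write_text(json.dumps(memory))
    return response

memory = load_memory()
print(step(memory, "What should I do next?"))
```

The design choice the comment gestures at is exactly this separation: the model itself stays stateless, while the surrounding loop supplies memory and continuity.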