sergiomattei | 9 days ago
Skimming through the conclusions and results, the authors conclude that LLMs exhibit failures across many axes we'd consider demonstrative of AGI: moral reasoning, simple tasks like counting that a toddler can do, and so on. They're just not human, and you can reasonably hypothesize that most of these failures stem from their nature as next-token predictors that happen to usually do what you want (a toy sketch of the counting case below).
So. If you've got OpenClaw running and think you've got Jarvis from Iron Man, this is probably a good read to ground yourself.
Note that the authors also maintain a GitHub repo compiling these failures: https://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failur...
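To make the counting example concrete, here's a minimal sketch (assuming the tiktoken tokenizer library; the exact token split varies by vocabulary, so treat the output as illustrative) of why character counting is awkward for a next-token predictor: the model consumes subword token ids, never individual letters.

    # Toy illustration: a BPE tokenizer hides individual characters.
    # Assumes the tiktoken package (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era BPE vocabulary

    word = "strawberry"
    token_ids = enc.encode(word)
    pieces = [enc.decode([tid]) for tid in token_ids]

    # The model "sees" a few opaque subword ids, not ten letters,
    # so a question like "how many r's?" requires reasoning the
    # training objective never directly rewards.
    print(pieces)
    print(f"{len(word)} characters across {len(token_ids)} tokens")

The point isn't that tokenization explains every failure in the paper, just that it's one concrete way the next-token setup diverges from how a human sees the task.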
mettamage | 9 days ago
An LLM is more akin to interacting with a quirky human who has anterograde amnesia: it can't form new long-term memories, so it can only follow you within a long-ish conversation.
LiamPowell | 9 days ago
I'm not arguing that LLMs are human here, just that your reasoning doesn't make sense.
otabdeveloper4 | 9 days ago
They're sold as AGI by the cloud providers, and the whole stock-market scam will collapse if normies are allowed to peek behind the curtain.
throw310822 | 9 days ago
Which LLMs? There are tons of them, and more powerful ones appear every month.
lostmsu | 9 days ago
Specifically, the idea that LLMs fail to solve some tasks correctly due to fundamental limitations, on tasks where humans also periodically fail, may well be an instance of the fundamental attribution error.