Wonderfall | 1 year ago
The conclusion states "Language and thought are not purely autoregressive in humans".
Which doesn't mean humans don't have autoregressive components in their thinking. At least that's my opinion; I wouldn't make such a bold statement myself, and I don't know enough to say either way.
> Like a lot of my coworkers analyzing a production bug? I would agree if the statement were that LLMs were underpowered compared to a human brain today
Clearly not in the same way, and that's what I was trying to explain with regard to the hallucination issue too. Humans also learn from proofs, can apply frameworks, etc.; there's no denying that. But the internal process of an LLM remains pattern matching and sequential prediction, whereas there's more to a human's thinking process.
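To be concrete about what I mean by "sequential prediction": generation is just one loop that picks the next token from the context so far. Here's a minimal toy sketch (the names like `toy_next_token` are made up for illustration, not any real library's API, and the "model" is a trivial stand-in):

```python
# Toy sketch of autoregressive generation: each token depends only
# on the tokens produced so far, chosen by one repeated prediction step.
def toy_next_token(context):
    # Stand-in for a learned model: a trivial deterministic rule
    # over the context length (a real model would score the whole context).
    vocab = ["the", "cat", "sat"]
    return vocab[len(context) % len(vocab)]

def generate(prompt, n_tokens):
    tokens = list(prompt)
    for _ in range(n_tokens):
        # The entire "thinking" process is this loop:
        # predict next token, append it, repeat.
        tokens.append(toy_next_token(tokens))
    return tokens

print(generate(["a"], 4))  # ['a', 'cat', 'sat', 'the', 'cat']
```

Everything the model "does" happens inside that single predict-append loop, which is the point of contrast with human thinking.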
LLMs are underpowered in some aspects that can't be replicated with autoregressive modeling alone, but are already stronger in others. That's what I think.
> but I'm not seeing evidence that humans do reasoning in a way that can't be correctly modeled.
Me neither; that's not my stance, and I'm actually optimistic about it. I just don't think we should be satisfied with autoregressive modeling alone if the ambition is to reach or comprehend human-level intelligence.