jaehong747 | 11 months ago

Modern transformer-based language models fundamentally lack structures and functions for "thinking ahead," and I don't believe LLMs have emergently developed human-like thinking abilities. The appearance of planning is a byproduct of better language modeling: to generate coherent longer sentences, the probabilities of plausible future tokens are effectively folded into the probability distribution over the next token. Humans have a similar experience; everyone has thought about what to say next while speaking. In a language model, though, this happens mechanically and statistically. My point is that while the behavior may look like a human thought process, I'm concerned about the anthropomorphic error of concluding that the machine has consciousness or thoughts.
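To make that concrete, here is a minimal sketch in Python with made-up numbers (the toy continuations and probabilities are purely illustrative, not taken from any model). It shows how "lookahead" can fall out of plain next-token marginalization, with no planning machinery at all: since P(next | context) is the sum over futures of P(next, future | context), a next token that leads to high-probability continuations inherits their mass.

    # Hypothetical joint probabilities over two-token continuations of
    # the context "the capital of France". Illustrative numbers only.
    joint = {
        ("is", "Paris"):    0.55,
        ("is", "Lyon"):     0.05,
        ("was", "Paris"):   0.25,
        ("has", "museums"): 0.15,
    }

    # Marginalize out the second token to get the next-token distribution:
    # P(next | context) = sum over futures of P(next, future | context).
    next_token = {}
    for (first, _future), p in joint.items():
        next_token[first] = next_token.get(first, 0.0) + p

    print(next_token)  # 'is' ~0.60, 'was' 0.25, 'has' 0.15
    # "is" dominates largely because of the future behind it ("Paris"),
    # even though nothing here ever explicitly "planned" to say Paris.

So a model trained only to predict the next token can still look as if it chose a word "because of" what it intends to say later; the futures are baked into the marginal, not deliberated over.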
