jaehong747 | 11 months ago

Like you, I find the paper's findings interesting. I'm not arguing that LLMs lack the ability to "think" (in a mechanical sense); rather, I'm concerned that by choosing the word "thinking", the paper may lead people to anthropomorphize LLMs in ways they shouldn't.

I believe this phenomenon occurs because high-performance LLMs already encode probability distributions over future tokens in their networks: the output values of certain neurons (activation functions) rise as the model computes the probability distribution over its vocabulary for the next, or later, output tokens.
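
To make "the probability distribution over the vocabulary for the next token" concrete, here is a minimal sketch (my own illustration, not from the paper) using the Hugging Face transformers library; the gpt2 model and the prompt are just placeholder choices:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative small model; any causal LM would work the same way.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tok(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

    # The logits at the last position define the distribution over the *next* token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top_p, top_id = probs.topk(5)
    for p, i in zip(top_p, top_id):
        print(f"{tok.decode(int(i))!r}: {p.item():.3f}")

The model's "expectation" about future text lives entirely in distributions like this one, produced at every step of next-token prediction.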
