top | item 46188915

kaluga | 2 months ago

A lot of the confusion comes from forcing LLMs into metaphors that don't quite fit: either "they're bags of words" or "they're proto-minds." The reality sits in between. Large-scale prediction can look useful, insightful, and even thoughtful without being any of those things internally. Understanding that middle ground is more productive than arguing about labels.
