aklein | 1 month ago
“LLMs only predict what a human would say, rather than predicting the actual consequences of an action or engaging with the real world. This is the core deficiency: intelligence requires not just mimicking patterns, but acting, observing real outcomes, and adjusting behavior based on those outcomes — a cycle Sutton sees as central to reinforcement learning.” [1]
An LLM itself is a form of crystallized intelligence, but it does not learn or adapt without a human driver, and to me that capacity is a key component of intelligent behavior.
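The act-observe-adjust cycle the quote attributes to Sutton can be sketched in a few lines. This is a minimal illustrative example, not anything from the linked article: an epsilon-greedy agent on a hypothetical two-armed bandit, with made-up payoff probabilities, learning arm values purely from observed rewards.

```python
import random

def pull(arm: int) -> float:
    # Hypothetical hidden payoff rates: arm 1 pays off more often than arm 0.
    return 1.0 if random.random() < (0.3, 0.8)[arm] else 0.0

def run(steps: int = 5000, epsilon: float = 0.1, seed: int = 0) -> list[float]:
    random.seed(seed)
    values = [0.0, 0.0]  # the agent's estimated value of each arm
    counts = [0, 0]
    for _ in range(steps):
        # Act: usually exploit the current estimates, occasionally explore.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = max(range(2), key=lambda a: values[a])
        # Observe a real outcome rather than predicting what a human would say.
        reward = pull(arm)
        # Adjust: incremental sample-average update toward the observed reward.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

print(run())  # estimates drift toward the hidden payoff rates
```

The point of the sketch is the loop itself: behavior changes only because observed outcomes feed back into the estimates, which is exactly the feedback channel a frozen LLM lacks at inference time.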
[1] https://medium.com/@sulbha.jindal/richard-suttons-challenge-...