
curious_cat_163 | 8 months ago

Nathan Lambert provides a counterpoint to the recent "The Illusion of Thinking" paper by Apple [1]:

"On one of these toy problems, the Tower of Hanoi, the models structurally cannot output enough tokens to solve the problem — the authors still took this as a claim that “these models cannot reason” or “they cannot generalize.” This is a small scientific error."

"it appears that a majority of critiques of AI reasoning are based in a fear of no longer being special rather than a fact-based analysis of behaviors."

[1]: https://www.arxiv.org/pdf/2506.06941
