yathaid|1 year ago
In the previous paragraph, the author makes the case for why LeCun was wrong, using reasoning models as the example. Yet in the next paragraph this assertion is made, which is just a paraphrase of LeCun's original assertion, the very one the author himself says is wrong.
>> Instead of waiting for FAA (fully-autonomous agents) we should understand that this is a continuum, and we’re consistently increasing the amount of useful work AIs
Yes! But this work is already well underway. There is no magic threshold for AGI. Instead, the characterization is based on what percentile of the human population the AI can beat. One way to characterize AGI in this manner is "performs at the 99.99th percentile at every (digital?) activity".
jxmorris12|1 year ago
This is a subtle point that may not have come across clearly enough in my original writing. A lot of folks were saying that the DeepSeek finding that longer chains of thought can produce higher-quality outputs contradicts Yann's thesis overall. But I don't think so.
It's true that models like R1 can correct small mistakes. But in the limit of tokens generated, the chance that they generate the correct answer still decays to zero.
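To make the decay claim concrete, here is a toy sketch (my framing, not the commenter's): if each generated token is independently wrong with some probability eps, the chance that an n-token output is entirely correct is (1 - eps)^n, which goes to zero as n grows.

```python
# Toy model of compounding per-token error: assume each token is wrong
# independently with probability eps. The probability that all n tokens
# are correct is then (1 - eps)**n, which decays geometrically with n.
def p_all_correct(eps: float, n: int) -> float:
    return (1 - eps) ** n

for n in (10, 100, 1000):
    print(n, p_all_correct(0.01, n))
```

Real models violate the independence assumption, which is exactly what the rest of the thread argues about; this only illustrates the shape of the original claim.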
partypete|1 year ago
There was a paper not too long ago showing that reasoning models will increase their response length more or less indefinitely while working toward solving a problem, but the return from doing so asymptotes toward zero. Apologies, I don't have the link handy.
yathaid|1 year ago
>> But in the limit of tokens generated, the chance that they generate the correct answer still decays to zero.
I don't understand this assertion, though.
LeCun's thesis was that errors simply accumulate.
Reasoning models accumulate errors, then backtrack and reduce them again.
Hence the hypothesis that errors accumulate (at least asymptotically) is false.
What is the difference between "the probability of a correct answer decaying to zero" and "errors keep accumulating"?
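One way to see the distinction the thread is circling (a sketch of my own, with assumed numbers): model generation as a two-state chain, "on track" or "off track". Each step an on-track chain derails with probability eps; an off-track chain recovers (backtracks) with probability c. With c = 0, the irreversible-accumulation picture, the on-track probability decays to zero; with any c > 0 it settles near c / (c + eps) instead of vanishing.

```python
# Two-state toy chain: p is the probability of being "on track".
# Each step: an on-track chain stays on track with prob (1 - eps);
# an off-track chain recovers via backtracking with prob c.
def on_track_prob(eps: float, c: float, n: int) -> float:
    p = 1.0
    for _ in range(n):
        p = p * (1 - eps) + (1 - p) * c
    return p

print(on_track_prob(0.05, 0.0, 1000))   # no correction: decays toward 0
print(on_track_prob(0.05, 0.20, 1000))  # with correction: settles near 0.8
```

So "errors keep accumulating" corresponds to c = 0, while "the probability of a correct answer decays to zero" is a statement about the whole output; whether correction keeps that probability bounded away from zero is exactly the point in dispute.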
lern_too_spel|1 year ago