maebert | 10 months ago
In a somewhat ironic twist, it seems like the author's internal definition of "intelligence" fits much closer with 1950s good old-fashioned AI: doing proper logic and algebra. Literally all the progress we've made in AI in the last 20 years came precisely because we abandoned this narrow-minded definition of intelligence.
Maybe I'm a grumpy old fart, but none of these are new arguments. Philosophy of mind has an amazingly deep and colorful wealth of insights on this matter, and I don't know why it isn't required reading for anyone writing a blog on AI.
13years | 10 months ago
"First, what we should measure is the ratio of capability against the quantity of data and training effort. Capability rising while data and training effort are falling would be the interesting signal that we are making progress without simply brute-forcing the result.
The second signal for intelligence would be no model collapse in a closed system. It is known that LLMs will suffer from model collapse in a closed system where they train on their own data."
maebert | 10 months ago
Yes, humans can learn to comprehend and speak language with orders of magnitude fewer examples than LLMs, but we also have very specific hardware for that, evolved over millions of years — it's plausible that language acquisition in humans is more akin to fine-tuning an LLM than training one from the ground up. Either way, this metric is comparing apples to oranges when it comes to comparing real and artificial intelligence.
Model collapse is a problem in AI that needs to be solved, and maybe avoiding it is even a necessary condition for true intelligence, though certainly not a sufficient one — and hence not an equivalent definition of intelligence either.
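The closed-loop dynamic both commenters refer to can be illustrated with a toy simulation. This is not how LLM training works; it's a minimal sketch (all names hypothetical) where each "generation" simply resamples from the previous generation's output, a crude stand-in for training on your own data. Since each new corpus is drawn only from values already present, diversity can never increase, and it tends to collapse over generations:

```python
import random

def resample_generation(corpus, rng):
    # Crude stand-in for "train on your own output": the next generation's
    # corpus is a bootstrap sample (with replacement) of the previous one.
    return [rng.choice(corpus) for _ in range(len(corpus))]

def simulate_collapse(vocab_size=100, generations=300, seed=0):
    rng = random.Random(seed)
    corpus = list(range(vocab_size))  # generation 0: every token distinct
    diversity = [len(set(corpus))]    # track unique tokens per generation
    for _ in range(generations):
        corpus = resample_generation(corpus, rng)
        diversity.append(len(set(corpus)))
    return diversity

div = simulate_collapse()
print(f"unique tokens: gen 0 = {div[0]}, gen 300 = {div[-1]}")
```

Diversity here is monotonically non-increasing: every resampled corpus draws from a subset of the tokens already present, so rare tokens are progressively lost and never recovered, which is the intuition behind collapse in a closed system with no fresh data coming in.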