pocketsand | 1 year ago

If you took the "no it won't" side of every argument about "how in X number of years, AI is sure to Y", you'd be way ahead.

In any event, raw parameter/weight count seems to me like a very primitive way to judge "complexity" in comparison to the human brain. Looked at most ways, our brains are far more efficient at doing the incredible things they do than LLMs. Consider how little language young children are exposed to, compared to LLMs, relative to their ability to figure out how to produce language.

If the brain doesn't work like an LLM, you can expand the size and "complexity" of these models to the moon and they still won't outperform the brain. Current models can write impressively well, but they can barely do math. It's clear they don't reason the way we do.
