This is not good news: it means we could end up with a dangerously superintelligent AI just by scaling up the number of parameters, without increasing the amount of training data.
No, but LLMs require orders of magnitude more language input than humans[1]. It's reasonable to assume that architectural differences (size among them) are the more likely constraint on performance.
1. Specifically, larger than the upper bound on lifetime language input for a human, even assuming reading 24/7 at maximum speed.
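
For scale, a minimal back-of-envelope sketch in Python. Every figure here is an assumed round number (300 wpm sustained, an 80-year lifetime, a ~10T-token corpus, in the ballpark of the ~15T tokens reported for Llama 3), not a measurement:

    # Upper bound on lifetime human word intake vs. an LLM training corpus.
    WORDS_PER_MIN = 300                    # assumed ceiling for sustained reading
    MINUTES_PER_YEAR = 60 * 24 * 365       # reading 24/7, no sleep
    LIFETIME_YEARS = 80                    # assumed
    human_words = WORDS_PER_MIN * MINUTES_PER_YEAR * LIFETIME_YEARS
    llm_tokens = 10e12                     # assumed corpus size, ~10T tokens
    print(f"human upper bound: {human_words:.2e} words")   # ~1.3e10
    print(f"LLM corpus:        {llm_tokens:.2e} tokens")   # ~1.0e13
    print(f"ratio: ~{llm_tokens / human_words:.0f}x")      # ~800x

So even the most generous human upper bound sits roughly three orders of magnitude below current training corpora.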
Do they? What is the total size of all visual, audio, touch, locomotive, scent, and taste data collected between birth and when a human reaches IQ 100? There are multiple high-bandwidth feeds running into the brain 24/7.
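
A similar sketch for the sensory side. The per-channel rates are placeholders: the ~10 Mbit/s-per-retina figure is an oft-quoted rough estimate, and the other channels are pure assumptions for illustration:

    # Rough total of raw sensory input from birth to adulthood.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    YEARS = 20                             # birth to adulthood, roughly
    channels_bits_per_sec = {
        "vision (both eyes)": 2 * 10e6,    # ~10 Mbit/s per retina, oft-cited estimate
        "audio": 1e5,                      # assumed
        "touch/proprioception": 1e6,       # assumed
        "smell/taste": 1e3,                # assumed
    }
    total_bits = sum(channels_bits_per_sec.values()) * SECONDS_PER_YEAR * YEARS
    print(f"~{total_bits / 8 / 1e12:.0f} TB raw input over {YEARS} years")  # ~1,600+ TB

Under these assumptions the raw sensory stream runs to petabyte scale, far beyond any plausible count of words heard or read.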
Yes, but LLMs come out of training as experts in approximately any single thing you can think of, and then some, and all that in dozens of languages. Humans don't achieve even a fraction of that breadth.