top | item 40078235


Kronopath | 1 year ago

This is not good news: it means we could end up with a dangerously superintelligent AI just by scaling up the number of parameters, without increasing the amount of training data.


kelseyfrog|1 year ago

No, but LLMs require orders of magnitude more language input than humans[1]. It's very reasonable to assume that architectural differences (size among them) are more likely the constraint on performance.

1. Specifically larger than the upper bound on lifetime language input for humans, even assuming 24/7 at max reading speed.

p1esk|1 year ago

How much language input does a human need to become intelligent if he doesn’t receive any other input?

HeatrayEnjoyer|1 year ago

Do they? What is the total size of all visual, audio, touch, locomotive, scent, and taste data collected between birth and when a human reaches IQ 100? There are multiple high-bandwidth feeds running into the brain 24/7.

TeMPOraL|1 year ago

Yes, but LLMs come out of training as experts in approximately any single thing you can think of, and then some, and all that in dozens of languages. Humans don't achieve even a fraction of this breadth.

mirekrusin|1 year ago

LLMs are already super-intelligent at mimicking; it won't take long before someone finds some kind of RL loop there.

exe34|1 year ago

Like a corporation then. We should ban them until we can figure out how to align them!

tehsauce|1 year ago

ASI is nothing like a corporation.

pfdietz|1 year ago

It's only bad news if you don't want a dangerously superintelligent AI.

Kronopath|1 year ago

No one should want this.