
Hercuros | 1 year ago

I think the biggest fallacy in this type of thinking is that it collapses all AI progress into a single quantity of “intelligence” and then extrapolates that one quantity to some imagined, absurd level of “superintelligence”.

In reality, AI progress and capabilities are not reducible to a single quantity. For example, it’s not clear that we will ever rid models of their tendency to sometimes produce garbage or nonsense. It’s entirely possible that we’re now stuck with more incremental improvements, and I think the bogeyman of “superintelligence” needs to be defined much more clearly than as the extrapolation of some imagined quantity. Or maybe we reach a roughly human-like level, but never this imagined “extra” level of superintelligence.

Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.
