Hercuros | 1 year ago
In reality, AI progress and capabilities are not reducible to a single quantity. For example, it’s not clear that we will ever rid models of their tendency to sometimes produce garbage or nonsense. It’s entirely possible that we remain stuck with incremental improvements from here on, and I think the bogeyman of “superintelligence” needs to be defined much more clearly rather than by extrapolating some imagined quantity. Or maybe we reach a roughly human-like level, but never this imagined “extra” level of superintelligence.
Basically, the argument amounts to: “big will become bigger and bigger, and then it will become like SUPER big and destroy us all.”