top | item 37963065


atleta | 2 years ago

No, it's not just you; a lot of people are downplaying the dangers of AI. The easiest one to accept is that it can cause mass unemployment and displacement of the workforce. No, it doesn't matter that we have better jobs now than what the Luddite textile workers lost 200 years ago, because it's not guaranteed to be the same situation (indeed, I'd say it's guaranteed to be different), and those Luddites ended up in a far worse situation anyway.

So the thing is that nobody knows what the development curve of AI is going to be, or what the exact economic and societal effects will be. Whether it's 5 years to AGI or 50. (Neither of these seems very likely, by the way.) Now, since we do expect that there can be problems, and since we at least can't rule out that they will manifest in the foreseeable (near) future, it's better to assume that we will have (at least economic) problems soon. It doesn't matter what LLMs can do today.

The development curve is what matters. And even though I said we don't know it, we have pretty good reasons to think (see above) that AI is going to be powerful enough soonish. Just remember: about 1.5-2 years ago, basically nobody would have predicted that LLMs would be able to do what they can do today. I mean, most experts would probably have said that it's not possible for LLMs to do what they can do today at all. Definitely not that they would be doing it by mid-2023, or even just that they would be so capable that a lot of non-technical people would use them. (Though, sure, there is still very little practical use as of today, the capabilities did make a huge and unexpected jump. It even surprised researchers like Geoffrey Hinton.)
