atleta | 2 years ago
So the thing is that nobody knows what the development curve of AI is going to be, or what its exact economic and societal effects will be. Whether it's 5 years to AGI or 50. (Neither of these seems very likely, BTW.) But since we do expect that there can be problems, and since we at least can't rule out that they'll manifest in the foreseeable (near) future, it's safer to assume that we will have (at least economic) problems soon. It doesn't matter what LLMs can do today.
The development curve is what matters. And even though I said we don't know it, we have pretty good reasons (see above) to think it's going to be powerful enough soonish. Just remember: about 1.5-2 years ago, basically nobody would have predicted that LLMs would be able to do what they can do today. Most experts would probably have said that it's not possible for LLMs to do this at all. Definitely not that they would be doing it by mid-2023, or that they would become capable enough that a lot of non-technical people would use them. (Sure, there is still very little practical use as of today, but the capabilities did make a huge and unexpected jump. It even surprised researchers like Geoffrey Hinton.)