(no title)
soiler | 2 years ago
Yes, I know. We should under no circumstances unleash a powerful, sentient AI that acts like average people on the internet.
> But current AI doesn't have comprehension or planning abilities.
Yes, I know. That's why I said I do not believe current AI has comprehension or planning abilities.
Did an AI write this comment?
bootsmann | 2 years ago
I think the motte-and-bailey argument, where one warns extensively about how we're on the road to AGI doom, pointing to GPT as evidence, but then retreats to "I never said current AI is anywhere near AGI" when pressed, shows the laziness of alignment discourse. Either it's relevant to the models available at hand, or you are speculating about the future without any grounding in reality. You don't get to do both.
soiler | 2 years ago
I think you've misunderstood my arguments, so I'll step through them again:
1. The trajectory of how we got to current AI (from past AI) is terrifyingly steep. In the time since ChatGPT was released, many experts have shortened their predicted timelines for the arrival of AGI. In other words: AGI is coming soon.
2. Current AI is capable enough to demonstrate that alignment is not solved, not even close. Current AI says things to us that would be very scary coming from an AGI. In other words: current AI is dangerous.
3. Alignment does not come automatically with increased capabilities. Maybe this is a huge leap, but I don't see any reason that making AI smarter will automatically give it values that are more aligned with our interests. In other words: future AI will not be less dangerous than current AI without dramatic and unlikely effort.
None of these ideas contradict each other. Current AI is dangerous. AI is getting smarter faster than it is getting safer. Therefore, future AI will be extremely dangerous.