top | item 36130895

Darkphibre | 2 years ago

I just don't see how the genie is put back in the bottle. Optimizations and new techniques are coming in at a breakneck pace, allowing for models that can run on consumer hardware.

NumberWangMan | 2 years ago

I think it could be done. Or rather, instead of putting the genie back in the bottle, we could slow it down enough that we figure out how to ask it for wishes in a way that avoids all the monkey's-paw scenarios.

Dropping the metaphor, running today's models isn't dangerous. We could criminalize developing stronger ones, and launch a "Manhattan Project" for AI aimed at figuring out how not to ruin the world with it. I think a big problem is what you point out -- once it's out, it's hard to prevent misuse. One bad AGI could end up making a virus that does massive damage to humanity. We might end up deciding that this tech is just too dangerous to be allowed to exist at all, at least until humanity manages to digitize all our brains or something. But it's better to slow down as much as we can, for as long as we can, than to give up right from the get-go and wing it.

Honestly, if it turns out that China develops unsafe AI before we develop safe AI, I doubt the outcome would be much better for the average American than if America had been the one to develop unsafe AI first. And if they cut corners yet still manage to make safe AI and take over the world, that still sounds a heck of a lot better than anyone making unsafe AI.