top | item 44274590

PollardsRho | 8 months ago

If someone who is so good at manipulation that their life is adapted into a movie still ends up serving decades behind bars, isn't that actually a pretty good indication that maxing out Speech doesn't give you superpowers?

AI that's as good at persuasion as a persuasive human is clearly impactful, but I certainly don't see it as self-evident that you can just keep drawing the line out until you end up with a 200 IQ AI so capable of manipulating its environment that it's not even worth elaborating how, exactly, a chatbot is supposed to manipulate the world through its extremely limited interfaces with the outside world.

actsasbuffoon | 8 months ago

In the context of the topic (could a rogue super intelligence break out), I don’t really see how that’s relevant. Clearly someone who is clever enough has an advantage at breaking out.

As for the bit about how limited it is, do you remember the Rowhammer attack? https://en.m.wikipedia.org/wiki/Row_hammer

This is exactly the kind of thing I’d worry about a super intelligence being able to discover about the hardware it’s on. If we’re dealing with something vastly more intelligent than us then I don’t think we’re capable of building a cell that can hold it.

PollardsRho | 8 months ago

Advantage, sure. I just don't think that advantage is particularly meaningful in situations where a human has virtually no chance of escaping. Humans also have a lot of their own advantages. How is a chatbot supposed to cross an air gap unless you assume it has what I consider unrealistic levels of persuasion?

I think you also have to consider that AI with superpowers is not going to materialize overnight. If superintelligent AI is on the horizon, the first such AI will be comparable to very capable humans (who do not have the ability to talk their way into nuclear launch codes or out of decades-long prison sentences at will). Energy costs will still be tremendous, and just keeping the system going will require enormous levels of human cooperation. The world will change a lot in that kind of scenario, and I don't know how reasonable it is to claim anything more than the observation of potential risks in a world so different from the one we know.

Is it possible that search ends up doing as much for persuasion as it does for chess, superintelligent AI happens relatively soon, and it doesn't have prohibitive energy costs such that escape is a realistic scenario? I suppose? Is any of that obvious or even likely? I wouldn't say so.