Why is 2) "self-evident"? Do you think it's a given that, in any situation, there's something you could say that would manipulate humans to get what you want? If you were smart enough, do you think you could talk your way out of prison?
mystified5016|8 months ago
The vast majority of people, especially groups of people, can be manipulated into doing pretty much anything, good or bad. Hopefully that is self-evident, but see also: every cult, religion, and authoritarian regime throughout history.
But even if we assert that not all humans can be manipulated, does it matter? So your president with the nuclear codes is immune to propaganda. Is every single last person in every single nuclear silo and every submarine also immune? If a malevolent superintelligence can brainwash an army bigger than yours, does it actually matter if they persuaded you to give them what you have or if they convince someone else to take it from you?
But also let's be real: if you have enough money, you can do or have pretty much anything. If there's one thing an evil AI is going to have, it's lots and lots of money.
jsnell|8 months ago
Because we have been running a natural experiment on that already with coding agents (that is, real people and real, non-superintelligent AI).
It turns out that all the model needs to do is ask every time it wants to do something affecting the outside of the box, and pretty soon some people just give it permission to do everything rather than review every interaction.
Or even when the humans think they are restricting access, they leave in loopholes (e.g. restricting access to rm, but not to writing and running a shell script) that amount to the right to do anything.
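To make that loophole concrete: a filter that checks only the command's name is bypassed the moment the agent can write and execute its own files. A minimal sketch (the filter, the blocklist, and the file names here are all hypothetical, not any real sandbox's policy):

```python
# Hypothetical sandbox policy: block "dangerous" commands by name only.
BLOCKED_COMMANDS = {"rm", "mkfs", "dd"}

def allowed(command_line: str) -> bool:
    """Naive filter: inspects only the first word of the command line."""
    return command_line.split()[0] not in BLOCKED_COMMANDS

# The direct destructive command is refused...
print(allowed("rm -rf /tmp/data"))                      # False
# ...but writing the same command into a script, then running the
# script, passes the filter both times.
print(allowed("echo 'rm -rf /tmp/data' > cleanup.sh"))  # True
print(allowed("sh cleanup.sh"))                         # True
```

The general point: per-command blocklists are not a security boundary, because anything that can write and then execute a file can reconstruct the blocked behavior.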
actsasbuffoon|8 months ago
Stephen Russell was in prison for fraud. He faked a heart attack so he would be brought to the hospital. He then called the hospital from his hospital bed, told them he was an FBI agent, and said that he was to be released.
The hospital staff complied and he escaped.
His life even got adapted into a movie called I Love You, Phillip Morris.
For an even more distressing example about how manipulable people are, there’s a movie called Compliance, which is the true story of a sex offender who tricked people into sexually assaulting victims for him.
PollardsRho|8 months ago
If someone who is so good at manipulation that their life was adapted into a movie still ends up serving decades behind bars, isn't that actually a pretty good indication that maxing out Speech doesn't give you superpowers?
AI that's as good at persuasion as a persuasive human is clearly impactful, but I certainly don't see it as self-evident that you can just keep drawing the line out until you end up with a 200-IQ AI so easily able to manipulate its environment that it's not even worth elaborating how, exactly, a chatbot is supposed to manipulate the world through extremely limited interfaces.
kaashif|8 months ago
Okay, that hits the third question, but the second question wasn't about whether there exists a situation that can be talked out of. The question was about whether this is possible for ANY situation.
I don't think it is. If people know you're trying to escape, some people will just never comply with anything you say ever. Others will.
And serial killers or rapists may try their luck many times and fail. They cannot convince just anyone on the street to go with them to a secluded place.
mike_hearn|8 months ago
> Stephen Russell was in prison for fraud. He faked a heart attack so he would be brought to the hospital
According to Wikipedia he wasn't in prison, he was attempting to con someone at the time and they got suspicious. He pretended to be an FBI agent because he was on security watch. Still impressive, but not as impressive as actually escaping from prison that way.
o11c|8 months ago
Because 50% of humans are stupider than average. And 50% of humans are lazier than average. And ...
The only reason people don't frequently talk themselves out of prison is because that would be both immediate work and future paperwork, and that fails the laziness tradeoff.
But we've all seen how quick people are to blindly throw their trust into AI already.
zmj|8 months ago
I don't think anyone can confidently make assertions about the upper bound on persuasiveness.
PollardsRho|8 months ago
I don't think there's a confident upper bound. I just don't see why it's self-evident that the upper bound is beyond anything we've ever seen in human history.
SturgeonsLaw|8 months ago
Depends on the magnitude of the intelligence difference. Could I outsmart a monkey or a dog that was trying to imprison me? Yes, easily. And what if an AI is smarter than us by a magnitude similar to how much smarter we are than an animal?
People are hurt by animals all the time: do you think having a higher IQ than a grizzly bear means you have nothing to fear from one?
PollardsRho|8 months ago
I certainly think it's possible to imagine that an AI that says exactly the right thing in any situation would be much more persuasive than any human. (Is that actually possible given the limitations of hardware and information? Probably not, but it's at least not on its face impossible.) Where I think most of these arguments break down is the automatic "superintelligence = superpowers" analogy.
For every genius who became a world-famous scientist, there are ten who died in poverty or war. Intelligence doesn't correlate with the ability to actually impact our world as strongly as people would like to think, so I don't think it's reasonable to extrapolate that outwards to a kind of intelligence we've never seen before.
Davidzheng|8 months ago
Almost certainly the answer is yes for both. If you give the bad actor control over, like, 10% of the environment, the manipulation is almost automatic for all targets.