item 47008916

btown | 16 days ago

One of the biggest pieces of "writing on the wall" for this IMO was when, in the April 15 2025 Preparedness Framework update, they dropped persuasion/manipulation from their Tracked Categories.

https://openai.com/index/updating-our-preparedness-framework...

https://fortune.com/2025/04/16/openai-safety-framework-manip...

> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.

> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

To see persuasion/manipulation as simply a multiplier on other invention capabilities, and something that can be patched on a model already in use, is a very specific statement on what AI safety means.

Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.


imiric|16 days ago

> Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.

So, like, social media and adtech?

Judging by how little humanity is preoccupied with global manipulation campaigns via technology we've been using for decades now, there's little chance that this new tech will change that. It can only enable manipulation to grow in scale and effectiveness. The hype and momentum have never been greater, and many people have a lot to gain from it. The people who have seized power using earlier tech are now in a good position to expand their reach and wealth, which they will undoubtedly do.

FWIW I don't think the threats are existential to humanity, although that is certainly possible. It's far more likely that a few people will get very, very rich, many people will be much worse off, and most people will endure and fight their way to get to the top. The world will just be a much shittier place for 99.99% of humanity.

webdoodle|16 days ago

Right on point. That is the true purpose of this 'new' push into A.I. Human moderators sometimes realize the censorship they are doing is wrong, and will slow-walk or blatantly ignore censorship orders. A.I. will diligently delete anything it's told to.

But the real risk is that they can use it to scale up the Cambridge Analytica personality profiles to everyone, and create custom agents for every target that feed them whatever content is needed to manipulate their thinking and ultimately their behavior. AKA MKUltra mind control.

komali2|16 days ago

What's frustrating is that our society hasn't grappled with how to deal with that kind of psychological attack. People or corporations will find an "edge" that gives them an unbelievable amount of control over someone, to the point that it almost seems like magic, as if a spell has been cast. See any cult that drives people to suicide, or one that causes people to drain their bank accounts, or one that leads to the largest breach of American intelligence security in history, or one that convinces people to break into the Capitol to try to lynch the VP.

Yet even if we prosecute the cult leader, we still hold people entirely responsible for their own actions, and as a society accept none of the responsibility for failing to protect people from these sorts of psychological attacks.

I don't have a solution, I just wish this was studied more from a perspective of justice and sociology. How can we protect people from this? Is it possible to do so in a way that maintains some of the values of free speech and personal freedom that Americans value? After all, all Cambridge Analytica did was "say" very specifically convincing things on a massive, yet targeted, scale.

Razengan|16 days ago

> manipulates an entire world to lose its ability to perceive reality.

> ability to perceive reality.

I mean, come on... that's on you.

Not to "victim blame"; the fault lies with the people who deceive. But if you get deceived repeatedly, and there are people calling out the deception so you're aware you're being deceived, yet you still choose to be lazy and not learn anything on your own (i.e. do your own research) and just want everything to be "told" to you… that's on you.

estearum|16 days ago

Everything you think you "know" is information that was put in front of you (most of it indirect, much of it dozens or thousands of layers of indirection deep).

To the extent you have a grasp on reality, it's primarily a credit to the information environment you found yourself in, not evidence that you're an extra-special intellectual powerhouse.

This is not an insult, but an observation of how brains obviously have to work.