
AI instructed brainwashing effectively nullifies conspiracy beliefs

16 points | 13years | 1 year ago | mindprison.cc

31 comments

[+] _ache_|1 year ago|reply
It needs to explain the method, for reproducibility. Nothing is clear in the blog post, but everything is well explained in the research paper. Kudos for that.

It seems solid. But I would like to see the same study with a human conversation instead of an AI one.

[+] ryandvm|1 year ago|reply
I think LLMs would actually do better at this than a human because the whole problem with debunking this stuff is that the anti-facts come at a pace that no human can sustainably refute in real time.

It's always "well, my cousin's friend's kid got autism right after getting a vaccine" and before you can look up the necessary studies to refute it, they're already talking about hydroxychloroquine.

An AI *that the user trusts* is going to be much more effective at that sort of whack-a-mole.

[+] breadbreadbread|1 year ago|reply
The term "brainwashing" is so misleading and fear-baity here. Humans are gullible and can be convinced of falsehoods. You have just as much if not more "brainwashing" power than any chatbot. The only difference is that a chatbot can reach more people faster, but we can also inoculate ourselves to the effectiveness of AI by things like "media literacy" and "skepticism". If you know that an AI can be programmed to promote falsehoods (or otherwise fed falsehoods), you can perhaps double check sources that an AI uses to promote their claims. It's not brain control its just media baby
[+] 13years|1 year ago|reply
> but we can also inoculate ourselves against the effectiveness of AI by things like "media literacy" and "skepticism".

The premise of the study implies otherwise. They took a group of "true believers" in conspiracies, those who are skeptical of mainstream narratives. They were also told they were conversing with an AI.

The study notes that these types of beliefs have not been shown to be swayed by other methods; such efforts have previously failed.

[+] boxed|1 year ago|reply
The article seems a bit all over the place when it comes to moral clarity and reality alignment. I do agree with their premise, though, that widespread influence over basic beliefs shouldn't be exercised by AI systems that a few people can control. It's a bit naive to think this isn't done already, and also done manually by troll farms etc.

We do need to have better schools so that our children understand the deep interconnectedness of reality. This is the only defense against conspiracy theorists, MLMs, cults, etc.

[+] 13years|1 year ago|reply
> It's a bit naive to think this isn't done already

Yes, it is definitely done already. However, the point emphasized is that this is another level up in capability. Any technology that can be used to influence or control people will be used to do so.

The other disturbing aspect is that, despite all of the known pitfalls and abuse that have already occurred, there is no shortage of people who are enthusiastically willing to deploy such capabilities on everyone else without a moment's thought about the repercussions.

[+] serf|1 year ago|reply
> It's a bit naive to think this isn't done already.

anyone who has used GPT-4 for a few minutes on anything more than the simplest of questions already knows that this happens routinely.

for the easiest things to touch upon, try election questions. If you want it to impose a specific morality on you, try getting it to write fiction with risqué themes.

guideposts abound, implying 'moral' guidance and direction.

[+] TheOtherHobbes|1 year ago|reply
We have entire industries dedicated to telling lies and influencing behaviour - often in ways that are sold as fulfilling, but are actually self-harming.

So of course this technology is going to be used to do that more efficiently. And transparently, so no one is aware of it.

I can imagine a future where mainstream AI chat and interactions control the mainstream discourse, and guerrilla AIs operate at the fringes, pushing people into even more toxic discourses or - in some cases - acting to deprogram them.

It's going to make social media quite interesting.

[+] keybored|1 year ago|reply
I don’t see how it is all over the place. The author seems quite firm in their position/belief.
[+] kvgr|1 year ago|reply
Yeah, but that kind of learning does not really go well with religion and politics, which deploy the same tactics the school should be pushing against.
[+] wildrhythms|1 year ago|reply
The actual research is here: https://osf.io/preprints/psyarxiv/xcwdn

I personally found this blog article to be quite insulting to my own intelligence by peppering in these pseudo-intellectual clichés like 'paradox of tyranny'.

[+] keybored|1 year ago|reply
The linked article seems to take a different normative position than the research paper. A more neutral tone would be okay if they agreed, but the author is explicitly distancing themselves from any implication that it is good to deprogram so-called conspiracy theorists.

The alternative would be to sarcastically agree/celebrate the research. Which wouldn’t necessarily be obvious to everyone. But perhaps not intellectually insulting?

[+] 13years|1 year ago|reply
Insulting because you disagree or because it should be obvious?
[+] keybored|1 year ago|reply
It’s astonishing how ideological authoritarianism can be smuggled in by just saying “conspiracy theory”. So agreed with the article.

> The paper does mention that these capabilities could also be used for nefarious purposes, such as convincing people to believe in conspiracies. However, it continues to finish the thought with the idea that we simply need to ensure that such AI is only used "responsibly."

> Thus, we have arrived at the same fallacy of all authoritarian power beloved by those morally superior to the rest of us. They make all the rules. They will define what is responsible. They will define what is a conspiracy theory. They will define who is targeted by such methods.

See also the mythological “benevolent dictator”. Dictatorship is good because it would be efficient under the hypothetical benevolent dictator. Never mind how you would find one and avoid the malevolent or incompetent ones though.

[+] rurban|1 year ago|reply
They already have the press (plus TV, radio, film, social media censorship) to do exactly that. AIs would be cheaper, but how do you get them in front of people, other than through TV, radio, and print? Chomsky would have a field day.

https://en.wikipedia.org/wiki/Manufacturing_Consent

[+] 13years|1 year ago|reply
The research study suggested using AI bots on social media and ads on search engines to disseminate the intervention to a wide population.
[+] im3w1l|1 year ago|reply
I've heard many people say "you cannot reason a person out of a position he did not reason himself into in the first place," with the implication being that conspiracy theorists cannot be engaged in good-faith debate and must be manipulated at best and coerced at worst.

If simply discussing matters calmly with an AI can lessen belief in conspiracy theories, then it seems to disprove that notion, which should come as a relief to believers and non-believers alike.

[+] friend_and_foe|1 year ago|reply
Is it unethical? Maybe. Gaslighting people because they believe the Wrong Thing™, where the end justifies the means.

People have conversations with others attempting to convince them of things. Is that unethical? Often, people do this from a disingenuous perspective. Is that more or less unethical than an LLM doing it?

What about scale? You can deploy LLMs to do this without feeding them. But we already have TV, one person can control hundreds of bots on the internet, and there's "the algorithm" in search engines and social feed creation. It would appear to me that propaganda is already industrialized to the point of diminishing returns; taking Howard Beale out of the loop is not going to increase the efficiency by that much.

Ethical or not, people will always try to increase their power by convincing others to behave in ways beneficial to themselves. It's the world we live in; if you don't want to be a tool, you must remain vigilant. The tools being used don't change that at all.

[+] 13years|1 year ago|reply
The view that society is already largely a product of algorithms isn't a counter to that being an unfavorable outcome. A problem that already exists isn't a reason to accept its continuation.

But the tools do change outcomes, as we already know. It is precisely why there is such a battle for ownership of this space. And the tool demonstrated in this instance is showing capabilities that weren't present before.

[+] Eddy_Viscosity2|1 year ago|reply
Wouldn't the opposite also be true: that AI instructed brainwashing can effectively CREATE conspiracy beliefs?
[+] 13years|1 year ago|reply
Yes, likely, as there is nothing different in principle. It is briefly mentioned in the study that this is a concern.