In the game Deus Ex (2000) you can find a secret room containing an artificial intelligence you can talk to. The dialogue draws connections between religious deities and artificial intelligences. https://www.reddit.com/r/Games/comments/1glq98/deus_ex2000_a...
Morpheus: "The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms."
JC Denton: "Electronic surveillance hardly inspires reverence. Perhaps fear and obedience, but not reverence."
Morpheus: "God and the gods were apparitions of observation, judgment and punishment. Other sentiments towards them were secondary."
JC Denton: "No one will ever worship a software entity peering at them through a camera."
Morpheus: "The human organism always worships. First, it was the gods, then it was fame (the observation and judgment of others), next it will be self-aware systems you have built to realize truly omnipresent observation and judgment."
Right before the credits for the relevant ending, the screen reads:
"If God did not exist, it would be necessary to invent him." -- Voltaire
This game also introduced me to Last and First Men by Olaf Stapledon, via an optional dialogue with a bartender in a hidden area of the Hong Kong map.
We place faith in algorithms and data, much like faith in a higher power, but rarely stop to consider who writes these digital 'scriptures' and what their intentions might be. Are we blindly trusting new 'gods' crafted in server rooms, not realizing that they might be as fallible—or as manipulative—as the human hands that created them?
In India, top ISRO scientists go to the Tirupati temple before every launch of a satellite or a Mars or Moon probe to seek blessings for its success. They use AI algorithms to guide all such probes.
This is one of those "we don't know what we're doing, nor how we could explain it, nor what it means, but it sounds cool" studies. They have no reason to assume the two are in any way related. It's probably another fluke, forgotten in a few weeks time.
I can't read the full text, but the materials have this at the end: "Finally, we examined the impact of God salience on respondents’ propensity to use technologically new and/or innovative products." They find a significant effect ("God salience was associated with greater early adoption attitudes") there, too. Why?
My take is that the effect is the other way around: early adoption attitude explains AI acceptance.
Exactly. I immediately thought of this after reading the article. I find the incentive to use p-hacking or something similar very hard to avoid, given the ever-stronger incentives to publish.
Is this the kind of study where we also learn that thinking about God makes people click on ads more often and perform worse at Microsoft Excel?
Which turns out to just be a consequence of "people who are religious are more often old fucks"?
I feel like what was once called magic then became God, and now becomes AI: an all-encompassing term for all things inexplicable and complex, and a handy backdoor for giving up the burden of decision-making and critical thinking. I fear it will soon, like God, become an ultimate, unquestionable authority: "ChatGPT told me so, so it must be true."
That's not really an accurate history of the concept called magic. Magic was very prevalent during the Christian Middle Ages, long after the concept of God. Magic was "removed" from the world by the process of disenchantment (a translation of Entzauberung, literally "de-magicification" in German.)
Entirely separate from the truth value or consequences of religious belief:
I think this paper makes sense. If you think of the God concept as a kind of externalized decision-making system that enables “cognitive work” that is beyond human capabilities - or the perception of human capabilities- then acceptance of AI is a similar phenomenon.
On a similar note, I haven’t done the research myself, but I have a solid feeling that the contemplation of an infinite, eternal, etc. God-concept (as opposed to more localized polytheistic concepts) can be tied to developments in mathematics throughout history.
My intuition was the same, but they are doing this the way you should: Pre-registered, sufficient N for small effect sizes, comparative across countries. I'm not so sure it does not replicate.
attention-grabbing headline combining a hype topic and a controversial topic... published in PNAS... and it's about priming... doesn't inspire confidence.
Remember, 60% of this kind of experiment (sociological) doesn't replicate. I don't know anything about this one in particular, but the odds are against it.
... and people downmod me when I say I see no difference between LLM enthusiasm and a religion ...
Edit: interesting that the article doesn't seem to be visible on the first 4 pages of the front page to me any more. This is why it's bad to have most social networking based in the US and censored according to their morals...
This makes intuitive sense to me (not that that means anything). Adherents to a traditional God Hypothesis would be predisposed to believe that humans need advice and guidance from a mysterious force beyond themselves and that such forces exist.
A recent related development I find interesting is the rise in apparent AI/LLM Apologetics. Nearly every discussion thread about LLMs I read lately includes numerous posters attributing abilities to these models which are far beyond anything documented or demonstrated.
Looks like the article is behind a paywall, so I only read the abstract, but it looks like there is a significant flaw in this study. There are many very different religions out there, and the results _might_ depend heavily on the choice of religion. As a thought experiment, compare a fictional religion in which A.I. is treated as a god with the religions of the Dune universe (where A.I. is taboo).
The argument seems to be: Belief in God implies a belief in the fallibility of humans, which leads to a reduced reliance on humans and therefore increased willingness to accept AI recommendations.
Most of the logic in that argument checks out. I just don't understand the last step. How does decreased reliance on humans lead to increased willingness to accept AI?
I'm a religious person myself, and my argument would be to not trust AI all that much. It is a creation by fallible human beings trained on fallible human data.
You don't need to read the article, just search online a little. It probably took you longer to write that post than it took me to find this: https://osf.io/fdh4m
They assess belief in god in different ways. Details are found within the 8 preregistered study PDF documents.
Different religions are a potentially interesting angle; though now I'm also wondering whether this is cultural "God salience" or serious "God salience" (the way the British do Easter and Christmas even though mostly not taking them seriously).
-
"""Studies 3 and 4 demonstrate that the reduced reliance on humans is driven by a heightened feeling of smallness when God is salient"""
I'd expect this to be less true for pantheons.
"""followed by a recognition of human fallibility."""
In pantheons, I'd expect this to vary by the nature of the god/goddess in question; Greek revivalism probably has different answers when considering Athena vs. Dionysus.
"""Study 5 addresses the similarity in mysteriousness between God and AI as an alternative, but unsupported, explanation."""
Yeah, that feels plausible.
"X and Y are mysterious, perhaps they're the same?" seems common for any {X, Y} — AI, consciousness, quantum mechanics, god, evolution, prime numbers, art, …
"when God is salient" presumably means the participants both believed in God and it played an important/active part in their daily life.
> Eight preregistered experiments (n = 2,462) reveal that when God is salient, people are more willing to consider AI-based recommendations than when God is not salient
This matches my prejudices. Namely that 'thinking about god' is similar to the Eliza effect, ie., a kind of projection of consciousness onto the world.
Animals solve 'the problem of other minds' by assuming that everything is like them, ie., conscious. And then walking back from this presumption upon evidence to the contrary.
This kinda mild 'default schizophrenia' is something I've always been allergic to. (I imagine my defaults are lower than most, which no doubt runs the risk of under-valuing the actually conscious).
Nevertheless, it's one of the things that makes me most concerned about the hype around AI: I see nothing more in it than a nascent secular religion. Born "from the ground up", as all religions are, by immediate experience 'of the divine'. Ie., of that impulse to analogise the world to one's own mind.
You're starting to sound a little like Terry yourself towards the end there. How do you see a religion in the enthusiasm around AI, and, worse, nothing else at all?
I think you're stretching the word religion very far from its meaning. No one is deriving any meaning or analogy to the world from AI beyond the answers to technical questions. Certainly no one attributes moral authority to AI?
The hype is about its applied value, not some new insight into the world.
Reaction: Could we get similar increases in acceptance by having the subjects think about a newspaper astrology column, a Magic 8 Ball, or other "coin flip" decision-making strategies?
Deus Ex was way ahead of its time, like a lot of cyberpunk media is.
https://intotheclarities.com/2014/08/23/charles-taylor-on-di...
https://en.wikipedia.org/wiki/Disenchantment