"Like most people, questions about the existence of God and all things spiritual plague me frequently. I want to believe in such things, especially when it comes to continuity of my consciousness. I don’t like the idea of disappearing when I die."
One way to address this is to deal with these thoughts rather than entering a philosophical rat hole of trying to simulate our existence and find an analogy to the computer. Once you accept that an outside meta-influence is possible, then you don't have to worry about all this "is there a God or not?" If God exists then God has powers that make Him imperceptible; if God doesn't exist then He's imperceptible. You can't tell the difference.
Oh well.
Better to live a life of good deeds and actions than worry about what's next.
> If God exists then God has powers that make Him imperceptible; if God doesn't exist then He's imperceptible. You can't tell the difference.
This is the omnipotence paradox[1]. I often help friends and relatives understand why my religion (or lack thereof, actually) is "Tooth Fairy Agnosticism" by asking them whether or not God can create a stone that He can't lift.
> Better to live a life of good deeds and actions than worry about what's next.
I agree. This is how I try to live my life, but it's hard to stop worrying when you have severe thanatophobia[2] like I do.
Great point, up to the last sentence, which, whether you intended it or not, draws a line between the world's major religions and quasi-endorses those on one side.
What I admire most about the author is the honest search for Truth. It is a rare individual that can approach this problem while relinquishing their biases. I can't count myself among their ranks. It seems the majority of people, believers and non-believers alike, tend to come to a decision about God after about an afternoon of thinking about it, at most.
And really, what philosophical question posed to us could be more important than the question of God? It seems that every human should spend some significant amount of time deciding where they fall on this problem before they do...well...anything else.
Personally, I don't fall anywhere on this problem at all, because I question its very validity:
Firstly, there can't be a single "Question of God", since there's no single coherent definition of "God."
Secondly, if we define God as an entity which is ultimately undetectable and unfalsifiable, does the question of its existence really mean anything? It may be an interesting thought problem, but trying to answer it leads nowhere.
What does it matter if the answer cannot be known? If the question cannot be answered, then "spending a significant amount of time deciding" is just waste.
It's really interesting to think about the theological implications of simulationism. If the world were really structured as described in the article (which is not the point he was making, I'm just having fun), then one has to wonder what the purpose of having simulated, self-contained intelligences would be. I like to think about the implications of a specific purpose and I have a favorite.
Suppose one was trying to train up a batch of true AIs that were generous, moral, loving, responsible, principled, etc. (think Santa, Jesus, or Fred Rogers). I would submit that the task would be much better suited to some combination of genetic programming and machine learning, i.e., generate a bunch of AIs with varying characteristics, let them learn for a while, and see how they do at the stated task. I think it would make a lot of sense for a simulation scientist to want to have a bunch of these around: they could do some really cool stuff for you, and you wouldn't have to worry too much about IRB approval, since they would be certified moral by the previous test.
What kind of test would make sense? It would have to be one with difficult moral choices, one with an opportunity for failure; one without unnecessary pain and stress, but one that allows the AIs to make moral mistakes and be held responsible for them. If the true AIs are interacting (and it makes sense that they probably would have to be), then naturally the duds could make life a little miserable for the good ones; you'd want to minimize that as much as you could without restricting the ability of all AIs to choose freely among the opportunities given to them. On top of that, you might put them in a hostile environment that requires work and effort for survival, to see if they have what it takes to get stuff done.
Now if that doesn't sound like life, I don't know what does. In that context, hell doesn't really make a lot of sense (would you really punish bad AIs forever, or just put them in charge of easy, non-morally-challenging jobs? My first thought was the coffee pot, but maybe putting a misbehaving AI in charge of that is not such a good idea), and "heaven" becomes more of a "gets to do cool, interesting stuff" than "chill out playing harp". From my experience, many religious people tend to punt on what an afterlife would actually be for, other than that it would be nice to reward us for being good. The answer to "Why can't the simulation master give the nice heaven to everybody?" is pretty clear in this context: she can't, because that would be really bad for whatever task she needed moral AIs for, which, if it affected a bunch of other true AIs, has huge moral implications. So sorry, no offence, but based on your test results you don't get to do the cool stuff.
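The selection scheme described in this comment (generate varied agents, score them on a moral test, keep the certified ones, repeat) can be mocked up in a few lines. This is purely an illustrative toy under my own assumptions: the traits, the scoring function, and all the numbers are made up, not anything from the article.

```python
import random

random.seed(0)  # deterministic, for the sake of the example

def moral_score(traits):
    # Hypothetical stand-in for the "test with difficult moral choices":
    # just sum two made-up virtues, each in [0, 1].
    return traits["generosity"] + traits["responsibility"]

def random_agent():
    # "A bunch of AIs with varying characteristics."
    return {"generosity": random.random(), "responsibility": random.random()}

def mutate(agent):
    # Small random tweaks to each trait, clamped back into [0, 1].
    return {k: min(1.0, max(0.0, v + random.gauss(0, 0.1)))
            for k, v in agent.items()}

def evolve(generations=20, pop_size=30, keep=10):
    population = [random_agent() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=moral_score, reverse=True)
        survivors = population[:keep]  # the "certified moral" ones
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - keep)]
        population = survivors + offspring
    return max(population, key=moral_score)

best = evolve()
print(round(moral_score(best), 2))  # close to the maximum of 2.0
```

Selection plus mutation is the crudest possible genetic algorithm, but it shows the shape of the idea: the "afterlife assignment" is just reading off the fitness ranking at the end.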
Say I live in a set, set_1, that has some property p_1. I can think about other sets like my own, say set_n, with similar properties, say p_n. I am confined to set_1, so I can only speculate about what goes on in the other sets, if indeed there be any other sets. I might speculate that all these sets s reside in some big super-set S with its own p-like property, P. But this does not have to follow. I could have an infinite series of sets s, each with its own unique p property, while the super-set to which they belong need not have its own super-p property. It might, but it might not.
This is my problem with "Simulationism". It assumes such a "super P" property for the "super-set containing all universes". This property, of course, is time. The entire universe could be describable by a bit-string (Tegmark et al.), but that does not mean it has to "run" on anything, any more than a super-set S of s worlds, each with property p, has to have its own "super-property" P. I think it is a common misconception, but not an easy one to clear up for people who don't have a technical background.
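A quick worked example makes the set-theoretic point concrete (this is my own toy illustration, not the commenter's): a property shared by every member of a collection need not transfer to the collection itself.

```latex
% Each member set is finite (its p-like property):
s_n = \{0, 1, \dots, n\}, \qquad |s_n| = n + 1 < \infty
% but the super-set containing them all is not:
S = \bigcup_{n \in \mathbb{N}} s_n = \mathbb{N}, \qquad |S| = \aleph_0
```

By the same pattern, every simulated world could carry its own internal time without the ensemble containing them being embedded in any meta-time.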
I don't understand how your model hopes to capture any of the essential features of God as known to academic philosophers. You mention Gödel incompleteness, but this only applies to sets of axioms which can be generated algorithmically. Furthermore, pure logic reveals that a transcendental first cause must be eternal, changeless (without the universe), non-physical, most likely having a mind with free will, etc. Your model fails to capture any of these essential features. In fact, I agree with the poster who says that you should reason about God, rather than try to model Him. Naturally my reason leads me to different inescapable conclusions than his.
> Naturally my reason leads me to different inescapable conclusions than his.
Which should be a big honking, flashing neon sign that you're both arguing without any facts to back you up, i.e., you're both just making stuff up. You might as well argue about whether Santa has white or black fur as the trim on his coat, for all the utility you will get out of such arguments.
> Despite my desire to accept religious teachings, I am constantly prevented by a simple fact: no one has found any physical evidence of something like a soul, or any mechanism which might enable a persistent consciousness beyond our current brain.
For a minute, forget about "persisting beyond death".
Consider consciousness: is there any conceptual model at all of how it arises purely out of the known physical laws?
I can imagine building a computer with strong AI, and I see no reason whatsoever to assume that it would possess any form of consciousness. Given all that we know about how the physical world works, and how computers work, there's no reason to assume that some particular software configuration will give rise to subjective consciousness within a computer.
It's clear to me that my own consciousness is dramatically influenced by my physical environment. After all, anything that physically interacts with my brain will affect my consciousness in some way. Maybe I'm wrong, but it seems that would imply it is a physical phenomenon, no?
The author is missing a key element in his model of life, one which significantly affects the outcome of the thought experiment: his simple loops are independent; they have no shared state. Real existence is not so clean: living beings are constantly being altered by their environment, and conversely altering it. It would be impossible to simply 'save the state' of a living being without saving the state of the entire universe in which it resides (the virtual machine). This makes reincarnation (or at least the very simple model presented in the article) impossible.
One can claim that humans are more like shared-nothing, message-passing processes, which can alter and be altered by their "surroundings" without sharing state per se.
Shared state/memory seems more like a representation of the Borg ;)
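The shared-nothing, message-passing picture from the exchange above can be sketched concretely. This is only an illustrative toy of my own (the names, "mood" state, and numbers are invented): each "being" owns private state that others can influence only by sending messages, never by reaching in directly.

```python
import queue

class Actor:
    """A shared-nothing 'being': private state, altered only via messages."""
    def __init__(self, name, mood=0):
        self.name = name
        self._mood = mood           # private state; nobody else reads it
        self.inbox = queue.Queue()  # the only channel into this actor

    def send(self, other, delta):
        # Influence another actor without ever touching its state.
        other.inbox.put((self.name, delta))

    def step(self):
        # Process pending messages; each one alters private state.
        while not self.inbox.empty():
            _sender, delta = self.inbox.get()
            self._mood += delta

alice, bob = Actor("alice"), Actor("bob")
alice.send(bob, +2)  # alice affects bob...
bob.step()           # ...but only when bob processes his inbox
bob.send(alice, -1)
alice.step()
print(bob._mood, alice._mood)  # prints: 2 -1
```

Saving one actor here means saving only its own state plus its unread inbox, which is exactly the property the shared-state objection says real beings lack.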
[1] http://en.wikipedia.org/wiki/Omnipotence_paradox
[2] http://en.wikipedia.org/wiki/Thanatophobia