I think it will take quite some time for humans to accept robots as entities with rights. There's a lot of fear surrounding the idea of intelligent machines, and most of the deeply religious people I know (family included) consider robots/A.I. to be somewhat evil. We mostly have Hollywood to thank for that.
When the majority of the human population understands technology and its ever-changing limits, the fear and stigma surrounding artificial intelligence comparable to our own will subside, and only then will sentient robots be accepted.
It's human nature to fear the unknown. It's gotten us to where we are today. But I think there comes a point in our timeline where we overcome all fear with our ability to understand and predict. Ironically enough, this will probably require the advancement of (and potentially cooperation with) A.I.
Rather than answer this right now, I would suggest that everyone go watch a season or two of Battlestar Galactica, which probes this question deeply. The Cylons (the "skin job" kind) were subjected to a severe kind of anti-machine racism, even though they were basically indistinguishable from humans.
http://www.amazon.com/gp/product/B002HR17ZG/ref=as_li_ss_tl?...
There will probably be at least four stages, assuming their intelligence is not artificially capped and keeps growing at least until it equals ours:
1) No rights. We can do whatever we want with them; they are our property and we can treat them as pure objects
2) Some rights. Think of it like animal rights, but we still own them
3) "Official" human-like rights. They are free, but there would be "racism" against them at first
4) The exact same rights as a human, officially and unofficially.
Then things might start to change again once they become drastically more intelligent than us, and from there it gets much harder to predict. At first glance, we could say it will be bad for us, but I choose the optimistic view: the smarter they become, the more tolerant they will be, too.
I think it's pretty likely that people will say that a very complete, high-level AI (if it is ever invented) would have a right to life. Actually, there's a fantastic exploration of this from an AI's point of view in Life Artificial by David A. Eubanks:
http://lifeartificial.com/
I think a far more interesting question is: will a human ever be put to death for "killing" an artificial intelligence?
Even if the robots themselves don't get many rights at first, humans will try to win them more by electing politicians who will grant them. Why would humans do that? Because I believe we will become emotionally attached to them. We easily become emotional about our pets, so it should be even easier with a robot that is smarter than a pet. Heck, some people are even somewhat emotional about their iPhones.
The first step toward that attachment will be naming them. I remember reading a couple of years ago about the Roomba cleaning robot: people started naming theirs, and when they sent them away for repair, they demanded the same robot back, not a replacement.
http://www.msnbc.msn.com/id/21102202/ns/technology_and_scien...
http://gizmodo.com/5483750/peoples-emotional-attachment-to-r...
Weren't humans put to death for killing or stealing a horse? When the horse was a vital resource, that was considered the righteous thing to do. So, to answer: if an AI were somehow a vital resource to someone, it could well be that disabling it would get you sentenced to death.
If some countries that have capital punishment haven't become more civilized by the time we have human-level AI, then yes, that would be logical. There is no difference between a mind running on a biological brain and one running on silicon. Killing either is murder.
You know the dating site OKCupid? It matches people by their answers to user-submitted questions.
One of about a dozen questions on there I've marked as "important to me" is one on AI rights. And interestingly enough, I'd say the site's suggestions have become markedly better since I did that; I think that question is a barometer for a lot of things about one's morals and interests.
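For what it's worth, OKCupid has publicly described the gist of its matching math: your answers earn weighted "satisfaction" points against the other person's questions, and the match score is the geometric mean of the two one-way satisfactions. Here's a rough Python sketch of that idea; the weight values and all of the names are assumptions for illustration, not OKCupid's actual code:

    import math

    # Importance weights loosely based on OKCupid's published description
    # of match percentages; the exact values are an assumption here.
    WEIGHTS = {"irrelevant": 0, "a_little": 1, "somewhat": 10,
               "very": 50, "mandatory": 250}

    def satisfaction(their_answers, my_prefs, my_importance):
        # Fraction of my weighted points that their answers earn.
        earned = possible = 0
        for question, level in my_importance.items():
            weight = WEIGHTS[level]
            possible += weight
            if their_answers.get(question) in my_prefs.get(question, ()):
                earned += weight
        return earned / possible if possible else 0.0

    def match_percent(a, b):
        # Match score is the geometric mean of the one-way satisfactions.
        s_ab = satisfaction(b["answers"], a["prefs"], a["importance"])
        s_ba = satisfaction(a["answers"], b["prefs"], b["importance"])
        return 100 * math.sqrt(s_ab * s_ba)

    alice = {"answers": {"ai_rights": "yes"},
             "prefs": {"ai_rights": ("yes",)},
             "importance": {"ai_rights": "very"}}
    print(match_percent(alice, alice))   # 100.0 for a perfect match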
Yes, definitely. Unless it bore a passing resemblance to an Atari game, in which case Atari and Apple would drag it outside and put a bullet in its head.
Isn't that the most likely result? There are a lot of possible things we could call intelligent, and only a few of them could be "like us".
(Granted, anyone trying to make an AI will aim towards something like us, but considering how we still haven't developed AI yet, I imagine the first successful one will be more of a lucky accidentally-on-purpose than anything strictly-designed; even with humans making it, the first success will be quite non-human.)
A truly intelligent and self-aware computer will be created by simply simulating a human brain on a powerful computer. I can't see how that mind would have any less rights than its biological counterparts.
So the answer is: yes, it'll have the exact same rights as we do.
Morally, that is, because the law might take time to catch up.
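To make "simulating a human brain" slightly more concrete, here is a toy Python sketch of the kind of loop a neural simulation runs, using the textbook leaky integrate-and-fire neuron model. Every parameter is arbitrary, and a real whole-brain emulation would need roughly 86 billion neurons rather than a thousand; this only illustrates the shape of the computation:

    import numpy as np

    # Toy leaky integrate-and-fire network: 1,000 neurons with random
    # synapses. Parameters are arbitrary, chosen only to show the loop.
    N, STEPS = 1000, 1000
    DT, TAU = 1e-3, 20e-3            # timestep, membrane time constant (s)
    V_THRESH, V_RESET = 1.0, 0.0     # firing threshold, reset potential

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.1, (N, N))   # synaptic strengths
    v = np.zeros(N)                          # membrane potentials

    for _ in range(STEPS):
        spikes = v >= V_THRESH               # which neurons fire now
        v[spikes] = V_RESET                  # firing neurons reset
        drive = rng.normal(0.05, 0.02, N)    # noisy external input
        # Leak toward rest, plus input from neurons that just spiked.
        v += (DT / TAU) * -v + weights @ spikes.astype(float) + drive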
If the simulation decides to copy itself, and allows the copy to run independently, are there now two autonomous people?
If the copy is allowed to advance one nanosecond and is then deleted, leaving the original running, has a murder been committed? What if the copy never runs at all? What if the original is deleted instead of the copy?
What if the copy receives identical input, and thus has identical state as the original, up to the point it is deleted? What if some aspects of the copy are "optimized" by reusing the results of the original? What if the entire copy just mirrors the state of the original, without recomputing anything? How can we say exactly how many copies actually exist?
What if the original simulation modifies the copy so that it wants to kill itself, which it does, once it starts running? What if it's modified to just not care whether it lives or dies? What if the copy is modified in various other ways? Are some kinds of modifications ethical, while others are not?
Point being, this course of AI development does not spare us having to completely rethink our models of ethics.
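To make the bookkeeping problem concrete: a copy of a deterministic simulation is just bytes, and two copies fed identical input stay bit-for-bit identical, so there is no principled sense in which one of them is "the" person. A minimal Python sketch (MindState is a made-up stand-in, not anyone's actual design):

    import copy

    class MindState:
        # Made-up stand-in for a brain simulation's full state.
        def __init__(self, memory):
            self.memory = list(memory)

        def step(self, percept):
            # Deterministic update: same input always yields same state.
            self.memory.append(percept)

    original = MindState(["childhood", "first job"])
    clone = copy.deepcopy(original)

    for percept in ["sunrise", "coffee"]:   # feed both identical input
        original.step(percept)
        clone.step(percept)

    print(original.memory == clone.memory)  # True: identical state
    print(original is clone)                # False: two objects
    del clone                               # was anything lost?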
Well, we certainly don't do that with animals. Only past a certain threshold of intelligence do we stop wanting to hurt or eat them, or when they count as wildlife, which we generally want to preserve, and even that is only a recent development in our history.
If aliens existed and came here, I wonder whether they would treat us the same way depending on whether they were only 2x, 10x, or 1000x more intelligent than we are. Of course, it's also possible that the more intelligent they are, the more they will want to preserve any kind of life, and will try to get the resources they need to survive from something else.
It should get interesting if we ever develop human-brain-like artificial intelligence and then amplify it 1000x by following Moore's law for a couple of decades. Kill switches would become useless. We'd just have to teach them to value life more than anything, before it's too late.
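The 1000x figure is just Moore's law arithmetic: at the classic rate of one doubling every two years, 1000x is about ten doublings, i.e. about twenty years:

    # Moore's law rule of thumb: capability doubles every ~2 years.
    factor, years = 1, 0
    while factor < 1000:
        factor *= 2
        years += 2
    print(f"{factor}x after {years} years")   # 1024x after 20 years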
Under the thesis that whatever survives on its own and in groups has a right to live:
1) Plants survive on their own and in groups. Therefore, plants have the right to live. (Don't step on the grass!)
2) Bacteria survive on their own and in groups. Therefore, bacteria have the right to live and should not be eradicated. (Sorry, hand sanitizer.)
3) Cows survive on their own and in groups. Therefore, cows have the right to live and should not be slaughtered. (So much for that steak last night.)
So yes, under this thesis, AI & robots have a right to live, regardless of their threat or benefit to society and humans.