pegasus|2 years ago
If you subscribe to a purely mechanistic world-view, i.e. computationalism, then yes. But that's a leap of faith I cannot justify taking. It's a matter of faith because, though we cannot exclude the possibility logically, it also doesn't follow necessarily from our experience of life, at least as far as I can see. Yes, many times throughout the ages scientists have discovered mechanisms to explain things we were historically convinced would always remain outside the purview of science.
But that doesn't mean everything will one day be explained. And one thing that remains unexplained is our consciousness. The problem of qualia. Free will. The problem of suffering. We simply don't understand these. Maybe they are mere epiphenomena, maybe they are false problems. But when it comes to software systems, we know with certainty that they don't have free will, don't experience qualia, pain, hope, or I-ness.
Sure, it's a difference that disappears if one takes that leap of faith into computationalism. Then, to maintain integrity, one would have to show the same deference to these models as one shows to one's fellow humans. One would have to think hard about not overworking these already enslaved fellow beings. One would have to consider fighting for the rights of these models.
simonh|2 years ago
> Then, to maintain integrity, one would have to show the same deference to these models as one shows to their fellow human.
Except they’re not even remotely close to anything like human intelligence. As I wrote in another comment, they are very capable systems, to the point where in some ways they show some level of elementary understanding, but in many forms of reasoning they are utterly and completely incapable. Assigning them human-equivalent cognitive status is patently absurd. And yes, I am a physicalist, and I see no reason why a computer system could not achieve human-equivalent cognitive ability. These just aren’t that. They may be an important step towards it, though.