(no title)
raindeer2 | 1 year ago
A better way of thinking about qualia is like embeddings in a neural network. Every time you run the training from a random initialization you will get a different resulting embedding, but given the same training data the embeddings will all be essentially equivalent under rotation. I.e., your internal representation of blue might be very different from mine in an absolute sense, but our relations between the representations of different colors will be roughly the same.
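The "equivalent under rotation" claim can be sketched in a few lines of numpy. This is a toy illustration, not a real training run: two "runs" are simulated by applying different random orthogonal transforms to a shared base embedding, then pairwise distances (the relational structure) are shown to match and one run is recovered from the other by a single orthogonal map (Procrustes via SVD).

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared "relational structure": a toy embedding of 5 colors in 3-D.
base = rng.normal(size=(5, 3))

def random_orthogonal(n, rng):
    # QR of a random Gaussian matrix gives a random orthogonal matrix;
    # the sign fix makes the distribution uniform (Haar).
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))

# Two "training runs" that encode the same relations in different bases.
run_a = base @ random_orthogonal(3, rng)
run_b = base @ random_orthogonal(3, rng)

# Absolute coordinates differ...
print(np.allclose(run_a, run_b))  # False

# ...but pairwise distances (the relations between colors) are identical.
def pairwise(e):
    return np.linalg.norm(e[:, None] - e[None, :], axis=-1)

print(np.allclose(pairwise(run_a), pairwise(run_b)))  # True

# And one run maps onto the other via a single orthogonal transform,
# found by the orthogonal Procrustes solution (SVD of the cross-product).
u, _, vt = np.linalg.svd(run_a.T @ run_b)
rotation = u @ vt
print(np.allclose(run_a @ rotation, run_b))  # True
```

Strictly, the recovered map may be a rotation or a reflection (both are orthogonal); either way the relational structure is preserved, which is the point of the analogy.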
soulofmischief | 1 year ago
The point is that you may not be able to prove that for a sufficiently connected system.
> A better way of thinking about qualia is like embeddings in a neural network.
I don't think this is a good analogy. Even if two brains have the same input and output pairs, you don't necessarily know that they had the same experiences. You also don't know that initial epigenetic and prenatal conditions haven't deviated between the two brains. It would be extremely hard to control for this in a lab, and you certainly won't encounter such similarities in the wild.
> your internal representation of blue might be very different from mine in an absolute sense but our relation between the representation of different colors will be roughly the same.
I think you misunderstand qualia, as this is the exact crux of the argument. Just because we can agree on relational congruences doesn't mean we have the same individual internal experiences. And we can't just handwave this away as "not important" or "roughly the same". In any case, your argument is self-defeating, as any deviation in internal experience validates my original claim.
Further reading on qualia: https://en.wikipedia.org/wiki/Qualia
northern-lights | 1 year ago
On the contrary, I would say it's quite unlikely that two brains with the same inputs and outputs will ever have the same experiences in deriving the output from the input. But neither of us (nor anyone in the world) knows how human brains work, so it is probably not useful to debate this.