Self-driving cars don't experience visual qualia. A model is not the same as an actual experience. There's no binding problem to solve with self-driving cars because there are no attempts to make them conscious. It's a completely different thing.
There's no reason to believe qualia arise in a given discrete computation. Why would they? At what steps in the algorithm do qualia arise, and why? What characteristics do they have, what causal roles do they play, etc.?
The input system is discrete, but the end result, our conscious experience of our world-simulations (made up of visual qualia), is not discrete. It is unified.
monocasa|5 years ago
Prove that. Or alternatively prove that you or I do.
I'll also note that you didn't address the citations for the inherent discreteness of our visual systems.
ZeroFries|5 years ago
It's completely self-evident that we experience qualia. It's what our experiences are made of. There wouldn't be anything to experience or discuss if we didn't. The brain is not a deliberate, man-made object like a computer is, which is why it can possess these properties without our being aware of how (they were selected for by evolution), but the computer cannot.
ZeroFries|5 years ago
An example of how this could be implemented (not saying this is the case, just one of several possibilities):
https://en.wikipedia.org/wiki/Electromagnetic_theories_of_co...