top | item 45810904

subb | 3 months ago

They are very useful for encoding stimuli, but a stimulus is "not yet" a color. Once you have an image that is more than a single patch of RGB values, many things influence what color you will perceive from the exact same RGB triple.

Akiyoshi's color constancy demonstrations are good examples of this. The RGB model (and any three-value "perceptual" model) fails to predict the perceived color here: you see different colors even though the RGB values are identical.

https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...
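The same point can be made with a minimal simultaneous-contrast stimulus (a sketch, not Akiyoshi's actual image): two patches that are byte-for-byte identical in RGB, placed on different surrounds. The dimensions and values below are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical simultaneous-contrast stimulus: two center patches with
# the *same* RGB value, one on a dark surround and one on a light one.
H, W = 100, 200
img = np.zeros((H, W, 3), dtype=np.uint8)
img[:, :100] = 40    # dark surround (left half)
img[:, 100:] = 220   # light surround (right half)

patch = np.full((40, 40, 3), 128, dtype=np.uint8)  # mid-gray patch
img[30:70, 30:70] = patch     # patch on the dark surround
img[30:70, 130:170] = patch   # same patch on the light surround

# The encoded stimuli are byte-for-byte equal...
assert np.array_equal(img[30:70, 30:70], img[30:70, 130:170])
# ...so any model mapping a lone RGB triple to a "color" must predict
# the same color for both, contrary to what a viewer perceives.
```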

dahart | 3 months ago

Here you’re talking about only perception, and not physical color. You could use 100 dimensional spectral colors, or even 1D grayscale values, and still have the same result. So this example doesn’t have any bearing on whether a 3D color space works well for humans or not. Do you have any other examples that suggest a 3D color space isn’t good enough? I still don’t understand what you meant.

subb | 3 months ago

Yes, exactly. I'm intentionally using "color" as a perceptual thing, not a physical one. If we are talking about a color model, then it needs to model perception. As such, RGB, as a predictor of perception, often fails because it accounts only for what hits the retina, not for what happens after. For one, it lacks spatial context: the same RGB value with a different surround will look different, as in the example above. But if Photoshop had a real color picker (as in, a perceptual one), you would get a different value for each patch.

It's excellent at compressing the visible part of the EM spectrum, however. That is what I meant by stimulus encoding.
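That compression can be sketched as projecting a full spectral power distribution onto three sensitivity curves. The Gaussians below are crude stand-ins for the real CIE color matching functions (or cone responses), chosen only to show the dimensionality reduction:

```python
import numpy as np

wavelengths = np.arange(380, 781, 5)  # visible range in nm, 81 samples

def gaussian(mu, sigma):
    # Hypothetical bell-shaped sensitivity curve, not real CMF data.
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Stand-ins for long/medium/short-wavelength sensitivities.
sens = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])

# Any spectral power distribution (81 samples here) is reduced to just
# three numbers by integrating against the three sensitivities.
spectrum = gaussian(520, 60)        # some example light
tristimulus = sens @ spectrum * 5   # Riemann sum, dλ = 5 nm
print(tristimulus.shape)            # three values encode the stimulus
```

Many physically different spectra collapse onto the same three values (metamerism), which is exactly why three numbers suffice as a stimulus encoding for trichromatic vision.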