belugacat | 2 years ago
What captures an image is an imaging surface: traditionally a chemical emulsion on a piece of film, now a dense array of photosites on a digital sensor.
This imaging surface is of human design; it therefore images what its designers intended it to image. But don't forget that it is a sampling of reality: by definition always partial, and biased (biased toward roughly the 400–700 nm visible range, for starters).
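The "partial and biased sampling" point can be made concrete with a toy model (all numbers here are hypothetical, not any real sensor's response curve): a pixel value is just the scene's spectrum weighted by the sensor's spectral response, and anything outside that response is simply not recorded.

```python
import numpy as np

# Hypothetical scene spectrum: equal energy from 300 to 900 nm
# (deliberately including UV and IR the sensor will ignore).
wavelengths = np.arange(300, 901)            # nm, 1 nm spacing
scene = np.ones_like(wavelengths, dtype=float)

# Hypothetical sensor response: a Gaussian centred in the visible band,
# effectively zero outside ~400-700 nm -- the bias described above.
response = np.exp(-0.5 * ((wavelengths - 550) / 60.0) ** 2)

# The recorded pixel value is a single number: the weighted sum of the
# whole spectrum (a Riemann sum, since the spacing is uniform 1 nm).
pixel = np.sum(scene * response)

# Fraction of the sample that comes from the visible band: nearly all of
# it, even though a third of the scene's energy lies outside that band.
visible = (wavelengths >= 400) & (wavelengths <= 700)
frac_visible = np.sum((scene * response)[visible]) / pixel
```

The sample is real information about the scene, but the UV and IR energy contributes almost nothing to it: the design of the surface decides what gets imaged.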
TeMPOraL | 2 years ago
This does not matter in any way. What matters is that what comes out the other end of the filtering and bias is highly correlated with what came in, and carries information about the imaged phenomenon.
That is what both analog film and digital sensors were designed for. The captured information is then preserved through most forms of post-processing, also by design. Computational photography, in contrast, destroys that information for the sake of producing something that "looks better".
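The contrast drawn above, information-preserving post-processing versus destructive computational steps, can be sketched as a toy example (not any real pipeline; the clip-and-smooth step is a deliberate caricature of aggressive processing):

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.random(1000)  # hypothetical linear sensor values in [0, 1]

# Traditional tone curve (a gamma adjustment): strictly monotonic, hence
# invertible. The original values can be recovered exactly, so the
# captured information survives the edit.
curved = raw ** (1 / 2.2)
recovered = curved ** 2.2

# A caricature of an aggressive computational step: hard clipping plus
# heavy local averaging. Many distinct inputs map to the same output,
# so the mapping cannot be inverted -- information is destroyed.
processed = np.convolve(np.clip(raw, 0.2, 0.8), np.ones(9) / 9, mode="same")
```

The point is not that smoothing or clipping is always wrong, but that unlike a tone curve, such many-to-one operations discard the correlation with the original phenomenon that the sensor was designed to capture.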