ed312|1 year ago
This is an excellent point, and I don't know exactly where to draw the line ("I know it when I see it"). I personally use "auto" (probably heuristic, maybe soon-ish AI-powered) features to adjust levels, color balance, etc. Using AI to add things that are _not at all present_ in the original crosses the line from photography into digital art for me.
Toutouxc|1 year ago
But IMO it’s a point worth bringing up: most people have no idea how digital photography works, or how difficult it is to measure, quantify, and interpret the analog signal coming off a camera sensor before the result even resembles an image.
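To make the point concrete, here is a hypothetical minimal sketch of the kind of processing every camera must do before raw sensor counts resemble an image: demosaicing a Bayer color mosaic, applying white-balance gains, and gamma-encoding. The pattern, gains, and sample values are invented for illustration; real ISP pipelines add black-level subtraction, denoising, color-matrix transforms, and much more.

```python
def demosaic_bilinear(raw, width, height):
    """Naive demosaic of an RGGB Bayer mosaic (rows of sensor counts
    in 0..1). Each pixel records only one color; the two missing
    channels are filled with the average of matching neighbors."""
    def bayer_color(x, y):
        # RGGB pattern: even rows are R G R G..., odd rows are G B G B...
        if y % 2 == 0:
            return "R" if x % 2 == 0 else "G"
        return "G" if x % 2 == 0 else "B"

    def neighbors(x, y, want):
        vals = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height \
                        and bayer_color(nx, ny) == want:
                    vals.append(raw[ny][nx])
        return sum(vals) / len(vals) if vals else 0.0

    out = []
    for y in range(height):
        row = []
        for x in range(width):
            row.append(tuple(
                raw[y][x] if bayer_color(x, y) == c else neighbors(x, y, c)
                for c in ("R", "G", "B")
            ))
        out.append(row)
    return out

def white_balance_and_gamma(rgb_rows, gains=(2.0, 1.0, 1.5), gamma=1 / 2.2):
    """Apply per-channel white-balance gains (made-up values), clip to
    1.0, and gamma-encode so the linear counts look right on a display."""
    return [
        [tuple(min(1.0, v * g) ** gamma for v, g in zip(px, gains))
         for px in row]
        for row in rgb_rows
    ]

# Tiny 4x4 mosaic of made-up sensor counts in 0..1.
raw = [
    [0.20, 0.40, 0.20, 0.40],
    [0.40, 0.10, 0.40, 0.10],
    [0.20, 0.40, 0.20, 0.40],
    [0.40, 0.10, 0.40, 0.10],
]
image = white_balance_and_gamma(demosaic_bilinear(raw, 4, 4))
```

Even this toy version has to make interpretive choices (how to interpolate, what the white point is, what tone curve to apply), which is the sense in which any digital photo is already a reconstruction rather than a direct recording.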
Someone|1 year ago
Probably not exactly the same side and orientation. https://en.wikipedia.org/wiki/Libration#Lunar_libration: “over time, slightly more than half (about 59% in total) of the Moon's surface is seen from Earth due to libration”
TeMPOraL|1 year ago
I would object slightly less if they made a model (3D or AI) that captures the whole near side of the Moon in high detail, and used it, combined with precise location and date/time, to guide resolving the blob in the camera input into a high-resolution rendering *that matches, with high accuracy and precision, what the camera would actually see if it had better optics and a better sensor*. It still feels like faking things, but at least the goal would be to match reality as closely as possible.