wombat_trouble | 3 years ago

"Real" cameras do a lot of postprocessing too, but it's generally oriented at producing faithful results. They might remove unambiguous and correctable issues such as vignetting or lens distortion, but they don't cross the line of inventing new details to make the photo look good.

Computational photography techniques on smartphones, on the other hand, were always designed around squishy "user perception" goals to make photos look impressive, details be damned.

jlarocco | 3 years ago

I didn't see any invented "new details" in the article's iPhone photos. Phones have small sensors and crap lenses, so they ramp up noise reduction and sharpening to make up for it. Turn up the ISO and max out the NR on the Fujifilm and the results would be nearly as bad.
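To make that concrete, here is a rough sketch of a "max out the NR, then sharpen" pipeline, using a plain Gaussian blur as a stand-in for a real denoiser (the sigma and amount values are arbitrary). Fine texture such as small text gets smeared away by the denoise step, and the unsharp mask then manufactures clean-looking edges out of the smear, which is roughly where the mushy, overdrawn look comes from.

    # Crude "heavy NR + sharpening" pipeline (illustrative only).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def crush_and_sharpen(img, nr_sigma=2.5, amount=1.5):
        """img: float grayscale image in [0, 1]."""
        denoised = gaussian_filter(img, sigma=nr_sigma)        # aggressive "noise reduction"
        blurred = gaussian_filter(denoised, sigma=1.0)
        sharpened = denoised + amount * (denoised - blurred)   # unsharp mask
        return np.clip(sharpened, 0.0, 1.0)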

wyager | 3 years ago

The text in the iPhone shot looks qualitatively very little like the real text. Almost all of its details (including the shapes of the strokes!) are hallucinated by the iPhone.

I could import the X-T5 photo into Lightroom or whatever and crank NR all the way up, and I don't think it would look anything like the iPhone image. Also, the less-processed image on the iPhone (which you see for a split second) looked fine, so there wasn't enough noise to justify this level of "correction".

My guess is the iPhone got confused by the texture of the anodized aluminum.

wonnage | 3 years ago

At the end of the day you're always "inventing new details" to turn sensor data into an image. Most demosaicing algorithms involve edge detection and predicting correlations between color channels, and you'll run into cases where false detail is added and reality is bent to fit the priors.
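As a concrete example, a minimal bilinear demosaic sketch in Python (RGGB Bayer layout assumed). Two of the three color values at every pixel are predictions made from neighboring samples, not measurements; real pipelines layer edge-directed interpolation on top of this, and those priors are exactly what misfires in the pathological cases.

    # Minimal bilinear demosaic (RGGB Bayer pattern assumed), illustrative only.
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """raw: (H, W) Bayer mosaic, RGGB layout, float in [0, 1]."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        # Weighted average of the sampled neighbors only; every value the
        # sensor didn't measure at this location is a guess derived from them.
        def interp(mask, kernel):
            num = convolve(raw * mask, kernel, mode='mirror')
            den = convolve(mask, kernel, mode='mirror')
            return num / np.maximum(den, 1e-6)

        k_rb = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
        k_g  = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]]) / 4.0
        return np.clip(np.dstack([interp(r_mask, k_rb),
                                  interp(g_mask, k_g),
                                  interp(b_mask, k_rb)]), 0.0, 1.0)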

One can find pathological cases for traditional cameras too - moiré is a common problem, Fuji X-Trans sensors historically had a watercolor/worms effect, particularly in greenery, etc.