item 44185007

semidror | 9 months ago

Would it be possible to point out more details about where Apple got the math wrong and which inaccurate approximations they use? I'm genuinely curious and want to learn more about it.

dheera | 9 months ago

It's not that they deliberately made a math error; it's that the algorithm is very crude. It basically just blurs everything outside what's deemed to be the subject with a triangular, Gaussian, or other computationally simple kernel.

What real optics does:

- The blur kernel is a function of the shape of the aperture, which is typically circular at wide apertures and hexagonal when stopped down. It is not Gaussian and not triangular, and because the kernel is itself a function of the depth map, the blur does not parallelize efficiently.

- The amount of blur is a function of the distance to the focal plane, typically closer to a hyperbola; most phone camera apps just use a constant blur and don't account for this at all.

- Lens aberrations are often thought of as defects, but if you generate something too perfect it looks fake.

- Diffraction at the sharp corners of the mechanical aperture creates starbursts around highlights.

- When out-of-focus highlights get blown out, more than just the center area blows out; some of the blurred area clips too. If you clip first and then blur, your blurred areas will be less than fully blown out, which also looks fake.

Probably a bunch more things I'm not thinking of, but you get the idea.
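To make two of those points concrete, here's a minimal NumPy sketch (my own toy example, not any vendor's actual pipeline): a uniform disc kernel like the one a circular aperture produces, and the difference between clipping an HDR highlight before vs. after the blur.

```python
import numpy as np

def disc_kernel(radius):
    """Circular-aperture kernel: a uniform disc, unlike a Gaussian falloff."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def convolve2d(img, k):
    """Naive 'same' convolution; fine for a toy-sized image."""
    r = k.shape[0] // 2
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k.shape[0]):
        for dx in range(k.shape[1]):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Scene in linear light: an HDR highlight far brighter than display white (1.0).
img = np.zeros((21, 21))
img[10, 10] = 50.0  # out-of-focus specular highlight

k = disc_kernel(4.0)

# Physically plausible order: blur in linear HDR light, then clip to display range.
blur_then_clip = np.clip(convolve2d(img, k), 0, 1)

# Common fake-bokeh mistake: clip to display range first, then blur.
clip_then_blur = convolve2d(np.clip(img, 0, 1), k)

print(blur_then_clip.max(), clip_then_blur.max())
```

Blurring first leaves the entire bokeh disc blown out at 1.0; clipping first produces a dim ghost of the disc, which is exactly the less-than-blown-out fakeness described above.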

jjcob | 9 months ago

The iPhone camera app does a lot of those things. The blur is definitely not a Gaussian blur; you can clearly see a circular aperture.

The blurring is also a function of distance; it's not constant.

And blowouts are pretty convincing too. The HDR sources probably help a lot with that. They are not just clipped then blurred.

Have you ever looked at an iPhone portrait mode photo? For some subjects they are pretty good! The bokeh is beautiful.

The most significant issue with iPhone portrait mode pictures is the subject boundary, which often looks bad. Frizzy hair always ends up as a blurry mess.

qingcharles | 9 months ago

Any ideas what the Adobe algorithm does? It certainly has a bunch of options for things like the aperture shape.

xeonmc | 9 months ago

Re: parallelization, could a crude 3D-FFT-based postprocessing step achieve a slightly improved result relative to the current splat-ish approach while still being a fast-running approximation?

i.e., train a very small ML model on various camera parameters vs. the resulting reciprocal-space transfer function.
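For what it's worth, the idea could be sketched roughly like this (function names and the layering scheme are hypothetical, not an existing implementation): quantize the depth map into a few layers, blur each layer by one multiply in reciprocal space, and composite with normalized weights. The per-layer transfer function is the object a small model could be trained to predict from camera parameters.

```python
import numpy as np

def disc_otf(shape, radius):
    """Transfer function of a uniform disc kernel: the FFT of a centered
    disc. This is the 'reciprocal space transfer function' one could in
    principle learn as a function of camera parameters."""
    h, w = shape
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    disc = (x**2 + y**2 <= max(radius, 0.5) ** 2).astype(float)
    return np.fft.rfft2(disc / disc.sum())

def layered_fft_blur(img, depth, focal, n_layers=4, max_radius=6.0):
    """Crude depth-of-field pass: quantize depth into a few layers, blur
    each layer with one FFT multiply, and composite with normalized
    weights. Each layer's blur is spatially constant, so the whole pass
    is a handful of FFTs rather than a per-pixel varying kernel."""
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    layer = np.digitize(depth, edges[1:-1])  # layer index 0 .. n_layers-1
    span = depth.max() - depth.min() + 1e-9
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for i in range(n_layers):
        mask = (layer == i).astype(float)
        mid = 0.5 * (edges[i] + edges[i + 1])
        otf = disc_otf(img.shape, max_radius * abs(mid - focal) / span)
        # Blur the masked layer and its coverage mask with the same OTF.
        out += np.fft.irfft2(np.fft.rfft2(img * mask) * otf, s=img.shape)
        weight += np.fft.irfft2(np.fft.rfft2(mask) * otf, s=img.shape)
    return out / np.maximum(weight, 1e-6)

# Sanity check: a constant image must survive the blur unchanged.
img = np.ones((32, 32))
depth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
res = layered_fft_blur(img, depth, focal=0.5)
print(float(res.min()), float(res.max()))
```

This still ignores most of the physical effects listed upthread (layer boundaries, occlusion, aberrations), but it shows why a reciprocal-space formulation parallelizes well: each layer is a constant-kernel convolution, which FFTs handle in one shot.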