nateroling | 1 year ago

I had the same thought, but it sounds like this operates at a much lower level than that kind of thing:

> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.

Intralexical | 1 year ago

I'd like to see some examples showing how it does when taking a picture of completely random fractal noise. That should show it's not just trained to reconstruct known image patterns.

Generally it's probably wise to be skeptical of anything that appears to get around the diffraction limit.
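
The "random fractal noise" test is easy to set up in practice. Here is a minimal sketch (my own illustration, not anything from the paper) that synthesizes 2-D fractal noise by shaping white noise with a 1/f^beta power spectrum; the exponent `beta` and the normalization are arbitrary choices:

```python
import numpy as np

def fractal_noise(size=256, beta=2.0, seed=0):
    """Synthesize 2-D fractal (1/f^beta) noise via spectral shaping."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size)
    fx, fy = np.meshgrid(f, f)
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid division by zero at the DC component
    spectrum = np.fft.fft2(white) / radius ** (beta / 2)
    img = np.real(np.fft.ifft2(spectrum))
    img -= img.min()          # normalize to [0, 1] for display/printing
    img /= img.max()
    return img

noise = fractal_noise()
```

Photograph a print of that and compare it to the digital original; a reconstruction network that has merely memorized natural-image statistics should do noticeably worse on it.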

brookst | 1 year ago

I believe the claim is that the NN is trained to reconstruct pixels, not images. As in so many areas, the diffraction limit is probabilistic, so combining information from multiple overlapping samples with NNs trained on known diffracted -> accurate pairs may well recover information.

You’re right that it might fail on noise with resolution fine enough to break assumptions from the NN training set. But that’s not a super common application for cameras, and traditional cameras have their own limitations.

Not saying we shouldn’t be skeptical, just that there is a plausible mechanism here.
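
To illustrate the mechanism in its simplest classical form (this is my own toy sketch, not the paper's physics-based network): if the forward blur is known, even plain Wiener deconvolution recovers information from a degraded image. A learned model trained on diffracted -> accurate pairs is doing a more powerful, data-driven version of the same inverse problem. The Gaussian PSF, grid size, and regularization constant `k` below are all arbitrary assumptions for the demo:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Recover an image from a blur with a *known* point-spread function."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G  # Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# Toy forward model: a sharp square imaged through a Gaussian PSF.
size = 64
target = np.zeros((size, size))
target[24:40, 24:40] = 1.0
yy, xx = np.mgrid[:size, :size]
psf = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(target) * H))

recovered = wiener_deconvolve(blurred, psf)
mse_blurred = np.mean((blurred - target) ** 2)
mse_recovered = np.mean((recovered - target) ** 2)
```

The recovered image lands measurably closer to the target than the blurred one, because the filter undoes the known degradation up to the noise floor set by `k`. The open question for the meta-optics work is how far a trained network can push that beyond what the classical filter allows.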