photonic37 | 4 years ago
The major difference between a microphone array and an imaging sensor is the availability of phase information for the received wave. A microphone's diaphragm oscillates with the sound pressure wave, and that oscillation is translated directly into a voltage. Your software can see the full time series of that wave, so the information about it is 'complete'.
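The 'complete information' point can be sketched numerically: with two microphones and their raw time series, the relative delay between channels (which is exactly the phase information) pins down the arrival direction. The sample rate, mic spacing, and tone below are made-up numbers for illustration, not anything from the comment:

```python
import numpy as np

# Two microphones sample the same pressure wave; the delay between their
# time series encodes the arrival angle. All parameters are assumed.
fs = 48_000               # sample rate, Hz
c = 343.0                 # speed of sound, m/s
spacing = 0.2             # mic spacing, m
angle = np.deg2rad(30.0)  # true arrival angle from broadside

true_delay = spacing * np.sin(angle) / c        # seconds
delay_samples = int(round(true_delay * fs))     # ~14 samples here

t = np.arange(0, 0.05, 1 / fs)
signal = np.sin(2 * np.pi * 1000 * t)           # 1 kHz tone, 50 full cycles
mic1 = signal
mic2 = np.roll(signal, delay_samples)           # delayed copy at mic 2

# Cross-correlate to recover the delay, then invert for the angle.
corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (len(mic1) - 1)
est_angle = np.rad2deg(np.arcsin(lag / fs * c / spacing))
print(round(est_angle, 1))                      # ~30.0 degrees
```

This is the core of delay-and-sum beamforming, and it only works because the electronics are fast enough to sample the acoustic wave directly.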
An optical image sensor, essentially, turns photons into electrons. The optical wave oscillates far too fast to capture as a voltage time series, so each pixel only records the wave's intensity, integrated over the exposure; the phase is averaged away. Therefore, in order to turn it into an image, you need to recover some fraction of the phase information in some other way.
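You can see why the phase is unrecoverable with a toy calculation (my own illustration, not the commenter's). A visible-light wave oscillates at ~10^14 Hz while exposures last milliseconds, so a pixel averages |E|^2 over a vast number of cycles; two fields that differ only in phase produce identical readings. Time below is in units of optical periods:

```python
import numpy as np

# A pixel integrates intensity |E|^2 over many optical cycles,
# so the field's phase never reaches the electronics.
t = np.linspace(0.0, 200.0, 200_001)         # 200 optical cycles
e_a = np.cos(2 * np.pi * t)                  # field with phase 0
e_b = np.cos(2 * np.pi * t + np.pi / 3)      # same field, shifted phase

i_a = np.mean(e_a ** 2)                      # what the pixel records
i_b = np.mean(e_b ** 2)
print(np.isclose(i_a, i_b, rtol=1e-4))       # True: both ~0.5, phase is gone
```

Contrast that with the microphone case, where the full waveform (and hence the phase) survives into the voltage signal.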
A pinhole is one way to do that. One way to think of a pinhole is that it maps every source point to a distinct point on the imaging plane, so the phase of the wave doesn't matter as much to the final image. It acts as a spatial filter, cutting out the ambiguous rays that phase information would otherwise have been needed to disambiguate.
A lens performs a similar operation by interacting with the light wave's phase to bend wavefronts in a way that maps points on the object to an imaging plane.
Those approaches don't recover 100% of the phase information, but they recover or filter enough to form the image you care about. Light field cameras attempt to recover more complete phase information in various ways, better explored in the Wikipedia link.
Could you create a sound-blocking plane with a pinhole, making an acoustic camera that follows similar principles to an optical camera obscura? I bet at some level you could, but I also bet it would not be very advantageous. You still need a microphone array to act as the imaging plane, and the size of the pinhole is tightly constrained by diffraction (sound is a pretty long wave, after all, compared to light). Using the directly available acoustic phase information is more compact and efficient.
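A back-of-envelope number makes the diffraction problem concrete. Borrowing Rayleigh's pinhole-camera rule of thumb from optics, d ≈ 1.9·sqrt(f·λ) for focal depth f and wavelength λ (the 1 kHz tone and 1 m box depth are assumptions of mine, and carrying the optical rule over to sound is itself an approximation):

```python
import math

# Rough size of an "acoustic pinhole" for a camera-obscura-style imager,
# using Rayleigh's optimal pinhole diameter d ~ 1.9 * sqrt(f * lambda).
c = 343.0              # speed of sound, m/s
freq = 1_000.0         # 1 kHz tone
wavelength = c / freq  # ~0.34 m
depth = 1.0            # pinhole-to-microphone-plane distance, m

d = 1.9 * math.sqrt(depth * wavelength)
print(f"'pinhole' diameter ~ {d:.2f} m")   # ~1.11 m
```

The "pinhole" comes out wider than the camera is deep, which is roughly why sampling the acoustic phase directly with a mic array wins.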