Something I always find pretty mind-blowing is that a physical thin lens can also perform a Fourier transform.
This (generally) only applies to coherent light, so it's not something we're used to every day, but it forms the basis of a lot of laser-based optical technologies. Combined with some other effects you can do "all-optical" signal processing [1,2], or create dynamic, "programmable" holograms [3].
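As a rough numerical sketch of that lens/Fourier relationship: in the Fraunhofer (far-field) regime, which a thin lens reproduces at its focal plane, the field is the 2D Fourier transform of the aperture. The slit aperture below is made up for illustration, not taken from any of the linked papers:

```python
import numpy as np

n = 256
aperture = np.zeros((n, n))
aperture[:, n // 2 - 4 : n // 2 + 4] = 1.0   # vertical slit, 8 pixels wide

# Far-field pattern = 2D Fourier transform of the aperture
far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2           # what a sensor would record

# The slit varies only horizontally, so after fftshift all the energy sits
# on the horizontal axis through the centre, peaking at the DC term.
center_row = intensity[n // 2]
```

Sweeping the slit width shows the familiar reciprocal relationship: a narrower slit spreads the pattern out wider.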
Looking at the page source they seem to be using the library from Project Nayuki[0]. I'm actually quite amazed at how few lines of code are required for an FFT implementation[1].
Anyway, I might give a simple draw-on-canvas website a go, give me a few hours.
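For a sense of just how few lines are needed, here is a hedged sketch of a radix-2 Cooley-Tukey FFT in the spirit of Nayuki's "small FFT" page -- my own toy version, not the linked code itself:

```python
import cmath

def fft(x):
    """Minimal recursive radix-2 Cooley-Tukey FFT.

    len(x) must be a power of 2. No attempt at performance --
    just the bare algorithm.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # transform of even-indexed samples
    odd = fft(x[1::2])           # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

An impulse input gives a flat spectrum, and a constant input gives a single DC spike, which is a quick sanity check.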
I don't think this is exactly what you want, but you might like it anyway. You draw and it breaks your drawing down into a 2D Fourier series, and you can control the number of Fourier terms, which essentially acts like a crude low-pass filter:
You can't just "draw" on the Fourier side because it actually consists of complex numbers, and only the magnitude is plotted. And for a real image the Fourier plot has to be conjugate-symmetric (its magnitude is symmetric under a 180° rotation). The closest you could get is increasing or decreasing the magnitude of pixel regions.
Maybe a DCT would be more amenable to hand-editing because it's real and has no symmetry constraints.
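To make that symmetry constraint concrete, here is a small sketch (the 8x8 random array is just a stand-in for an image):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # any real-valued "image"

F = np.fft.fft2(img)

# For real input, the 2D DFT is conjugate-symmetric:
#   F[u, v] == conj(F[-u, -v])   (indices taken mod the image size).
# So you can't edit magnitudes freely -- each edit implies a mirrored one.
idx = (-np.arange(8)) % 8
sym = np.conj(F[idx][:, idx])
```

A DCT of the same real input, by contrast, is itself real, which is why it would be friendlier to hand-editing.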
> I'd love to be able to draw on the Fourier side.
Notice that the direct and inverse transforms are essentially the same transformation. So you can draw on the image side and swap the two images in your mind.
This is an interactive demo that lets you explore how to remove certain wavelength-related (frequency) content from an image. By wavelength, I mean intensities that vary at some rate of change per pixel, either horizontally or vertically across the image. The low-cut slider sets a threshold below which content is removed, and the high-cut slider sets an equivalent threshold above which content is removed.
I think it is interesting because it reveals a couple of things. First, most of the content we find essential to the image is in the low frequencies (the high-cut slider can be moved very far left before the image appears altered). Second, it demonstrates that a sharp cutoff of frequencies results in ringing (oscillations) in the image (the opposite is also true -- a sharp edge in the image will produce oscillating magnitudes in the frequency plot).
This method of removing wavelength-related content (called Fourier filtering) is very precise in terms of cutting out specific frequencies, but it can produce images that are not smooth and artifact-free. If you want a smoother image with certain periodic content removed, you have to attenuate it gradually in the Fourier representation (called filter "roll-off") instead of with a sharp cut-off.
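A sketch of the sharp-cutoff vs. roll-off difference, with the usual caveats: a 1D signal stands in for an image row, and the 0.05 cycles/sample cutoff is arbitrary:

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[96:160] = 1.0                         # a sharp-edged "feature"

freqs = np.fft.fftfreq(n)                    # cycles per sample
S = np.fft.fft(signal)

cutoff = 0.05
brick = np.abs(freqs) <= cutoff              # sharp (brick-wall) cut-off
gauss = np.exp(-(freqs / cutoff) ** 2 / 2)   # gradual Gaussian roll-off

sharp = np.fft.ifft(S * brick).real
smooth = np.fft.ifft(S * gauss).real

# Ringing shows up as overshoot beyond the original 0..1 range
# (the Gibbs phenomenon); the gradual roll-off avoids it.
overshoot_sharp = sharp.max() - 1.0
overshoot_smooth = smooth.max() - 1.0
```

The brick-wall version overshoots by roughly 9% near the edges, while the Gaussian version stays within the original range, at the cost of softer edges.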
A Fourier transform maps a signal from a time domain into an equivalent [0] signal in the frequency domain.
This demo:
1. Performs a Fourier transform on some image pixels, taking the pixel data from the "time" domain into the frequency domain. (Think of time as "how far through the image we are", and the pixel's intensity as the signal's magnitude; frequency in this case becomes a slightly abstract concept.)
2. Visualises that frequency domain in various ways - by default you're seeing the magnitude of the transformed signal.
3. Applies a band-pass filter to that frequency domain - i.e. only allowing signals above a certain frequency (low-cut) and below another certain frequency (high-cut), and removing the rest. Playing with this might give you an intuitive notion of what "frequency" means here.
4. Applies the inverse transform, giving you a signal back in the time domain (i.e. a normally viewable image).
You can see what the effect of bandpassing the frequency-domain signal is on the end result. Thinking about it in terms of Information, if we band-pass half of the given frequency spectrum out, we're essentially throwing away half of the information... yet the image is still useful to a human (this is the principle behind lossy compression algorithms, as others have noted).
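The four steps might be sketched like this. This is a guess at the demo's internals, not its actual code, and the radial definition of "frequency" is an assumption:

```python
import numpy as np

def band_pass(img, low_cut, high_cut):
    """Keep only spatial frequencies f with low_cut <= f <= high_cut,
    where f is the radius in the 2D frequency plane (cycles per pixel)."""
    F = np.fft.fft2(img)                                  # step 1: transform
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.hypot(fy, fx)                             # "frequency" here
    mask = (radius >= low_cut) & (radius <= high_cut)     # step 3: band-pass
    return np.fft.ifft2(F * mask).real                    # step 4: inverse

# A pure horizontal sinusoid at 0.125 cycles/pixel survives a band that
# contains 0.125, and vanishes when the band excludes it.
y = np.arange(64)[:, None]
img = np.sin(2 * np.pi * y / 8) * np.ones((64, 64))

kept = band_pass(img, 0.05, 0.2)
removed = band_pass(img, 0.3, 0.5)
```

Step 2 (visualisation) would just be plotting `np.abs(F)`, which is skipped here.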
[0] In the case of a "perfect" transform; in reality most implementations are lossy.
I won't ELI5, but maybe someone here remembers the name for chimeric pictures which contain one image in the low frequencies, and a different one in the high, so one gets different impressions (smile vs frown, etc.) depending upon whether one is seated at one's monitor or viewing from across the room?
The DFF-Mono pattern is interesting because of how, if you reduce the high-cut, you can see the bands disappearing step-wise in decreasing density... I assume this occurs as the Nyquist frequency is passed for those pixels?
> I assume this occurs as the Nyquist frequency is passed for those pixels?
No, the Nyquist frequency isn’t the relevant parameter here. What is happening here is that you have a low-pass filter with a very steep cutoff (or high-pass, depending on what you are sweeping). As you sweep the cutoff, because the filter is so steep, the band suddenly disappears when the content moves from one side of the cutoff to the other.
The "grid" example shows a similar effect, but the difference is even more stark. With Low Cut Frequency at 0%, slowly move the High Cut Frequency between 7% and 9%, on the "Modulus view" you should see the white dots which represent vertical bands in the "Inverse transform" get cut out, so the "Inverse transform" switches from a fuzzy grid to just horizontal bands.
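The explanation above can be reproduced in a toy sweep. The 0.125 cycles/pixel band and the two cutoff values are made up for illustration:

```python
import numpy as np

n = 256
band_freq = 0.125                     # one pure "band" (an exact FFT bin)
x = np.sin(2 * np.pi * band_freq * np.arange(n))
X = np.fft.fft(x)
freqs = np.fft.fftfreq(n)

def low_pass_amplitude(cutoff):
    """Amplitude that survives a brick-wall low-pass at `cutoff`."""
    y = np.fft.ifft(X * (np.abs(freqs) <= cutoff)).real
    return y.max()

# With a very steep (here: perfectly sharp) filter, the band is
# all-or-nothing: full amplitude just above its frequency, gone just below.
above = low_pass_amplitude(0.13)      # cutoff just above the band
below = low_pass_amplitude(0.12)      # cutoff just below the band
```

There is no gradual fade as the cutoff sweeps past, which matches the step-wise disappearance of the bands in the demo.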
liuhenry | 5 years ago:
[1] https://ieeexplore.ieee.org/document/686739
[2] https://ieeexplore.ieee.org/document/6648413
[3] https://en.wikipedia.org/wiki/Computer-generated_holography
nyanpasu64 | 5 years ago:
I'm not an expert, but it's interesting.
AtomicOrbital | 5 years ago:
Intro to Fourier Optics and the 4F correlator https://www.youtube.com/watch?v=wcRB3TWIAXE
vanderZwan | 5 years ago:
[0] https://www.nayuki.io/page/free-small-fft-in-multiple-langua...
[1] https://www.nayuki.io/res/free-small-fft-in-multiple-languag...
montalbano | 5 years ago:
http://www.jezzamon.com/fourier/
082349872349872 | 5 years ago:
Edit: https://en.wikipedia.org/wiki/Hybrid_image
Are those worth a thousand words?
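A minimal sketch of how such a hybrid image could be assembled: low frequencies from one image plus the complementary high frequencies from another. Random arrays stand in for the two photographs, and the cutoff is arbitrary:

```python
import numpy as np

def hybrid(low_img, high_img, cutoff=0.08):
    """Combine the low frequencies of one image with the high
    frequencies of another (a toy hybrid-image construction)."""
    fy = np.fft.fftfreq(low_img.shape[0])[:, None]
    fx = np.fft.fftfreq(low_img.shape[1])[None, :]
    lp = (np.hypot(fy, fx) <= cutoff).astype(float)   # low-pass mask
    L = np.fft.fft2(low_img) * lp                     # keep coarse structure
    H = np.fft.fft2(high_img) * (1.0 - lp)            # keep fine detail
    return np.fft.ifft2(L + H).real

rng = np.random.default_rng(1)
a = rng.random((32, 32))   # stand-in for the "far away" image
b = rng.random((32, 32))   # stand-in for the "up close" image
out = hybrid(a, b)
```

Viewed up close the high-frequency image dominates; from across the room (or squinting) the low-frequency one takes over.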