Blurring and then sharpening in the spatial domain is equivalent to applying a band-pass filter, or rather a band-stop filter in the article's case, in the frequency domain. Blurring requires a lot of parameter tuning, in particular the size of the mask and the weight of each mask element. The choice of parameters maps to a particular filter in the frequency domain, e.g. a Gaussian filter or an ideal filter. The same goes for sharpening. Operating in the frequency domain lets you skip all the trial and error over parameters: you eyeball the frequency of interest and suppress it.
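A minimal sketch of that equivalence, assuming NumPy (the 64x64 test image and the 9x9 Gaussian mask are made up for illustration): sliding the blur mask over the image gives the same result, to numerical precision, as a single pointwise multiplication in the frequency domain.

```python
import numpy as np

def gaussian_mask(size, sigma):
    """Normalized 2-D Gaussian mask: the classic spatial blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur_spatial(img, mask):
    """Blur by sliding the mask over the image (wrap-around boundaries)."""
    out = np.zeros_like(img)
    h, w = mask.shape
    for dy in range(h):
        for dx in range(w):
            out += mask[dy, dx] * np.roll(img, (dy - h // 2, dx - w // 2), axis=(0, 1))
    return out

def blur_frequency(img, mask):
    """The same blur as one multiplication in the frequency domain."""
    kernel = np.zeros_like(img)
    h, w = mask.shape
    kernel[:h, :w] = mask
    kernel = np.roll(kernel, (-(h // 2), -(w // 2)), axis=(0, 1))  # center mask at origin
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = gaussian_mask(9, 2.0)
blurred_a = blur_spatial(img, mask)
blurred_b = blur_frequency(img, mask)
# Both routes agree (convolution theorem): tuning the mask size/weights
# is exactly tuning the shape of a low-pass filter in frequency space.
```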
> what you did is still the best out there
It still is.
I worked with a printing company before. The company imparts anti-forgery patterns to packaging material by tuning the halftoning angle of the printer. The printing company then offers a seemingly transparent film, with a special pattern printed on it, to companies requiring brand protection. Overlaying the film on the packaging material produces a specifically designed moire pattern. If you squint your mind enough, it is like public-private key encryption in the physical world. Whenever the brand receives a dispute over the authenticity of a purchased item, a special team, one holding that transparent film, is summoned to verify the item in question. It is one of the many ways the brand protects its goods. The printing company was looking for a mobile solution instead of delivering the transparent film; that's where I got to learn more about the printing industry.
It is a relatively well-defined problem: removing a periodic pattern caused by the physical arrangement of the printing device. This is where an algorithmic approach shines over an ML approach. I think nowadays a lot of ML is an attempt to outsource the problem-understanding part. These are hot dogs, these are not hot dogs. I don't know what defines a hot dog, but is this picture a hot dog?
Hyperbole, of course.
On second thought, I think the author shouldn't remove the periodic noise at all. He was preserving "a copy of history", not the first occurrence of history. It is a property worth preserving, a beauty in itself imo.
Are you at liberty to share the printing company here? I'm looking for something like this. If not, my email is in my profile; I would appreciate the info so I can reach out and maybe give them business!
The article doesn't say, but the method described in the article is called a notch filter. Here are some links on it: [0], [1], [3], [4]. Instead of cutting a sharp edge in the frequency domain, it's better for the filter to blur the edge to avoid introducing too many artifacts; this is called a Gaussian notch filter.
Anyway, here is another method that also targets peak regions in the frequency domain for removal, but is based on a median filter instead [5].
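A minimal sketch of such a Gaussian notch filter, assuming NumPy and a synthetic image with a single sinusoidal "screen" (the peak positions are known by construction here; on a real scan you would first locate them in the spectrum):

```python
import numpy as np

def gaussian_notch(shape, centers, sigma):
    """Band-reject mask: ~0 near each notch center, ~1 elsewhere.
    `centers` are (row, col) positions in the fftshift-ed spectrum."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.ones(shape)
    for cy, cx in centers:
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        mask *= 1.0 - np.exp(-d2 / (2 * sigma**2))
    return mask

h = w = 128
yy, xx = np.mgrid[:h, :w]
clean = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * 30**2))  # smooth content
screen = 0.3 * np.sin(2 * np.pi * 16 * xx / w)  # periodic "printing" pattern
noisy = clean + screen

F = np.fft.fftshift(np.fft.fft2(noisy))
# The sinusoid shows up as two symmetric peaks, 16 bins left/right of center.
mask = gaussian_notch((h, w), [(h // 2, w // 2 + 16), (h // 2, w // 2 - 16)], sigma=3)
restored = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
# `restored` is close to `clean`; the screen is almost entirely gone,
# without the hard spectral edge an ideal notch would introduce.
```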
Nature will quite readily calculate the 2D Fourier transform and its inverse for us without the use of a digital computer.
This is because lenses effectively do a Fourier transform at the focal point. With the right setup, you can apply filters at the focal point and get pretty much exactly what you would expect. An example of such a setup is the 4F Correlator. [0]
Fourier optics is a whole subfield within optics, and it really is rather fascinating.
Exposure to Fourier optics really helped develop my intuition around the Fourier transform.
This is how Electron Crystallography works. You can choose to get half the Fourier transform (aka the diffraction pattern) with phase information lost, or use a secondary lens to get the full picture back after correction. It's quite magical.
Then you can do an FT of that final image on a computer and modify the pattern in reciprocal space to fix flaws in the image like astigmatism and noise.
I was introduced to this idea in a really cool video about using Fourier Optics for optical pattern recognition.[0] The video happens to have one of the best explanations of Fourier transforms I've yet encountered.
This is precisely the method used to undo screening and half-toning in one of the labs in 6.003 (the intro signal processing course at MIT). The technique works astoundingly well for its straightforwardness, and the exercise can be a lot of fun.
My institution's equivalent was projecting an image of a "criminal" behind bars, doing the masking with a piece of Blu-Tack in reciprocal space, and then watching the bars vanish in the image. And then doing the same thing with a Cold War-era real-life spy-plane photograph of the ocean, masking out the frequency corresponding to the waves and revealing the submarine underwater. Fun, and memorable for life.
It is a shame that the article doesn't make a connection between blurring (and sharpening), and operations in the Fourier domain.
The article paints on the Fourier image and sees the effect on the original image. Blurring an image is the same kind of operation: it is equivalent to painting black everything outside a centered disc, keeping only the low frequencies near the middle.
I tried the image[1] on one of the ML super-resolution online image sharpeners[2] and the result was: https://ibb.co/S53Lt0X (click button to look at it in full resolution).
The generated image does not have the same global problem with moiré patterns. The dot patterns remain in the image, randomly dithered or converted into lines. The FFT solution worked better than that particular ML model, although I presume an ML model could be trained specifically to remove printing dots.
I would think the optimal technique lies somewhere between the two: a convolution kernel optimized to conflate halftone dots with each other as to restore exactly the information possible. (Of course, such a kernel would have to be individually tuned for the orientation/spacing of the dots.)
Literally painting over frequency peaks in the FFT with black circles would, I imagine, be pretty lossy, and wouldn't entirely rid you of the pattern (since you're making a new pattern with your dots). Indeed, in the animation, the image does get darker as circles are added, and some of the pattern is still visible.
Perhaps using a blur tool to blur out the peaks in the FFT would serve to maintain original image tone, and further reduce patterning?
GIMP has a wavelet decomposition option/plugin which breaks frequency bands into separate layers (like a stereo's multiband equalizer). You simply delete the layer corresponding to the offending frequency component and bam, visual features at that scale vanish, preserving structure at all other scales.
There's even more sophisticated wavelet denoisers out there that effectively do the black circle over peaks trick, but automatically and more precisely.
Any convolution kernel is equivalent to a mask in the FFT domain (aside from wrapping effects). The advantage of using a convolution kernel (instead of a hand-painted FFT mask) is that it's purely local, so it doesn't cause halftone dots in one area of the image to affect the rest, and it's faster than a full-image FFT (which is O(N log N) in both the width and height).
A halftone dot whose size/shape changes gradually across the image acts like slow PWM in a pulse wave, changing the relative amplitudes of the harmonics (but not their locations). However, steep discontinuous changes can have nastier effects (which aren't handled well by either a convolution kernel or FFT).
I suspect it's possible to handle edges better than a FFT using a specialized algorithm, but I don't know if it's possible without inordinate runtimes, and if the end result is significantly better than a FFT or not.
The specks on the FFT were removed by hand... I thought the following: he had an image obtained after blur and sharpening filters. By replacing parts of the FFT with black, he was just throwing away information. What would happen if, instead of naively replacing the specks on the FFT with black, he replaced those parts with the FFT of the blurred and sharpened image?
Indeed, applying the "dark circle" (an ideal filter, i.e. a sinc in the spatial domain) in the frequency domain would introduce other artifacts. The restored image would contain some "ringing" around sharp edges, which might not be perceivable in the image shown. [1]
As for the darkening in the restored image, it depends on how the software interprets the band-stopping dark holes. By dark-circling the frequency domain, energy is taken out of the image. Re-normalizing the image might be desired, but that would turn dark areas brighter.
[1] https://en.wikipedia.org/wiki/Ringing_artifacts
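The ringing is easy to reproduce in one dimension. A sketch, assuming NumPy, comparing a brick-wall (ideal) low-pass against a Gaussian roll-off on a sharp-edged signal:

```python
import numpy as np

n = 512
signal = np.zeros(n)
signal[n // 4: 3 * n // 4] = 1.0  # a sharp-edged "image feature"

F = np.fft.fft(signal)
freqs = np.fft.fftfreq(n)  # cycles per sample

cutoff = 0.05
ideal_lp = (np.abs(freqs) <= cutoff).astype(float)  # brick-wall ("dark circle") filter
gauss_lp = np.exp(-(freqs / cutoff) ** 2)           # smooth Gaussian roll-off instead

out_ideal = np.real(np.fft.ifft(F * ideal_lp))
out_gauss = np.real(np.fft.ifft(F * gauss_lp))
# out_ideal overshoots 1.0 near the edges (Gibbs ringing);
# out_gauss stays essentially within [0, 1].
```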
Perhaps the optimal technique would be to write a function which halftones a greyscale image, and then use optimisation to find a greyscale image which reproduces the input image when halftoned. This feels like an expectation–maximization kind of problem, but I can't give a rigorous argument for why.
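One toy version of that idea, assuming NumPy, with ordered (Bayer) dithering as the halftone function and a per-tile exhaustive search standing in for a real optimiser:

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def halftone(gray):
    """Ordered dithering: tile the Bayer threshold matrix and binarize."""
    h, w = gray.shape
    return (gray > np.tile(BAYER4, (h // 4, w // 4))).astype(float)

def invert_by_search(dots):
    """For each 4x4 tile, exhaustively pick the grey level whose halftone
    best matches the observed dots (Hamming distance): 'find a greyscale
    image which reproduces the input image when halftoned'."""
    h, w = dots.shape
    out = np.zeros((h, w))
    levels = np.linspace(0.0, 1.0, 17)
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            tile = dots[y:y + 4, x:x + 4]
            errs = [np.abs((g > BAYER4).astype(float) - tile).sum() for g in levels]
            out[y:y + 4, x:x + 4] = levels[int(np.argmin(errs))]
    return out

# A smooth horizontal ramp, constant within each 4x4 tile.
gray = np.tile(np.linspace(0.1, 0.9, 16).repeat(4)[None, :], (16, 1))
rec = invert_by_search(halftone(gray))
# `rec` reproduces the observed dots exactly and sits within one
# quantization step (1/16) of the original grey levels.
```

A real error-diffused or rotated halftone screen would need a proper continuous optimiser, but the reproduce-the-input objective is the same.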
I use Fiji every day, and I love the first impressions from someone who does not use it regularly (all correct too).
FFTs are amazing. In X-ray crystallography, you can use them to recapture the original image of a crystallized protein from the scatter of dots left by X-rays passing through it, essentially playing the role of a lens. They never cease to amaze me with their usefulness!
If you liked this, you might enjoy this little trick you can do with moiré patterns -- near the end of the post, I talk a little bit about the math behind the FFT connection. https://hardmath123.github.io/moire.html
Good writing, yes. But there are ways to properly reheat old pizza so it's almost as good as fresh. One way (the best way) is to put it in a covered skillet with the pizza slice on one side and a little bit of water on the other (not touching!), over medium heat for 2-4 minutes. Another is to use an air fryer (if you have one). You may need to experiment with temperature and duration, but once you find the proper settings it's perfect.
Cool! I wrote a similar script to automate removing rasters some ten years ago using G'MIC[1]. It was open sourced[2] and made available as a plugin of some sort. (Maybe through GIMP? Can't really remember.) The use case was enhancing page-sized scans of Babar children's books, which it worked great for. YMMV though; it's a bit rough around the edges.
This is one of those solutions that seems obvious as soon as it is mentioned, at least if you are familiar with Fourier transforms. You want to remove a periodic pattern from an image, and of course that will be easier to do in frequency space than in the spatial domain.
Interestingly, discarding weak high-frequency components is also roughly how JPEG lossy compression works (it quantizes a blockwise discrete cosine transform), so compressing the image as a JPEG might have a somewhat similar effect.
The divide-and-conquer Cooley–Tukey FFT "algorithm (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms)."
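The divide-and-conquer idea in that quote fits in a few lines. A sketch, assuming NumPy only for the array type and the reference comparison; the input length must be a power of two:

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey: split into even/odd-indexed halves, solve each
    recursively, then combine with 'twiddle' factors."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

rng = np.random.default_rng(2)
x = rng.random(256)
X = fft_recursive(x)  # matches np.fft.fft(x)
```

Halving the problem at each level is where the N log N cost comes from, versus N^2 for the direct DFT sum.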
Lovely stuff, but halftone prints are just one type of print. Before digital printing was a thing, photos were created using enlargers throwing light through a negative onto photosensitive paper. Since the negative was also created photochemically, the "frequency" effect was much reduced. This can still be done by ordering prints on "photographic paper" from Fuji or Kodak.
If the image was created using a digital sensor, then it won't work as well, of course, because the sensor itself is subject to a grid. However, the kind of Bayer filter used in the sensor can help tackle the effect at the source. This is what Fuji's X-Trans sensor[1] claims to do. (I am a Fuji user, but I have no data point to offer in either direction.)
Fourier analysis is currently used in "Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera."
Also, "JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image."
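The quoted pipeline can be sketched in a few lines, assuming NumPy; the single flat quantization step `q` here is a stand-in for JPEG's perceptual quantization tables:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis vectors as the rows of an n x n matrix."""
    k, i = np.mgrid[:n, :n]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def jpeg_like(img, q):
    """Per 8x8 block: DCT, coarse rounding (weak components collapse to 0),
    then inverse DCT -- the lossy core of the scheme described above."""
    D = dct_matrix()
    out = np.empty_like(img)
    for y in range(0, img.shape[0], 8):
        for x in range(0, img.shape[1], 8):
            block = img[y:y + 8, x:x + 8]
            coeffs = D @ block @ D.T
            coeffs = np.round(coeffs / q) * q  # quantization step
            out[y:y + 8, x:x + 8] = D.T @ coeffs @ D
    return out

rng = np.random.default_rng(1)
img = rng.random((64, 64)) * 255
err_coarse = np.abs(jpeg_like(img, q=20.0) - img).mean()
err_fine = np.abs(jpeg_like(img, q=1.0) - img).mean()
# Coarser quantization means more zeroed coefficients (smaller files in
# real JPEG) at the cost of a larger reconstruction error.
```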
I don't think what the author did here (descreening) has anything to do with moiré per se. Sure, due to the nature of halftone, it often introduces moiré, as the author said. But if you scan your document/photo at a high enough resolution (sampling theorem, anyone?) it isn't an issue in the descreening process later (which is the main topic of this article).
Also, using the inverse Fourier transform to descreen is already the basis of lots of popular commercial denoise plugins (for Photoshop etc.). Most of them will automatically measure the angle and resolution of the halftone matrix too.
I assumed that the issue was less the scanning (which can be extremely high precision and which, in any case, the author had to do to get the halftone image he was working with) and more the printing. Once the image was resized and printed in the book, the offset between the printer's dpi and the halftone dots was going to introduce new artifacts.
The author probably missed that the first F in FFT refers to a very specific (but efficient) algorithm, and that the effects he achieved are due to the properties of the (2D) Fourier transform, which can be computed using other algorithms as well.
Completely correct. And yet, FFT is much less ambiguous than FT ("Foot"? "Financial Times"? "Face time"?), so imo what is lost in precision is gained in clarity.
This reminded me of when film photography processors went from printing using a photographic process to digital printing. The digital process introduced a regular grid to the print which reduced the quality of the print. When prints were made using an enlarger onto photographic paper, the grain was random and the resulting images had a much nicer character.
[0] Removal of Moiré Patterns in Frequency Domain, https://ijournals.in/wp-content/uploads/2017/07/5.3106-Khanj...
[1] Periodic Noise Removing Filter, https://docs.opencv.org/3.4.15/d2/d0b/tutorial_periodic_nois...
[3] Adaptive Gaussian notch filter for removing periodic noise from digital images, https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-...
[4] Adaptive Optimum Notch Filter for Periodic Noise Reduction in Digital Images, https://eej.aut.ac.ir/article_94_befd8a642325852c3a0d41ece10...
[5] A New Method for Removing the Moire' Pattern from Images, https://arxiv.org/abs/1701.09037
[0] https://en.wikipedia.org/wiki/Fourier_optics#4F_Correlator
http://www.calidris-em.com is the software for this.
[0] https://www.youtube.com/watch?v=Y9FZ4igNxNA
[1] https://sigproc.mit.edu/spring19/psets/11/cat
[2] https://sigproc.mit.edu/fall19/psets/10/cat
[1] https://s3.amazonaws.com/revue/items/images/010/787/862/orig...
[2] https://deepai.org/machine-learning-model/torch-srgan
Edited: added link to output image.
https://legacy.imagemagick.org/Usage/fourier/#noise_removal
(Also, FFTs won't work as well for non-uniform hand-drawn halftones, like the charming https://upload.wikimedia.org/wikipedia/commons/a/ac/Julemoti....)
This is a surprisingly good description of an FFT.
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...
https://matthews.sites.wfu.edu/misc/DigPhotog/alias/
What a fantastic analogy. I wanted to stand up and applaud the author.
[1] https://github.com/dtschump/gmic [2] https://github.com/dtschump/gmic-community/blob/master/inclu...
What must Gauss think when he hears this?!
https://en.wikipedia.org/wiki/Fast_Fourier_transform#Cooley%...
> FFT – Fast Fourier Transform – was invented in the 1805 and then, once again, in 1965.
[1] https://en.m.wikipedia.org/wiki/Fujifilm_X-Trans_sensor
For anyone interested, this is an excellent introduction to some of the concepts: https://web.archive.org/web/20060210112754/http://cns-alumni...
Source for both quotes: https://en.wikipedia.org/wiki/Fourier_analysis
Reading this I actually got excited about something I already knew and had used now and then, when I never felt about it that way before.