
Wavelets: A mathematical microscope [video]

148 points| peter_d_sherman | 3 years ago |youtube.com | reply

18 comments

[+] ohazi|3 years ago|reply
This is easily the best video overview on wavelets I've seen.

I've used wavelet transforms on and off for years for things like image compression, harmonic analysis, and currently brain research, and I've always been somewhat disappointed in the quality of material that's out there explaining how wavelets work and how to use them effectively.

There's a small amount of "abstract math" heavy material but relatively little that's focused on using it for engineering, in contrast to all of the amazing material on Fourier transforms.

[+] peter_d_sherman|3 years ago|reply
>This is easily the best video overview on wavelets I've seen.

Agreed completely!

That's why I submitted it to HN! <g>

[+] psychphysic|3 years ago|reply
> currently brain research

Random shot in the dark. What do you think of the Free Energy Principle?

[+] teleforce|3 years ago|reply
Wavelets are not a true native time-frequency technique; technically they are time-scale in nature [1].

The better approach, with better resolution, is a native time-frequency technique, for example the non-linear Cohen-class time-frequency distributions, but currently these require massive computing resources even for a short-duration signal [2]. Hopefully, with the availability of cheaper RAM on the order of terabytes, it will become feasible to use proper time-frequency techniques for big-data signal processing.

[1] Wavelet:

https://en.m.wikipedia.org/wiki/Wavelet

[2] Bilinear time–frequency distribution:

https://en.m.wikipedia.org/wiki/Bilinear_time%E2%80%93freque...
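The time-scale point can be made concrete with PyWavelets, which accordingly only offers a *pseudo*-frequency conversion (center frequency divided by scale) rather than a true frequency axis. A minimal sketch, assuming pywt is installed and a made-up 1 kHz sampling rate:

```python
# Sketch: a wavelet "frequency" is really a scale; pywt can only map a
# scale to a *pseudo*-frequency. Assumes the PyWavelets package (pywt)
# is installed; the 1 kHz sampling rate is a made-up example value.
import pywt

fs = 1000  # assumed sampling rate in Hz
scales = [1, 2, 4, 8, 16]
# scale2frequency returns a normalized frequency (cycles/sample);
# multiply by fs to get Hz. Larger scale -> lower pseudo-frequency.
freqs_hz = [pywt.scale2frequency("morl", s) * fs for s in scales]
print(freqs_hz)  # monotonically decreasing with scale
```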

[+] laszlokorte|3 years ago|reply
Inspired by this excellent video, I just built a visualization to dig further into the relation between the short-time Fourier transform and the wavelet transform. [1]

It's not quite done yet and still needs some polishing, but I already find it helpful.

[1]: https://static.laszlokorte.de/time-frequency/

[+] nigma|3 years ago|reply
Wavelets are fun. I first heard about them in the early 2000s when I read about the JPEG 2000 image compression format [1]. Back then the primary tool to play with signal transformations was the Matlab Wavelet Toolbox. As I got more interested in the topic, I started work on the PyWavelets [2] Python package for my master's thesis on medical signal processing and ML classification.

I'm not actively involved in the package development anymore, but it is still maintained [3], and there is a good chance you already have it in your Python environment as a dependency of scikit-image/scikit-learn. Just give it a try; it's very simple:

  >>> import pywt
  >>> cA, cD = pywt.dwt([1, 2, 3, 4], 'db1')

[1] https://en.wikipedia.org/wiki/JPEG_2000

[2] https://pywavelets.readthedocs.io/

[3] https://github.com/PyWavelets/pywt
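To go one step further than the snippet above, the transform is invertible; a minimal round-trip sketch (assuming pywt is installed):

```python
# Sketch: forward and inverse DWT round-trip with PyWavelets.
import pywt

signal = [1, 2, 3, 4]
cA, cD = pywt.dwt(signal, "db1")  # approximation and detail coefficients
rec = pywt.idwt(cA, cD, "db1")    # inverse transform
print(rec)  # recovers [1., 2., 3., 4.] up to floating-point error
```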
[+] mikewarot|3 years ago|reply
The only aspect I can think of that was probably deliberately left on the cutting room floor is negative frequencies. When he showed the wavelet in 3D, it had a chirality to it; the same wavelet at the negative frequency would look identical but with the left-handed/right-handed threading swapped.
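The swapped-handedness claim can be checked numerically. A small sketch of my own (not from the video), using a Morlet-style complex wavelet with a made-up center frequency: negating the frequency conjugates the wavelet, which keeps the envelope identical but reverses the sense of rotation.

```python
import numpy as np

t = np.linspace(-4, 4, 801)
w0 = 5.0  # assumed (hypothetical) center frequency
# Morlet-style complex wavelet: a rotating phasor under a Gaussian window.
pos = np.exp(1j * w0 * t) * np.exp(-t**2 / 2)
neg = np.exp(-1j * w0 * t) * np.exp(-t**2 / 2)
# Negating the frequency conjugates the wavelet: same envelope (it would
# "look identical"), opposite sense of rotation (swapped handedness).
assert np.allclose(neg, np.conj(pos))
assert np.allclose(np.abs(neg), np.abs(pos))
```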
[+] rocqua|3 years ago|reply
Still watching the video, but I know that, in the finite case, a 'negative' frequency (i.e. one rotating counter-clockwise) can also be interpreted as a positive frequency that rotates clockwise by more than 180 degrees per step.

That only works in the finite (sampled) case though.
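That finite-case identification is easy to verify numerically; a small NumPy sketch of my own illustrating it: in an N-point DFT, a tone at frequency -k is sample-for-sample identical to a tone at N-k.

```python
import numpy as np

N = 8
t = np.arange(N)
k = 3
# Clockwise rotation at -k cycles per N samples...
neg = np.exp(-2j * np.pi * k * t / N)
# ...equals counter-clockwise rotation at N-k cycles per N samples,
# because exp(-2*pi*i*k*t/N) = exp(2*pi*i*(N-k)*t/N) for integer t.
pos = np.exp(2j * np.pi * (N - k) * t / N)
assert np.allclose(neg, pos)
```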

[+] GistNoesis|3 years ago|reply
Signal theory 101: You have some signal that you observe: v ∈ Rⁿ.

You have a database of k reference signals you know about: vrefₖ ∈ Rⁿ.

What you want is to match the signal you observe to one of the reference signals.

This is called template matching: you pick the closest reference signal to the observation:

You compute argminₖ distance( v, vrefₖ)

This is great but it scales badly with respect to the number of reference signals.

So instead of trying to find the closest signal, you try to decompose the signal as a sum of reference signals:

You compute argmin over λₖ distance( v, Σₖ λₖvrefₖ)

This decomposition is not unique, so you regularize it (if your distance and norm are well picked the following is convex so you can get uniqueness guarantees):

You compute argmin over λₖ ( distance( v, Σₖ λₖvrefₖ) + Σₖnorm(λₖ) )

This is called sparse coding.

The rest (FFT, STFT, wavelet transform, ...) are mathematical tricks (called transforms) to compute this more efficiently by choosing certain special reference signals (picking a basis).

The first math trick to be aware of is (a-b)² = a² + b² - 2ab. This is the bridge that relates what is called correlation to the distance:

for vector a,b ∈ Rⁿ : Σₖ(aₖ-bₖ)² = Σₖaₖ²+ Σₖbₖ²- 2* Σₖ(aₖbₖ)

With a judicious pick, the first two sums Σₖaₖ² and Σₖbₖ² can be made irrelevant to the minimization problem because they are either constant by construction (normalized to 1) or do not depend on the variable we minimize over. This trick applies to both template matching and sparse coding (even though there are more terms, they all simplify).

If the first two terms can be made to disappear, then minimizing the distance gives the same result as maximizing the correlation.

Then you have a bunch of sliding-sum and recursive-summation tricks which all depend on the specific shape of your data. Notably, with the FFT there is the well-known butterfly algorithm.

A century of techniques and theory since Hilbert has been devoted to finding various smart variants, and they allow you to compute this decomposition fast, usually in O(n) or O(n log(n)).

The alternative to hand-crafted techniques is learning the reference signals from the data (sparse dictionary learning). You can also brute-force your way through with deep-learned filter banks. This works better but requires more compute and comes with fewer guarantees. It also lets you deal with nitty-gritty details like missing data and masks.
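The distance-vs-correlation equivalence above can be sketched numerically. This is my own illustration with a made-up random dictionary: with unit-norm references, ||v - r||² = ||v||² + 1 - 2⟨v, r⟩, so the reference minimizing the distance is exactly the one maximizing the correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 10
# k unit-norm reference signals (rows) -- a hypothetical dictionary.
vref = rng.standard_normal((k, n))
vref /= np.linalg.norm(vref, axis=1, keepdims=True)
v = vref[3] + 0.1 * rng.standard_normal(n)  # noisy copy of reference 3

# Template matching two ways:
by_distance = np.argmin([np.sum((v - r) ** 2) for r in vref])
by_correlation = np.argmax(vref @ v)
# Since ||v - r||^2 = ||v||^2 + 1 - 2<v, r> for unit-norm r, and ||v||^2
# is constant across candidates, both criteria rank references identically.
assert by_distance == by_correlation
print(by_distance)
```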

[+] Version467|3 years ago|reply
Steve Brunton has some amazing videos on wavelets on his YouTube channel. They are intended to be consumed alongside his book Data-Driven Science and Engineering, but I found that they work great as standalone material as well.
[+] meltyness|3 years ago|reply
After learning of the Fourier transform in school, I immediately became interested in this for obvious reasons, but I never successfully wrapped my mind around employing it usefully, beyond the jargon "wave localization."

The physical significance really arises from the Huygens principle in mechanics, which supports claims about spatial wavefront propagation by stating that at each timestep, every point on the wavefront emits another similar propagating wave; these sub-waves are called wavelets.

[+] this-pony|3 years ago|reply
I'm a doctoral student working in the direction of PDEs and function spaces. I have some colleagues who are using wavelets for numerics. They typically prove that certain wavelet bases are better suited for the numerical approximation of certain types of problems.

You can, for instance, think about what happens if you have a signal mainly composed of square waves: it would be rather inefficient to decompose this signal into sines and cosines. For certain types of PDEs under certain geometrical restrictions, you can sometimes find a much better wavelet basis than just sines and cosines. My own research is rather theoretical (I don't do any numerics), so wavelets don't play a role for me.
[+] _jcrossley|3 years ago|reply
Absolutely excellent. Thanks for sharing, I wish I had videos like this when I studied in undergrad.
[+] rocqua|3 years ago|reply
Stating that 'the time-frequency resolution trade-off' is an instance of the uncertainty principle is at best a meaningless flourish, and generally speaking wrong.

If anything, the uncertainty principle is due to the time-frequency resolution trade-off.