3Blue1Brown (Grant Sanderson) is really, really good. I follow a number of education channels on YouTube, and Grant blows them all out of the water with the insights, new perspectives, and inspiration he provides. His animations are fantastically put together, clearly and unobtrusively illustrating the point he's making. I also really like his voice: soothing, clear, perfectly paced, and with enough intonation to avoid monotony. I wish I'd had his linear algebra series back in college; I suspect I would have done much better.
Just to throw in my own anecdote, I took linear algebra twice (in high school with no college credit, and in college) and I still couldn't ever remember afterwards what an eigenvector was until I watched that series. Now I'll probably never forget. He is an astonishing educator.
It’s possible to understand eigen-* without having an understanding of determinants. That’s how they’re introduced in “Linear Algebra Done Right” - http://linear.axler.net/
I just finished watching both his calculus and his linear algebra series. I have to say, 3Blue1Brown has made a mathematical masterpiece of a YouTube series.
Neither of those subjects really clicked with me until I could visualize them in a 2D/3D representation.
Whenever this kind of stuff comes up I feel like a bit of a fraud...
I’ve written a bunch of scientific data analysis code. I have a science PhD. Written large image analysis pipelines that worked as well as the state of the art... been published etc.
For the most part I’ve found basic math and heuristics to be good enough. Every so often I go relearn calculus. But honestly, none of this stuff ever seems to come in handy. Maybe it’s because most of what I encounter is novel datasets where there’s no established method?
I reasonably regularly pick up new discrete methods, but the numerical stuff never seems super useful...
I don’t know, just a confession I guess... it never comes up on interviews either for what it’s worth.
For a large fraction of probability theory, you only need two main facts from linear algebra.
First, linear transforms map spheres to ellipsoids. For a symmetric matrix (a covariance matrix, say), the axes of the ellipsoid are the eigenvectors; for a general matrix they are the singular vectors.
Second, linear transforms map (hyper)cubes to parallelepipeds. If you start with a unit cube, the volume of the parallelepiped is the absolute value of the determinant of the transform.
That more or less covers covariances, PCA, and change of variables. Whenever I try to understand or re-derive a fact in probability, I almost always end up back at one or the other fact.
They're also useful in multivariate calculus, which is really just stitched-together linear algebra.
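Both facts are easy to sanity-check in numpy; the matrices below are arbitrary examples, with the symmetric one standing in for a covariance matrix:

```python
import numpy as np

# Fact 2: a linear map scales volumes by |det|.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
image_area = abs(np.linalg.det(A))   # area of the image of the unit square
print(image_area)                    # 6.0

# Fact 1, symmetric case: the unit circle maps to an ellipse whose axes
# are the eigenvectors and whose semi-axis lengths are the eigenvalues.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lengths, axes = np.linalg.eigh(C)    # eigenvalues ascending, eigenvectors in columns
for l, v in zip(lengths, axes.T):
    # each axis direction is mapped onto itself, stretched by its eigenvalue
    assert np.allclose(C @ v, l * v)
```

(For a non-symmetric matrix the ellipsoid axes come from the SVD instead.)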
Eigenvectors and eigenvalues show up everywhere, although sometimes in the form of an iterative estimate (PageRank is essentially the power method applied to the link matrix of the web graph, estimating its dominant eigenvector).
They're in the same class as logarithms and Fourier transforms IMHO. You won't need to calculate them by hand, but you should know what they do and why they're important.
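The PageRank remark above can be sketched in a few lines. The three-page link graph here is made up for illustration, and real PageRank also adds a damping factor to guarantee convergence, which this sketch skips:

```python
import numpy as np

def power_method(M, iters=100):
    """Estimate the dominant eigenvector of M by repeated multiplication."""
    v = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.abs(v).sum()   # renormalize so the entries stay a distribution
    return v

# Column-stochastic link matrix: page 1 links to pages 2 and 3,
# page 2 links to page 3, page 3 links to page 1.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
ranks = power_method(M)
print(ranks)   # ~ [0.4, 0.2, 0.4]; M @ ranks == ranks (eigenvalue 1)
```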
It's one of those things that you don't notice when it's missing, but that would probably help a bit if you knew it. That being said, I have to deal with linear algebra every day, and aside from proofs (which they obviously help with), there have been maybe a handful of times that a deep knowledge of eigenvectors and eigenvalues has helped significantly. Once or twice, though, I've gotten massive speedups (>500x) just by knowing how to do the same thing in a more efficient way.
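One classic example of that kind of speedup (not necessarily the one the parent means): if you need many powers of the same symmetric matrix, diagonalize once, and every subsequent power becomes a cheap reconstruction instead of a chain of matrix products.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 200))
A = (S + S.T) / 2                 # symmetric, so eigh applies

w, V = np.linalg.eigh(A)          # one-time O(n^3) factorization: A = V diag(w) V^T

def power_via_eig(k):
    # A^k = V diag(w**k) V^T; O(n^2) scaling plus one matrix product per power
    return (V * w**k) @ V.T

assert np.allclose(power_via_eig(3), A @ A @ A)
```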
My feeling is having a basic knowledge of testing/caching/memory management is way more useful when you're doing large image analysis.
It comes up all the time when building second-order optimization methods. With the eigensystem of your objective's Hessian in hand, you have a complete picture of the (non-)convexity of your energy landscape, which is useful for ensuring you always have good search directions, etc.
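A sketch of that idea on a made-up saddle-shaped objective: flipping the Hessian's negative eigenvalues (one common modification) turns the Newton step, which would otherwise walk straight into the saddle, into a descent direction.

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a saddle at the origin.
def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])           # Hessian (constant for this quadratic)

w, V = np.linalg.eigh(H)
H_fixed = (V * np.abs(w)) @ V.T       # replace eigenvalues by their magnitudes

p = np.array([1.0, 1.0])
newton = p - np.linalg.solve(H, grad(p))        # lands exactly on the saddle
fixed  = p - np.linalg.solve(H_fixed, grad(p))  # moves downhill in y instead
print(newton, fixed)   # [0. 0.] [0. 2.]
```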
This is frightening but believable. I've worked with a few "quants" who stared at me doe-eyed as I explained eigen* and basic calculus concepts to them in the context of why their calculations don't add up. You mention you've used Fourier transforms before: if you don't understand an eigenbasis, you don't have a fundamental understanding of the math you're deploying.
Interesting to see this back on the front page after three years. Still remember us sitting in our living room drawing this on paper and arguing about the right approaches.
Maybe one day vicapow and I will make a triumphant return to the explorables space, but life has a way of getting in the way as you get older.
Eigen{vectors,values} seemed like this totally arbitrary concept when I first learned about them. Later it turned out that they are actually really awesome and pop up all the time.
Multivariable function extrema? Just look at the eigenvalues of the Hessian.
Jacobi method convergence? Eigenvalues of the update matrix.
RNN gradient explosion? Of course, eigenvalues.
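The first of those in code; the test functions are made-up examples:

```python
import numpy as np

def classify_critical_point(H):
    """Classify a critical point of a function from its Hessian's eigenvalue signs."""
    w = np.linalg.eigvalsh(H)        # Hessians are symmetric, so eigvalsh applies
    if np.all(w > 0):
        return "local minimum"
    if np.all(w < 0):
        return "local maximum"
    if np.any(w > 0) and np.any(w < 0):
        return "saddle point"
    return "degenerate"              # zero eigenvalues: the test is inconclusive

# f(x, y) = x^2 + 3y^2 at the origin: Hessian diag(2, 6)
print(classify_critical_point(np.diag([2.0, 6.0])))    # local minimum
# f(x, y) = x^2 - y^2 at the origin: Hessian diag(2, -2)
print(classify_critical_point(np.diag([2.0, -2.0])))   # saddle point
```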
The visual explanation movement falls flat for me. It's like trying to understand Monads through blog posts. It's great if you already understand the concept to develop your intuition, or if you've never heard of the concept to pique your interest, but it won't help in the intermediate area where you know what you want to know but don't understand it fully. I need to build proofs through incremental exercises to grasp these concepts.
As someone who understands eigenfunctions already, I don't understand the pictures either. Here is the best way to think about it: a matrix is a transformation, a composition of rotation, scaling, etc. Eigensets are lines through the origin along which the matrix moves points. A rotation has no real eigenvectors, because no point moves along a straight line through the origin, while a scaling along the x axis has an eigenset lying along the x axis, consisting of the points that are moved straight outward or inward along it.
To imagine finding the eigenset, just ask, could I draw a line through 0,0 such that any point I put on it would stay on it after the matrix acted?
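That test is easy to run numerically: the line through the origin and v maps onto itself exactly when A @ v is parallel to v. The rotation and scaling below match the parent's examples:

```python
import numpy as np

def stays_on_its_line(A, v):
    """Does the line through the origin and v map onto itself under A?"""
    Av = A @ v
    return np.isclose(v[0] * Av[1] - v[1] * Av[0], 0.0)  # parallel <=> 2x2 det is 0

rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])    # 90-degree rotation: no invariant lines
scale_x  = np.array([[3.0,  0.0],
                     [0.0,  1.0]])    # stretch along the x axis

print(stays_on_its_line(rotation, np.array([1.0, 0.0])))  # False
print(stays_on_its_line(scale_x,  np.array([1.0, 0.0])))  # True
```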
steamer25|7 years ago
https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...
In one of the last videos in the (relatively short) series, he discusses eigen-*:
~'eigen-stuffs are straightforward but only make sense if you have a solid visual understanding of the prerequisites (linear transformations, determinants, linear systems of equations, change of basis, etc.). Confusion about eigen-stuffs usually has more to do with a shaky foundation than with the eigen-things themselves'
https://youtu.be/PFDu9oVAE-g
All of the videos in the series, including this later one on eigen-things, focus on animations to show what the number crunching is doing to the coordinate system.
AceJohnny2|7 years ago
He's the creator I support the most on Patreon: https://www.patreon.com/3blue1brown
tzahola|7 years ago
Just off the top of my head, you could have encountered eigenvectors/eigenvalues:
- if you ever used spectral graph algorithms
- if you ever did dimensionality reduction via principal component analysis
- if you ever calculated the steady state distribution of a Markov chain
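For the Markov-chain item, the steady state is the eigenvalue-1 eigenvector of the transition matrix; a sketch with a made-up two-state chain:

```python
import numpy as np

# Made-up two-state chain; P[i, j] = probability of moving to state i from
# state j, so each column sums to 1.
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

w, V = np.linalg.eig(P)
i = np.argmax(np.isclose(w, 1.0))   # a stochastic matrix always has eigenvalue 1
steady = np.real(V[:, i])
steady /= steady.sum()              # rescale into a probability distribution
print(steady)                       # satisfies P @ steady == steady
```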
wglb|7 years ago
A fun book on this is https://openlibrary.org/books/OL2398351M/The_algebraic_eigen...
toppy|7 years ago
https://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.p...