A somewhat less popular way to come at representation theory is via generalized Fourier transforms. The irreducible representations are basically different "frequencies" which you can use to decompose functions on a group. In fact, all of the major Fourier theorems carry over perfectly... There are even analogues of the fast Fourier transform, via chains of subgroups.
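A minimal sketch of this, assuming nothing beyond numpy: for the cyclic group Z/n the irreducible representations are the one-dimensional characters chi_k(x) = exp(2 pi i k x / n), so the group Fourier transform is just the classical DFT, and the convolution theorem can be checked directly.

  import numpy as np

  n = 8
  rng = np.random.default_rng(0)
  f, g = rng.random(n), rng.random(n)

  # Convolution over the group Z/n: (f * g)[x] = sum_y f[y] g[(x - y) mod n]
  conv = np.array([sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)])

  # Convolution theorem: the transform of the convolution is the
  # pointwise product of the transforms
  assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))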
Representation theory, the subject of the submitted article, is also at the core of our understanding of the particle zoo in physics. A tutorial aimed at mathematicians is here:
John Baez, John Huerta. The Algebra of Grand Unified Theories. https://arxiv.org/abs/0904.1556
Great link, thanks! I always enjoy reading summaries of other fields written by/for mathematicians, since the assumption of mathematical background allows a lot of fluff to be cut (compared to, e.g., reading a textbook about physics intended for physics students).
For background reading, the notes by Teleman 2005, "Representation Theory" [1] are a good intro to the topic.
Learning about representation theory helped me understand the power of thinking about mathematical objects in terms of their actions on other objects. Just as representation theory studies group actions on vector spaces, the theory of modules is best described as the study of ring actions on abelian groups. Many things happen to be rings (e.g. endomorphism rings, whose elements are functions with addition = pointwise addition and multiplication = composition), and modules allow us to apply (almost all of) vector space theory to better understand ring-like objects. For instance, a module over the polynomial ring k[x] is exactly a vector space together with a chosen linear operator (the action of x), which is the viewpoint behind rational and Jordan canonical forms.
[1] https://math.berkeley.edu/~teleman/math/RepThry.pdf
[2] Another favorite from John Baez: https://groups.google.com/d/msg/sci.physics.research/aiMUJrO...
Very good article, in that it taught me something I didn't know existed and explained it very well.
However, I wish the article gave an example of how to actually construct the mapping between, e.g., a small finite group and the actual matrices of a representation, to sort of get a feel for why it actually works.
I suspect there must be some sort of canonical method/algorithm to get from the "multiplication table" of a finite group to each matrix in a representation, but I haven't been able to find a reference.
Would anyone have pointers?
Also, jumping from the group to the character table (which seems to imply that there is indeed an algorithm to compute all possible representations) without being told how the mapping is constructed feels like a rather big mental jump. (And what makes the trace of the matrices important, by the way, rather than, say, the determinant?)
Given the multiplication table of a (finite) group, there are a couple of "obvious" representations you can pick. Firstly, there's always the trivial representation, which just sends everything to the identity.
More interestingly, you can use the group elements as labels for an orthonormal basis of a vector space (i.e. your orthonormal set is {v_g for all g in G}). Then have the representation act by the usual group operation on the labels: R_h[v_g] = v_{hg}. Once you have an operation defined on a basis, extending it linearly to the rest of the vector space is easy. This is the (left) regular representation: https://en.wikipedia.org/wiki/Regular_representation
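To make this concrete, here is a short sketch (my own, using Z/3 as the example group) that builds the left regular representation directly from a multiplication table and checks that it is a homomorphism:

  import numpy as np

  elements = [0, 1, 2]                    # the group Z/3
  mult = lambda h, g: (h + g) % 3         # its "multiplication table"

  def regular_rep(h):
      # Matrix R_h acting by R_h[v_g] = v_{hg} in the basis {v_g}
      n = len(elements)
      R = np.zeros((n, n))
      for j, g in enumerate(elements):
          R[elements.index(mult(h, g)), j] = 1
      return R

  # Homomorphism check: R_h R_g = R_{hg} for all h, g
  for h in elements:
      for g in elements:
          assert np.allclose(regular_rep(h) @ regular_rep(g),
                             regular_rep(mult(h, g)))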
Intuition for why the trace is important comes from looking at permutation matrices: the trace is the sum of the diagonal elements of the matrix, and the number of 1s on the diagonal is exactly the number of points left invariant by the permutation. E.g. the permutation
[[1,0,0,0],
[0,1,0,0],
[0,0,0,1],
[0,0,1,0]]
has trace 2, and the permutation leaves the first two elements unchanged and flips the last two.
For trace: there is a perspective that the trace is a generalized dimension-counting operator. If we have a projection matrix (a matrix A such that AA = A), then the trace of this matrix gives us the number of dimensions it projects onto. Indeed, requiring this property together with linearity essentially uniquely determines the trace. So trace is the linear-algebraization of dimension counting.
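A quick numerical check of this claim (a sketch, not from the comment): an idempotent matrix, even an oblique projection, has trace equal to the dimension of its image.

  import numpy as np

  # Oblique projection onto span{e1, e2} along a skewed complement
  P = np.array([[1., 0., 2.],
                [0., 1., 3.],
                [0., 0., 0.]])
  assert np.allclose(P @ P, P)          # idempotent: P is a projection
  # trace = 2.0 = number of dimensions projected onto
  assert round(np.trace(P)) == np.linalg.matrix_rank(P)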
/Actual matrices/: Given a particular representation R of a group, you want to break it up into a direct sum of irreducible representations, R = R0 (+) R1 (+) ... (+) Rk. Based on the character theory, you can write down this decomposition without any actual matrices... But suppose you want some actual matrices to play with. The 'direct sum' statement is the same as saying that there's a basis for R in which the group action is block diagonal, with the blocks being something from R0, R1, ..., Rk respectively. The 'concrete' requirement probably means you want to find a particular change-of-basis operation between your preferred basis of R and the block-diagonal one; i.e., find P such that:
P R(g) P^-1 = R0(g) (+) ... (+) Rk(g)
for every element g in the group.
My lazy answer, then, is that if you have a concrete (e.g., in-matrices) knowledge of R and of each of the irreps, you can just solve the (many) linear equations to find P. (Remember that each g gives us another matrix equation, and they all hold simultaneously for P, so the system is massively overdetermined.)
Another way would be to construct a projection pi from R onto each of the irreducible components, and decompose R as Image(pi) (+) Kernel(pi) for each of these projections.
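One standard way to build such projections, not necessarily what the parent had in mind, is the character-projection formula P_i = (dim V_i / |G|) * sum_g conj(chi_i(g)) R(g), which projects onto the isotypic component of the i-th irrep. A sketch for the regular representation of Z/3, where every irrep is one-dimensional with character chi_k(g) = omega^(k g), omega = exp(2 pi i / 3):

  import numpy as np

  n = 3
  omega = np.exp(2j * np.pi / n)

  def R(g):
      # Left regular representation of Z/n: cyclic shift of basis labels by g
      return np.roll(np.eye(n), g, axis=0)

  projectors = []
  for k in range(n):
      P = sum(np.conj(omega ** (k * g)) * R(g) for g in range(n)) / n
      assert np.allclose(P @ P, P)      # each P_k is a projection
      projectors.append(P)

  # The projectors decompose the identity: R splits into 3 one-dim irreps
  assert np.allclose(sum(projectors), np.eye(n))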
This is a really good article on groups and their representations, but the title is unfortunate. It could have just been "Representation theory and how it transformed mathematics".
I like the title, although maybe it could have included the phrase "representation theory". I did not know that Burnside expected so little of representation theory; I think it's an interesting and important part of the article.
Some nights I go on deep wiki dives in mathematics. Representation theory and set theory are the only sane ways I know of to approach higher mathematics. But it's their practical applications in software that amaze me.
Take Young tableaux (https://en.wikipedia.org/wiki/Young_tableau) and hook lengths, for example. I've been playing with the idea that you could use Young tableaux and hook lengths to represent groups of FSMs in a metric space, if you wanted to know mathematically whether one FSM could be topologically sorted into a congruent FSM.
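For readers unfamiliar with the hook-length machinery mentioned above (this is the standard formula, nothing specific to the FSM speculation): the hook of a cell in a Young diagram is the cell itself plus everything to its right and below it, and the hook length formula says the number of standard Young tableaux of a shape with n cells, which is also the dimension of the corresponding irreducible representation of S_n, is n! divided by the product of all hook lengths. A short sketch:

  from math import factorial

  def hook_lengths(partition):
      # Hook length of cell (i, j) = arm + leg + 1
      cols = [sum(1 for row in partition if row > j) for j in range(partition[0])]
      return [[(partition[i] - j - 1) + (cols[j] - i - 1) + 1
               for j in range(partition[i])]
              for i in range(len(partition))]

  def num_standard_tableaux(partition):
      n, prod = sum(partition), 1
      for row in hook_lengths(partition):
          for h in row:
              prod *= h
      return factorial(n) // prod

  print(hook_lengths([3, 2]))           # [[4, 3, 1], [2, 1]]
  print(num_standard_tableaux([3, 2]))  # 5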
>“Mathematicians basically know everything there is to know about matrices. It’s one of the few subjects of math that’s thoroughly well understood,” said Jared Weinstein of Boston University.
I'm genuinely curious how such a statement can be made. I've recently been wondering whether it's possible to prove that one's theorems are exhaustive over the 'space' of possible theorems in an axiomatic system (or some subset thereof, since I'd assume such a space might be infinite).
How can we know that there aren't some really surprising properties of matrices that we've so far been unaware of? As far as I can see, we can merely say that what we know fits well together with the other (limited) findings we've made so far.
It's more that in linear algebra we have "the nicest theorems we can hope for", so there are no "nasty surprises" lurking around. This is qualitatively different from pretty much all other algebraic structures one studies, where there is some amount of nastiness which we attempt to control.
Most axiomatic systems have an infinite number of theorems. I would say that a theory is "known" if there is an algorithm to decide which statements follow from the theory. An example is plane geometry, which has such an algorithm due to Tarski. (The algorithm is pretty slow, though -- doubly exponential. Maybe a better criterion is a polynomial-time algorithm, though I don't know of a system that has one.)
Practically, questions about matrices are easy. (Questions about matrices over the real and complex numbers can probably be reduced to the same Tarski algorithm, in fact.) If you can reduce a question to a linear algebra question, you're probably going to solve it. There are open questions in linear algebra if you force the entries to be integers, for example, but odds are you don't have one of those.
This is of course just hyperbole, especially in its original context.
For one thing, people routinely use representation theory to study groups of matrices. That is, they represent the group of matrices by other matrices, acting on other spaces. If matrices were such an elementary, "thoroughly understood" object, one presumably wouldn't have needed to do that.
The real power of representation theory is not that it's matrices, but the notion of irreducibility. This lets you decompose a complicated action into simpler blocks and understand it that way.
The sense in which this is true is similar to what they mean when they say that a car engine mechanic “knows everything there is to know about car engines.” Sure, conceivably you could do with a car engine something that it is not designed for and which would be beyond what the mechanic would know or expect, or you could (unlikely) invent a new material (a “number” system for matrix elements) which would lead to results that are hard to predict in advance, but the “engine science” itself would hardly benefit from such things...
Just this year I read that a new identity relating eigenvalues and eigenvectors had been discovered, to the surprise of the math world that something new in basic linear algebra could still turn up.
To your last question, Gödel's incompleteness theorem prevents us from proving everything about linear algebra. So there might be surprising statements that are true but can never be proved to be true given the axioms.
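The new result is presumably the "eigenvector-eigenvalue identity" of Denton, Parke, Tao, and Zhang (2019): for a real symmetric (or Hermitian) matrix A, the squared entries of its unit eigenvectors are determined by the eigenvalues of A and of its principal minors. A quick numerical check (a sketch, not from the thread):

  import numpy as np

  rng = np.random.default_rng(1)
  A = rng.random((4, 4)); A = (A + A.T) / 2   # random symmetric matrix
  lam, V = np.linalg.eigh(A)                  # eigenvalues and eigenvectors
  i, j = 2, 0
  Mj = np.delete(np.delete(A, j, 0), j, 1)    # delete row and column j
  mu = np.linalg.eigvalsh(Mj)

  # |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k)
  lhs = V[j, i]**2 * np.prod([lam[i] - lam[k] for k in range(4) if k != i])
  assert np.isclose(lhs, np.prod(lam[i] - mu))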
Are there any textbooks that provide a good introduction into Representation Theory, assuming a decent level of undergraduate mathematics as a foundation?
Not a standard source for algebraists, but Percy Diaconis's book, mentioned in other comments, is an excellent introduction. Also contains applications in probability.
I believe the Burnside quote comes from the preface of his 1897 book:
> It may then be asked why, in a book which professes to leave all applications on one side, a considerable space is devoted to substitution groups; while other particular modes of representation, such as groups of linear transformations, are not even referred to. My answer to this question is that while, in the present state of our knowledge, many results in the pure theory are arrived at most readily by dealing with properties of substitution groups, it would be difficult to find a result that could be most directly obtained by the consideration of groups of linear transformations.
This article gave me both in a very understandable and engaging way. Thank you to the author!!!
http://www.amazon.com/exec/obidos/ASIN/0123392519/donhosek
https://www.math.columbia.edu/~woit/wordpress/?p=11776
https://en.wikipedia.org/wiki/Geordie_Williamson
On wikipedia: https://en.wikipedia.org/wiki/Computing_the_permanent
Scott Aaronson has written about it: https://www.scottaaronson.com/blog/?p=2925
Sure, some new interesting question may arise tomorrow.
[1] Teleman 2005, "Representation Theory": https://math.berkeley.edu/~teleman/math/RepThry.pdf
[2] Khovanov, List of Representation Theory Resources: http://www.math.columbia.edu/~khovanov/resources/
[3] Huang 2010, "Fourier-Theoretic Probabilistic Inference over Permutations"
[4] Woit, Topics in Representation Theory course: http://www.math.columbia.edu/~woit/repthy.html
[5] Gallier 2013, "Spherical Harmonics and Linear Representations of Lie Groups": https://www.cis.upenn.edu/~cis610/sharmonics.pdf