A somehow much-less-popular way to come at representation theory is via generalized Fourier transforms. The irreducible representations are basically different "frequencies" which you can use to decompose functions on a group. And in fact, all of the major Fourier theorems carry over perfectly... There are even analogues of the fast Fourier transform, via chains of subgroups.
Yeah, I've always found generalized Fourier transforms and harmonic analysis to be really interesting! For those interested in reading more, some of the foundational work is the notion of a Haar measure and Pontryagin duality. Basically, for any locally compact abelian group you can define a Fourier transform, and it's essentially unique (the underlying Haar measure is unique up to a multiplicative constant). If you use this definition on the group of real numbers under addition you get the continuous Fourier transform, and if you use it on the integers mod N under addition you get the discrete Fourier transform! And you can define the Fourier transform for more exotic groups, like the group of points on an elliptic curve. In fact, this is how Shor's algorithm is able to factor large numbers, solve the discrete log problem, and break elliptic curve cryptography all in one algorithm: in each case it essentially just takes the Fourier transform of a periodic function built from the problem and measures. Thanks to harmonic analysis, you can define a Fourier transform for elliptic curves and just plug that into Shor's algorithm.
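To make the cyclic case concrete, here's a minimal numpy sketch (N = 8 and the variable names are my choices) showing that the characters of Z/N, stacked as rows, are exactly the usual DFT matrix:

    import numpy as np

    # Characters of the cyclic group Z/N are chi_k(n) = exp(-2*pi*i*k*n/N);
    # stacking them as rows gives the DFT matrix.
    N = 8
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # W[k, n] = chi_k(n)

    x = np.random.rand(N)
    assert np.allclose(W @ x, np.fft.fft(x))  # numpy's FFT computes the same transform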
I mean it could be, but not in the way you're talking about. There's a common misconception that quantum algorithms being able to factor integers and solve the discrete log problem in polynomial time has some implications for P vs NP. However, integer factoring and the discrete log problem are NOT NP-complete problems, and there is no published quantum algorithm that can solve an NP-complete problem in polynomial time. It's actually an open problem whether quantum computers are any more powerful than classical computers in terms of complexity classes (I think even assuming P != NP, but I'm not a computing theory person and haven't looked into more recent work on it).
Interesting you mention this. I wrote a Julia library a few years ago to generate both real and unitary irreducible representations for O(3) and the symmetric group. It also implements the Fourier transform for the symmetric group and the fast Fourier transform for O(3). I really miss that type of interesting work from grad school... unfortunately I don't have any projects that involve representation theory in industry anymore :(
I wouldn't call this 3blue1brown video a reference, but until I watched it (coming from an audio background), I didn't quite grasp that the continuous outline of any complex silhouette can be traced by a sum of contributing rotating vectors. It helped click that there is a perfect visual analogy to the decomposition of complex audio into sine waves.
Despite Fermat's brilliance, it's hardly plausible that such a proof exists after 300 years of attempts by professionals and amateurs alike.
Ironically though, the tantalising idea of the existence of a simple proof - motivated by that infamous quote - contributed massively (along with the simplicity of the conjecture itself) to the theorem's notoriety and the myriad attempts to solve it.
He wrote the note to himself a long time before he died. He then publicly announced a proof for the n=4 case, and nothing about the general case. That strongly suggests he had realized his proof of the general case was wrong.
It seems there's a fair chance it was a flawed proof, in which case I doubt we'll ever be particularly confident about which one it was, although we could possibly surface some candidates.
Something I've always wondered: do we know of a proof that is both simple and non-obviously flawed, and as such could have been Fermat's solution?
Yes, Lamé's solution, which assumes unique factorization in the rings of integers of cyclotomic fields, and which can in fact be salvaged to prove the case of regular primes.
Thanks! While I don't fully understand what this means, I think I get the idea. I assume it's a proof that covers infinitely many cases of n (certain primes) but not all of them? Googling it was hard because there are other Lamé things. I'll just imagine he wrote this proof. Not the full proof, but not lame!
Representation theory, the subject of the submitted article, is also at the core of our understanding of the particle zoo in physics. A tutorial aimed at mathematicians is here:
Great link, thanks! I always enjoy reading summaries of other fields written by/for mathematicians, since the assumption of mathematical background allows a lot of fluff to be cut out (compared to, e.g., reading a textbook about physics intended for physics students).
For background reading, the notes by Teleman 2005, "Representation Theory" [1] are a good intro to the topic.
Learning about representation theory helped me understand the power of thinking about mathematical objects in terms of their actions on other objects. Just as representation theory studies group actions on vector spaces, the theory of modules is best described as the study of ring actions on commutative groups. Many things happen to be rings (e.g. the endomorphisms of an abelian group, where addition = pointwise addition and multiplication = composition), and modules allow us to apply (almost all of) vector space theory to better understand ring-like objects.
Very good article in that it taught me something I didn't know existed, and explained it very well.
However, I wish the article gave an example of how to actually construct a mapping between, e.g., a small finite group and the actual matrices in the representation, to sort of get a feel for why that actually works.
I suspect there must be some sort of canonical method/algorithm to get from the "multiplication table" of a finite group to each matrix in a representation, but I haven't been able to find a reference.
Would anyone have pointers?
Also, jumping from the group to the character table (which seems to imply that there is indeed an algorithm to compute all possible representations) without having been told how the mapping is constructed feels like a rather big mental jump (and what makes the trace of the matrices important, btw, rather than, say, the determinant?).
Given the multiplication table of a (finite) group there are a couple of "obvious" representations you can pick. First, there's always the trivial representation that just sends everything to the identity.
More interestingly, you can use the group elements as labels for an orthonormal basis of a vector space (i.e. your orthonormal set is {v_g for all g in G}). Then have the representation act by the usual group operation on the labels: R_h[v_g] = v_{hg}. Once you have an operation defined on a basis, extending it linearly to the rest of the vector space is easy. This is the (left) regular representation.
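To make this concrete, here's a minimal numpy sketch (the group Z/3 and the function names are just my choices) that builds the left regular representation exactly as described:

    import numpy as np

    # Left regular representation of Z/3: R(h) sends basis vector v_g to v_{h*g}.
    elems = [0, 1, 2]                    # group elements: Z/3 under addition mod 3
    def mult(h, g): return (h + g) % 3   # the group's "multiplication table"

    def regular_rep(h):
        R = np.zeros((len(elems), len(elems)))
        for col, g in enumerate(elems):
            R[elems.index(mult(h, g)), col] = 1   # v_g -> v_{h*g}
        return R

    # Sanity check that it's a homomorphism: R(h1) R(h2) = R(h1*h2)
    assert np.array_equal(regular_rep(1) @ regular_rep(2), regular_rep(mult(1, 2)))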
Intuition for why the trace is important comes from looking at permutation matrices: the trace is the sum of the diagonal elements of the matrix, and the number of 1s on the diagonal is exactly the number of points that are invariant under the permutation. E.g. the permutation
[[1,0,0,0],
[0,1,0,0],
[0,0,0,1],
[0,0,1,0]]
has trace 2, and the permutation leaves the first two elements unchanged and flips the last two.
(1) The word 'orthonormal' doesn't have to be there; representations are a priori just linear actions; if they carry an invariant inner product, that is a bonus ('unitary representation'). For compact (including finite) groups, the invariant inner product is no extra information, because we could pick any (not necessarily invariant) inner product and average it with respect to Haar measure to get an invariant one. This product is unique up to a scalar for an irreducible representation, but not in general (and the left-regular representation is never irreducible for a non-trivial finite group).
(2) For people who are more used to cycle notation than the matrix representation of permutations, your permutation is (3 4). In roster notation, it's {1↦1, 2↦2, 3↦4, 4↦3}.
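To make point (1) concrete, here's a minimal numpy sketch of the averaging trick (the group Z/3 acting on R^2 by 120-degree rotations is just an illustrative choice):

    import numpy as np

    # Average an arbitrary inner product <u, v> = u^T B v over a finite group
    # to get a group-invariant one.
    def rot(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    G = [rot(2 * np.pi * k / 3) for k in range(3)]

    X = np.random.rand(2, 2)
    B = X.T @ X + np.eye(2)                       # a random (non-invariant) inner product
    B_inv = sum(R.T @ B @ R for R in G) / len(G)  # averaged over the group

    for R in G:
        assert np.allclose(R.T @ B_inv @ R, B_inv)  # now invariant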
For trace: there is a perspective that the trace is a generalized dimension-counting operator. If we have a projection matrix (a matrix A such that AA = A), then the trace of this matrix gives us the number of dimensions it projects onto. Indeed, requiring this property and linearity uniquely* defines the trace. So, trace is the linear-algebraization of dimension counting.
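A quick numpy illustration (the matrices are my toy choices):

    import numpy as np

    # A projection onto a 2-dimensional subspace of R^4:
    # A @ A == A, and the trace counts the dimensions projected onto.
    A = np.diag([1., 1., 0., 0.])
    assert np.allclose(A @ A, A) and np.trace(A) == 2.0

    # The count survives any change of basis, since trace is basis-independent.
    P = np.random.rand(4, 4)          # invertible with probability 1
    B = P @ A @ np.linalg.inv(P)      # a non-orthogonal projection, B @ B == B
    assert np.isclose(np.trace(B), 2.0)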
/Actual matrices/: Given a particular representation R of a group, you want to break it up into a direct sum of irreducible representations, R = R0 (+) R1 (+) ... (+) Rk. Based on the character theory, you can write down this decomposition without actual matrices... But suppose you want some actual matrices to play with. The 'direct sum' statement is the same as saying that there's a basis for R in which the group action is block diagonal, with the blocks being something from R0, R1, ..., Rk respectively. The 'concrete' requirement probably means you want to find a particular change-of-basis operation between your preferred basis of R and the block-diagonal one; i.e., find P such that:
P R(g) P^{-1} = R0(g) (+) ... (+) Rk(g)
for every element g in the group.
My lazy answer, then, is that if you have a concrete (e.g., in-matrices) knowledge of R and each of the irreps, you can just solve the (many) linear equations to find P. (Remember that each g gives us another matrix equation here, and they all hold simultaneously for P, so this is massively overdetermined.)
Another way would be to construct a projection pi from R onto each of the irreducible components, and split R as Image(pi) (+) Kernel(pi) for each of these projections.
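Here's a hedged sketch of that projection approach using the standard character formula P_k = (d_k/|G|) sum_g conj(chi_k(g)) R(g), with the regular representation of Z/3 as a deliberately small example (abelian, so every irrep is 1-dimensional):

    import numpy as np

    G = range(3)
    R = {h: np.array([[1.0 if (h + g) % 3 == r else 0.0 for g in G] for r in G])
         for h in G}                              # left regular representation
    w = np.exp(2j * np.pi / 3)
    chi = lambda k, g: w ** (k * g)               # the three irreducible characters

    for k in G:
        Pk = sum(np.conj(chi(k, g)) * R[g] for g in G) / 3
        assert np.allclose(Pk @ Pk, Pk)           # P_k is a projection
        assert np.isclose(np.trace(Pk).real, 1.0) # onto a 1-dim (irreducible) block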
This is a really good article on groups and their representations, but the article title is unfortunate. It could have just been "Representation theory and how it transformed mathematics".
> In the paper, Wigner observed that the mathematical structure of a physical theory often points the way to further advances in that theory and even to empirical predictions.
I think it's a good analogy since it applies exactly how Wigner meant it, except in this case it is a branch of mathematics being applied to mathematics instead of physics. Representation theory has been "unreasonably useful and effective", which is what the original article was about.
In Frederik Pohl's The Years of the City, gendered pronouns are completely gone. Instead of he or she, there is "e". His/her is "er", and him/her is "um". Too bad it didn't catch on.
We collectively accepted that titles are supposed to be memetic though.
For instance, readability comes second to shortness, and accuracy is a lower priority than impact. Our whole history with titles points to short, punchy catch copy.
Purely descriptive titles only work when the reader will read the paper anyway (e.g. peer reviewing), which is far from the setting we have on HN.
I like the title, although maybe it could have included the phrase "representation theory". I did not know that Burnside expected so little of representation theory; I think it's an interesting and important part of the article.
Some nights I go deep down Wikipedia rabbit holes in mathematics. Representation theory and set theory are the only sane ways I know of to approach higher mathematics. But it's their practical applications in software that amaze me.
Take Young tableaux (https://en.wikipedia.org/wiki/Young_tableau) and hook lengths, for example. I've been playing with the idea that you could use Young tableaux and hook lengths to represent groups of FSMs in a metric space, if you wanted to know mathematically whether one FSM could be topologically sorted into a congruent FSM.
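As a concrete taste, here's a minimal sketch of the hook length formula (function names are mine); the count it produces is also the dimension of the corresponding irreducible representation of the symmetric group S_n:

    from math import factorial, prod

    # Number of standard Young tableaux of a shape = n! / product of hook lengths.
    def hook_lengths(shape):  # shape = weakly decreasing row lengths, e.g. [2, 1]
        cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
        return [(shape[i] - j) + (cols[j] - i) - 1
                for i in range(len(shape)) for j in range(shape[i])]

    def num_standard_tableaux(shape):
        return factorial(sum(shape)) // prod(hook_lengths(shape))

    print(num_standard_tableaux([2, 1]))  # 2, the dimension of the 2-dim irrep of S_3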
>“Mathematicians basically know everything there is to know about matrices. It’s one of the few subjects of math that’s thoroughly well understood,” said Jared Weinstein of Boston University.
I'm genuinely curious how such a statement can be made. I've recently been wondering whether it's possible to prove that one's theorems are exhaustive over a 'space' of possible theorems in an axiomatic system (or some subset thereof, since I'd assume such a space might be infinite).
How can we know that there aren't some really surprising properties of matrices that we've previously been unaware of? As far as I can see, we can merely say that what we know fits well together with the other (limited) findings we've made so far?
It's more that in linear algebra, we have "the nicest theorems we can hope for", so there's no "nasty surprises" lurking around. This is qualitatively different from pretty much all other algebraic structures one studies, where there is some amount of nastiness which we attempt to control.
Indeed, “linear” and “nice” are almost synonymous in mathematics and, especially, in its applications. (For example, the entire differential calculus is based on the idea of approximating arbitrary “nice” functions with linear maps.)
Most axiomatic systems have an infinite number of theorems. I would say that a theory is "known" if there is an algorithm to decide which statements follow from the theory. An example is plane geometry, which has such an algorithm due to Tarski. (The algorithm is pretty slow, though -- doubly exponential. Maybe a better criterion is a polynomial-time algorithm, though I don't know of a system that has one.)
Practically, questions about matrices are easy. (Matrices over the real and complex numbers can probably be reduced to the same Tarski algorithm, actually.) If you can reduce a question to a linear algebra question, you're probably going to solve it. There are open questions in linear algebra if you force the entries to be integers, for example, but odds are you don't have one of those.
This is of course just hyperbole, especially in its original context.
For one thing, people routinely use representation theory to study groups of matrices. That is, represent the group of matrices by other matrices, acting on other spaces. If matrices were this elementary "thoroughly understood" object, one presumably wouldn't have needed to do that.
The real power of representation theory is not the matrices themselves, but the notion of irreducibility. This allows you to decompose a complicated action into simpler blocks and understand it that way.
The sense in which this is true is similar to what they mean when they say that a car engine mechanic “knows everything there is to know about car engines.” Sure, conceivably you could do with a car engine something that it is not designed for and which would be beyond what the mechanic would know or expect, or you could (unlikely) invent a new material (a “number” system for matrix elements) which would lead to results that are hard to predict in advance, but the “engine science” itself would hardly benefit from such things...
Just this year I read that a new insight about eigenvalues/eigenvectors had been discovered, surprising the math world that something new in basic linear algebra was still waiting to be found.
Here is a write-up by Terry Tao. This doesn't disprove the premise that linear algebra is more or less solved. Although the result was unknown and a little surprising, once formulated the proof was relatively straightforward, and does not imply any major or minor upsets in the field.
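Assuming the result in question is the eigenvector-eigenvalue identity from that write-up (Denton, Parke, Tao, Zhang), here's a quick numerical sanity check (sizes and indices are arbitrary choices of mine):

    import numpy as np

    # For Hermitian A with eigenvalues lam and unit eigenvectors v, and M_j = A
    # with row/column j deleted (eigenvalues mu), the identity states:
    #   |v_i[j]|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k)
    n, i, j = 5, 2, 0
    X = np.random.randn(n, n)
    A = (X + X.T) / 2                      # random real symmetric matrix
    lam, V = np.linalg.eigh(A)
    mu = np.linalg.eigvalsh(np.delete(np.delete(A, j, 0), j, 1))

    lhs = V[j, i]**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
    rhs = np.prod(lam[i] - mu)
    assert np.isclose(lhs, rhs)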
To your last question: Gödel's incompleteness theorem prevents us from proving everything about linear algebra. So there might be surprising statements that are true but can never be proved to be true given the axioms.
That's not what Gödel's first theorem says. What Gödel's theorem tells us about linear algebra is that either it's inconsistent, or there are statements which are true for some models of linear algebra, but false for others.
In fact, we do know that any statement in linear algebra that is true[1] is provable. That's because vector spaces are first-order structures, so this is covered by Gödel's completeness theorem[2].
[1]: I.e. it's true for all models of linear algebra.
What about the mortal matrix problem? Given a finite set of matrices with integer entries, determine if you can multiply them in some order, possibly with repetition, to get the zero matrix. It's proven undecidable for "a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices", at least according to Wikipedia: https://en.wikipedia.org/wiki/List_of_undecidable_problems . Is that because it incorporates the integers or something?
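For intuition on why the problem is at least semi-decidable, here's a minimal brute-force sketch (the matrices and the search bound are my choices): enumerate products by increasing length and report a witness word if some product is zero. On "no" instances an unbounded version of this search would simply never halt.

    import numpy as np
    from itertools import product

    def find_mortal_product(mats, max_len=6):
        for length in range(1, max_len + 1):
            for word in product(range(len(mats)), repeat=length):
                P = np.eye(mats[0].shape[0], dtype=int)
                for idx in word:
                    P = P @ mats[idx]
                if not P.any():           # all entries zero
                    return word
        return None  # no witness up to max_len -- proves nothing in general

    A = np.array([[0, 1], [0, 0]])
    B = np.array([[0, 0], [1, 0]])
    print(find_mortal_product([A, B]))  # (0, 0): A @ A is the zero matrix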
To say that a statement is undecidable relative to a set of axioms is to say that this set of axioms is satisfied by several structures and this statement is true for some of them, but false for some others. The structures which satisfy a set of axioms are called models of this set of axioms.
The whole deal with undecidable statements in mathematics is that our language creates the illusion that there is only one structure deserving of the name "natural numbers", when really the "natural numbers" are defined by a set of axioms. What Gödel proved is that, when a set of axioms (and the language used) is powerful enough, then this set of axioms is either inconsistent (i.e. it has no model), or there are multiple models and there exist statements in the language which are true in some models but not in others; so you could say that the "truth status" of these statements isn't decided by the set of axioms.
EDIT:
Related issue: given a class of structures, in general it may not be possible to write down a set of axioms for which this class of structures is exactly the class of all models of this set of axioms. An example of that in first-order logic is well-ordered sets. You need second-order logic for that (i.e. you need to be able to quantify over subsets, instead of just elements of the universe).
So the way I think about all that is that sets of axioms are inherently imprecise. When you add another axiom, in order to restrict yourself to a smaller number of structures, you always jump over several of them. You're never able to throw out just one.
OK, so the general idea of the OP's comment is true, that we have statements about linear algebra which (in ZFC) we can neither prove nor disprove, like the mortal matrix problem, correct? It's just that their wording was inexact?
It's true that "we have statements about linear algebra which (in ZFC) we can neither prove nor disprove"[1]. However, given that these statements simply aren't true, it doesn't tell us much: it's not that we can't know everything there is to know about linear algebra; it's that these things we can't know about linear algebra aren't facts about linear algebra to start with.
[1]: On a second thought, let me rephrase that: we have statements about structures partially described by linear algebra which (in ZFC) we can neither prove nor disprove for all of them at the same time.
You’ve given excellent explanations on this topic. It’s rare that someone writes about the meaning of the Incompleteness theorems correctly.
I think it's worth pointing out or adding that when talking about The Natural Numbers, most people are implicitly talking about the standard model of the first-order Peano axioms. There are statements in this model that are true (have no counterexample) but which can't be proven from this set of axioms.
I think many people making claims like,
“There are true statements about The Natural Numbers that aren’t provable.”
don’t realize exactly the nuances you’ve pointed out. Assuming The Natural Numbers are consistent then all true statements are provable in some axiom system. Just take the collection of all true statements as the axiom system. Now every true statement is trivially provable.
I hope what I’ve written doesn’t muddy your excellent explanations!
> Assuming The Natural Numbers are consistent then all true statements are provable in some axiom system. Just take the collection of all true statements as the axiom system. Now every true statement is trivially provable.
That axiom system wouldn't be particularly useful to humans though. When we talk about sets of axioms, we almost always talk about finite sets of axioms. This is what makes them useful to us, allows us to use them for describing things.
But you do have the right intuition here. The next step is using the compactness theorem[1].
It doesn't have to be a finite set of axioms, just recursively enumerable. The first-order Peano axioms are not finite in number. One of them is an axiom schema. (I believe.)
My understanding of the subject is based on a course in mathematical logic which I took at a university. According to the lecturer, there are no good books on the subject. There was one book they referenced, but it was fat and unapproachable. So, unfortunately, I can't really give any recommendations.
Be careful with your phrasing there. The axioms for the real numbers [1] are complete and consistent (within ZFC). Hence I'm almost sure the axioms for each of the finite dimensional vector spaces will also be complete and consistent.
Part of the reason is that you can't define the integers within the context of the real numbers.
Are there any textbooks that provide a good introduction into Representation Theory, assuming a decent level of undergraduate mathematics as a foundation?
Not a standard source for algebraists, but Persi Diaconis's book, mentioned in other comments, is an excellent introduction. It also contains applications in probability.
I believe the Burnside quote comes from the preface of his 1897 book:
> It may then be asked why, in a book which professes to leave all applications on one side, a considerable space is devoted to substitution groups; while other particular modes of representation, such as groups of linear transformations, are not even referred to. My answer to this question is that while, in the present state of our knowledge, many results in the pure theory are arrived at most readily by dealing with properties of substitution groups, it would be difficult to find a result that could be most directly obtained by the consideration of groups of linear transformations.
This article gave me both in a very understandable and engaging way. Thank you to the author!!!