>“Mathematicians basically know everything there is to know about matrices. It’s one of the few subjects of math that’s thoroughly well understood,” said Jared Weinstein of Boston University.
I'm genuinely curious how such a statement can be made. I've recently been wondering whether it's possible to prove that one's theorems exhaust the 'space' of possible theorems in an axiomatic system, or some subset thereof, since I'd assume such a space might well be infinite.
How can we know that there aren't some really surprising properties of matrices that we've previously been unaware of? As far as I can see, all we can really say is that what we know so far fits well together with our other (limited) findings.
It's more that in linear algebra we have "the nicest theorems we can hope for", so there are no "nasty surprises" lurking around. This is qualitatively different from pretty much all other algebraic structures one studies, where there is some amount of nastiness which we attempt to control.
Indeed, “linear” and “nice” are almost synonymous in mathematics and, especially, in its applications. (For example, the entire differential calculus is based on the idea of approximating arbitrary “nice” functions with linear maps.)
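Spelling out that parenthetical: differentiability of f at a point a says exactly that f is well approximated near a by a linear map,

    \[ f(a + h) = f(a) + Df(a)\,h + o(\lVert h \rVert), \]

where the derivative Df(a) is a linear map, i.e. a matrix once bases are chosen.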
Most axiomatic systems have an infinite number of theorems. I would say that a theory is "known" if there is an algorithm to decide which statements follow from the theory. An example is plane geometry, which has an algorithm due to Tarski. (The algorithm is pretty slow, though -- doubly exponential. Maybe a better criterion is a polynomial-time algorithm, though I don't know of a system that has such.)
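To give a flavor of how such a decision procedure works (my illustration, not part of the parent comment): Tarski-style quantifier elimination removes one quantifier at a time. For instance, over the reals,

    \[ \exists x\,(ax^2 + bx + c = 0) \;\iff\; (a \neq 0 \wedge b^2 - 4ac \geq 0) \vee (a = 0 \wedge b \neq 0) \vee (a = 0 \wedge b = 0 \wedge c = 0). \]

Iterating this reduces any first-order sentence about the reals to a quantifier-free Boolean combination of polynomial sign conditions, which can be evaluated directly; roughly speaking, the blow-up in formula size is where the doubly exponential cost comes from.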
Practically, questions about matrices are easy. (Matrices over the real and complex numbers can probably be reduced to the same Tarski algorithm, actually.) If you can reduce a question to a linear algebra question, you're probably going to solve it. There are open questions in linear algebra if you force the entries to be integers, for example, but odds are you don't have one of those.
This is of course just hyperbole, especially in its original context.
For one thing, people routinely use representation theory to study groups of matrices. That is, represent the group of matrices by other matrices, acting on other spaces. If matrices were this elementary "thoroughly understood" object, one presumably wouldn't have needed to do that.
The real power of representation theory is not that it uses matrices, but the notion of irreducibility. This lets you decompose a complicated action into simpler blocks and understand it that way.
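A minimal sketch of what "decompose into blocks" means (my toy example, not from the comment): the representation of Z/2 on C^2 that swaps the two coordinates splits into two 1-dimensional irreducibles after a change of basis.

    import numpy as np

    # The nontrivial element of Z/2 acts on C^2 by swapping coordinates.
    swap = np.array([[0, 1],
                     [1, 0]])

    # Change of basis to symmetric/antisymmetric vectors.
    U = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)

    # Prints diag(1, -1) (up to rounding): two 1x1 blocks,
    # the trivial and the sign representation.
    print(U.T @ swap @ U)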
The sense in which this is true is similar to what people mean when they say that a car engine mechanic "knows everything there is to know about car engines." Sure, conceivably you could do something with a car engine that it was never designed for, beyond what the mechanic would know or expect; or you could (though it's unlikely) invent a new material (a new "number" system for matrix entries) that would lead to results that are hard to predict in advance. But "engine science" itself would hardly benefit from such things...
Just this year I read that a new insight about eigenvalues/eigenvectors had been discovered, to the surprise of the math world that anything new could still be found in linear algebra.
Here is a write-up by Terry Tao. This doesn't disprove the premise that linear algebra is more or less solved. Although the result was unknown and a little surprising, once formulated the proof was relatively straightforward, and does not imply any major or minor upsets in the field.
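For reference, the result in question appears to be the Denton–Parke–Tao–Zhang "eigenvectors from eigenvalues" identity: for an n×n Hermitian matrix A with eigenvalues λ_k(A), unit eigenvectors v_i, and M_j the minor of A with row and column j deleted,

    \[ |v_{i,j}|^{2} \prod_{k \neq i} \bigl(\lambda_i(A) - \lambda_k(A)\bigr) \;=\; \prod_{k=1}^{n-1} \bigl(\lambda_i(A) - \lambda_k(M_j)\bigr). \]

It's easy to sanity-check numerically; a minimal sketch, assuming numpy (variable names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    X = rng.standard_normal((n, n))
    A = (X + X.T) / 2                        # random real symmetric matrix

    lam, V = np.linalg.eigh(A)               # V[:, i] is the i-th unit eigenvector
    i, j = 1, 2                              # arbitrary indices for the check

    M = np.delete(np.delete(A, j, axis=0), j, axis=1)   # delete row j, column j
    mu = np.linalg.eigvalsh(M)               # eigenvalues of the minor

    lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
    rhs = np.prod(lam[i] - mu)
    print(np.isclose(lhs, rhs))              # True, up to rounding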
To your last question: Gödel's incompleteness theorem prevents us from proving everything about linear algebra. So there might be surprising statements that are true but can never be proved to be true given the axioms.
That's not what Gödel's first theorem says. What Gödel's theorem tells us about linear algebra is that either it's inconsistent, or there are statements which are true in some models of linear algebra but false in others.
In fact, we do know that any statement in linear algebra that is true[1] is provable. That's because vector spaces are first-order structures, so this is covered by Gödel's completeness theorem.
[1]: I.e. it's true for all models of linear algebra.
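Spelled out (my paraphrase), the completeness theorem says that for first-order theories, semantic truth and provability coincide:

    \[ T \models \varphi \quad\Longleftrightarrow\quad T \vdash \varphi. \]

So a sentence that holds in every model of the vector space axioms is provable from those axioms.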
What about the mortal matrix problem? Given a finite set of matrices with integer entries, determine whether you can multiply them in some order, possibly with repetition, to get the zero matrix. It's been proven undecidable for "a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices", at least according to Wikipedia: https://en.wikipedia.org/wiki/List_of_undecidable_problems . Is that because it incorporates the integers or something?
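Worth noting (my gloss): the problem is at least semi-decidable. You can enumerate products of increasing length and halt if a zero product shows up; undecidability means no algorithm can also certify "no" in general. A brute-force sketch, assuming numpy (`find_zero_product` is a hypothetical helper of mine):

    from functools import reduce
    from itertools import product
    import numpy as np

    def find_zero_product(matrices, max_length=6):
        # Try every product of up to max_length factors (repetition allowed).
        # A hit proves mortality; no hit proves nothing, consistent with
        # the problem being undecidable in general.
        for length in range(1, max_length + 1):
            for word in product(range(len(matrices)), repeat=length):
                result = reduce(np.matmul, (matrices[k] for k in word))
                if not result.any():        # every entry is zero
                    return word             # indices of a mortal word
        return None                         # inconclusive within the bound

    A = np.array([[0, 1],
                  [0, 0]])
    B = np.array([[1, 0],
                  [0, 0]])
    print(find_zero_product([A, B]))        # (0, 0), since A @ A == 0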
To say that a statement is undecidable relative to a set of axioms is to say that the set of axioms is satisfied by several structures, and the statement is true in some of them but false in others. The structures which satisfy a set of axioms are called models of that set of axioms.
The whole deal with undecidable statements in mathematics is that our language creates the illusion that there is only one structure deserving of the name "natural numbers", but the "natural numbers" are defined by a set of axioms. What Gödel proved is that when a set of axioms (and the language it is expressed in) is powerful enough, this set of axioms is either inconsistent (i.e. it has no model), or there are multiple models and there exist statements in the language which are true in some models but not in others; so you could say that the "truth status" of these statements isn't decided by the set of axioms.
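A small, concrete instance of the same phenomenon (my example, one level simpler than arithmetic): relative to the group axioms, the sentence

    \[ \forall x\,\forall y\;\; x \cdot y = y \cdot x \]

is true in some models (abelian groups) and false in others (e.g. the symmetric group S_3), so the group axioms simply don't decide commutativity.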
EDIT:
Related issue: given a class of structures, in general it may not be possible to write down a set of axioms for which this class is exactly the class of all models of the axioms. An example of that in first-order logic is well-ordered sets; you need second-order logic for those (i.e. you need to be able to quantify over subsets, instead of just over elements of the universe).
So the way I think about all this is that sets of axioms are inherently imprecise. When you add another axiom in order to restrict yourself to a smaller number of structures, you always jump over several of them; you're never able to throw out just one.
OK, so the general idea of the OP's comment is true: there are statements about linear algebra which (in ZFC) we can neither prove nor disprove, like the mortal matrix problem, correct? It's just that the wording was inexact?
It's true that *we have statements about linear algebra which (in ZFC) we can neither prove nor disprove*[1]. However, given that these statements simply aren't true, this doesn't tell us much: it's not that we can't know everything there is to know about linear algebra; it's that these things we can't know aren't facts about linear algebra to start with.
[1]: On second thought, let me rephrase that: we have statements about structures partially described by linear algebra which (in ZFC) we can neither prove nor disprove for all of them at the same time.
You’ve given excellent explanations on this topic. It’s rare that someone writes about the meaning of the Incompleteness theorems correctly.
I think it's worth pointing out that when talking about The Natural Numbers, most people are implicitly talking about the standard model of the first-order Peano axioms. There are statements true in this model (they have no counterexample there) which can't be proven from this set of axioms; Goodstein's theorem is a concrete example.
I think many people making claims like,
“There are true statements about The Natural Numbers that aren’t provable.”
don't realize exactly the nuances you've pointed out. Assuming The Natural Numbers are consistent, all true statements are provable in some axiom system. Just take the collection of all true statements as the axiom system. Now every true statement is trivially provable.
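In symbols (my gloss), the proposal is to axiomatize with true arithmetic itself:

    \[ T \;=\; \operatorname{Th}(\mathbb{N}) \;=\; \{\varphi : \mathbb{N} \models \varphi\}. \]

Every true sentence is then an axiom, hence provable in one step; the catch, of course, is whether such an axiom set can actually be written down or recognized.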
I hope what I’ve written doesn’t muddy your excellent explanations!
> Assuming The Natural Numbers are consistent, all true statements are provable in some axiom system. Just take the collection of all true statements as the axiom system. Now every true statement is trivially provable.
That axiom system wouldn't be particularly useful to humans, though. When we talk about sets of axioms, we almost always mean finite sets of axioms; this is what makes them useful to us and lets us use them to describe things.
But you do have the right intuition here. The next step is using the compactness theorem.
It doesn't have to be a finite set of axioms, just a recursively enumerable one. The first-order Peano axioms are not finite in number: one of them is an axiom schema. (I believe.)
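For reference, the schema in question is induction: one axiom for each formula φ(n),

    \[ \bigl(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n\,\varphi(n), \]

so the axiom set is infinite but still recursively enumerable.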
My understanding of the subject is based on a course in mathematical logic which I took at a university. According to the lecturer, there are no good books on the subject. There was one book they referenced, but it was fat and unapproachable. So, unfortunately, I can't really give any recommendations.
Be careful with your phrasing there. The axioms for the real numbers (as a real closed field) are complete and consistent (within ZFC). Hence I'm almost sure the axioms for each of the finite-dimensional vector spaces will also be complete and consistent.
Part of the reason is that you can't define the integers within the first-order theory of the real numbers: by quantifier elimination, every definable subset of the reals is a finite union of points and intervals, and the integers are not. So Gödel-style arithmetic encoding never gets off the ground.