As a math professor, I think you can get by without defining vector spaces -- I think it is a little bit difficult to come up with examples, other than R^n or C^n, which seem motivated and interesting to the beginner. The truly important example (IMHO) is R^n without a choice of basis -- but I think this can only be well motivated after you've seen a lot of linear algebra, not before.
The mechanics of matrix operations are not pleasant to teach; they make the subject seem like a bunch of contrived and confusing examples. Same for the "row echelon" stuff. Alright, you now have an algorithm for solving systems of equations... but presenting linear algebra as an algorithm sells it short.
What is really important in my view is the geometry of the subject, which is already very interesting in two dimensions. Problem: Here is a linear transformation, given as a 2x2 matrix. Draw a picture which illustrates what this does to the plane. If you ask me, this is much more important than most of the crap that gets taught and tested in most courses in linear algebra.
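That "draw a picture" exercise translates directly into code, too. A minimal sketch in Python (the shear matrix here is just one example transformation I picked):

```python
# Apply a 2x2 matrix to the corners of the unit square and see
# where they land -- a numeric stand-in for drawing the picture.

def apply(m, v):
    """Multiply a 2x2 matrix m by a 2-vector v."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

shear = [[1, 1],
         [0, 1]]  # horizontal shear: x' = x + y, y' = y

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
images = [apply(shear, v) for v in square]
print(images)  # -> [(0, 0), (1, 0), (2, 1), (1, 1)]
```

The unit square gets slanted into a parallelogram -- which is exactly the kind of geometric intuition that a page of row reduction never gives you.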
This is how I learned linear algebra, as a series of algorithmic steps to perform to reach the solution. Only a handful of homework examples even hinted that matrices were powerful and useful. I learned the semester before that class how matrices can be used to represent physical locations and how multiplying these matrices can move you along the coordinate frames for the joints in a robot arm all the way to the end effector. In fact, you can take the final transformation matrix you get after multiplying each joint's matrix together and use it to translate points in your native/world coordinate system into coordinates in your tool coordinate system. I later learned that matrices can be used for quantum computation/physics and all sorts of other things like weather prediction, graph theory, Bayesian networks, and so on.
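For anyone who hasn't seen it, here's a toy version of that forward-kinematics idea as a sketch (the 2-link planar arm, link lengths, and joint angles are made up for illustration; real arms use 4x4 matrices in 3D):

```python
import math

def matmul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def joint(theta, length):
    """Homogeneous 2D transform for one joint: rotate by theta,
    then translate out along the link of the given length."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0,  0, 1]]

# Two links of length 1, each joint bent 45 degrees.
# Multiplying the joint matrices chains the coordinate frames.
T = matmul(joint(math.pi / 4, 1.0), joint(math.pi / 4, 1.0))

# The end-effector position is the translation column of T.
x, y = T[0][2], T[1][2]
print(x, y)  # roughly (0.707, 1.707)
```

The product matrix T is the world-to-tool transform the comment above describes: apply it to a point in the tool frame and you get its world coordinates.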
Now that I'm out of school, I really want to learn how to properly use matrix algebra. I don't care about HOW to find a determinant or HOW LU decomposition works, as I've already done those, but rather why I would want to perform that operation on the matrix in the first place.
The ability of matrices to model real processes is something that really fascinates me.
Do you have any other good reading material for someone like myself?
> Do you have any other good reading material for someone like myself?
I'm afraid I don't personally (I'm into abstract and theoretical math, which I'm guessing is not your cup of tea). But I should dig something up before I teach the subject again, so I would be as interested to read the replies as you are.
Gilbert Strang is an applied mathematician and also the author of a popular linear algebra book; I would guess that his books might interest you. But this is speculation, I haven't read any of them myself.
So what is a good Linear Algebra textbook from that perspective? Assume someone has completed a course in Axiomatic Set Theory and is conversant with proofs etc, but now wants to get into Linear Algebra. Any textbook suggestions?
I'm guessing "Linear Algebra Done Right" by Axler; I haven't read it yet but I heard good things about it and currently have it on order from Amazon. Hoffman + Kunze is the classic. For now, I don't know a book in the subject I really like, but I haven't looked very hard yet.
One way would be to take a course on a topic that uses linear algebra heavily, rather than taking a generic linear algebra course. Since all the ideas from linear algebra are then being used to actually do something, you'll get at least one example for why you would use whatever it is.
I would suggest trying these videos: http://www.stanford.edu/~boyd/ee263/videos.html. The prerequisites are very low and a main focus is on interpreting the abstract concepts in applications.
I'll second the Strang recommendation. I recently read the chapter in Linear Algebra and Its Applications about how the FFT algorithm is in part a matrix calculation! Fascinating (and baffling) stuff.
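For anyone curious what that chapter is getting at: the length-n DFT is literally a matrix-vector product, and the FFT is just a fast way of evaluating that same product by factoring the matrix. A rough sketch in Python:

```python
import cmath

def dft_matrix(n):
    """The n-by-n DFT matrix: entry (j, k) is exp(-2*pi*i*j*k/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

def matvec(m, v):
    """Plain matrix-vector product."""
    return [sum(mjk * vk for mjk, vk in zip(row, v)) for row in m]

def fft(v):
    """Radix-2 Cooley-Tukey FFT: splits the DFT into two half-size
    DFTs plus 'twiddle' factors (assumes len(v) is a power of 2)."""
    n = len(v)
    if n == 1:
        return list(v)
    even = fft(v[0::2])
    odd = fft(v[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

v = [1, 2, 3, 4]
direct = matvec(dft_matrix(4), v)  # O(n^2) matrix-vector product
fast = fft(v)                      # same answer in O(n log n)
```

Both computations give the same vector; the FFT's speedup comes entirely from exploiting structure in the DFT matrix.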
> I think it is a little bit difficult to come up with examples, other than R^n or C^n, which seem motivated and interesting to the beginner
My introductory linear algebra class also used polynomials over R, which led to examples like using projection to construct a polynomial approximation of a non-polynomial function.
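That projection trick is easy to try numerically. A minimal sketch, projecting sampled values of sin onto the span of {1, x} (the sample points and the degree-1 subspace are my own choices for brevity):

```python
import math

# Sample sin(x) on [0, pi] and find its least-squares projection
# onto span{1, x} -- i.e. the best-fit line a + b*x -- by solving
# the 2x2 normal equations with Cramer's rule.

xs = [i * math.pi / 10 for i in range(11)]
ys = [math.sin(x) for x in xs]

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

det = n * sxx - sx * sx
a = (sy * sxx - sx * sxy) / det  # intercept
b = (n * sxy - sx * sy) / det    # slope
print(a, b)  # slope is ~0 by the symmetry of sin about pi/2
```

With a higher-degree polynomial basis the same normal-equations setup gives better approximations, which is exactly the kind of concrete payoff the class example provided.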
I completely agree. In college, my first set of Linear Algebra courses was taught by the Math department and was completely algebra-based.
Later on, taking the engineering LA courses where they stressed the geometry of all the operations really cemented the concepts and simplicity of LA into my mind.