I am confused; this seems more like a general math cheat sheet. The only things that truly have anything to do with theoretical CS are the definitions for the O-notation on the first page (and most CS students don't need a cheat sheet for that) and the master theorem. Nothing on P, NP, NP-hardness or -completeness, formal languages (Chomsky hierarchy?), finite automata, Turing machines, the halting problem, decision problems, reduction proofs, the pumping lemma, Gödel's incompleteness theorems, etc. This sheet would have been of very little help in any theoretical computer science exam I ever took. The only cheat sheet I ever needed in theoretical CS was Schöning's book "Theoretische Informatik - kurz gefasst" ("A short overview of theoretical CS" -- well, it has ~180 pages...), which is pretty standard at German universities. It was completely worn out after my first semester and exam of theoretical CS. Here is a TOC: http://www.gbv.de/dms/hebis-mainz/toc/094349797.pdf
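For context, since the master theorem is one of the few genuinely CS items on the sheet: its standard form, for recurrences $T(n) = a\,T(n/b) + f(n)$ with $a \ge 1$, $b > 1$, is

```latex
\begin{align*}
\text{1. } & f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0
  &&\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right) \\
\text{2. } & f(n) = \Theta\!\left(n^{\log_b a}\right)
  &&\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\log n\right) \\
\text{3. } & f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  &&\Rightarrow\; T(n) = \Theta\!\left(f(n)\right)
\end{align*}
```

(This is the common textbook statement; the cheat sheet's exact formulation may differ in the regularity condition.)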
More of a "Math for CS" cheat sheet. I agree that we can't call it a CS sheet, as it has very few actual CS concepts (besides the O-notation, I would also count graph theory as CS), but I also wouldn't call it a general math sheet, because it covers only an arbitrarily small subset of math. Having said that, as a Math-for-CS sheet I found the section on matrices a bit light (it's missing quite a lot of linear algebra), and I missed the Fourier transform.
Maybe it's just me, but when a cheat sheet reaches 10 pages and spans as many areas as this one does, I'd just split it up into several sheets. It almost feels like this one needs a table of contents.
And I think a bit more theoretical CS material, maybe a few diagrams, would be more helpful than e.g. the Pythagorean theorem or the powers of 2, which shouldn't be a problem to just memorize at that level.
This is only a showcase of TeX; the actual value of this cheat sheet seems pretty low.
I used to know most of this when I was at uni, including how to derive it when I needed to. Back then, this kind of cheat sheet would not have helped me apply my knowledge: the actual knowledge was very clear in my head from solving problems, or I remembered some key idea used in a derivation and could work out the rest from there.
If you need this kind of cheat-sheet you don't understand the concepts deep enough.
Unfortunately, 15 years later, most of this knowledge is now either very fuzzy or completely gone from my mind. I guess that happens when it isn't refreshed or applied in the career I chose.
This cheat sheet serves as a reminder of how much knowledge I have lost ;)
I dunno, this seems like a perfectly reasonable tool. Saying "what was the double angle formula for sin, again?" or "what was the error term in Stirling's approximation" doesn't mean that you don't understand the fundamentals.
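The Stirling example is a good one: the formula itself is easy to recall, but the correction terms are exactly the kind of detail a sheet like this exists for. For reference, with its leading error term:

```latex
n! = \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}
     \left(1 + \frac{1}{12n} + O\!\left(\frac{1}{n^{2}}\right)\right)
```

Nobody would argue that forgetting the $\frac{1}{12n}$ means you don't understand factorials.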
> I find I don't really need to remember trig as long as i remember how imaginary exponentials and the unit circle works.
Or, to phrase it differently, trigonometry is just a different language for the restriction of the complex exponential to the imaginary axis, and you are remembering the facts in translated form. (Which I agree is much better—it's certainly the only way I can ever remember the thicket of identities! But I'd argue that it's still remembering trigonometry.)
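To make the "translation" concrete, here is a sketch (my own rendering, not from the sheet) of recovering the double-angle identities by squaring Euler's formula:

```latex
e^{2ix} = \left(e^{ix}\right)^{2}
        = (\cos x + i\sin x)^{2}
        = \cos^{2}x - \sin^{2}x + 2i\sin x\cos x
```

Comparing real and imaginary parts with $e^{2ix} = \cos 2x + i\sin 2x$ gives $\cos 2x = \cos^{2}x - \sin^{2}x$ and $\sin 2x = 2\sin x\cos x$.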
I clearly remember my eureka moment back in university when I discovered that I didn't have to actually learn and remember all the stuff that was taught, but rather recognize the patterns and tricks that reduce the problem domain down to a few things from which everything else can be deduced. After that, most lectures boiled down to: OK, what is important to learn and remember here? :-)
Not just math, but properties of matter, physical constants, thermodynamics and fluids (oblique shocks!), electricity (Semiconductors, Verilog!), solid mechanics, and random stuff like screw threads.
They had us get this in undergrad; all the engineering students at Oxford know it as HLT.
Interesting that the periodic table doesn't go beyond element 103 (given as Lw, not Lr); we are at 118 (Og) now. Periodic tables are a good way to date things. Looking at the footnote, the edition is from 1972.
Because that doesn't seem to make sense. It could make sense if we read O as an operator (so that f is the upper bound on g -- but the definition is the other way around), yet saying that a function is equal to an upper bound is just odd, even leaving aside the fact that the definition uses "f(x)", i.e., the application of f, to express the function itself.
Some textbooks use set membership notation instead: f(x) ∈ O(g(x)). This makes sense if you think of O(g(x)) as the set of all functions with that upper bound. However, the (abuse of) equality notation is often more convenient. For example, f(x) = x^2 + O(x) reads “f(x) is x^2 plus (something bounded by) a linear term”. Writing f(x) = x^2 + g(x) where g(x) ∈ O(x) is more of a hassle.
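Spelled out, the set reading is (in one common formulation; the thresholds and constants vary by textbook):

```latex
O(g) = \bigl\{\, f \;\bigm|\; \exists\, C > 0,\ \exists\, x_0
       \text{ such that } |f(x)| \le C\,|g(x)| \text{ for all } x > x_0 \,\bigr\}
```

Under this reading, "f(x) = x^2 + O(x)" unpacks to "f(x) − x^2 ∈ O(x)".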
Anything using = for a relation that is not symmetric is cursed IMO. More generally, any relation using a symbol that is visually vertically symmetric should be symmetric.
Naively, one would assume that this implies O(f(n)) - O(n) = O(n^2), which is not correct. However, O(f(n)) = O(n^2) + O(n) does imply O(f(n)) = O(n^2), which would have been obvious if proper notation had been used.
It's a convention. Same with small-o and, e.g., Taylor polynomials in first-year calculus: writing sin(x) = sin(a) + (x - a)cos(a) + o(x - a). We use an equals sign even though the meaning of this string of characters is "belongs to a class", not "is equal to". It's quite convenient for long calculations.
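For completeness, the small-o in that Taylor expansion has the analogous one-directional reading (assuming g is nonzero near a):

```latex
f(x) = o(g(x)) \text{ as } x \to a
\quad\Longleftrightarrow\quad
\lim_{x \to a} \frac{f(x)}{g(x)} = 0
```

Same abuse of "=", same convenience in mid-calculation.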
Useful. I would maybe add abstract algebra (definitions of fields, rings, etc., and their respective operations) and the Laplace/Z-transform from systems theory.
It reminds me of the cheatsheets we used to make before math competitions. Many of those things are forever etched in my brain. The only difference is that the ones I made tended to have a lot more freaking triangles.
A nice collection of formulas that seem useful when doing theoretical computer science. Maybe it could be improved by adding the definitions of NP-completeness, context-free grammars, Turing completeness, and such.
1. I was a pure math major so I've always liked neat representations of pi such as Wallis' identity and Brouncker's continued fraction, both given on that cheat sheet, but I've never actually seen them used for anything. Where do they come up in CS?
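For anyone who hasn't seen them, the two representations in question (as I remember them; worth checking against the sheet):

```latex
\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{2n}{2n-1}\cdot\frac{2n}{2n+1}
  = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdots
\qquad
\frac{4}{\pi} = 1 + \cfrac{1^{2}}{2 + \cfrac{3^{2}}{2 + \cfrac{5^{2}}{2 + \cdots}}}
```

The first is Wallis' product, the second Brouncker's continued fraction (numerators are the squares of the odd numbers).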
2. If you have to deal with a lot of sums involving hypergeometric terms (such as many series involving binomial coefficients), the book "A=B" by Marko Petkovsek, Herbert Wilf, and Doron Zeilberger might be of interest. It is downloadable from Wilf's site [1].
It's just math. In my view, CS theory is about computations as a whole. That is – formalisations, languages, algorithms, structures, and design principles.
I don't know if the Pythagorean theorem needs to go on any cheat sheet, whether it's math or CS. If you can't remember that, maybe you are in the wrong place to begin with!
For anyone complaining about the length of the document: I suggested it as a shorter alternative to the original submission [0], where apparently a 212-page document was called a cheat sheet and the TCS Cheat Sheet was in fact seen as too concise!
I just had the thought that it might be super interesting to see a cheat sheet with the current state of academic knowledge in computer science compressed into as few pages as possible: things that we didn't know, let's say, 10 or 20 years ago, in a format that could be understood by people from those times.
As someone who has no training at all in related subjects, I really like the graphical aspect of this PDF; it gives me a sense of complete alienness and makes me want to explore. It almost looks like something you could find at an art book fair.
The original source is very old, written in plain TeX. Most source files in the TUG TeX Showcase archive [1] are dated 1998; undoubtedly the first versions were much earlier.
As the TeX Showcase master page [2] notes, the author was Steve Seiden, from LSU. He died in a bike accident in 2002.