One problem-solving approach I adopted around the time I was reading Terry's blog very frequently was to personify the problem as a combatant. For example, when trying to prove an inequality that seemed straightforward, I would focus on how it could be false, and then consider what structure the resulting 'conspiracy' of variables would have to have. I have a feeling this is something Terry did (and I emulated), but I can't remember a specific post.
He also has a very nice post on 'amplification and arbitrage' as tricks for strengthening inequalities.
I think this was also Albert Einstein's strategy. In the version we read in India, young Einstein's uncle Jacob is supposed to have taught him algebra in high school with "x is the animal you are hunting for"... finally, after tracking the clues, you find x and hunt him down.
Interesting. I wonder if personification is analogous to a "memory palace": it's a way to hijack an innate brain capacity and put it to work on something more abstract. I've always felt that physics derivations were like mystery stories - the best ones had a surprise twist of reasoning that led the detective from the scattered clues to the solution.
The most shocking thing is that reading “How To Solve It” and all other “how to get smarter” pop-sci books doesn’t help jack. Only grinding does. Gamers got it right!
The inability to understand intelligence must be the most visible failure of science today.
Grinding is training pattern recognition. At bottom I really do think that's what intelligence is. The raw sensory experience of confronting a novel problem and having that flash of realization, "I've seen this before, and I can get what I want."
It would be interesting to live in a world where people had tricks like this for thinking about complex non-mathematical problems, and the desire to reach as accurate as possible answers. I wonder what it would be like.
>> and the desire to reach as accurate as possible answers
Something I believe I have genuinely noticed: some/many[1] Rationalists have a tendency to apply some sort of "rational" processing to a given idea and then reach their conclusion. But whether the processing actually applied (as opposed to what the person perceives themselves to have applied) is substantially different from standard human heuristic processing with post-hoc rationalization[2] (as opposed to rationalism) layered on top of it seems quite questionable to me.
[1] Any hypothesis is true to the degree that it is actually true - everyone is free to have their opinion, but the actual objective state of reality "is what it is".
[2] To what degree is the difference between Rationalist (as opposed to rational) and "normal" thinking & belief formation a function of differences before or after the post-hoc divide?
Very roughly, I think you are saying that it is unclear whether rationalists actually think differently, or whether they think in the same fashion and merely believe that they think differently?
I suspect there are elements of both: I believe some members of the rationality community do routinely think in a different way from many other people, but I also believe there is some post-hoc rationalisation (as you put it) in the community as well.
Basically, yes. Of course they do think differently to some degree, but my intuition is that the end result is likely not nearly as impressive as they perceive it to be. Post-hoc rationalization (and its various siblings) is a powerful force with substantial cloaking abilities, and while many intelligent people have abstract/academic knowledge of this, really "knowing" it does not come naturally; in fact, the opposite seems to be the case. Such is the nature of the evolved human mind.
One strategy that was on my mind today: whenever you're writing an API, start on the "consumer" side. If you had the perfect version of this interface, what would the code using it look like? I find that if I start on the "implementation" side, I'm more likely to come up with a solution that closely reflects whatever was easiest or most direct to implement, whereas starting from the perspective of the user of the interface tends to lead me to better abstractions.
You can also use it when writing a block of code where you haven't decided what kind of functional abstractions or data structures you want. Write the code you wish you could write. Then fill in the code needed to support that.
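A minimal sketch of this "write the call site first" approach. All of the names here (`monthly_totals`, `summarize`, the order dicts) are invented for illustration, not from any particular codebase:

```python
# Step 1: write the code you wish you could write -- the ideal call site,
# with no plumbing visible.

def monthly_totals(orders):
    return summarize(orders).group_by("month").total("amount")

# Step 2: fill in whatever the call site demanded.

class summarize:
    def __init__(self, rows):
        self.rows = rows
        self._key = None

    def group_by(self, key):
        self._key = key
        return self  # allow chaining, since the call site chained

    def total(self, field):
        totals = {}
        for row in self.rows:
            group = row[self._key]
            totals[group] = totals.get(group, 0) + row[field]
        return totals

orders = [
    {"month": "Jan", "amount": 10},
    {"month": "Jan", "amount": 5},
    {"month": "Feb", "amount": 7},
]
print(monthly_totals(orders))  # {'Jan': 15, 'Feb': 7}
```

The point is that the chained, declarative shape of `monthly_totals` was chosen before any implementation existed; the class then had to grow methods to support it, rather than the other way around.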
This is 100% my coding strategy in every context where it makes sense. Code to an imagined ideal API that makes what I'm currently writing clear and concise. Once I have that written, I just repeat the process from the next level of abstraction down. This strategy has served me well over the years.
I like having static analysis for undefined names in my IDE for exactly this reason: I can write the program in the natural order (high level to low level), and then have a little reminder about what I haven't finished yet.
This is a great technique. I find it particularly useful if I'm stuck on writing some aspect of the implementation that I can't quite grok. Often pulling back and writing it from the point of view of a consumer of the implementation will help get me moving again on thinking about the problem.
One important aspect of leetcode/coding-contest problems is that they have an input size constraint and a time limit constraint.
You can use the two to figure out the time complexity a working solution must have, which narrows the search for a solution by quite a bit. Here's a blog post about this idea (going from the input constraint to the possible algorithm): https://www.infoarena.ro/blog/numbers-everyone-should-know
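A back-of-envelope sketch of that narrowing step. The ~10^8 simple operations per second and the 1-second time limit are my assumptions, not figures from the linked post:

```python
import math

# Assumption: roughly 10**8 simple operations per second, 1-second limit.
OPS_PER_SECOND = 10**8

COMPLEXITIES = [
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * max(1, math.log2(n))),
    ("O(n^2)",     lambda n: n**2),
    ("O(n^3)",     lambda n: n**3),
    ("O(2^n)",     lambda n: 2**n),
]

def feasible(n, time_limit=1.0):
    """Which complexity classes fit the operation budget for input size n?"""
    budget = OPS_PER_SECOND * time_limit
    return [name for name, cost in COMPLEXITIES if cost(n) <= budget]

print(feasible(10**5))  # ['O(n)', 'O(n log n)'] -- n^2 would be 10^10 ops
print(feasible(400))    # up to O(n^3): 400^3 = 6.4 * 10^7 ops still fits
```

So if the problem says n <= 10^5, you can rule out quadratic approaches before you even start searching for an algorithm.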
Other than that, understanding a set of frequently used data structures and algorithms helps a ton. Here's a short course from Stanford on preparing for coding contests: http://web.stanford.edu/class/cs97si/
https://codeforces.com/blog/entry/20548 is old but pretty good. E.g., #2 is specific cases: "you get a problem for a tree. Consider its variant where the tree degenerates into a path".
Leetcode starts with an examination of the structure of the problem - the data structures present, the best possible time complexity. Knowing both allows a ton of narrowing down of the method that should be used.
Of course, it's important (for better or worse) to just grind out a representative sample until you understand the most common patterns, e.g.: https://seanprashad.com/leetcode-patterns/
I recently took a graduate real analysis course and a modern algebra course at the same time, and in the former, many of the strategies listed in Tao's article came up. However, one important caveat is that several of them are specific to analysis. Since it relies heavily on classical logic and the reals, indirect arguments are used whenever possible. OTOH in algebra, more direct arguments are favored.
This is analogous to how different programming paradigms have specific ways of organizing programs and abstracting details. Likewise, in measure theory one is at liberty to say "let f : N -> Q be an enumeration of the rationals" and carry on, whereas such a statement in algebra would likely need a more explicit construction.
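For what it's worth, the "more explicit construction" the algebraist would want does exist; here is one sketch of an enumeration f : N -> Q, using a Cantor-style walk over diagonals p + q = constant (one of many possible constructions, chosen just for concreteness):

```python
from math import gcd
from fractions import Fraction
from itertools import islice

def rationals():
    """Explicit enumeration of Q: 0 first, then +-p/q in lowest terms,
    walking the diagonals p + q = 2, 3, 4, ..."""
    yield Fraction(0)
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:      # skip duplicates like 2/4
                yield Fraction(p, q)
                yield Fraction(-p, q)
        s += 1

print([str(f) for f in islice(rationals(), 9)])
# ['0', '1', '-1', '1/2', '-1/2', '2', '-2', '1/3', '-1/3']
```

Every rational appears exactly once (each reduced p/q sits on diagonal s = p + q), which is exactly the bijection N -> Q the measure theorist invokes in one breath.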
I enjoyed D&F. Worked through up until Ring theory.
One thing I would suggest is finding suggested problems based on a university course. Since some consecutive problems in the book can be similar, you don't get as good an ROI as you would from spending the time on a smaller, more diverse selection.
It was actually entirely done through the professor’s notes. I’m not sure what the reference materials were but it seemed pretty standard (group theory up to free groups and relations, ring theory up to principal ideals).
Not that I'm a real mathematician, but I think I can add one more.
Roughly: suppose you have a set of axioms and you want to show that a minimal set/system satisfying those axioms exists. Define your object as everything that must exist according to the axioms, plus every application of an operation specified by the axioms. Then show the resulting set is itself closed under those operations. This is the sort of approach used to define the real numbers, for Gödel's Constructible Universe [1], for the Löwenheim–Skolem theorem [2], and in a variety of other places.
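A finite toy analogue of that fixed-point idea (the real examples above are infinite and need countable or transfinite iteration, so this is only an illustration): start from generators, keep adding every application of the operations, and stop when nothing new appears.

```python
def closure(generators, operations):
    """Smallest set containing the generators and closed under each
    binary operation -- iterate until a fixed point is reached."""
    current = set(generators)
    while True:
        new = {op(a, b) for op in operations
                        for a in current for b in current} - current
        if not new:
            return current      # closed under every operation: done
        current |= new

add_mod_12 = lambda a, b: (a + b) % 12

# The subgroup of Z/12 generated by 8: {8} -> {4, 8} -> {0, 4, 8} -> fixed.
print(sorted(closure({8}, [add_mod_12])))  # [0, 4, 8]
```

The "then show the set itself is closed" step of the proof corresponds to the loop terminating: once no application of an operation produces anything new, the accumulated set satisfies the closure axioms by construction.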