Mental models are terrible. This entire Farnam Street/rationalist culture is terrible; fallacy-screaming falls into the same category.
The reason why it's terrible is because mental models are like a sort of jumbled set of tools, copied over from somewhere else. Deep and genuine knowledge is tacit, integrated, incorporates experience and is organic. It's acquired through training and expertise and often can't be verbalised or formalised.
Relying on mental models is a sort of crutch. Maybe a good comparison is chess playing. If you're a new chess player you learn that pieces are worth X points, and that such and such squares are important, and that's basically like these sorts of models. Really good chess players don't consciously think like that, at least not in a sort of pronounced way. They've incorporated knowledge and play by intuition. They can make 'good moves' in a second or two.
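The beginner's model is simple enough to write down in a few lines, which is sort of the point: it's explicit precisely because it's shallow. A toy sketch, assuming the standard point values:

```python
# Toy version of the beginner's "pieces are worth X points" model.
# Uppercase = White, lowercase = Black; kings are ignored.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(pieces):
    """Naive material count; positive values favor White."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)
        score += value if p.isupper() else -value
    return score

# White is up "the exchange" (rook for knight): 5 - 3 = +2.
print(material_score(["K", "R", "k", "n"]))  # 2
```

A strong player's evaluation obviously isn't this; the explicit model is the training wheels, not the riding.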
I don't know why these 'fallacies' and 'mental model' things have taken off so much within certain circles, I feel like it's because people do nothing else other than reading these blogs. Real expertise or knowledge isn't gained by sitting around and 'thinking through stuff', it's by coming into interaction with the real world and immersing yourself in what you want to get better at.
> The reason why it's terrible is because mental models are like a sort of jumbled set of tools, copied over from somewhere else.
I disagree with this slightly, I think Mental Models are great for the mental modeler but almost impossible to transfer.
> Deep and genuine knowledge is tacit, integrated, incorporates experience and is organic. It's acquired through training and expertise and often can't be verbalized or formalized.
This is gold. Deep and genuine knowledge is created from repeated interactions and pattern matching born of errors and successes fortified repeatedly till they become reflex. No matter how many Charlie Munger articles and speeches and case studies I read, I can never invert like Charlie Munger.
>Relying on mental models is a sort of crutch. Maybe a good comparison is chess playing. If you're a new chess player you learn that pieces are worth X points, and that such and such squares are important, and that's basically like these sorts of models. Really good chess players don't consciously think like that, at least not in a sort of pronounced way. They've incorporated knowledge and play by intuition. They can make 'good moves' in a second or two.
Exactly, and they find it very difficult to accurately explain what they are doing. Even when they can, their explanations fall apart under scrutiny. It's like asking a dog how it catches a frisbee: a university-educated dog who talks might spew some garbage about solving a set of differential equations in its head, but really it is just evolved instinct which has been honed. Much knowledge is like that, layered with an ex-post rationalization.
I agree with what you said, but I don't think it invalidates the usefulness of "mental model" thinking. I like that you used the word "tacit," because I think that's exactly what somebody like Charlie Munger would point out: when dealing with complex issues, people often rely on their tacit knowledge and, because of their own limitations and biases, miss some of the most fundamental and obvious aspects of their problem. An example he uses is the introduction of New Coke: Munger describes a number of extremely basic biological and psychological concepts that could have predicted the failure of New Coke, things that are part of every 100-level course in those subjects, and yet the executives at Coke still managed to completely fail to integrate that knowledge into their decision-making. I'm not all on the hype train, but I do think the idea of applying a broad set of concepts to complex problems can be really, really useful, and sometimes will produce better results than relying on your tacit knowledge. There are of course problems where this is not true, and knowing the difference is important. Mental model thinking will probably not generate the next big breakthrough in some highly technical field like math or physics, for example. But that doesn't mean it has no value.
Yeah, I think your comment adds some interesting ideas.
Following the original chess comparisons, we could also say that new players will learn more, and faster, when given some models of what good and bad moves look like than if they need to learn from scratch with nothing else. Eventually they will outgrow that and their knowledge will exceed what can fit into a "mental model", but models are not useless. Still with chess, there's also the so-called Kotov syndrome, which is "a situation when a player thinks very hard for a long time in a complicated position but does not find a clear path, then, running low on time, quickly makes a poor move, often a blunder". Overthinking, getting too deep into a certain path, can also be a problem. Intelligent people sometimes make obvious mistakes from getting lost in deep thought. And this is a problem related to deep knowledge, not just models. Nothing is foolproof.
What I think is productive here is to distinguish between different types of models: if a model can help you acknowledge certain problems or patterns, that can be pretty useful; but if it tries to be the all-encompassing explanation to something, then it's probably wrong and might lead you the wrong way more often than not, constraining your vision instead of expanding it. You need experience to make good decisions, not just be given a model, but experience is also not enough or not always there, and models can help fill some of those gaps too.
Agreed. This is basically folk cognitive science, slapping together random post-hoc explanations from popularized cog-sci terms and other domains from a few generations ago. I believe the target market is people who want to feel smart as they acquire the propositional knowledge of these so-called mental models.
Modern research demonstrates that wisdom, which is very roughly a measure of being a good, relevant general problem solver, doesn't seem to work that way. There is no correlation between the number of mental models in one's “head” (their words; embodied cognition weeps) and being a good general problem solver. At the very least, the problem of knowing which mental model to apply to which problem remains, and there is no meta-algorithm that can be developed for that.
Additionally, as you point out, tacit knowledge comes with participating in an activity, carrying out a procedure, acquiring a specific perspective, all of which constitute different types of knowledge that memorizing a random menu of “mental models” can't ever substitute for.
But I applaud their ability to squeeze two volumes of $10 books out of this material. Apparently their money-making heuristic is working just fine, and I bet they haven't used any of their own models for that, because it actually seems to work.
Yup. Really almost everything sold online is a "shortcut" to some arbitrary destination. Beauty products, workout supplements, podcasts that give you a small view but seem like they are the way to figure out life, etc.
People want to expend as little energy as possible, which is understandable. But that lets them get tricked into all of this stuff.
Good to realize that "you can't get something for nothing".
Real expertise and knowledge is gained by doing both. Just interacting with things gets you to a certain level, but if you want to go beyond that, you need to interface with the problem at an abstract level. Anyone with actual expertise in a complex, intellectual subject does both.
Mental models are extremely important, even if you already understand a subject well. Mental models, when they're good, tend to deal with the limits to knowledge, and reasoning in the absence of perfect information.
Experts often do these things implicitly in their field of expertise. Naming and abstracting these things allows people to see them as the general principles that they are, and apply them more readily in other domains.
If you’re not an expert, you can quickly improve your thinking using some of these concepts. For a variety of reasons, it’s not possible for most people to only do things they have mastered.
YOU used a mental model - 'the game of chess' - yet failed to see that it doesn't map to the real world.
If you had paid more attention to mental models, you would have realised that chess is a very poor model to use to compare to the world of business.
Just take the very first in that list - 'the map is not the territory.'
Have you ever seen decisions being made using spreadsheet data that holds fine in formulas or projections, but then doesn't translate to the real world?
This is a very common problem in business and part of the reason I will move very cautiously before hiring an MBA once more.
'Really good chess players don't consciously think like that, at least not in a sort of pronounced way. They've incorporated knowledge and play by intuition. They can make 'good moves' in a second or two.'
This analogy simply does not hold.
The best investors/entrepreneurs, as proven by their records, DO swear by mental models.
Not just Munger. Chuck Akre has a ridiculously simple mental model - the three-legged stool. Jeff Bezos is also a big user of the concept.
I interact with investors regularly. Many of them also use mental models, unsuccessfully. The difference between them and the best is that their mental models are woeful (Post hoc ergo propter hoc). They'd have a better understanding of their mistakes if they were to dedicate more time to learning more deeply about this topic.
~
Why is a chess player more worthy of your respect than Munger, Buffett or Bezos, who are playing infinitely more complex games? And at a much higher level of success than you or I?
Do you need reminding that they are not just dealing with the psychology of ONE other player, but that of _millions_?
What is it about these giants that causes you to immediately discard their wisdom?
I think a better question is one you should be asking yourself:
"What am I missing here?"
Because, by definition, if you can muster the humility to see it, you are clearly missing something.
It's just another of the millions of varieties of buzzword pseudoscience brospeak.
Also, labeling things can help you explain the world better... but it doesn't always lend itself to the creation of new things. Just knowing about these "mental models" isn't going to spur you on to think of a new one. A new one will be made organically, in response to events in the present, not in pondering the past or future.
I agree with much of your excellent argument. I aim to strengthen it.
It's very expensive to fittingly defend any conceptions of //real// and //knowledge// in my experience. It's hard to put into words, but I take representations (including what may seem as menial as mental models) made explicit to at least sometimes be necessary though insufficient (and not epiphenomenal) for being a good thinking thing like us. I am worried that some circles do not appreciate the anti-codificationist intuitions about virtue (of a practice) that you are speaking about. At the very least, some rails, footholds, rules-of-thumb, and oversimplified models can provide an accelerated ladder for us, and that may continue even for experts in at least some cases. Without denying the radical value of immersive experience in developing tacit knowledge (nor the necessity of fast emotion for slow cognition), I think there's plenty of room to debate about even the value of crutch-models, compressions, explicitly exploring, story-telling, and codifying instrumental reasoning for even the virtuoso (including with other virtuosos).
I think the one thing we cannot open for debate, because it is evident given any framework, is the impact of risk avoidance on knowledge acquisition. I once read that, given a room with a table and some blocks, the idiot would wait for instructions; the smart person would try to surmise the intent behind placing the blocks on the table, the table in the room, and the subject (them) at the table; the genius, however, would find novel ways to combine the blocks, the table, and the room, often to the bewilderment of any observer.
Frameworks are what smart people use to get results. I believe risk is the tool of geniuses.
It'd be obvious, for example, to add Bold, Italic, and Underline buttons for usability to the textbox I'm typing in. To me, however, the buttons, the textbox, even the text-styling characters are arbitrary constraints. I want to know if this "reply" feature could be built in-line, without a textbox? If I could eliminate the constraints one-by-one, how many ways could you and I interact over this specific topic that would be both novel and, perhaps, more engaging for us both?
I'd like to lovingly point out that your comment notes how a book you read describes how a genius thinks about a situation - and then you wrote a paragraph to demonstrate your similar genius to the comment box. What was your intent?
Your comment (and my comment) strike me as seeking connection and seeking recognition - both classic behaviors on Maslow's hierarchy of needs (a model).
I think that knowledge of the models helps one see both common ground and opposition - and the richest life experience may come from thinking like an idiot, smart person and a genius.
I agree. But I would like to point out that I did not initialize my thinking from Maslow's hierarchy. A discernible connection is coincidental rather than intentional. And that's what I intended to say through describing risk. It is safe to begin with the framework and work within its constraints to achieve a relatively predictable result. However, that also averts any risk inherent to failures along processes that discover novel relationships, test outcomes, and compare against known outcomes. That may yield more desirable results, but it's more often riddled with stochastic failure. It's natural to feel repulsion to it, but the ability to persist against continuous discomfort is what separates the Bezoses, Musks, Gateses, et al. from nearly everyone else.
On a related note, Bill Gates' recent doubts about the utility of all-electric long-distance trucks were surprising. I don't know of nearly any other time Gates just reached into a field he has zero experience in and reacted out of his own discomfort with the subject and its context. It was an odd, uncharacteristic move, and one that makes me wonder if his value set has changed enough that he cannot provide value and authoritative influence in consumer and enterprise hardware and software solutions anymore. People change, so maybe it's true. But man, I idolized Bill as a kid. It's hard not to feel like he's the reason people take people like me seriously, even today.
Thanks for your response and agreement. No comments on bill but I'd like to respond to this:
>It is safe to begin with the framework and work within its constraints to achieve a relatively predictable result. However, that also averts any risk inherent to failures along processes that discover novel relationships, test outcomes, compare against known outcomes.
I'm abstracting this to a question about creative process and whether you should use a conventional process or try something new. I think this depends on the organization the process exists in - the answer could be both or either.
You may find the book "Teeming", which draws on anthropology and organizational evolution, interesting; it speaks to creating organizations that have both structure and fluidity.
Forgive me for interrupting. I've been enjoying thinking about your conversation. If you have the time and inclination, I would like to know your feelings or thoughts (even just your gut instincts), if any, about Metamodernism (whatever this word means to you).
Sure, it's extremely frustrating to have someone come in and throw the baby out with the bathwater (wait is that a "mental model"?) and apply a bunch of surface pattern matching to a problem on which you have genuine expertise.
On the other hand, we don't always have the deep expertise necessary to find optimal solutions to any given problem. Sometimes lateral thinking is useful, and I see no problem with having a bunch of established patterns that you can use as different lenses to approach difficult problems.
The other thing is that these high-level models serve as an aid to communication. Of course this is risky since "the map is not the territory" (sorry couldn't resist), but as an expert the first thing you must realize is that the vast majority of people (even smart ones) will never understand your area of expertise to a tenth of the depth you do. These mental models can give a shortcut to understanding in a more general way than ad-hoc metaphors; it won't be perfect but it's better in most cases than a complete failure to be understood.
There are people in language-meetup circles who can discuss a language at length and not even speak it well. They spent so much time learning factoids about it without actually immersing themselves and gaining that "on the ground" ability.
No amount of research is going to let you speak colloquial Spanish in the countryside of Colombia, or wherever.
So why are "mental models" the best way to make intelligent decisions? What other ways to make intelligent decisions exist? The blog post failed to explain that, it largely appears to be marketing material.
I think most people use mental models to get to decisions. It is just that these mental models are usually implicit — so if you asked somebody what the mental model behind a specific decision was, they might not be able to tell you.
These mental models are explicit, this is good because it moves a process that was subconscious into the light of day.
And why is this good? Because we all encounter situations in which our existing ways of thinking and deciding will fail. Isn't it good to then be able to shape the direction of your own thoughts in such a situation? Of course this takes a little practice: especially in stressful situations, most people wouldn't think about the way they are thinking even for a moment, because it is maybe a bit too meta and needs to be done on purpose.
In my experience, the best thing you can do in stressful situations (unless it is a matter of life and death) is to give yourself a moment, observe yourself from the outside, and actively decide which way you are going to think about the solution.
So mental models can be a way to break up your own thinking and avoid doing something out of reflex or because you always do it that way. Of course blindly following mental models and applying them to everything can also be a bad thing, like every tool could if you just use it wrong enough.
The overhead of explicit mental models means they're terrible for any complex decision-making process. Try to apply them to an extremely well-studied problem like, say, chess, and you still get stomped by people with even basic skills. Less obviously, all the real-world problems you actually need to consider end up being complex, with a huge number of subtle interactions.
What’s actually useful 99.9% of the time is either external models that can run on computers, or implicit models developed by synthesis of knowledge and practice.
PS: Mental models can be useful in extremely stressful situations where there is no need to act quickly or come up with complex solutions. "There is a hurricane forecast to hit this area in 5 days" doesn't need a great solution, just a set of reasonable choices.
It is true that mental models are a natural way to think about the world. I think it's so natural that it doesn't need to be called out. It's better to focus on honing your models than it is to look at the meta-process surrounding the existence of models, I think.
I mean, for example, why worry about filling a perceived gap in your mental models (e.g., ponder the existence of the Overton Window as a model of group decision making and political discourse), when just learning about the Overton Window (without all this mental model meta-tagging) is already a useful thing to think about when making decisions.
I'm glad that the "General thinking concepts" are not being cast as "mental models"; that would be a meta-classification error. (Aphorisms != Models, the same way Information != Data or Knowledge != Wisdom.)
But seriously, this is just a book marketing page that keeps getting re-written and re-posted over and over.
I posted this comment in an earlier thread on Mental Models but here it goes again.
Remember, when someone is pushing something, ask yourself: what is their agenda? Farnam Street's Shane Parrish is in the newsletter and subscription business, just like CNBC is in the financial-infotainment business, not the investment or wealth-creation business. They want you to think of them as the authority by peddling ideas from a place of authority without actually providing a track record of success. CNBC's Jim Cramer's stock recommendations, on evaluation, had a worse record than the S&P and were subsequently pulled from his site.
Mental models are quite interesting to read about, but applying them is another matter. It's like asking Tiger Woods for golf lessons. I have been reading about mental models for a couple of decades now. I decided to pick one model and focus on applying it: invert, always invert. This one simple rule has been really hard to apply. One, it is really, really hard to remember to apply in the heat of decision making. Two, you cannot just apply it. That requires practice with an expert practitioner, like an apprenticeship (guilds), extended internship (law/finance) or residency (medicine). You cannot just pick it up by reading a damn article or book.
Let me give an analogy of trying to reason about physical properties from first principles which is another "mental model". This is another popular exhortation - think from first principles just like Feynman or Fermi. Apparently, Fermi figured out the yield from the first test nuclear blast by tossing paper confetti over his head and calculating the yield from the scatter from the blast wave. I can think from first principles in one extremely narrow field where I happen to have half a decade of education and 20 years of experience. To reason from first principles like Fermi or Feynman in a broad domain like physics requires a world class mind, a world class education and a world of experience. Seriously, WTF.
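To be fair, the mechanics of a Fermi estimate are trivial to write down; it's the judgment behind each input that takes the world-class mind. A sketch of the classic "piano tuners in Chicago" estimate, where every number is a rough assumption rather than a researched fact:

```python
# Classic "piano tuners in Chicago" Fermi estimate. Every number
# below is an assumption; only the multiplication is certain.
population           = 3_000_000   # people in Chicago
people_per_household = 2.5
piano_ownership_rate = 0.05        # ~1 in 20 households owns a piano
tunings_per_year     = 1           # per piano
tunings_per_tuner    = 4 * 5 * 50  # 4/day, 5 days/week, 50 weeks/year

households = population / people_per_household
pianos     = households * piano_ownership_rate
tuners     = pianos * tunings_per_year / tunings_per_tuner
print(round(tuners))  # 60
```

The arithmetic is the easy part; knowing that 5% piano ownership is the right order of magnitude, and which factors matter at all, is the part no article teaches.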
Most mental-model writing is akin to consuming YouTube fitness porn. It looks easy to do, you look cool doing it, and the end results are just spectacular. However, like Arnold or David Goggins, it requires inhuman dedication, purpose, the ability to withstand pain and bounce back from trauma, and a just-keep-sacrificing attitude for a couple of decades, with a healthy dose of failure. Most of the time the only person benefiting is the video creator, from the ad-roll.
I appreciate the posts but I now believe these mental models are incredibly hard to do and like most of the self-improvement/growth hacking genre is just good for entertainment and commerce 99.99% of the time.
Definitely useful in my experience. Mental models provide a flexible framework for so many types of thinking that it becomes an extension of your mind that sublimates into the background.
What's an example? They seem awesome on paper, but I wonder if they actually change the decision in real life, given the emotional, "I like this" aspect.
A clear example would be the mental model of an animal cell’s functions. In reality it’s such a sophisticated construct that no one could actually understand everything about it in entirety, there has to be a reduction, an abstraction so to speak, somewhere along the way. And voila, turns out mental models are the best way of doing this.
The best areas for applying this in general are the areas simply too complex without reduction.
This book talks about mental models in depth: "Deep Survival" by Laurence Gonzales [1]. Mental models can contribute to death in survival situations because they are biased by past experience. Being adaptable and practicing resilience is a much better survival strategy. A dark sense of humor helps too.
It's a really great read on how the brain works. It helped me understand some of my own fears about the COVID pandemic. It also helps me understand (but not agree with) why some people are so resistant to wearing masks when it might keep their loved ones alive.
The book gives examples of mountaineering accidents. The mental model says that the group is roped in together; symbolically, everyone is a team, in it together. But guess how much physical force is generated by gravity when 150+ pounds of person and gear falls to the end of a 50ft rope? More than what physical strength and crampons can support. That one mental model comes up a lot in mountaineering accident reports.
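A back-of-the-envelope check of that claim (all inputs assumed for illustration; a real dynamic rope spreads the load over more stretch than this, and peak force exceeds the average):

```python
# Rough average arresting force for a 150 lb climber falling 50 ft.
# Every input here is an assumption for illustration only.
G      = 9.81            # gravity, m/s^2
MASS   = 150 * 0.4536    # 150 lb of climber and gear, in kg
FALL   = 50 * 0.3048     # 50 ft of free fall, in m
ARREST = 2.0             # assumed rope stretch + body give, in m

energy      = MASS * G * FALL    # energy to dissipate, joules
avg_force   = energy / ARREST    # average force during arrest, newtons
body_weight = MASS * G

print(f"~{avg_force / 1000:.1f} kN, roughly {avg_force / body_weight:.0f}x body weight")
```

Around 5 kN on these assumptions, i.e. several times what a braced stance or crampon placement can resist, which is why the "we're roped together, so we're safe" model fails so often in accident reports.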
These are not what I would call mental models. These are strategies (in various forms) and, as such, relatively vague strategies for assembling mental models.
Mental models live on a continuum ranging from directly incorrect to accurate. You always have them, mostly the incorrect ones early in your career. With wisdom come the accurate mental models, and I would kill (not literally, of course) to obtain the good ones :)
I very much enjoy mental models. A close friend and myself have a weekly mastermind where we go over what's happening for us on both a personal and professional level. It keeps us both in check and allows us to come to conclusions more effectively.
For the past couple months we've introduced going over a mental model every week as well. We'll summarize it in our own words, discuss examples of how we can use it - or when it makes sense to avoid it. They're not end-all-be-all, but they're a great thing to use as a starting point.