List of Cognitive Biases (wikipedia.org)
214 points by plibither8 on July 2, 2019 | 64 comments



I was a big fan of cognitive psychology and biases and tried to read as many books as possible on the subject. I even had a mind map of finished and TODO books (https://ic.pics.livejournal.com/buybackoff/8746464/10862/108... upper-right corner).

I think the best practical material on the subject is Charlie Munger's talks, particularly his talk "On the psychology of human misjudgement" (https://buffettmungerwisdom.files.wordpress.com/2013/01/mung...) and "Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger", which is an edited collection of many of his talks.

My main conclusion from reading all the books/talks is that you can only be aware that the biases exist; you cannot tell which one(s) are at play in your brain at any given moment, and you cannot "fix" a bias with any cognitive effort. So "sleeping on"/delaying an important decision is the best practical way I have found to mitigate the ever-present, pervasive biases.


There are efforts to train biases out of your system. A long time ago I did such a training online from a renowned scholar (he had made a simple web page). I forget what it was called, but I remember one thing clearly that I still use in everyday life.

When you estimate something, never ever estimate a single value. Always estimate within a range. His training showed that, for me at least, this led to more sensible averages/point estimates. It helped me. Unfortunately, that's simply anecdata.
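
To make that concrete, here's a minimal sketch of the range idea (my own toy illustration in Python, not code from the training itself):

    # Instead of committing to a single number, write down a plausible
    # low/high range and derive the point estimate from it.
    # (Names and numbers here are made up for illustration.)

    def range_estimate(low, high):
        """Return a point estimate derived from a low/high range.

        Uses the midpoint; for quantities spanning orders of magnitude the
        geometric mean (low * high) ** 0.5 is often the saner choice.
        """
        assert low <= high
        return {"low": low, "high": high, "point": (low + high) / 2}

    # e.g. "how many hours will this task take?"
    print(range_estimate(4, 12))  # {'low': 4, 'high': 12, 'point': 8.0}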

In any case, if you google "debiasing training" or something similar, you will see that many efforts are underway.


IIRC the book "How to measure anything" contains some advice like this: https://www.amazon.com/dp/1118539273/

The author also offers webinars, so maybe it was from him: https://www.howtomeasureanything.com


I think it was from a pretty prominent researcher who had a .edu site, but I couldn't find it, and I really tried.


In Epistemology and Psychology of Human Judgment [1] the authors recommend a similar approach:

1) Make a point estimate
2) Imagine that you're wrong (what direction are you wrong?)
3) Make a second point estimate
4) Average the two

Excellent advice from an excellent book!
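
A toy walkthrough of those four steps, as I read them (my own Python illustration, not code from the book):

    # Prompt for a first estimate, imagine it is wrong, prompt for a
    # second estimate, then average the two.
    def second_estimate_average():
        first = float(input("1) Point estimate: "))
        direction = input("2) If you're wrong, in which direction? (too low / too high): ")
        second = float(input(f"3) Assuming the first guess was {direction}, new estimate: "))
        combined = (first + second) / 2  # 4) average the two
        print(f"Combined estimate: {combined}")
        return combined

    if __name__ == "__main__":
        second_estimate_average()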

[1] https://www.amazon.com/Epistemology-Psychology-Judgment-Mich...


Why do you think you can't realize which ones are at play in your brain right now? Humans are generally capable of this kind of "reality check" (fancy word: metacognition). I'm not saying it's easy, but from my reading on the topic, and my own experience, it's possible, isn't it?


Kahneman writes that biases mostly come from the fast brain and the older systems such as the amygdala, which predate the slow modern brain, and that it's extremely hard for the slow modern brain to control the fast one and override its signals. It's as if the biases live in the BIOS, not in an executing program.

It's possible to feel that my current thinking is not perfectly rational and not emotionally detached. But even with that feeling I cannot just say, e.g., "I'm subject to biases 1, 7, 14 and 19 from the list, and I have to do this and that to overcome them...". It's better to go to sleep, run 10K, go to a bar, etc. when there are hints that the brain is in a state that is not ideal for decision making. Somehow the rational analysis never stops running in the background, and by the time one reaches a clear, calm, and emotionally detached state of mind, the best decision is usually already obvious.


In the early part of his book "Thinking, Fast and Slow", Kahneman essentially says the book will help one identify the biases in other people, but that we're somewhat helpless to see them within ourselves.

For anyone who hasn't read this book, it's worthwhile if only to humble oneself into realizing we're rarely as rational as we'd like to think.


It's "hard" as I wrote, but it's possible. You actually do it yourself on a daily basis, every time you don't act purely on your "animal brain" instincts but override them with your "human brain". This is almost the same.


There is a catch: the system you use to override a biased system is also biased. It's even a catch-22, because it's the same system.


You can compensate for these biases, but noticing them in the act is different.

Familiarity means you will notice some containers in the store before others. At that point the bias has already occurred, before conscious thought.


Unrelated, but can I ask what mind-mapping software you used?


Mindjet's MindManager. But it's rather expensive, and I've stopped using the mind-map concept altogether, either because other tools are so bad in comparison with this one or because PowerPoint+Excel do the job (visualization + keeping lists of things) better.


I've been wanting to use a basic one to keep track of silly things: movies I want to watch, movies I want to rewatch, the music I need to sync to my phone, YouTube links to interesting things that have stood out to me. Just things that don't belong in an app with lists, and that I also don't want mixed into my notes app. Maybe it's a silly approach. Just an idea I've been toying with.


I use Freeplane + a private git repo to 'store it in the cloud'.

I don't mind-map sensitive data yet; if I did, the private git repo would be on a Raspberry Pi.


Thanks. I’ll take a look. I was hoping it was a standalone application.


I am wondering which of these are "solid" enough to be taken seriously. I am asking because of the "replication crisis", which also affected Kahneman et al.

EDIT: I am genuinely interested in knowing, since it would be helpful to know which of these are reliable - in order to change my behavior accordingly.

https://www.theatlantic.com/science/archive/2018/11/psycholo...

https://replicationindex.com/category/kahneman/


Beware of potential recursion. https://en.wikipedia.org/wiki/Bias_blind_spot


How many of these will appear naturally in powerful AI systems?

Perhaps many! Maybe by trying to emulate a human brain we will end up recreating its flaws.

I am very excited about the progress of deep learning applied to symbolic, logical reasoning, like theorem proving. Theorem verification is easy and tractable; proving is not.

We can have heuristic algorithms come up with provably correct algorithms! That is vaguely analogous to a human writing a program then proving it correct. Now that will be useful.


I wonder if we might achieve better* results in some systems if we added in these biases on purpose. (I.e. treated them like features, not bugs or even emergent patterns.)


Some of which may be replicated.

I'd be tempted to down-vote myself for snarky trolling, except that I work in the field of psychological research, and, perhaps it is my bias, but many of the cognitive biases that came out of social-psychology research do not stand up to scrutiny, too frequently resulting from bad statistical practice... at least as of two decades ago.


Can you expand on that? My impression is that Kahneman and Tversky "proved" that human cognition is not Bayesian and now much of cognitive psychology is turning around and saying, no, they didn't, and it is. As a layperson, I don't know whom to believe.


Richard Nisbett claims that with training a lot of the biases can be overcome (I'm paraphrasing)

There is an interesting course of his on Coursera ( https://www.coursera.org/learn/mindware)


As someone who had pseudo-scientific anti-bias techniques applied to me as an infant, I tend to agree. Though that may itself be a real bias...

I believe many biases listed here can substantiate themselves.


This is purely my opinion.

All theories within psychology and economics are based on people being 'rational'. Anything contrary to the theory is branded 'irrational' and given a name. The name usually sounds like a 'disease/ailment'.


Your opinion is close to the truth. But within psychology there is an alternative viewpoint: it is the concept of rationality that is wrong and oversimplified, not the human mind. If we see some behaviour as irrational, it means we do not yet understand why it is good to behave this way.

This is how perception research works: good science explains an illusion and why people are subject to it, why people see or hear "wrong" things, or why these "wrong" things are not wrong at all. Often science uses a very artificial experimental setup, tuned precisely so that people start making mistakes. Take the Ames Room as an example [1]: it is an artificially created environment in which the participant does not have enough information to be sure, and so the mind makes a mistake. But that mistake is a great achievement; if you tried to do better with an AI, I suspect you would end up with the same result. The mind takes a lot of details into consideration: for the Ames Room to work reliably, the experimenter has to draw skewed windows on the back wall that look like rectangles after projection. So the setup is highly unlikely a priori, and the mind makes a good Bayesian decision that the most probable explanation is two people of different sizes.

With cognitive biases we also need to be wary, because the process of creating the right experimental setup can involve a lot of tweaks to make people's decision process "fail". The scientist needs an effect that can be shown with statistics, so he/she tweaks the setup until it works.

This leads to the conclusion that if a cognitive bias lacks an explanation of why it is a rational thing, we cannot simply say that this bias is a "disease/ailment".

[1] https://psychology.wikia.org/wiki/Ames_room


Well, people shared your intuition for a while, but then new results came up. This literature started with the supposed rationality of the compelling axioms of decision making by Ramsey, von Neumann, Savage, etc. These are in the end based on measurement theory. People noticed early on that humans seem to remain rational even when they violate intuitively acceptable rationality postulates.

Take Luce's coffee cup example as an illustration. You prefer black coffee to sweet coffee. Suppose you compare coffee with no sugar to coffee with one grain of sugar added. You're indifferent: a~b. Then add another grain, and so on. You will get the comparisons a~b, b~c, c~d, d~e, ..., j~k, and then suddenly a>k, a violation of the supposed transitivity of equipreference (aka indifference, equally good). But that behaviour still seems rational.
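
A minimal Python sketch of why this happens if you model indifference with a discrimination threshold (a just-noticeable difference). This is my reading of the example, not Luce's formal treatment:

    # Sweetness in grains of sugar; one extra grain is below the taste
    # threshold, but ten grains are not. (Threshold value is illustrative.)
    JND = 1.5  # just-noticeable difference

    def indifferent(x, y):
        """True if two cups are indistinguishable in sweetness."""
        return abs(x - y) < JND

    cups = list(range(11))  # cup a has 0 grains, b has 1, ..., k has 10

    # Every adjacent pair is indifferent...
    print(all(indifferent(cups[i], cups[i + 1]) for i in range(10)))  # True
    # ...but the first and last cups are clearly distinguishable,
    # so indifference is not transitive.
    print(indifferent(cups[0], cups[-1]))  # False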

So people relaxed the rationality requirements, and now there is the problem of what 'rationality' actually means.

Fast forward a few years, and empirical studies found the following strange behavior: if you mention a high number before asking people for some fictional charity contribution, people tend to be willing to pay more than if you mention a low number beforehand, and it does not matter in what way you mention the numbers. (Actual experiments were done by having people spin a rigged lottery wheel before doing some completely unrelated task, for example.) You can even tell participants about the effect beforehand; it will still be observed.

I see no way how this "anchoring effect" could be described as being rational.

But many people nowadays share your opinion, and there is a whole field called "ecological rationality" in which scholars try to re-interpret supposedly irrational biases as good and rational heuristics that increase e.g. evolutionary fitness. I don't think they're right in general, though. Some of the biases are just flaws. If I flash a number before your eyes and this affects your subsequent decision making, then that's not a useful heuristic, it's a flaw in your brain's processing. My 2 cents, others disagree with me.


>Some of the biases are just flaws.

It's probably a trait that is (or was) advantageous in one context and is disadvantageous in this new or less vital context.


There is also the trend of 'positive psychology': https://en.wikipedia.org/wiki/Positive_psychology


I once tried memorizing this list of cognitive biases but eventually came to the conclusion that they were ill-defined and, in some cases, not even biases at all, but heuristics to keep me alive and well-functioning.


I tried to memorize them, but recency bias meant I only remember the last one in the list.


I tried to think of others, but availability bias means I'm stuck on recency bias.


I memorized them as well, and although I agree with you on some level, I think it does actually help me gain a better understanding of how other people think (or past me). I'm not sure it actually made me better at making decisions, tho.


A lot of the anti-bias discourse is aimed at getting people to stop being human and start being libertarian robots.


It's an alphabetically sorted list; sure, one can read the whole list top to bottom, but it just doesn't flow very well.

If you're interested in rationality and cognitive biases, I'd highly recommend reading Eliezer Yudkowsky's "Rationality: A-Z" sequences: https://www.lesswrong.com/rationality


Thinking Fast and Slow explains many cognitive biases.

https://en.wikipedia.org/wiki/Thinking%2C_Fast_and_Slow



I had a class at Babson about “Decisions”. Best class ever. My favorite case was about the decision-making process at NASA that led to the Discovery disaster. Along with the case (you can find multiple versions online, and it is an awesome read) there was this HBR article about flaws in decision-making processes: “The Hidden Traps in Decision Making” by John S. Hammond, Ralph L. Keeney, and Howard Raiffa. https://www.researchgate.net/publication/12948100_The_Hidden...


I went to Babson as well -- too bad I missed this class, sounds interesting!

I grabbed a copy of that HBR article and will read it later. Thanks!


I think human rational thinking is completely f*cked. We are just not capable of thinking very logically/rationally.

I think all the more reason to meditate, be mindful and adopt philosophies that are not always rational, but good instead.

Also, the truth is often very complex or very dark, so thinking is only going to bring incorrect simplified (black/white) conclusions or negativity/resentment.


You just reminded me... https://www.xkcd.com/1163/


Being reminded of cognitive biases on a regular basis does wonders for staying grounded! I currently use a browser plugin for that but this poster seems like a better alternative — https://designhacks.co/products/cognitive-bias-codex-poster



What's the browser extension?



Anyone want to work with me to create a cognitive debiasing AI algorithm/chat bot?


That would be a really interesting project. I'm only up for it if it's open source.


According to Daniel Kahneman the research on whether biases can be overcome is "not encouraging". https://getpocket.com/explore/item/the-cognitive-biases-tric...


The list is missing "The Bias Bias in Behavioral Economics" (https://www.nowpublishers.com/article/Details/RBE-0092)


Gerd Gigerenzer's work and the book Simple Rules are far more efficient ways of making better decisions than reading a list of 100+ "biases" and trying to overcome them. A good intro is Risk Savvy. https://www.youtube.com/watch?v=KnRWVmWQG24


A more digestible format for this: https://busterbenson.com/piles/cognitive-biases/


Which biases are startup founders more likely to fall into?


Do we suffer from normalcy bias when dealing with the effects of global warming? It sure looks that way.


While I think it's good for pedagogical purposes to have a catalogue of many examples of where our thinking goes wrong, I worry that these lists can give off the wrong idea that our thinking is broken in so many "different" ways.

In some sense, many of these biases seem like specific instances of a more general phenomenon. For example, illusion of control and pareidolia both seem like they'd arise if you buy into the brain as doing predictive processing (https://slatestarcodex.com/2017/09/05/book-review-surfing-un...). So it's not exactly that we have over 100 ways that our thinking goes wrong, but that the same types of mistakes occur in different ways.

In which case, for preventative reasons, knowing the core mechanism at play seems much more important. Similarly, I feel that lists of mental models might also be missing the point; no one can really go through a list of 100+ items to figure out which one is at play. You're going to need a smaller, more general toolkit.


what toolkit do you use?


I don't have something that feels comprehensive yet, but another more general concept, aside from predictive processing, that I've been thinking about is substituting an easy thing for a hard thing, when the easy thing looks like the hard thing.

Examples:
- Following through the steps of a proof vs. covering up the proof and doing it yourself
- Asking yourself if something sounds familiar instead of trying to summarize it
- Criticizing an idea instead of adding a new one or suggesting an improvement


Wow, there are now 194 listed Cognitive Biases. The number keeps growing.

It is an awful sign for a scientific community when they are working on a theory that requires 194 different exceptions and adjustments to make the model fit the data. It means that the underlying model probably isn't right.

This reminds me of when astronomers thought the universe revolved around the Earth rather than the Sun. The earth-centered theory made sense until we got better data: sometimes planets appeared to go backwards, sometimes they appeared to swirl around a line, and sometimes there were swirls within the swirls, and swirls within those: https://invisible.college/attention/dissertation/retrogrades...

Astronomers had to account for this data with a complex set of retrograde motions and epicycles layered upon epicycles. These complexities only increased as telescopes and charting techniques improved, uncovering more deviations from the idealized orbits. Take, for instance, the numerous parameterized gears required for an early Galilean planetary model: https://invisible.college/attention/dissertation/galileo2.jp...

Only when Copernicus and Kepler put the Sun at the center of the universe could the models be simplified. Suddenly, each planet's orbit fit a perfect ellipse -- no epicycles, no retrograde motions.

We can do the same thing for Economic theory, by moving the center of the utility function from the future to the present. Right now, Economics models humans as optimizing future outcomes. The modeled humans are focused on the future: they allocate infinite attention to computing the optimal action for the future. But real humans have scarce attention for computing the future. When they run out of attention, these 194 heuristics and biases display themselves in full effect.

We solve this dilemma when we evaluate the utility function in the present, rather than the future. Instead of assuming humans have infinite attention, the utility function itself predicts how humans allocate their scarce attention. The new utility function evaluates the utility of attention itself.

And it turns out that we can empirically measure the value of this utility function by running controlled experiments online with thousands of participants and paying them different amounts of money to attend to different tasks. This lets us measure how much utility people ascribe to paying attention to television shows, sexy pictures, video games, advertisements, iPhone screens, or reddit posts. We can measure it in pennies per second.
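
I don't know what analysis the dissertation actually uses, but as a sketch of how a pennies-per-second figure could fall out of that kind of data, here's a simple least-squares fit of payment against seconds of attention in Python (the data and numbers are entirely made up):

    # Hypothetical experiment: dollars offered vs. seconds of attention
    # obtained. The regression slope gives a rough dollars-per-second
    # value, printed in cents per second. Illustration only.
    import statistics

    payments = [0.05, 0.10, 0.20, 0.40, 0.80]  # dollars offered
    seconds = [4.0, 9.0, 17.0, 38.0, 75.0]     # seconds of attention obtained

    mean_p, mean_s = statistics.mean(payments), statistics.mean(seconds)
    slope = (sum((p - mean_p) * (s - mean_s) for p, s in zip(payments, seconds))
             / sum((s - mean_s) ** 2 for s in seconds))

    print(f"Implied value of attention: {slope * 100:.2f} cents per second")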

This new model is a measurable Attention Economics: https://invisible.college/attention/dissertation.html


As I understand it, cognitive biases aren't exceptions or adjustments to a model; they are observed phenomena. Any model of psychology or economics which purports to unify these phenomena must be able to explain/predict each bias individually. How does "attention economics" predict the planning fallacy?


Cognitive biases are exceptions or adjustments to the rational model: https://en.wikipedia.org/wiki/Rational_choice_theory On the whole, the field of Behavioral Economics is a correction to the Rational Economic Model. Behavioral Economics says "people are rational, except for the ways in which they are biased." They call it "bounded" rationality.

For instance, the planning fallacy is a correction to the idea that people will rationally predict how much time something will take. So we first estimate how long something might take, and then the planning fallacy teaches us to increase it to account for our bias.

> Any model of psychology or economics which purports to unify these phenomena must be able to explain/predict each bias individually

That's close to correct, but I'd like to distinguish explaining the bias vs. the data. The new theory should explain the data, not the biases in the old theory. Consider that Kepler's elliptical orbit theory didn't explain each individual planetary epicycle -- it didn't need to. Kepler's theory didn't need epicycles at all to explain the data.

Likewise, Attention Economics doesn't need a "Planning Fallacy", because it doesn't assume humans are good planners. It rather looks at how people actually allocate their attention while planning. Consider that if people allocate more attention to their plans, they are likely to make better estimates. So how are they allocating their attention when planning? In the "planning fallacy" [1], Kahneman and Tversky envisage "that planners focus on the most optimistic scenario for the task, rather than using their full experience of how much time similar tasks require." I haven't run the experiments myself, but one could certainly test for this in an Attention Economic experiment, by seeing how much more attracted people are to focus on the most optimistic scenario for their task, rather than the pessimistic scenarios. And then we can learn why they focus on the optimistic scenario, by manipulating other variables until we see which ones lead people to consider optimistic vs. pessimistic scenarios when planning.

[1] https://en.wikipedia.org/wiki/Planning_fallacy


Thanks for your detailed response. I'm afraid I'm still not sure how this simplifies much. The fact that people focus specifically on an optimistic scenario rather than having a random variance like a normal distribution seems very significant to me. If you can't predict this from first principles then you have to add in some extra explanatory factors, and then you still get your cursed epicycles.


> It means that your underlying model probably isn't right.

Or you just overestimate the validity of the biases as defined.

You probably have doubts yourself; otherwise the reference to astronomers wouldn't be here.


Defense mechanisms are far more interesting


Are these just made up or found by scientific method?


Those two things are not necessarily mutually exclusive. But a lot of them have been studied pretty extensively; I know I've seen a lot on anchoring.



